ThinLinc and LiDAR data visualization Q&A - Graphics-Visualization-Computing Lab, IIIT Bangalore

In late 2014, at the International Conference on Big Data Analytics,
researchers Beena Kumari, Avijit Ashe (@ashe15avijit), and Jaya Sreevalsan-Nair (@jnair)
published a conference paper titled "Remote Interactive Visualization of
Parallel Implementation of Structural Feature Extraction of
Three-dimensional Lidar Point Cloud".

At that time, Beena, Avijit, and Jaya worked together in the
Graphics-Visualization-Computing Lab at the International Institute of
Information Technology Bangalore, India.

One interesting fact about their research is the use of ThinLinc as the
software enabling interactive visualization of the LiDAR data stored on
the server. As the developers of ThinLinc, we are always happy to learn,
understand, and share with our community how ThinLinc helps users and
customers worldwide meet their Linux remote desktop needs. We believe
that doing this eases the onboarding of new users and customers and
supports the learning interest of current “ThinLincers”.

We invited the researchers to join the community and participate in a
Questions & Answers post, so all of us can learn and discuss the topic
together. Below we share some initial questions prepared by our
colleagues. We invite all community members to engage in the discussion
and ask questions, since this is an open space for learning. Varsågod!

Questions:

  1. Please tell us about yourselves in general.

  2. What is the focus of the International Institute of Information
    Technology Bangalore (IIIT-B) and the Graphics-Visualization-Computing Lab?

  3. What is LiDAR in general, and what are the primary uses of this
    technology?

  4. How does society benefit from advancements in LiDAR?

  5. Please tell us a bit about the LiDAR project background.

  6. What challenges with the project were you looking to solve with
    ThinLinc?

  7. In your 2014 conference paper, you mention that “TigerVNC is the
    most suitable protocol” for “real-time image manipulation and
    rendering.” What makes TigerVNC especially well-suited to this task, and
    how does it compare to other protocol implementations?

  8. What are the benefits of visualizing LiDAR data remotely?

  9. Are there other organizations that would benefit from the suggested
    setup of ThinLinc for LiDAR data visualization? If so, which kind of
    organization?

  10. Which applications did users access on the server?

  11. What are the research topics you or your group are currently addressing?

  12. Open space for your comments or recommendations in general.

–
Read the conference paper

Have a look at the conference paper presentation

  1. Please tell us about yourselves in general.

We are at a research lab, the Graphics-Visualization-Computing Lab (GVCL), at the International Institute of Information Technology Bangalore (IIIT-B). Jaya Sreevalsan-Nair is a visual analytics researcher, with a focus on data visualization and computational methods. Jaya is a faculty member at IIIT-B; Avijit Ashe was a research assistant at the lab and is now pursuing his MS by Research at the International Institute of Information Technology Hyderabad (IIIT-H). Jaya has a Ph.D. in Computer Science from the University of California, Davis, and her thesis focused on visualization methods using global-local duality in different applications.

  2. What is the focus of the International Institute of Information Technology Bangalore (IIIT-B) and the Graphics-Visualization-Computing Lab?

IIIT-B is a technology university that offers post-graduate degree programs in Information Technology, Computer Science, and Electronics & Communications Engineering. Located in Bangalore, it was established in 1999. The focus of IIIT-B is multidisciplinary, encompassing various disciplines under the umbrella of Information Technology, including Computer Science, Data Science, IT and Society (Humanities), Networking-Communication-&-Signal-Processing, Software Engineering, and VLSI. IIIT-B offers both course-based degree programs (including a senior-year thesis) and research degree programs (MS by Research and Ph.D.).

GVCL is a research lab founded and led by Jaya Sreevalsan-Nair in 2012. Her research interests span visualization methods, data transformations for knowledge discovery, and computational methods. The lab focuses on applications in geospatial analysis (LiDAR point clouds, SAR images, etc.), multiplex networks (brain networks, gene-gene interaction networks, etc.), population geography (national health surveys), and meta-visualization for automated interpretation (bar, line, pie, and scatter charts).

  3. What is LiDAR in general, and what are the primary uses of this technology?

LiDAR (Light Detection and Ranging) is a remote-sensing technology that uses the reflection of light to determine the distance of an object from the sensor. There are different types of LiDAR – airborne and terrestrial. Airborne LiDAR scans capture the top view of large-scale geographical regions. Terrestrial LiDAR scans capture different views of geographical regions/objects in a scene (e.g., a building facade). Vehicle LiDAR has been used in automated driving systems to capture 360-degree views around the car. The scans are usually point clouds, i.e., sets of three-dimensional positional coordinates in the local reference frame of the sensor, along with a light-intensity value for each point. The primary use is remote sensing of large regions with high accuracy and, mostly, dense point sampling.
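To make the ranging idea concrete, here is a toy Python sketch (ours, not from the paper); the array layout and names are illustrative assumptions:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_s):
    """Range from a time-of-flight measurement: the laser pulse travels
    to the target and back, so the distance is half the round trip."""
    return C * round_trip_s / 2.0

# A LiDAR scan is commonly stored as an N x 4 array:
# x, y, z in the sensor's local reference frame, plus a return intensity.
cloud = np.array([
    [12.1, 3.4, 0.2, 0.87],
    [12.0, 3.5, 0.2, 0.91],
])

print(tof_range(1e-6))  # a 1-microsecond round trip ~ 149.9 m
```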

  4. How does society benefit from advancements in LiDAR?

Society benefits from advancements in LiDAR just as it does from any remote-sensing technology (radar, optical imaging, satellite imaging, etc.): it helps in data acquisition. This data captures time-stamped information about geographical regions, including urban regions, and is especially useful for studying temporal changes, including environmental change, urbanization, etc.

  5. Please tell us a bit about the LiDAR project background.

The LiDAR project started with a grant from the Natural Resources Data Management System (NRDMS) program, now known as the National Geospatial Program (NGP). NRDMS is implemented by the Department of Science and Technology, Government of India. Thus, it was a sponsored project focused on helping scientists who work with LiDAR point clouds collaborate in understanding the data. The proposed solution for enabling collaboration was a visualization tool. The tool was to be developed using OpenGL (Open Graphics Library) to enable three-dimensional visualization and analytics of LiDAR point clouds.
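The project's tool was built with OpenGL; as a much simpler stand-in to illustrate what three-dimensional visualization of a point cloud means, here is a small matplotlib sketch on synthetic data (the data and names are ours, not the project's code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a LiDAR point cloud: noisy points on a tilted plane.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(2000, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.05, 2000)

# Scatter-plot the points in 3D, colored by height.
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(xy[:, 0], xy[:, 1], z, s=1, c=z, cmap="viridis")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z (height)")
plt.show()
```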

  6. What challenges with the project were you looking to solve with ThinLinc?

The premise was that scientists in government labs work under strict national-security constraints; thus, any system that can be deployed on a LAN (local area network) was desirable. Hence, the initial visualization proposal that Jaya submitted evolved, on the recommendation of the expert committee that evaluated it, to support intra-lab collaboration among scientists. From a preliminary literature survey of remote desktop software suited to the OpenGL application, TigerVNC was identified as a solution, given its support for the VirtualGL project. Avijit was subsequently hired to work on this project, and he implemented the solution through ThinLinc to deploy an OpenGL application on a LAN.
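For readers unfamiliar with the mechanics: in such a setup, the OpenGL application is started through VirtualGL’s vglrun wrapper inside the ThinLinc session, so frames are rendered on the server’s GPU and only images travel over the LAN. A minimal, hypothetical Python launcher (the application path is made up) might look like this:

```python
import shutil
import subprocess

def launch_on_server(app="./lidar_viewer"):  # hypothetical application path
    """Run an OpenGL application under VirtualGL so rendering happens
    server-side; the ThinLinc/VNC session then carries only the images."""
    vglrun = shutil.which("vglrun")  # wrapper command installed by VirtualGL
    if vglrun is None:
        raise RuntimeError("VirtualGL (vglrun) not found on this server")
    subprocess.run([vglrun, app], check=True)

if __name__ == "__main__":
    launch_on_server()
```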

  7. In your 2014 conference paper, you mention that “TigerVNC is the most suitable protocol” for “real-time image manipulation and rendering.” What makes TigerVNC especially well-suited to this task, and how does it compare to other protocol implementations?

Yes, TigerVNC was indeed good for 3D visualization that also entailed real-time user interaction. As mentioned, TigerVNC was already popular for deploying OpenGL applications, so we relied on evidence from prior work in choosing it. It is indeed a good choice, as our software included both high-performance-computing and 3D-visualization elements, implemented using CUDA and OpenGL, respectively. Since we were using CUDA, and it is unlikely that a government lab would upgrade all clients with CUDA GPUs, our design had to account for a (single) server with thin clients on the LAN. Altogether, TigerVNC fit the bill for all these requirements, as opposed to other protocols.
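To give a flavour of the kind of computation involved (a common formulation of structural features in the point-cloud literature, not necessarily the paper’s exact method): each point’s local neighbourhood yields a covariance matrix whose eigenvalues describe whether the neighbourhood is linear, planar, or volumetric. The per-point independence of this loop is what makes it a natural fit for GPU parallelism with CUDA. A numpy/scipy sketch, with names of our choosing:

```python
import numpy as np
from scipy.spatial import cKDTree

def structural_features(points, k=20):
    """Eigenvalue-based features (linearity, planarity, sphericity) from
    the covariance of each point's k-nearest neighbourhood. Each point is
    independent, so the loop parallelises naturally on a GPU."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbours per point
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)          # 3x3 local covariance
        l3, l2, l1 = np.linalg.eigvalsh(cov)  # ascending: l3 <= l2 <= l1
        s = l1 + 1e-12                        # guard against division by zero
        feats[i] = [(l1 - l2) / s, (l2 - l3) / s, l3 / s]
    return feats
```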

  8. What are the benefits of visualizing LiDAR data remotely?

This was an experimental project which stemmed from a budding interest in the visualization community in collaborative analysis. The idea was that scientists on a LAN could use remote visualizations in a server-client model, with thin clients. Thus, to keep hardware requirements economical, we needed a remote visualization application by design. Jaya joined IIIT-B after a short stint at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. There was a strong focus on remote and collaborative visualization at TACC, where systems like Maverick, Stampede, and Lonestar have been deployed. That has also been an inspiration for using remote visualization for LiDAR point clouds.

  9. Are there other organizations that would benefit from the suggested setup of ThinLinc for LiDAR data visualization? If so, which kind of organization?

We have not followed up on the actual deployment of our developed solution at any of the government/national labs in India. But we believe any organization that works under a high level of security and requires a server-client model with thin clients for remote visualization can use our proposed ThinLinc setup.

  10. Which applications did users access on the server?

Unfortunately, we didn’t follow up on the actual implementation on a live server. We ended up submitting our software to the funding agency, but haven’t had a chance to track a live deployment since.

  11. What are the research topics you or your group are currently addressing?

The project we worked on during 2012-14, which included the use of ThinLinc, spawned serious research work in our lab on LiDAR point cloud analysis. We have followed it up with work on semantic classification, geometric reconstruction, and big-data framework development for airborne LiDAR point clouds, and we are now in the process of extending similar work to vehicle LiDAR. Our group now does a lot of work in geospatial data analysis and visualization, using remote-sensing/earth-observation data. We also work on multiplex network data analysis across several applications (brain, gene, population migration, etc.), and we have recently started working on automated interpretation of visualizations, starting with simple charts.

  12. Open space for your comments or recommendations in general.

We are happy to connect with the ThinLinc community. We are beneficiaries of the technology developed at Cendio, and we are honored to share our journey here. We hope this discussion will also help others adopt ThinLinc for their requirements, if similar to the ones we had. Happy ThinLinc’ing! (As an aside, our sincere apologies for the delayed response. We waited for the weekend to write the detailed response.)


@jnair thank you for the very interesting project overview, appreciate it.

You mention LiDAR applications for vehicles. How do vehicles manage to process this data in real time? Is onboard hardware sufficient for this, or do concepts like edge computing fit in here somehow?


@aaron thank you for the pertinent query. Yes, to my knowledge, a lot of the computing occurs in the cloud, some of it in the fog, and very little on the edge. Shifting much of the computation to the edge is the goal. We work heavily on the classification of LiDAR point clouds. For vehicle LiDAR, unlike the airborne one, there is more uncertainty owing to partial scans, especially of moving objects. It is also dependent on the scale: for airborne LiDAR, the distance between the sensor and the objects is such that the scans capture more static snapshots than vehicle LiDAR, which operates at very close range. What I mean here is that a moving car captured using airborne LiDAR will give a full top view (without even motion blur), unlike the vehicle LiDAR of another car on the road, which will give a partial side scan. Hence, the (classifier) models used for airborne LiDAR need not “guess” and “estimate” as much as in the case of vehicle LiDAR. That means vehicle-LiDAR classification has to use CNNs and deep-learning models with more layers. Altogether, it’s not possible to do the training on the edge: the training has to be done in the cloud and updated frequently with streaming data, while the edge can do the inference.
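As a schematic illustration of that split (ours, not any deployed system), one can train “in the cloud” and ship only the frozen model to the edge, which then runs cheap inference on incoming scans. Here is a deliberately tiny Python stand-in, using a nearest-centroid classifier instead of a real CNN:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- "Cloud" side: (re-)train on labelled features, e.g. from streaming data.
def train_in_cloud(features, labels):
    """Toy stand-in for training: one centroid per class. A real system
    would train a deep model here and re-train as new data streams in."""
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

# --- "Edge" side: cheap inference with the shipped (frozen) model.
def classify_on_edge(classes, centroids, feature_vec):
    return classes[np.argmin(np.linalg.norm(centroids - feature_vec, axis=1))]

# Synthetic feature vectors for two classes (say, planar vs. linear structures).
X = np.vstack([rng.normal([0.8, 0.1, 0.1], 0.05, size=(100, 3)),
               rng.normal([0.1, 0.8, 0.1], 0.05, size=(100, 3))])
y = np.array([0] * 100 + [1] * 100)

classes, centroids = train_in_cloud(X, y)
print(classify_on_edge(classes, centroids, np.array([0.75, 0.15, 0.1])))  # -> 0
```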
