For our first foray into LiDAR, we chose to document the Reflections on Grief and Child Loss exhibit at President Lincoln’s Cottage. We used LiDAR to produce a high-resolution point cloud model of the finished exhibit space so we could share the work with an online audience in a pandemic world.
LiDAR technology calculates the distance to an object, and captures its shape and texture, using light in the form of a laser. The light bounces off the object and returns to a sensor, which detects it and processes it into visual data.
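To make the ranging idea concrete, here is a minimal sketch of the time-of-flight math behind LiDAR: the measured distance is half the round-trip path of the laser pulse. This is illustrative only, not the iPhone sensor's actual pipeline.

```python
# Time-of-flight ranging: the principle a LiDAR sensor relies on.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """The pulse travels to the object and back, so the one-way
    distance is half the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~13.3 nanoseconds hit something about 2 m away.
print(distance_from_round_trip(13.3e-9))  # ~1.99 meters
```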
The name LiDAR is short for “light detection and ranging,” and the technology is often used in navigation, autonomous driving, and meteorology. It’s now available on everyday consumer electronics. Andrew Kastner, a graduate student in the George Washington University Exhibit Design program, helped with this process, using the LiDAR scanner on the iPhone 12 and the SiteScape app to capture the environment and a variety of software to create the 3D model.
Below is our conversation with Andrew on the overall process.
Which mapping techniques did you try for documenting this exhibit, and which was most successful?
There are two main types of LiDAR scanning applications for the iPhone. The first uses mesh mapping and captures spaces in a photo-realistic way. Below are screenshots of a few mesh scans of the space from an application called 3D Scanner App.
The mesh scans, however, struggled with the exhibit’s more complex geometry. This is where the other type of scanning comes in.
This type of scan creates a spatialized series of pixels called a point cloud. Point cloud scans handle complex geometry much better but don’t capture the same level of detail as the photo-mapped textures of the mesh mapping technique. In this instance, the ethereal quality of the point cloud also served as a fitting stylization for the mood and themes of the exhibit. The app used for this scan is called SiteScape.
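Under the hood, a point cloud is just a long, unordered list of 3D samples, usually with a color per point, which is why it renders as that airy field of dots. A tiny sketch with hypothetical values:

```python
import numpy as np

# One row per point: x, y, z in meters, then r, g, b normalized to 0..1,
# the same layout a colored .ply file stores. Values are hypothetical.
points = np.array([
    [0.12, 1.40, 2.05, 0.80, 0.78, 0.75],  # a point on a light wall
    [0.13, 1.41, 2.05, 0.81, 0.79, 0.76],  # its near neighbor
    [2.90, 0.20, 1.10, 0.10, 0.09, 0.08],  # a point on a dark object
])

xyz = points[:, :3]  # spatial coordinates
rgb = points[:, 3:]  # per-point color
print(f"{len(points)} points, centroid at {xyz.mean(axis=0)}")
```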
How does SiteScape work and how accurate was the scan?
SiteScape gives you a few different scanning options. Since the scanning took place in a larger area, I chose the max area setting with medium point density and medium point size. A single scan is limited by file size, and a small blue progress bar below the recording button indicates how much more you can capture before hitting that limit.
In the space, I captured each wall in discrete sections while making sure there was some overlap between sections. The app also maintains the spatial relationships between multiple scans as long as you keep it open between captures, which makes stitching multiple scans into one scene a breeze.
Can you tell us about the assembly and editing process of the captured points?
Once the space was sufficiently scanned, I exported each scan as a .ply file and imported the files into a program named CloudCompare. Because SiteScape had kept the scans in a shared coordinate frame, the point clouds automatically registered themselves with each other on import.
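When scans don’t line up on their own, a common fix is ICP (iterative closest point) registration. Below is a sketch using the open-source Open3D library; it was not part of our workflow here, and the file names are hypothetical.

```python
import open3d as o3d

# Load two overlapping scans; "source" will be moved onto "target".
source = o3d.io.read_point_cloud("scan_wall_2.ply")
target = o3d.io.read_point_cloud("scan_wall_1.ply")

# ICP searches for the rigid transform that best maps each source point
# onto its nearest target point within 5 cm.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

source.transform(result.transformation)  # apply the recovered alignment
o3d.io.write_point_cloud("scan_wall_2_aligned.ply", source)
```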
From here, I saved the discrete scans into one larger point cloud file so it could be brought into Rhino. I also exported the combined cloud as an .e57 file, since that file type is compatible with the Grasshopper function I chose for subsampling the point cloud.
To clean up the model, I took the point cloud into Rhino. The operations I performed in Rhino could also be accomplished in CloudCompare, but I was simply more comfortable using Rhino for this processing.
Using multiple views and the sub-object selection method in Rhino, I was able to clean up the stray points and the overall form of the point cloud. After cleaning up strays, I opted to remove the ceiling from the cloud as well.
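The same kind of cleanup can also be scripted. Here’s a sketch assuming a z-up cloud already loaded as an (N, 3) NumPy array; the file name and ceiling height are hypothetical:

```python
import numpy as np

xyz = np.load("exhibit_cloud.npy")  # hypothetical pre-converted array

# Remove the ceiling: drop everything above an assumed 2.6 m plane.
below_ceiling = xyz[:, 2] < 2.6

# Remove strays: keep points within 3 standard deviations of the mean
# distance to the centroid (a crude but effective outlier filter).
centroid = xyz.mean(axis=0)
dist = np.linalg.norm(xyz - centroid, axis=1)
inliers = dist < dist.mean() + 3 * dist.std()

cleaned = xyz[below_ceiling & inliers]
print(f"kept {len(cleaned):,} of {len(xyz):,} points")
```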
After cleanup, I exported the cleaned version of the cloud from Rhino, again as a .ply, because Rhino does not support .e57 exports at the moment. And since the subsampling function in Grasshopper doesn’t accept .ply files, the .ply had to go through CloudCompare one more time to be exported as an .e57.
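That .ply-to-.e57 round-trip can also run headlessly using CloudCompare’s command-line mode. Here’s a sketch wrapped in Python; the flag names follow CloudCompare’s command-line documentation, but verify them against your installed version:

```python
import subprocess

# Convert the Rhino-exported .ply to .e57 without opening the GUI.
subprocess.run([
    "CloudCompare",                # binary name varies by platform
    "-SILENT",                     # suppress the GUI and dialogs
    "-O", "exhibit_cleaned.ply",   # open the cleaned cloud
    "-C_EXPORT_FMT", "E57",        # set the cloud export format
    "-SAVE_CLOUDS",                # write the converted file
], check=True)
```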
Now we can load the Grasshopper definition. This definition uses a plug-in called Volvox, designed specifically for manipulating point clouds. Volvox can load the .e57 file and manipulate it in many ways; for our purposes, I used its random subsampling feature, which removes points from the cloud to reduce file size while maintaining the integrity of the cloud’s form.
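Random subsampling itself is simple: keep a uniform random subset of the points so the overall form survives while the file shrinks. A standalone NumPy sketch of the idea, not the Volvox internals:

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # seeded for repeatability

def random_subsample(points: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Return a uniform random keep_ratio fraction of the cloud's rows."""
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

cloud = rng.random((2_000_000, 6))     # stand-in for the scanned cloud
light = random_subsample(cloud, 0.25)  # keep one point in four
print(light.shape)                     # (500000, 6)
```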
After the point reduction was applied, the result was baked out of Grasshopper and exported as a .ply for upload to Sketchfab. Sketchfab is a viewer for 3D models with its own visualization settings for editing materials, lights, and annotations. We used annotations in Sketchfab to help guide viewers through the digitized space in the intended sequence.
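We uploaded through the Sketchfab website, but uploads can also be scripted against Sketchfab’s public Data API. A sketch with the token and file name as placeholders; check the endpoint and field names against the current API documentation:

```python
import requests

API_TOKEN = "your-sketchfab-api-token"  # placeholder

with open("exhibit_pointcloud.ply", "rb") as f:
    response = requests.post(
        "https://api.sketchfab.com/v3/models",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": f},
        data={"name": "Reflections on Grief and Child Loss"},
    )

response.raise_for_status()
print(response.json()["uid"])  # id of the newly created model page
```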
I’m excited to introduce LiDAR and photogrammetry 3D scanning processes at Howard+Revis because the ability to map out a project site and work directly in a georeferenced 3D reconstruction is extremely helpful in understanding the site. It also allows designers to work in augmented reality, layering in computer-generated images and models to test and visualize our ideas. Our most recent 3D scanning work, on another project, increases the scale and scope of our 3D photogrammetry practice: we used drone-captured aerial photos to generate a photorealistic 3D model with software called Agisoft Metashape. Stay tuned for more updates on our 3D scanning work in the future!
The final 3D scanned point cloud model of this exhibit can be experienced here: