Breaking 3D Scanning Down into Three Techniques and Understanding Each One in Relation to the Others
It is increasingly important for engineers and scientists to choose the right technique for capturing “reality data” – the physical information about a structure, object, area, or living being. While you can hire a 3D scanning service to do the job for you, this post will focus on hardware and software that you can use to do it yourself.
But what does “3D scanning” mean, anyway, and what options are available to you?
Types of 3D Scanning
3D scanning is the act of mapping an object, structure, or area and describing it in the form of x, y, and z coordinates – a format known as a “point cloud”. Some of the more unusual forms of 3D scanning include computed tomography scans and ground-penetrating radar, which have fascinating uses in fields like archeology and medicine, but this post will cover three specific ones: digital photogrammetry, white-light structured scanning, and LIDAR – a term which, depending on who you ask, is either a combination of “light” and “radar” or an acronym for “Laser Imaging, Detection, And Ranging”.
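To make the point-cloud idea concrete, here is a minimal Python sketch of the plain-text XYZ exchange format that many scanning and meshing tools can read; the file name and coordinate values are purely illustrative.

```python
# A point cloud is just a list of (x, y, z) samples. This writes and then
# reads back a tiny cloud in the plain-text ".xyz" format (one point per line).
points = [
    (0.00, 0.00, 0.00),
    (0.01, 0.00, 0.02),
    (0.01, 0.02, 0.03),
]

with open("scan.xyz", "w") as f:
    for x, y, z in points:
        f.write(f"{x:.6f} {y:.6f} {z:.6f}\n")

with open("scan.xyz") as f:
    loaded = [tuple(map(float, line.split())) for line in f]

print(f"loaded {len(loaded)} points")
```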
Since it’s unlikely that you have an unlimited budget set aside for 3D scanning, this article will explore a few different technologies and compare both their performance and their costs – you may not have the budget to capture reality data down to the nanometer, but that doesn’t mean there’s no tool you can use to achieve your goal.
1. LIDAR: Sometimes used as an acronym, LIDAR was originally a combination of the words light and radar, and was first used to accurately model clouds following the invention of the laser in the 1960s.
Just as radar uses radio waves to measure the distance between the radio tower and an object, LIDAR uses laser pulses to measure distances between the scanner and points on an object, structure, or landscape. LIDAR proved particularly useful for surveying land and creating accurate topographic maps, often replacing photogrammetry because LIDAR is better at seeing past objects that would otherwise obscure elevation and other details.
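The distance math behind a single time-of-flight LIDAR point is simple enough to sketch; the pulse timing below is an illustrative value, not data from any particular scanner.

```python
# A pulsed LIDAR times how long a laser pulse takes to bounce back.
# One-way distance = (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(seconds: float) -> float:
    """Convert a round-trip pulse time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A pulse that returns after about 667 nanoseconds corresponds to roughly 100 m.
print(f"{range_from_round_trip(667e-9):.1f} m")
```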
2. Digital Photogrammetry: Digital photogrammetry depends on triangulation. By taking multiple photographs, you create different lines of sight – sometimes called “rays”. Just as your brain triangulates the images from your two eyes to judge distance, the photographs’ rays are mathematically intersected to produce accurate three-dimensional coordinates (a minimal version of that intersection is sketched below). With photogrammetry, taking great photographs is imperative to creating good models, which means paying attention to exposure, field of view (FoV), and focus.
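Here is a minimal sketch of that intersection step in Python with NumPy. It assumes you already know each camera's position and the ray direction toward a matched feature; real photogrammetry pipelines estimate those from the photos themselves, so treat this as an illustration of the geometry, not a full pipeline.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two rays.

    o1/o2 are camera centers, d1/d2 point toward the same feature seen in
    two photos. With noisy data the rays nearly miss each other, so the
    closest-approach midpoint is used as the 3D estimate.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares solve for ray parameters t1, t2 minimizing the gap
    # |(o1 + t1*d1) - (o2 + t2*d2)|.
    A = np.column_stack((d1, -d2))
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

# Two cameras one meter apart, both seeing a point about two meters ahead.
p = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.25, 0.0, 1.0]),
                np.array([1.0, 0.0, 0.0]), np.array([-0.25, 0.0, 1.0]))
print(p)  # approximately [0.5, 0.0, 2.0]
```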
3. Infrared or Structured Light 3D Scanning:
Structured-light 3D scanning devices use a projector and a camera system: the projector casts light onto the surface of an object, creating a “line of light”, and distortions in that line are then used to recreate the object’s surface geometry.
Of course, there isn’t just one projection of light happening; there are many stripes of light, each producing distortions that depend on the object’s surface structure. Since the stripes are projected at known, parallel positions, those distortions can be converted into 3D coordinates on the object’s surface – the sketch below shows the underlying triangulation for a single stripe.
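For a single stripe, the geometry reduces to triangulation between the projector and the camera. The sketch below assumes a simplified setup in which both sit on a shared baseline and the stripe and pixel angles are already known; real structured-light systems decode many coded stripes at once, so this is only the per-point math.

```python
import math

def depth_from_angles(baseline_m: float, cam_angle_rad: float, proj_angle_rad: float) -> float:
    """Active triangulation with camera and projector on a shared baseline.

    cam_angle_rad  - angle of the observed stripe pixel off the camera's forward axis
    proj_angle_rad - angle at which the projector cast that stripe
    Both angles are measured toward the other device, so
    depth = baseline / (tan(cam_angle) + tan(proj_angle)).
    """
    return baseline_m / (math.tan(cam_angle_rad) + math.tan(proj_angle_rad))

# Illustrative numbers: a 20 cm baseline with two 10-degree angles puts the
# surface roughly 0.57 m away; a displaced stripe changes the camera angle,
# and with it the recovered depth.
print(f"{depth_from_angles(0.20, math.radians(10), math.radians(10)):.2f} m")
```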
| 3D Scanning | Infrared | Photogrammetry | LIDAR |
| --- | --- | --- | --- |
| + | Inexpensive | Balanced between cost and resolution of scans | Highest quality |
| – | Not as high quality | Higher resolution requires a more expensive camera | By far the most expensive option; also requires specialized knowledge, software, and a relatively high-end workstation |
A Word About the Uses of 3D Scanning
Don’t let the connotations of the word “object” restrain your thinking when it comes to what can be scanned. Even a living thing can be digitized and rendered inanimate when it comes to 3D scanning.
I write this only because I usually conflate the definition of “object” with “inanimate”, but of course, 3D scanning knows no such distinction. An abundance of cameras and the decreasing costs of laser technology are allowing 3D scanning to find new uses in diverse sets of hands.
The Hydrous Project, co-founded by Sly Lee, invented pretty much the best job in the whole world: traveling to gorgeous tropical islands and scuba diving down to visit giant coral reefs in order to create 3D models of them using digital photogrammetry.
Christian Stallings, R&D manager at McKim & Creed, explains how the real-world accuracy (and cost) of UAVs compares to traditional surveying and LIDAR. (Video courtesy of 3D Robotics).
So that’s 3D scanning in a nutshell. Hopefully, whether you’re a curious novice or an inquisitive expert, you now have the information you need to take your next step. But if you still have a few lingering questions, read on, and we’ll try to clear some of them up.
Which Kind of Scanner is Best For Certain Kinds of Objects?
When it comes to picking the right scanner for your object, the most important factor is size: any given scanner has its own field of view, which determines how much surface area it can capture at once and how close you need to get for accurate results.
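As a rough rule of thumb, you can estimate how much surface a scanner covers at a given stand-off distance from its field of view alone; the sketch below uses a simple pinhole-style model and made-up numbers.

```python
import math

def coverage_width(distance_m: float, fov_degrees: float) -> float:
    """Approximate width of the area seen at a given distance for a
    simple pinhole-style field of view."""
    return 2.0 * distance_m * math.tan(math.radians(fov_degrees) / 2.0)

# A 60-degree field of view covers about 1.15 m of surface from one meter
# away, and roughly twice that from two meters.
print(f"{coverage_width(1.0, 60):.2f} m")
print(f"{coverage_width(2.0, 60):.2f} m")
```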
Are Handheld Laser Scanners Always Best for Small Objects That Require Very Detailed 3D Scans?
If you need to measure small objects – for example, screws, transistors, or dental molds – to a high level of accuracy, you’ll probably be better off with a handheld LIDAR scanner. Many laser scanners have a field of view ranging from approximately four inches to three feet. For larger objects, you can take multiple scans and then stitch them together (a minimal sketch of that merging step follows).
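Stitching just means expressing every scan in one coordinate frame. A minimal sketch, assuming the rigid transform (rotation plus translation) that aligns the second scan to the first is already known, for example from registration targets or an alignment algorithm such as ICP:

```python
import numpy as np

def merge_scans(scan_a: np.ndarray, scan_b: np.ndarray,
                rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Bring scan_b into scan_a's coordinate frame and concatenate the two.

    rotation is a 3x3 matrix and translation a 3-vector, both assumed known."""
    scan_b_aligned = scan_b @ rotation.T + translation
    return np.vstack((scan_a, scan_b_aligned))

# Toy example: the second scan was taken 10 cm to the right of the first.
scan_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
scan_b = np.array([[-0.1, 0.0, 1.0], [0.0, 0.0, 1.0]])
merged = merge_scans(scan_a, scan_b, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(merged.shape)  # (4, 3)
```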
Of course, not everyone can afford their own personal laser scanner, and if you don’t know how to use it, you also have to factor in the cost of training. But while LIDAR may be your best option, that doesn’t mean you don’t have others – photogrammetry and a great camera can be an acceptable substitute even for small objects.
To reiterate, you can get high-quality 3D models of smaller objects with photogrammetry software and a great camera. More on that later.
How Important Are Good Scan Results?
It depends on the type of job you’re doing. Ask yourself – what type of accuracy, resolution, and speed am I looking for?
It goes without saying that no amount of money saved is going to be worth it if your model doesn’t look like, well, your model. If you keep the relative accuracy of different techniques in mind, then you can at least temper your expectations – but that doesn’t mean you should settle for an inaccurate scan. Still, no measurement device is perfect, and not every application requires a highly accurate 3D scan.
The more parts you need to scan, the more important speed becomes. That’s even more true if the object is in motion during the scan – it’s difficult to accurately measure something that won’t sit still, so a fast method matters most there.
Processing speed, software, and workstation quality are all key factors, so it’s important to understand the capabilities of your scanner’s hardware and software when planning a scanning project or workflow. Some scanning engines do more post-processing than others, but as a rule of thumb, the more of the 3D-model processing your scanner’s own hardware and software handle, the better.
How Does One Set Up a Scanning Workflow?
First, consider the user’s experience. Some scanners are easy to use: if you’re scanning something like the inside of a house for a real estate listing, you can use plug-and-play infrared scanning technology like the Matterport 3D Camera, which requires just an app and an iPad.
On the other hand, if you need a more sophisticated scanner, and don’t have any experienced users, you’ll need to account for training time when planning a project or workflow. The wider the knowledge gap grows, the more training time you’ll need. This is especially true for commercial LIDAR.
Why Are Commercial 3D Scanners Preferred for Major Industrial Applications?
The simple answer is that they provide better scans in a wider variety of environments. They may be prohibitively expensive for the average individual, but they can capture voluminous point clouds at a rate of over one million points per second!
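A quick back-of-the-envelope calculation shows why those rates demand serious storage and processing; the 16 bytes per point is my own assumption (three 32-bit float coordinates plus an intensity value), not a vendor figure.

```python
points_per_second = 1_000_000   # the capture rate quoted above
bytes_per_point = 16            # assumed: x, y, z as 32-bit floats plus intensity
scan_duration_s = 10 * 60       # a ten-minute scanning session

total_bytes = points_per_second * bytes_per_point * scan_duration_s
print(f"{total_bytes / 1e9:.1f} GB of raw points")  # 9.6 GB
```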
Jacob Hockett, CEO of Minds Mechanical, grew up in the aerospace industry and is widely considered an expert on aerospace metrology, working regularly with Lockheed Martin and GE, as well as tech giants like Google on some of their moonshot projects. He was responsible for scanning Kapton on the James Webb Space Telescope for NASA.
“Kapton is a material that’s going on a spaceship, and it is a miserable son of a gun to measure, mainly because you can’t touch it. It’s infrared absorbent, which makes it difficult for light radar systems to measure it. Technically, only one flight layer has been measured so far, and the rest will be measured next year. It’s one of the hardest materials I’ve ever had to measure because its main purpose is to absorb the infrared spectrum from the sun. If sunlight bounces off of this material and messes up the field of the optical sensors, the data will be screwed up. The laser radar system was more like a laser tracker. It takes what’s called a patch measurement, a surface vector intersection. It takes a little patch around the vector, sets a plane through it to make sure nothing is going in or out at a specific range, and you want to come in at around 25 microns.”
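To illustrate the kind of patch measurement Hockett describes, here is a rough numerical sketch: fit a least-squares plane to a small patch of points and check how far any point strays from it. The point values and the millimeter units are invented for illustration; this is not his actual procedure or tooling.

```python
import numpy as np

def plane_deviation(points) -> float:
    """Fit a least-squares plane to a patch of points and return the largest
    distance of any point from that plane (same units as the input)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return float(np.max(np.abs((pts - centroid) @ normal)))

# Illustrative patch in millimeters; a real check would compare the result
# against a tolerance of about 0.025 mm (25 microns).
patch = [(0.0, 0.0, 0.000), (1.0, 0.0, 0.010), (0.0, 1.0, 0.005),
         (1.0, 1.0, 0.012), (0.5, 0.5, 0.004)]
print(f"max deviation: {plane_deviation(patch):.4f} mm")
```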
I had a chance to ask Hockett about the state of 3D scanning in aerospace and metrology, and he said the following:
“3D scanning is great, and it’s come a long way. Photogrammetry has always been an industry standard. I treat it slightly different than 3D scanning. But your LIDAR systems, and your white-light/blue-light scanning systems have changed quite a bit. I’m seeing them more on the automation side, than anything else as it relates to quality. The handheld scanners and white-light systems are still not quite as accurate as your high-end systems, which is why they are still blackballed in the aerospace industry.”
I also had the chance to speak with Stuart Woods, Vice President, Geospatial Division at Leica Geosystems. “From my standpoint, I think we’re on the cusp of really leveraging regularly collected data for multiple applications. I use this sort of funny analogy. It won’t be long before my son can order his own 3D world for Call Of Duty based on our neighborhood. For public safety and security, training and so on, the concept of using real-world data sets beyond surveying is what I see as the next big trend, and we are really focused on this. Working in 3D, and working with a point-cloud is becoming democratized, which is very exciting.”
Milos Lukac, a 3D generalist at Reality Capture, told me that the company is hyper-focused on all photogrammetry and laser scanning markets, and that its team is dedicated to improving the quality and speed at which 3D models are reconstructed using Reality Capture. He said that he sees a bright future for 3D scanning and digital photogrammetry, and that the team has a lot of ideas, which they can’t discuss right now, to be implemented in the near future.
High-end laser scanners are also more efficient at processing the scan data, freeing up post-processing software for more customization and specialization. You can generally view vendor samples of these high-end scans upon request.
How Important Is 3D Scanning to Reverse Engineering?
I recently visited the mechanical engineering department at Virginia Commonwealth University, near my apartment in Richmond. While I was waiting for the department head, I wandered into the digital fabrication lab and saw two students leaning over display screens, running Creaform handheld scanner software and displaying what looked like a newly scanned brake pad. I asked them what they were working on, and they explained that their mid-term assignment was to reverse engineer and improve the design of a brake pad.
Recreating design intent with the VXmodel scan-to-CAD software module and Autodesk Inventor to obtain the native CAD model of an automotive part casing. (Video courtesy of Creaform).
3D scanning offers students like these the opportunity to innovate, using handheld LIDAR devices like the HandySCAN 3D from Creaform.
Creaform provides an integrated solution, including both a handheld laser scanner and software that lets students manipulate and change their scan data. It can be used to scan an object that’s already been designed, like a brake pad, and make improvements to it on the fly in a virtual environment. Needless to say, 3D scanning tools like Creaform’s are crucial to reverse engineering.
Now that that’s cleared up, let’s examine some specific examples of 3D scanning technology in detail.
Free Digital Photogrammetry Options for Mobile Devices
If you’re really on a tight budget, there are plenty of apps that let you capture reality data using hardware you probably already own. Thanks to the ubiquity and miniaturization of camera and sensing technology, the average smartphone user is in possession of a very powerful computer, containing amazing doodads like gyroscopes (for detecting orientation), magnetometers (for detecting magnetic fields), proximity sensors (an infrared LED and light detector), barometers (to measure atmospheric pressure), hygrometers (to measure humidity), thermometers (to measure heat), heart rate sensors (to take your pulse), pedometers (to measure distance traveled), fingerprint sensors, and an increasing amount of haptic (touch-based) technology. With all these tools, you can get surprisingly good 3D scan results using nothing but an iPhone.
Selection of Free 3D Scanning Applications for Mobile Devices
| Name | Technology Overview | Status |
| --- | --- | --- |
|  | This photogrammetry app makes clever use of burst mode on the iPhone, capturing up to 80 photos as you move your iPhone’s camera around an object. | Available for iPhone only |
|  | Unlike the other two, this app uses your mobile device’s own processing power to render models instead of sending photos to the cloud for stitching. That means you need some powerful hardware to run it, but at least you don’t have to wait on, or debug, a cloud service. | Available for Android (still in beta) |
|  | This app is good for capturing faces, which may assist with quick architectural captures or sketches, but it doesn’t seem to have any immediately obvious engineering purpose. | Available for iOS |
3D scanning hardware for mobile devices (some capital required, USD 79 – 500):
There are also 3D scanning applications for mobile devices that require some external hardware, such as the iSense from 3D Systems, the Structure Sensor from Occipital, and RealSense from Intel. Scandy Pro is also included in this category.
Of course, you could always build a scanner from spare parts, but I’ll address that in a later post. For now, I’ll just assume that you’re interested in trying out 3D scanning, and we’ll cover your options from least expensive to most.
Selection of Handheld Infrared Scanners
| Name | Technology Overview | Cost |
| --- | --- | --- |
| Microsoft Kinect | Uses a red-green-blue (RGB) camera with a depth sensor and an infrared projector paired with a monochrome Complementary Metal-Oxide Semiconductor (CMOS) sensor, which sees the environment not as a flat image but as dots arranged in a 3D environment. The technology was developed by PrimeSense, an Israeli company that Apple bought for about USD 380 million. | $75–100 |
| iSense (3D Systems) | See the Occipital Structure Sensor, since this is basically a re-branded version of it. Support for the iPhone 6 version has been discontinued by Occipital. Don’t buy that version like I did. | $79 |
|  | Powered by the Intel RealSense camera (see below). | $139 |
| Intel RealSense | The RealSense camera combines a conventional camera lens, an infrared lens, and an infrared laser projector lens, which work together with separate image-processing hardware to measure the distance to pixels on any object that appears in multiple fields of view. | $149 |
| Structure Sensor (Occipital) | Uses infrared structured-light projectors and infrared light-emitting diode (LED) projectors; it was created because a small group of developers and engineers wanted a smaller, untethered version of the Microsoft Kinect. | $379 |
Photogrammetry Software (Free, Free Trial, Subscription, or License):
When it comes to photogrammetry, software like Autodesk’s ReCap and ReMake (formerly Memento and 123D Catch) are among the best free options available, though there is also free software like Skanect, which captures reality data with a Microsoft Kinect. The first full-body scan I ever had was done with a Microsoft Kinect running Skanect and a lazy susan at the Museum of Architecture and Design in Manhattan, taken by 3D printing service Shapeways in late 2014.
When experimenting with photogrammetry, always keep in mind that the quality of the photo counts, and that means the quality of the camera counts, too. Of course, the higher the quality of your photos, the more storage space you’re going to need, so if you’re trying to capture something quickly as a reference model, and quality is not key, then you can always use a lower-quality camera and stitching software like Autodesk’s ReCap as a substitute.
| Name | Technology Overview | Cost |
| --- | --- | --- |
|  | Uses images to create 3D models. | Free |
|  | Uses images and laser scans to create 3D models. | Free trial; USD 300 per year |
|  | Uses images and laser scans to create 3D models. | Free trial; USD 300 per year |
|  | Uses images and laser scans to create 3D models. | Free trial; USD 105 per 3 months; USD 7,940 for a 1-year license |
Expensive – Laser Scanners (USD 2000 – 100,000)
LIDAR is the most accurate form of 3D scanning, but also the most expensive, and it requires the most knowledge and experience. But if cost is no object, then laser scanning is the way to go.
If you’re new to lasers, remember that they are lasers! There are safety classifications that tell you just how much light power you’re dealing with. Class 4 lasers are the most dangerous, but as anyone who has taken (and hopefully never delivered) a laser pointer to the eyes can attest, even a low-power laser can do serious damage.
How to Classify Lasers:
| Class | Description |
| --- | --- |
| Class 1 | Safe under normal conditions; the device is built to shield the eyes from harmful levels of radiation. |
| Class 1M | Safe under all conditions except when viewed through beam-magnifying optics like telescopes or microscopes, at which point the beam can become as dangerous as a Class 2 or 2M beam. This class includes divergent and large-diameter beams. |
| Class 2 | This classification – which includes those annoying laser pointers – covers beams limited to 1 milliwatt. They’re normally safe to look into, since your blink reflex keeps you from staring into the beam for longer than a quarter of a second, but if you insist on staring into the beam without blinking, you can expect to damage your eyes. |
| Class 2M | Similarly, your blink reflex should protect you from a Class 2M laser most of the time. These lasers require the same safety precautions as Class 1M. |
| Class 3R | Considered safe if restricted-beam-viewing precautions are in place. Limited to 5 mW. |
| Class 3B | Blinking won’t protect you if a Class 3B beam shines directly into your eyes, but once it has been reflected by a surface other than a mirror or polished glass or metal, you should be safe. Protective eyewear is a good idea and is generally a prerequisite for operating a Class 3B laser. The hardware must have a key switch and safety interlock built in. |
| Class 4 | These lasers can burn and cut skin, ignite combustible materials, and blind you if you look at them without eye protection. Like Class 3B lasers, Class 4 lasers require a key switch and safety interlock. |
This is just a short selection of commercial laser scanners, and they do get very expensive. There isn’t enough room in this post to go into detail about every existing option, so here is a sampling of what’s available, how much it costs, and what software you can use with each one.
| Name | Technology Overview | Cost |
| --- | --- | --- |
|  | Laser class: 1M (eye-safe) lasers that create parallel scans. Software: ScanStudio, proprietary software that creates textured mesh models you can export in STL, OBJ, VRML, and U3D file formats. You can also buy RapidWorks (USD 2,995), which uses technology derived from Geomagic to turn 3D scans into parametric solid models with SOLIDWORKS Feature Tree output. | USD 2,995, plus an optional USD 2,995 for RapidWorks software |
| Trimble TX8 | Laser class: 1 (safe). Provides a 360° x 317° field of view and captures scans from 120 m with no need to reduce speed. An optional upgrade extends the range to an impressive 340 m. | USD 42,000 |
|  | Laser class: 2M (eye-safe). Scanning area: 225 x 250 mm (8.8 x 9.8 in.) to 275 x 250 mm (10.8 x 9.8 in.). Stand-off distance: 300 mm (11.8 in.). Depth of field: 250 mm (9.8 in.). Recommended part size: 0.1–4 m (0.3–13 ft.). Software: VXelements; output formats include .dae, .fbx, .ma, .obj, .ply, .stl, .txt, .wrl, .x3d, .x3dz, and .zpr. Compatible software includes 3D Systems (Geomagic Solutions), InnovMetric Software (PolyWorks), Dassault (CATIA V5 and SolidWorks), PTC (Pro/ENGINEER), Siemens (NX and Solid Edge), and Autodesk (Inventor, Alias, 3ds Max, Maya, Softimage). | USD 56,900 |
|  | Laser class: 1 (eye-safe). Scans from 885 ft / 270 m at 34% reflectivity, captures up to 1 million points per second, and has a positional accuracy of 3 mm at 50 m and 6 mm at 100 m. Software: Leica Cyclone and Leica CloudWorx plug-in tools for CAD systems process the scan data, and anyone can view it with the free Leica TruView software. | USD 123,915 |
Bottom Line
There is a world of information about 3D scanning, and new applications are being developed every day. Infrared scanning is great as a low-cost entry option for the curious newbie, but in my opinion, photogrammetry provides the most universal opportunity to try 3D scanning, given the sheer ubiquity of cameras. If you weigh photogrammetry against the technical know-how and cost that LIDAR-based 3D scanners demand, photogrammetry seems to be the most balanced technology, and I suspect it will continue to be used in new and interesting ways, especially given the expanding power of cloud computing.
Of course, 3D scanning also connects to emerging technologies like mixed reality and virtual reality. If you look at the Occipital Bridge, for example, or SketchFab, the online 3D modeling community that allows you to upload and view your models in virtual reality, the convergence of 3D technologies is clearly continuing on its warbling and sometimes foggy collision course.
Oh, and of course, there’s artificial intelligence, even if the computer programs that currently masquerade as “true artificial intelligence” might be better called “superficial intelligence”. When AI enters the physical world through robotics and scanning in a way that evolves past what we see today with autonomous cars and the current state of robotics, there will be plenty to keep your eyes on.
In the meantime, keep on scanning.