Autodesk seeks to redefine the boundaries of multiple industries - Interview
A WWI bunker captured with a drone and camera, turned into a 3D model, then modified in Autodesk Revit to design a restaurant.
A 3D model of a skatepark in San Francisco, created using a drone, a GoPro and Autodesk ReCap.
Reality Computing is an emerging category of technology whose expanding boundaries include laser scanning, 3D modeling, 3D printing and augmented reality. It is also a way of describing a broad transformation and convergence of the techniques employed by engineers, architects and designers.
According to Rick Rundell, Technology and Innovation Strategist at Autodesk, we are only at the beginning of a long arc of change in the way we interact with physical and digital data, and in the way we experience reality itself. We recently asked Rick a few questions.
Q: What is Reality Computing?
A: Reality Computing is about capturing physical information digitally, using digital tools to create new knowledge and new designs, and then being able to deliver digital information into the physical world by materializing it through various computer controlled processes or through augmented reality.
We use the term “Reality Computing” to encompass an emerging set of workflows around the data required by 3D printing, as well as the data generated by various kinds of spatial sensing technologies. There has been an explosion of ways that physical information can be captured digitally. For example, hardly a week goes by without a Kickstarter project from somebody figuring out how to add spatial sensing to a mobile device.
Q: What types of data are used in Reality Computing?
A: Reality Computing relies on the kind of data that captures the rich detail and variety of the physical world in an “as-used” representation in the digital environment. We work with a variety of data, such as point clouds, high-density meshes and voxel-based structures, rather than just the descriptive and analytic geometry you commonly see in a computer-aided design tool. The exciting thing here is how this kind of data is connecting the processes of capturing physical information digitally and delivering digital information physically.
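To make the distinction concrete, here is a minimal sketch (in Python, with hypothetical values, not anything from Autodesk’s tools) contrasting an analytic CAD-style description of a wall with the unstructured point-cloud data that Reality Computing starts from:

```python
import numpy as np

# Analytic/descriptive CAD geometry: a wall captured as a handful of
# parameters that carry design intent.
cad_wall = {"origin": (0.0, 0.0, 0.0), "length": 6.0, "height": 3.0, "thickness": 0.2}

# Reality data: the same wall as a scanner might return it -- thousands of
# unlabelled XYZ samples, measurement noise and all.
n_points = 5000
x = np.random.uniform(0.0, 6.0, n_points)          # along the wall
z = np.random.uniform(0.0, 3.0, n_points)          # up the wall
y = np.random.normal(0.0, 0.01, n_points)          # scanner noise off the face
point_cloud = np.column_stack([x, y, z])

print(cad_wall)
print(point_cloud.shape)   # (5000, 3): as-built reality, bumps and lean included
```

The analytic record carries intent in a few parameters; the point cloud carries what is actually there.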
I was just at lunch an hour ago, and met a guy who happened to be sitting at a table next to me. He said, “I just had this amazing experience. I went to the dentist and I needed a crown. He put a scanner in my mouth, scanned the tooth, milled a crown out right there, next to where I was sitting, with this little machine, and popped it right in.”
This guy is a case study in Reality Computing! Sometimes when I am making presentations on Reality Computing, I lay out the theory and someone asks, “Where is this happening?” I say, “It’s happening right in your mouth!” This is a new way of working with a digital version of the physical world that’s really unprecedented in the evolution of CAD tools.
Stiles Corporation used Reality Computing to capture existing conditions during the renovation of a performing arts center. To limit the center’s downtime, the firm used scanned reality data of the facility’s existing mechanical room and the access hallway, combined with a digital design model of the chiller, to perform 4D clash detection and carefully plan the movement of the new unit. Images courtesy of Stiles Corporation.
Q: Have you had an experience where scanned data and simulation software yielded a design solution that would otherwise have gone unseen?
A: An increasingly common application is the use of scan data in reconfiguring factories. Think of a factory as a big machine that’s being configured to produce automobiles. For a long time, people tried to maintain 3D CAD files for planning and checking reconfigurations, but it was really hopeless. Some electrician would come along and put a clamp on a strut to hold a wire, and nobody is going to go back into the CAD files and model that kind of information.
And yet, when you run a new chassis through the paint line, that clamp might be in exactly the wrong place and do tens of thousands of dollars’ worth of damage. You know, if every chassis has a score down the side, there are going to be issues.
Now, a company such as Volvo doesn’t try to maintain a geometric model of its factory. After a project is done, they just scan the factory and maintain a Reality Data as-built record of it. With the tools that are available today, you can treat that Reality Data as a model: you can interact with it, visualize it, and move through it in much the same way as in a CAD modeling environment.
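As a rough illustration of treating the scan itself as the model, here is a hedged Python sketch (hypothetical geometry and numbers, not Volvo’s or Autodesk’s workflow) that flags scan points intruding into the clearance envelope a new chassis would sweep along the line:

```python
import numpy as np

def clash_points(scan_points, envelope_min, envelope_max):
    """Return scan points that fall inside an axis-aligned clearance envelope
    (e.g. the volume a chassis sweeps as it moves along the paint line)."""
    inside = np.all((scan_points >= envelope_min) & (scan_points <= envelope_max), axis=1)
    return scan_points[inside]

# Hypothetical as-built scan: points on the two walls of the paint line,
# plus the clamp an electrician added that never made it into any CAD file.
wall_a = np.column_stack([np.random.uniform(0, 20, 50_000),
                          np.full(50_000, 0.0),
                          np.random.uniform(0, 3, 50_000)])
wall_b = wall_a + [0.0, 4.0, 0.0]
clamp  = np.array([[12.3, 1.05, 1.4]])
scan = np.vstack([wall_a, wall_b, clamp])

# Clearance corridor the new chassis needs: 20 m long, 2 m wide, 2 m tall.
envelope_min = np.array([0.0, 1.0, 0.0])
envelope_max = np.array([20.0, 3.0, 2.0])

hits = clash_points(scan, envelope_min, envelope_max)
print(f"{len(hits)} intruding point(s):", hits)   # flags the clamp before it scores a chassis
```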
Q: Building Information Modeling is closely associated with Autodesk Revit. Is BIM analogous to Reality Computing?
A: Building Information Modeling was created around the idea that instead of having some geometry about a building that described how to build it, you actually had a database that maintained information about the building. That data could be represented geometrically, but because it was data rather than geometry, it could also be presented in various other ways.
It was originally used for construction purposes, to rehearse the building of the building, but it could be used for management purposes as well. For example, it was nice to have a 3D environment in which to measure a building’s occupancy and energy efficiency.
The struggle was that for many years, people talked about using a BIM to manage an existing facility, but it was a pretty laborious task to survey and record all this data. Also, in doing this work, people would lose information about the building. If the subject building was an old timber-framed warehouse where the columns are never straight, the floors are never level, and the walls are always tipping in various directions… all of that is kind of important to know if you’re going to be doing high tolerance construction. But nobody would bother to model that.
I think the industry is changing direction, from modeling the existing condition to capturing the physical world for digital use through spatial sensing. It’s a little bit of a red herring that you need a BIM to manage a building. What you really need is an as-built record. We have examples of builders who regularly scan buildings while they are under construction to create as-built records of systems that will eventually be covered up by drywall, ceiling panels and the like. They accumulate that information into a much more complete record of the construction than they would get by maintaining an as-built model of some kind.
Q: Do any of the different spatial sensing technologies used to capture data for Reality Computing allow you to see through materials?
A: There is a whole range of technologies that do spatial sensing through materials, with greater or lesser resolution. The cleanest example of a reality capture device is a laser scanner. Then you get into things such as GPR (Ground Penetrating Radar), which can distinguish changes in the density of a material, or the interfaces between one material and another, underground. Acoustic sensing of geological formations, and sonar used underwater and underground for various purposes, could also be considered ways of capturing physical information digitally.
Q: It’s not just point cloud data, it could be slice data as well?
A: There are a lot of different formats that this kind of data can take. There are some great examples of people taking slice data and using it to reconstruct anatomical features. There’s the example where doctors used slice data to reproduce a model of an infant’s heart in order to practice doing surgery at a much smaller scale. There is a dental company that 3D printed dental implants and later realized it could also create spinal prosthetics using MRI and CAT scan data.
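One common route from slice data to printable geometry, sketched here under the assumption that the slices are already aligned into a voxel volume and that scikit-image is available, is to extract an isosurface with marching cubes:

```python
import numpy as np
from skimage import measure   # assumes scikit-image is installed

# Hypothetical stand-in for aligned CT/MRI slices: a 64^3 volume
# containing a sphere of "tissue" denser than the background.
grid = np.linspace(-1, 1, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
volume = (x**2 + y**2 + z**2 < 0.5**2).astype(float)

# Marching cubes pulls a triangle mesh out of the voxel data at a chosen
# density threshold -- the same basic step used to turn scan slices into a
# surface that can be cleaned up and 3D printed.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5,
                                                       spacing=(1.0, 1.0, 1.0))
print(f"Extracted mesh: {len(verts)} vertices, {len(faces)} triangles")
```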
Once you realize that you are dealing with a generally applicable process, you can apply that to a variety of things. That’s the advantage of thinking about all of these workflows as Reality Computing. You can start seeing how an industry can evolve around scanning, 3D printing and augmented reality and the digital tools that tie them together.
Q: Why is it necessary to combine all of these technologies into one category?
A: One reason is that you see companies from all different corners and categories of this business starting to work in other areas. 3D printing companies are starting to invest in Reality Capture technology because they realize that this is a great source of three-dimensional data that people might want to 3D print.
There’s a convergence of capture and delivery in Reality Computing. Spatial sensing companies such as laser scanning companies are starting to get into software tools, because the growth of their business is contingent on people being able to do something with the data that their tools capture. Software companies like Autodesk, who realize that the value of the design data that’s created by our tools will be augmented by what people can do with it through 3D printing, are starting to get into 3D printing.
Everybody is starting to get into everybody else’s businesses, and what this suggests to me is that there is a bigger idea here about how value is delivered across all of these categories. It’s important for the industry and for our customers to really understand this concept in order for them to plan their businesses strategically. That’s why I think this is a big idea that makes a lot of sense.
Q: What is your main focus with Reality Computing right now?
A: Our main focus is equipping our design tools to make Reality Data a full citizen of the design data type family, if you will. In other words, we want Reality Data to be usable in the same way that geometry is used to model a wall or an object. For example, in AutoCAD today, you can bring in a scan of a facility, and it will recognize the vertical surfaces that are most likely to be walls and let you treat them as walls in your model. You don’t need to build AutoCAD geometry on top of the point cloud to get it to behave well; it just starts behaving well automatically.
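The underlying idea of recognizing likely walls can be sketched simply. This hedged Python example (not Autodesk’s implementation; the patches and tolerances are hypothetical) fits a plane to a patch of scan points and classifies the patch as wall-like when that plane is nearly vertical:

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal for a patch of 3D points (via SVD)."""
    centered = points - points.mean(axis=0)
    # The singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def looks_like_wall(points, max_tilt_deg=5.0):
    """A patch is 'wall-like' if its best-fit plane is nearly vertical,
    i.e. the plane normal is nearly horizontal (small z component)."""
    n = plane_normal(points)
    tilt = np.degrees(np.arcsin(abs(n[2]) / np.linalg.norm(n)))
    return tilt <= max_tilt_deg

# Hypothetical scan patches: one on a (slightly noisy) wall, one on a floor.
wall_patch  = np.column_stack([np.random.uniform(0, 4, 2000),
                               np.random.normal(0, 0.01, 2000),
                               np.random.uniform(0, 3, 2000)])
floor_patch = np.column_stack([np.random.uniform(0, 4, 2000),
                               np.random.uniform(0, 4, 2000),
                               np.random.normal(0, 0.01, 2000)])

print(looks_like_wall(wall_patch))    # True  -- treat it as a wall
print(looks_like_wall(floor_patch))   # False -- probably a floor or ceiling
```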
Q: Can you create something from a library of data points of scanned objects or buildings?
A: You would not create the data that way; you would use conventional CAD geometry modeling tools. But if you were renovating a building, for example, you could bring in a scan of the building, remove the stuff you want to demolish and keep the stuff you want to keep. Then you could start adding new things you’ve created with a CAD tool.
Today, a lot of people use Reality Data as something to model on top of in order to produce an as-built model. We are trying to eliminate that step entirely. One exciting use is when you have a physical example of something that you want to include in a model: you can scan it and bring it into your design environment to use very directly, without having to abstract it into a model.
In construction, we have examples where people doing renovations need to remove a large piece of equipment from a building, and they’re not even sure whether it will fit through the doors, or whether there is even a path to get it off the premises. They capture it with a scan and use virtual tools to see if the object can actually be removed.
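A coarse first pass at that kind of fit check might look like the following Python sketch (hypothetical dimensions; real tools plan a full removal path with rotations and obstructions), which tests whether the scanned object’s bounding box can pass through a door opening in any orientation:

```python
import numpy as np
from itertools import permutations

def fits_through_opening(equipment_points, opening_width, opening_height):
    """Coarse check: can the scanned object's bounding box pass through a
    rectangular opening in some orientation? (This only tests the box
    against the doorway, not a full removal path.)"""
    dims = np.ptp(equipment_points, axis=0)    # bounding-box extents (dx, dy, dz)
    for a, b, _ in permutations(dims):         # try each pair of faces against the door
        if a <= opening_width and b <= opening_height:
            return True
    return False

# Hypothetical scan of a chiller: roughly 2.4 m x 1.1 m x 1.9 m.
chiller = np.random.uniform(low=[0, 0, 0], high=[2.4, 1.1, 1.9], size=(50_000, 3))

print(fits_through_opening(chiller, opening_width=0.9, opening_height=2.1))  # False: won't fit
print(fits_through_opening(chiller, opening_width=1.2, opening_height=2.1))  # True: clears sideways
```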
It’s a very exciting time to be thinking about this kind of stuff. The possibilities are just emerging. In a few years, people will just expect this kind of data to be available to them if they are going to be working with it. The example I use now is voice recognition. For decades, it was PhD level research and entire academic careers were built around it. And now, you’re just pissed off if it doesn’t work very well.
To learn more about Reality Computing go to: http://realitycomputing.typepad.com/
About Andrew Wheeler – A technology enthusiast with a penchant for 3D printing, Andrew graduated from SUNY Empire State College with a degree in Information Technology. He studied Creative Writing at New York University, and is working on a documentary and accompanying publication about 3D printing.