Meet the AI That Turns a Body into a Digital Platform
Michael Molitch-Hou posted on October 14, 2016

However achingly slow, the bright technological future painted by science fiction is beginning to emerge. Autonomous cars have begun to hit the roads, and friendly artificial intelligence (AI) could represent humanity's first contact with alien life. Not too long from now, we may be able to have our clothes or even prosthetics custom-fitted through the use of 3D scanning and printing.

One company working to make this last premise a reality is Manhattan-based Body Labs, one of the few firms developing the technology for digitally and accurately representing the human form. Whether it be for designing personally-tailored clothing or realistic virtual reality avatars, Body Labs uses AI and machine learning to “collect, digitize and organize all of the data and information related to human body shape, pose and motion.”

The firm's most recent announcement is for a product called “Red,” which integrates 3D scanning hardware with Body Labs' AI software to optimize the technology for such applications as custom retail, medicine, gaming and animation, and research and development.

The possibilities of the human body as a digital platform are fascinating, but just what that concept entails could use some elucidation. For that reason, ENGINEERING.com spoke with Jon Cilley, director of marketing at Body Labs, to learn more.

An AI for the Human Body

Body Labs was born out of the work performed by Michael Black at Brown University and the Max Planck Institute, where he began developing a machine-learning algorithm for understanding the human body. Specifically, the technology was first created to help solve a crime, according to Cilley.

“When [Black] was teaching a computer vision course at Brown University, law enforcement came to him and his class to see if they could figure out the size and the shape of a person who was suspected of robbing a convenience store,” Cilley said.

Based on security camera footage of the suspect, law enforcement wished to approximate the height, stature and shape of the individual. To gather this data, Black trained machine-learning algorithms to study thousands of 3D scans of real living people in a variety of poses and create a statistical model of human body shape.

The research evolved over the course of 10 years until Black co-founded Body Labs, where he still serves as a science advisor and board member. Body Labs builds on this initial research on machine learning and the human body to commercialize the technology for a variety of uses.

Now, with a high-end laser 3D scanner, a consumer-grade depth sensor or even a measuring tape, it's possible to rely on Body Labs' software to generate a 3D model of customers or individuals.

The Blue and Red APIs

Before Body Labs' newly announced Red application programming interface (API), there was the Blue API, which essentially allows anyone without a 3D scanner to produce an approximate 3D model of themselves. The way it works is quite intriguing.

Body Labs' software begins with a watertight template mesh of the human body. This generic model serves as the basis for all new models that may be created with Body Labs technology. In the case of the Blue API, a user can enter six specific body measurements (height, weight, waist, inseam, chest and hip) and the Body Labs AI will use its increasing knowledge of the human form to fill in all of the missing data and generate an approximate body shape.
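The idea of filling in a full body from six numbers can be illustrated with a toy statistical shape model. This is a hedged sketch, not Body Labs' actual model: it assumes a mean mesh plus a learned linear shape basis (as produced, say, by PCA over many scans) and a learned mapping from measurements to shape coefficients, all of which are random placeholders here.

```python
import numpy as np

# Toy statistical body model (illustrative only, not Body Labs' model):
# any body = mean mesh + weighted sum of learned "shape directions".
rng = np.random.default_rng(0)

n_vertices = 500                                   # real templates have far more
mean_mesh = rng.normal(size=(n_vertices, 3))       # average human shape
shape_dirs = rng.normal(size=(10, n_vertices, 3))  # learned shape basis (e.g., PCA)

# A learned linear map from 6 measurements to 10 shape coefficients.
# In practice this would be fit to real scan data; here it is random.
measurement_to_coeffs = rng.normal(size=(10, 6)) * 0.01

def predict_body(measurements):
    """Fill in a complete 3D body from six numbers:
    height, weight, waist, inseam, chest, hip."""
    coeffs = measurement_to_coeffs @ np.asarray(measurements, dtype=float)
    return mean_mesh + np.tensordot(coeffs, shape_dirs, axes=1)

body = predict_body([178, 75, 82, 81, 98, 96])
print(body.shape)  # (500, 3): a full mesh from only six measurements
```

The key point is that the six measurements never describe the whole surface; the statistical model learned from thousands of scans supplies everything the inputs leave out.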

An image depicting differences in sizes of garments based on brand. (Image courtesy of Body Labs.)

In the 3D printing world, this API was famously used by design firm Nervous System, which designed a fully 3D-printed dress that is now on display at the Museum of Modern Art. Nervous System went on to create an app relying on the Blue API that would allow potentially anyone online to order a custom-fitted, 3D-printed garment, simply by entering their measurements.

The Red API takes this process a step further by replacing those body measurements with an actual 3D scan (or 4D, in the case of temporal 3D scans) of the human body. A high-end laser scanner or a consumer-grade depth sensor can capture a 3D model of an individual, but Body Labs’ Red API adds some important value to those scans.

An imperfect scan. (Image courtesy of Body Labs.)

First of all, Body Labs' software can clean up and repair a 3D mesh in ways that may improve on other existing techniques. “When you take an initial scan,” Cilley pointed out, “the data just exists as points in X-Y-Z space, so making the model watertight is important so that computers and software can understand that it’s a 3D object. Also, when scanning companies use things like hole fill-in techniques to make a scan watertight, what they tend to do is essentially connect the dots between points, but, with this method, you get these really lumpy models that aren’t true to life like the way that you would actually look in reality.”

Rather than fill in the missing data by connecting the dots, Body Labs' software relies on its initial mesh template and the AI’s knowledge of the human body to repair the mesh in a way that more accurately reflects the human form. Cilley described the process of applying Body Labs' software to a 3D scan as a sort of alignment. The template mesh is morphed and adjusted based on the AI's database of 3D scans until it resembles the actual 3D scan. With machine learning, the AI then has an even more refined understanding of the vast spectrum of human bodies populating the planet.

Interestingly, by morphing and adjusting an existing template to match the 3D scan, Body Labs is creating a standardized model for digital representations of the human body. As Cilley explained, “Every person that is recreated into a digital avatar, whether with measurements or a laser scanner or depth sensors, everyone starts from the same template mesh. The value of doing that is we deliver something that we refer to as ‘point-to-point correspondence.’”

Regardless of how different two 3D avatars may be, the point data will be represented in identical locations. As Cilley elaborated, “If we both scanned ourselves, your right shoulder might be point 3000 in your mesh, and when I create my avatar, point 3000 is also my right shoulder on my mesh.”

This brings a new level of standardization across applications, ultimately simplifying the software for 3D printing, virtual reality or apparel design. For instance, if a clothing brand wants to get a better understanding of the measurements of its customers from a macro perspective, this standardization makes it easier to, say, average the sizes of customers who purchase “small” t-shirts. This average can then be used to tailor future “small” t-shirts more effectively.
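The value of point-to-point correspondence is easy to see in code. This sketch uses hypothetical random data, but the mechanics are as described: because every avatar derives from the same template, the same vertex index is the same body landmark on every mesh, so aggregate statistics reduce to simple array operations.

```python
import numpy as np

# Hypothetical data: meshes of 50 customers who bought "small" t-shirts,
# all in point-to-point correspondence -- row i of every mesh is the
# same anatomical point (vertex 3000 is everyone's right shoulder).
rng = np.random.default_rng(2)
n_vertices = 5000
small_customers = rng.normal(size=(50, n_vertices, 3))

# Averaging body shape is now a single vertex-wise mean -- impossible
# if each scan had its own arbitrary point ordering.
average_small_body = small_customers.mean(axis=0)
print(average_small_body.shape)  # (5000, 3)

# The same index picks out the same landmark on every customer:
right_shoulders = small_customers[:, 3000, :]  # shape (50, 3)
```

The averaged mesh could then inform the next revision of the "small" pattern, exactly the macro-level use case described above.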

Applications of Body AI

A recent application of the Red API in action took place at New York Fashion Week during the SS17 Hyperwave collection runway show of clothing studio Chromat. Chromat, which has leveraged 3D scanning in the past, used a Structure Sensor, an affordable, iPad-compatible depth sensor from Occipital, to digitize dancer Mela Murder.

Thanks to the Red API, the resulting model was free from noise and watertight before Chromat's clothing was digitally designed and applied to Murder in various poses. From a clothing production standpoint, this made it possible to create clothes for the dancer without physically measuring her; the model was also 3D-printed live on Formlabs 3D printers during the runway show while artists Pussykrew projected an animation made from the same 3D data.

Another user of Body Labs' software is the U.S. Army, which is taking advantage of the point-to-point correspondence the company's technology makes possible. Cilley explained that the Army is using the software to design flak jackets for female soldiers in combat, as previous flak jackets have not been optimized for the female form. Body Labs is also a listed Under Armour Lighthouse Development Partner.

While Body Labs is currently focused on clothing, the applications extend far beyond that space. Cilley pointed to custom prosthetics and insoles as two areas where Body Labs' software can be used. The exact implementations, however, will partially be determined by who chooses to purchase the firm's API packages.

The Future of the Human Body

The first major 3D scanning company to implement Body Labs' technology is 3dMD, which offers an extensive range of 3D and 4D scanners for a wide variety of purposes. With Body Labs' Red API integrated into its products, 3dMD's customers will be able to have scans processed directly by Body Labs' AI to take advantage of the aforementioned features.

As Body Labs continues its research, partially fueled by an $11 million funding round led by Intel Capital last year, the startup will be able to expand the possibilities of its software. For instance, through its work with Intel, Body Labs has been developing applications for Intel's RealSense depth-sensing camera, which is built into a number of tablets and computers.

While depth-sensing consumer devices, such as Lenovo's Phab 2 Pro, are beginning to enter the market, Cilley said that Body Labs wants to reach even more consumers, including the vast majority who don't have phones with built-in depth sensors. For this reason, the startup is also exploring methods for extrapolating 3D models from 2D images.

Because Body Labs' machine-learning AI and datasets of human bodies already give its software an advantage in capturing the human form, photogrammetric techniques for creating 3D models of people may work better with Body Labs' technology than with similar programs, such as Autodesk 123D Catch.

Moreover, if Body Labs' approach to capturing 3D data were expanded from humans to objects such as furniture, it's not impossible to imagine a Furniture Labs-style software for recreating chairs, desks and sofas. Cilley made no mention of a Furniture Labs API, but he did say that 2D-to-3D capture is in development and that we may see something as early as next year.

When we will be able to 3D scan ourselves with our smartphones and order a piece of clothing, on the other hand, is partially up to the developers.
