Body Labs Launches a Human-Aware AI Program

SOMA is a platform designed to leverage Body Labs' database of 2D and 3D human shapes to place users into augmented reality.

Body Labs built its technology to convert scan data from sensors. Laser scanners, photogrammetry, and RGB-D sensors all provided high-quality inputs, but consumer adoption of that hardware was slow. This inspired the company to lower the barriers to entry into the body-capture and AR fields by modifying its research models. Accepting data from any camera allowed convolutional neural networks to predict 2D joint locations and 3D joint rotations. Fitting the company's Skinned Multi-Person Linear model (SMPL) onto the predicted poses and shapes let the company find new ways to augment reality.
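
The fitting step described above can be sketched in miniature. SMPL represents a body as a template mesh deformed by a linear blend-shape basis; fitting means solving for the shape coefficients that best explain observed geometry. The dimensions and data below are toy stand-ins, not Body Labs' actual model or pipeline:

```python
import numpy as np

# Toy stand-in for SMPL's shape space: vertices = template + shape_dirs @ betas.
# Names and sizes are illustrative; real SMPL uses 6890 vertices and 10 betas.
rng = np.random.default_rng(0)
n_verts, n_betas = 20, 4
template = rng.normal(size=(n_verts, 3))
shape_dirs = rng.normal(size=(n_verts * 3, n_betas))

def shaped_vertices(betas):
    """Linear shape model: deform the template by the blend-shape basis."""
    return template + (shape_dirs @ betas).reshape(n_verts, 3)

# "Fitting" step: recover the betas that best explain observed geometry,
# standing in for the shapes a network would predict from camera input.
true_betas = np.array([0.5, -1.0, 0.2, 0.8])
observed = shaped_vertices(true_betas)
residual = (observed - template).reshape(-1)
fit_betas, *_ = np.linalg.lstsq(shape_dirs, residual, rcond=None)

print(np.allclose(fit_betas, true_betas))  # least squares recovers the shape
```

The real system solves a far harder nonlinear problem (pose, shape, and camera jointly, from pixels), but the core idea is the same: a low-dimensional parametric body model fit to per-frame predictions.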

Earlier this month Body Labs released SOMA, its human-aware platform for augmented reality. Thousands of human scans, images, body textures, and demographic records make up the shape and motion database at the heart of SOMA. The technology can detect 3D motion from user-generated videos, predict 3D shapes from pictures, and give hardware or software the ability to understand body gestures without voice prompts. Mobility applications can also use it to detect pedestrians and predict their behavior.
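
Gesture understanding follows naturally from the motion capability: once joint rotations are recovered per frame, a gesture becomes a pattern in that time series. Here is a hypothetical sketch (function name and thresholds invented, not part of SOMA) that reads a "wave" as repeated direction changes in one elbow angle:

```python
import numpy as np

def looks_like_wave(elbow_angles, min_direction_changes=4):
    """Count sign flips in the frame-to-frame angle deltas; a waving arm
    reverses direction repeatedly, a still arm does not."""
    deltas = np.diff(elbow_angles)
    signs = np.sign(deltas[deltas != 0])
    changes = np.count_nonzero(np.diff(signs))
    return changes >= min_direction_changes

t = np.linspace(0, 4 * np.pi, 60)
waving = 0.5 * np.sin(t)   # oscillating elbow angle over 60 frames
still = np.full(60, 0.1)   # arm held steady
print(looks_like_wave(waving), looks_like_wave(still))  # True False
```

A production system would classify far richer pose sequences, but the point stands: with reliable 3D joints per frame, gestures reduce to signal processing on angles rather than raw pixels.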

Jon Cilley, VP of Marketing and Chief Evangelist at Body Labs, answered a few questions for us about SOMA and its development. He said that current augmented reality enhances and modifies a captured environment, or provides additional context within it. Body Labs felt that people were the most important part of the AR experience, but existing technology was struggling to make use of the human factor. Once the structural components were in place, the critical design decision was where to target the technology transfer. The gaming, apparel, and AR industries were chosen because the team felt immediate applications already existed there.

The Mosh app is already using this technology for a personal experience. Unlike Snapchat's rainbow-puking face recognition, Mosh takes a full-body scan and lets the user add butterflies, spacesuits, or sparkling flowers. SOMA is fascinating both as a new technology and for the possibilities we haven't even considered yet. Videos on the website show users transforming from human to video game buff and using a fireball to knock over opponents. Shoppers can instantly try on several outfits based on a single user picture. The company's research page shows several studies covering datasets, anthropometry, pose recognition, and segmentation. I can easily see this technology being applied to ergonomic or biometric studies in a manufacturing setting, or used to develop new sports and fitness equipment.