Stanford researchers develop technique to help autonomous cars see hidden objects around corners.
Developers of self-driving cars may be chasing advances in laser hardware, but when it comes to seeing around corners, a clever algorithm is putting current laser-mapping techniques to shame.

Light Detection and Ranging technology, or LiDAR, measures the return times of laser pulses bouncing off objects in the environment to build a 3D map of a self-driving car’s surroundings. The primary problem autonomous vehicle engineers face is determining which of the millions of generated data points to use. Until now, LiDAR systems such as those used by Google have analyzed only the photons that bounce directly off an object to map the area around a vehicle.
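The core time-of-flight calculation behind LiDAR is simple: a pulse travels out to an object and back, so the one-way distance is half the round trip at the speed of light. A minimal illustrative sketch (not any vendor's actual code; the function name and example timing are our own):

```python
# Speed of light in meters per second.
C = 299_792_458.0

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Distance to a directly hit object: the pulse travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds hit something about 10 m away.
print(round(distance_from_return_time(66.7e-9), 1))  # → 10.0
```

A real LiDAR unit repeats this calculation millions of times per second across many beam angles, which is where the flood of data points mentioned above comes from.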
What about the scattered photons that hit objects out of the sensor’s line of sight and bounce off several other surfaces on their return trip? Previous attempts to sort signal from noise in these situations required hours of computational time, but a new algorithm, called Light Cone Transform (LCT) Reconstruction, reduces the required computation enough to run on the simple, reliable onboard computers a truly autonomous car must rely on.

LCT Reconstruction takes non-line-of-sight data that would normally be resolved by multiplying two staggeringly large matrices together and transforms it into an equation that can be solved efficiently regardless of the size of the starting matrices. This means that the shape of an object can be determined from the arrangement of millions of reflected photons within a matter of seconds.
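The speedup comes from a general principle: when a measurement is a shift-invariant blur of the hidden scene, the Fourier transform diagonalizes the problem, so one element-wise division replaces a massive matrix inversion. A hypothetical 1-D analogue of that idea (this is our own toy sketch with a made-up blur kernel, not the Stanford team's code):

```python
import numpy as np

N = 256
kernel = 0.9 ** np.arange(N)        # stand-in impulse response (blur)
scene = np.zeros(N)
scene[[40, 100, 180]] = 1.0         # hidden "objects" as spikes

# Forward model: the measurement is a circular convolution of the
# hidden scene with the kernel, computed via the FFT.
measurement = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(scene)))

# Inverse: undo the blur with one element-wise division in frequency
# space, costing O(N log N) instead of a dense O(N^3) matrix solve.
recovered = np.real(np.fft.ifft(np.fft.fft(measurement) / np.fft.fft(kernel)))

print(np.allclose(recovered, scene))  # → True: the scene is recovered
```

The actual reconstruction works on 3-D time-resolved data and must cope with noise, but the same diagonalization trick is why seconds of computation can stand in for hours.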
A self-driving car using this technology would be able to sense not only something such as a child playing on the side of the road, but also a child ducking behind a bush to retrieve a lost ball before dashing heedlessly across the street to rejoin a soccer game. LCT Reconstruction and LiDAR together could tell an autonomous vehicle to slow down in a potentially risky situation, just as a human driver would.

Researchers in the Stanford Computational Imaging Lab believe that their algorithm can be integrated into current LiDAR systems on Google self-driving cars. However, they say that before the system can be widely applied, they will need to increase its accuracy in detecting nonreflective objects. As it is, researchers are confident that the LCT Reconstruction algorithm will allow LiDAR systems to recognize street signs and reflective vests from around a corner.
For more on autonomous vehicle technologies, check out the following:
The Road to Driverless Cars – 1925 – 2025
What Tech Will It Take to Put Self-Driving Cars on the Road?