Kelzal Looks to Tackle Visual Sensor Market with Event-Based Vision
Mitchell Gracie posted on March 04, 2019 |
Company secures $3M in seed funding to expand its neural-network based product line.
Utilizing neural networks to control the activation and resolution of its sensors’ pixels has allowed Kelzal to offer a 20X increase in efficiency without sacrificing speed, accuracy, security or size. (Image courtesy of Kelzal.)

Kelzal has announced $3 million in seed financing from Motus Ventures to fund the commercialization of its new portfolio of ultra-fast, low-power event-based sensors and appliances.

The San Diego-based start-up focuses on developing third-generation neural networks for visual sensor technology. Its new product line of visual sensors, unveiled with the seed funding announcement, includes two new breakthrough appliances.

The first, the Ultra-FAST Perception Appliance, prioritizes speed. With object recognition and classification quick enough to track a speeding bullet, this visual sensor is prepared to be integrated into the world of autonomous things, such as self-driving cars or robots on manufacturing floors.

The second product announced by Kelzal—its Ultra-Low Power Perception Appliance—runs on a single battery without drastically sacrificing accuracy or speed. Ideal implementations include surveillance and retail analysis.

“Image-based sensors usually have a technology where the pixels are larger than in a conventional camera. That has basically limited the number of pixels the sensor can utilize,” explained Dr. Olivier Coenen—CTO, founder and interim CEO of Kelzal. “The technology we have is different: it has possibly among the smallest pixel sizes of any event-based vision sensor that we know is out there. This is not from the electronic process, but the design of the pixel. In the future, we know that we can have a sensor with very high resolution in a compact form factor. In terms of long-term miniaturization, we hope to approach conventional visual technologies but with less of a power burden.”

The difficulties of frame-based vision

Under the double punch of technical convergence and sensor miniaturization, demand for sensors in security, autonomous vehicles, and retail and property surveillance is growing faster every day.

Some of the largest roadblocks to implementing better sensors are the burdens of balancing how data is captured, processed and stored, and how much power those actions require. Central and graphics processing units can demand a lot from power sources, spend crucial nano- and milliseconds in response time, and increase costs, while also adding weight and taking up space.

With the market’s priorities set on miniaturization, focusing on frame-based vision in sensors slows progress.

Cue Kelzal’s shift from frame-based vision toward event-based vision.

“Accuracy and response times are critical for the AI solutions that are redefining businesses in these industries,” said Jim DiSanto, executive chairman of Kelzal and managing partner of Motus Ventures. “For those applications requiring fast, accurate, and energy efficient visual perception, Kelzal’s Perception Appliances will enable true edge intelligence without data center compute power.”

The new appliances in its product line are built around Kelzal’s core technology: event-based visual sensors. The sensors, leveraging third-generation neural-network technologies, consume less power, transmit less data, and achieve frame rates more than 100X those of consumer-grade cameras.

“Scenes with a lot of similar movement or difficult lighting, and the ability to discern between objects are common hurdles. These challenges are holding back advancements in entire industries,” said Coenen.

“An example for automotive is if you are driving on the street and the asphalt is relatively uniform, the sensor will report less often that there are changes. Further, if the sky is uniformly blue, then there is no change reported,” he continued. “Therefore, the areas on the sensor that would usually be reported are held back, and thus our processor will either not expend the energy to process that data or it will capture the scene in less resolution, necessitating less power input into the device.”

According to Coenen, each pixel in the sensor is smart and responds to changes in visual data. As a scene progresses without changes, not all pixels remain continuously active, and thus they do not send data to be processed. This severely reduces the power needs of the sensor.
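Kelzal has not published its pixel design, but the general principle Coenen describes—pixels that emit events only when their intensity changes, so a static scene generates no data—can be sketched in a few lines. The function and threshold below are illustrative assumptions, not Kelzal's implementation:

```python
import numpy as np

def generate_events(prev_frame, curr_frame, threshold=0.1):
    """Emit events only where log-intensity change exceeds a threshold;
    unchanged pixels stay silent and send nothing downstream."""
    # Event cameras typically respond to relative (log) brightness change.
    delta = np.log1p(curr_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    on_events = np.argwhere(delta > threshold)    # brightness increased
    off_events = np.argwhere(delta < -threshold)  # brightness decreased
    return on_events, off_events

# A static scene produces no events at all.
static = np.full((4, 4), 128, dtype=np.uint8)
on, off = generate_events(static, static)

# Changing a single pixel produces a single event at that location.
moved = static.copy()
moved[2, 3] = 255
on, off = generate_events(static, moved)
```

The downstream processor then only touches the event list, whose size scales with scene activity rather than with sensor resolution—which is why power draw collapses when the scene is still.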

In contrast to frame-based sensors, Coenen asserts that Kelzal’s sensors respond faster, with lower latency, while also providing rich temporal data. The result is better recognition of motion and faster processing.

“In addition to that, we can modulate the number of pixels at any moment in time, and so we can monitor background information with low resolution and increase the resolution on demand,” he said in a phone interview. “This reduces the amount of time that high resolution is necessary for precise recognition. It is a completely new way, a more intelligent way, to capture and process visual information.”
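The modulation Coenen describes—coarse background monitoring with a full-resolution window opened only where needed—can be illustrated with a simple software analogue. The pooling factor and region-of-interest interface here are assumptions for the sketch; in Kelzal's appliances this selection happens at the sensor itself:

```python
import numpy as np

def adaptive_capture(frame, roi=None, downsample=4):
    """Return a low-resolution view of the whole scene, plus an optional
    full-resolution crop for a region of interest (illustrative sketch)."""
    h, w = frame.shape
    # Low-resolution background: average-pool blocks of `downsample` pixels.
    lo = frame[:h - h % downsample, :w - w % downsample]
    lo = lo.reshape(h // downsample, downsample,
                    w // downsample, downsample).mean(axis=(1, 3))
    if roi is None:
        return lo, None
    top, left, height, width = roi
    return lo, frame[top:top + height, left:left + width]

frame = np.arange(64, dtype=float).reshape(8, 8)
background, detail = adaptive_capture(frame)                    # cheap monitoring
background, detail = adaptive_capture(frame, roi=(2, 2, 4, 4))  # detail on demand
```

Only the small crop is processed at full resolution, which is what limits the time the system spends in its expensive high-resolution mode.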

However, the devices aren’t a panacea for every problem facing the efficiency of visual sensors. While Kelzal’s neural network places downward pressure on the number of pixels active at any given moment, this is a double-edged sword. If the scene is in a state of continuous change, the neural network will constantly be activating any necessary pixels.

“For example,” elaborated Coenen, “in a scene where there are no changes at all, our appliance will use no-to-little energy, just the energy necessary to keep the system live. Frame-based systems do not have this benefit. On the other hand, if there is a lot of change, the system will draw much more power as the number of active pixels capturing change that require processing increases.”

This trade-off makes the company’s event-based visual sensors ideal for capturing scenes where changes are minimized, expected, or restricted to specific areas within the sensor’s field of view.

Moreover, neither security nor privacy is sacrificed.

“We strongly believe that processing at the edge has stronger privacy and security. The reason is twofold. One is the data itself. We do not generate images. If someone were to hack into our system, there is no image present on the device. Without specific expertise, an eavesdropper will only see zeroes and ones. Second, we do not transmit raw data, only the results of the perception by the neural network. As an extra barrier, processed data that is transmitted by our appliances will be encrypted to the wants and needs of the customer.”

Luckily, Kelzal is up to the challenge of optimizing both its hardware and software to avoid wasteful, unnecessary data captures. The seed funding will help it improve its appliances, study edge cases and find new applications for its sensors. In the future, Coenen hopes, its product line will include new neural network chips that lift some of the burden of power-hungry GPUs from devices beyond visual sensors.
