Qualcomm Innovation Fellowship Winners Get Funding to Push Machine Learning Boundaries

Winners receive $40,000 each to pursue their research in artificial intelligence and cybersecurity.

Qualcomm Technologies, one of the world’s leading chip manufacturers, recently announced four fellowship awards to Europe-based researchers as part of the company’s Qualcomm Innovation Fellowship (QIF) program. The fellowships were granted to innovative PhD engineering students working in artificial intelligence and cybersecurity.

The winners will receive prizes of $40,000 each, access to dedicated Qualcomm mentors, and an opportunity to present their findings to industry leaders at the company’s headquarters in San Diego.

“We are proud to see the next generation of inventors bring their work to fruition,” said Jim Thompson, chief technology officer at Qualcomm. “The machine learning and cybersecurity solutions they propose can have a significant positive impact on society. Through the QIF program we aim to break down some of the barriers that researchers encounter by providing funding and fostering collaboration.”

In this article we’ll take a closer look at two of the winners and their innovative projects.

Pushing the Limits of Continuous Kernel Convolutional Neural Networks

David Romero of the Vrije Universiteit Amsterdam, supervised by Jakub Tomczak and Mark Hoogendoorn, was selected for his proposal: “Continuous Kernel Convolution For ML.”

One of the core questions machine learning researchers are trying to solve is how to efficiently model long-term dependencies, which are frequently found in high-dimensional data such as audio and video signals.

Standard convolutional neural networks (CNNs) are known for their strong performance on image, speech and audio inputs. However, they have a fixed receptive field, or memory horizon, which restricts their ability to model long-term dependencies: a CNN cannot handle sequences of unknown length without having its memory horizon defined ahead of time.

To address this limitation, Romero proposes the use of continuous convolutional kernels in CNNs. Instead of storing the kernel as a fixed set of weights, a small multilayer perceptron (MLP), a type of feedforward artificial neural network, parametrizes the continuous kernel: the MLP takes relative positions as input and outputs the kernel weights at those positions. Romero aims to explore whether continuous kernel CNNs can be used in applications with significant long-term interactions, including high-resolution autoregressive generative modeling and large-scale image and video processing.
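To make the idea concrete, here is a minimal sketch of a continuous kernel in PyTorch (my own illustrative code, not Romero’s implementation; the published CKConv work uses sine nonlinearities rather than the plain ReLU MLP shown here):

```python
import torch
import torch.nn as nn

class ContinuousKernel(nn.Module):
    """Parametrizes a 1D conv kernel as an MLP over relative positions."""
    def __init__(self, in_channels: int, out_channels: int, hidden: int = 32):
        super().__init__()
        self.out_channels, self.in_channels = out_channels, in_channels
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),   # input: one scalar relative position
            nn.ReLU(),
            nn.Linear(hidden, out_channels * in_channels),
        )

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (kernel_size,) relative positions, e.g. in [-1, 1]
        w = self.mlp(positions.unsqueeze(-1))            # (kernel_size, out*in)
        w = w.view(-1, self.out_channels, self.in_channels)
        return w.permute(1, 2, 0)                        # (out, in, kernel_size)

# The kernel size is chosen at call time; here it spans the whole input,
# giving the convolution a memory horizon as large as the sequence itself.
x = torch.randn(8, 3, 100)                               # (batch, channels, length)
kernel_gen = ContinuousKernel(in_channels=3, out_channels=16)
positions = torch.linspace(-1.0, 1.0, steps=x.shape[-1])
kernel = kernel_gen(positions)                           # (16, 3, 100)
y = nn.functional.conv1d(x, kernel, padding=x.shape[-1] - 1)
```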

“We show that Continuous Kernel Convolutional Networks (CKCNNs) obtain state-of-the-art results in multiple datasets … and, thanks to their continuous nature, are able to handle non-uniformly sampled datasets and irregularly-sampled data natively,” said Romero in his proposal. CKCNNs can also be trained in parallel and easily tackle data sampled at different rates.
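Continuing the sketch above, handling irregular sampling is simply a matter of evaluating the same kernel MLP at whatever positions were actually observed (again an illustration under my assumptions, not the authors’ code):

```python
# Irregularly sampled sequence: evaluate the same kernel generator at the
# observed (non-uniform) timestamps instead of a fixed uniform grid.
timestamps = torch.sort(torch.rand(50)).values   # 50 irregular sample times
positions = 2.0 * timestamps - 1.0               # rescale to [-1, 1]
kernel = kernel_gen(positions)                   # (16, 3, 50), no retraining
```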

Romero’s proposed continuous kernel can consider arbitrarily large memory horizons within a single operation, does not rely on recurrence, and sidesteps other limitations of standard CNNs. As a result, CKCNNs achieve excellent results across a wide range of tasks, from stress tests to real applications with continuous and discrete data.

Midgard: An Intermediary Between Virtual and Physical Address Spaces

Siddharth Gupta of EPFL (École Polytechnique Fédérale de Lausanne), supervised by Abhishek Bhattacharjee and Babak Falsafi, was chosen for his proposal: “Rebooting Virtual Memory with Midgard.”

As online services have proliferated at breakneck speed, the most popular ones generate massive amounts of data from their extensive user bases and are often housed in servers with terabytes of memory capacity.

However, these data centers run into a problem: working sets outgrow the reach of the translation lookaside buffers (TLBs), forcing memory management units (MMUs) to walk ever-deeper page tables. The result is significant address-translation latency right next to the CPU cores.
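To see why deeper page tables hurt, consider how a conventional MMU resolves a TLB miss. The sketch below uses standard x86-64-style constants (4 KB pages, 4-level radix tables) purely for illustration; it describes the baseline problem, not Midgard:

```python
# Simplified radix page walk: each level consumes 9 bits of the virtual
# address, so one TLB miss costs up to LEVELS dependent memory accesses
# before the actual data access can even begin.
PAGE_SHIFT = 12       # 4 KB pages
BITS_PER_LEVEL = 9    # 512-entry tables
LEVELS = 4            # grows to 5 once address spaces outgrow 256 TB

def walk_indices(vaddr: int) -> list[int]:
    """Return the table index consulted at each level of the walk."""
    return [
        (vaddr >> (PAGE_SHIFT + BITS_PER_LEVEL * level)) & 0x1FF
        for level in reversed(range(LEVELS))
    ]

print(walk_indices(0x7F12_3456_7000))   # four lookups for one translation
```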

Gupta’s proposal addresses this issue by introducing an extra stage of address translation between the virtual and physical address spaces, named after Midgard, the intermediate realm of Norse mythology.

This intermediate stage allows coarse-grain translation, combined with access control, near the cores, and fine-grain translation to support fragmentation near the memories. The Midgard address space thus becomes the namespace for all data in the coherence domain and cache hierarchy. And because real-world workloads need far fewer virtual memory areas (VMAs) than pages to describe their virtual address space, the translation from virtual to Midgard addresses can be done with hardware structures significantly smaller than TLB hierarchies, which in turn means fewer table levels in MMUs and reduced latency.
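A toy model of the two stages might look like this (a conceptual sketch under my own simplifying assumptions, not Gupta’s design; real VMA lookups and page walks happen in hardware, and the names here are invented):

```python
from dataclasses import dataclass

PAGE = 4096

@dataclass
class VMA:
    """A contiguous virtual range mapped into the Midgard address space."""
    virt_base: int
    midgard_base: int
    length: int

def virt_to_midgard(vaddr: int, vmas: list[VMA]) -> int:
    # Stage 1, near the cores: a lookup over a handful of VMAs (with
    # access-control checks), far cheaper than a per-page TLB hierarchy.
    for vma in vmas:
        if vma.virt_base <= vaddr < vma.virt_base + vma.length:
            return vma.midgard_base + (vaddr - vma.virt_base)
    raise MemoryError("no VMA covers this address")

def midgard_to_phys(maddr: int, page_table: dict[int, int]) -> int:
    # Stage 2, near memory: page-granularity translation, needed only
    # for requests that miss in the (Midgard-addressed) cache hierarchy.
    frame = page_table[maddr // PAGE]
    return frame * PAGE + maddr % PAGE

# Example: one heap VMA backed by two physical frames.
vmas = [VMA(virt_base=0x10000, midgard_base=0x800000, length=2 * PAGE)]
frames = {0x800000 // PAGE: 42, 0x800000 // PAGE + 1: 43}
maddr = virt_to_midgard(0x10008, vmas)   # VMA-granular, cheap
paddr = midgard_to_phys(maddr, frames)   # page-granular, on cache miss only
print(hex(maddr), hex(paddr))            # 0x800008 0x2a008
```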

The most disruptive latencies would be moved away from the CPU cores and mitigated by caches operating in the intermediate address space. Midgard leverages VMAs, creating a single address space into which the VMAs of all processes can be mapped.

Rather than adding another layer of address translation overhead, Midgard shows that memory hierarchies with large caches can actually reduce those overheads, and the larger the cache, the more noticeable the difference. In fact, Midgard has been shown to incur only 5 percent higher address-translation overhead than traditional TLB hierarchies with 4-kilobyte pages when using a 16-megabyte aggregate last-level cache (LLC), and that overhead drops further for cache hierarchies with higher capacity, even as traditional TLBs must deal with growing overheads.

Gupta and his team envision several steps to demonstrate a fully working Midgard system. And now that his proof-of-concept paper has resulted in Qualcomm funding the project, he’ll be well-positioned to bring the concept to reality. Future objectives include addressing operating system support, verifying Midgard’s implementation on a range of architectures, and providing detailed circuit-level studies of critical hardware structures.


Machine learning is a rapidly growing field, with no shortage of inventive research pushing the envelope of what a machine can do. Along with the other winners, Romero and Gupta will have a chance to further develop their groundbreaking projects, and Qualcomm will be able to foster these innovations.

“Investment in R&D is a top priority at Qualcomm, as innovation makes our technology possible,” said Thompson. “It’s exciting to see the next generation of AI and cybersecurity researchers creating such impactful technology.”

Read more about cutting-edge artificial intelligence developments in “IBM AI Report Notes COVID-19 Accelerated Automation, Anticipates Industry Investments.”