New Technique Helps AI Learn Almost 70 Per Cent Faster

Adaptive Deep Reuse method reduces training time without sacrificing accuracy.

Researchers at North Carolina State University have developed a method to help deep learning networks learn almost 70 per cent faster without sacrificing accuracy. The new technique is called Adaptive Deep Reuse.

“One of the biggest challenges facing the development of new AI tools is the amount of time and computing power it takes to train deep learning networks,” said Xipeng Shen, professor of computer science at NC State and co-author of a paper on the work. “We’ve come up with a way to expedite that process.”

A deep learning network learns by breaking a data sample, such as an image, into chunks of consecutive data points. For an image, that means dividing it into blocks of adjacent pixels. Each chunk is run through a computational filter, the result is run through another filter, and so on until all of the data has passed through all of the filters. The network then uses the filtered chunks to draw a conclusion about the data sample, or image, as a whole.
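To make that concrete, here is a minimal sketch, not the researchers' code, of the basic operation in Python: a toy image is split into small blocks of adjacent pixels, and every block is run through a bank of filters. The patch size, stride and random filters are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of a convolution-style filtering pass (illustrative only).
import numpy as np

def filter_responses(image, filters, patch=3, stride=1):
    """Slide patch x patch windows over the image and apply each filter to each block."""
    h, w = image.shape
    out_h = (h - patch) // stride + 1
    out_w = (w - patch) // stride + 1
    out = np.zeros((len(filters), out_h, out_w))
    for k, f in enumerate(filters):
        for i in range(out_h):
            for j in range(out_w):
                block = image[i * stride:i * stride + patch,
                              j * stride:j * stride + patch]
                out[k, i, j] = np.sum(block * f)  # one filtered fragment
    return out

image = np.random.rand(32, 32)                       # toy grayscale image
filters = [np.random.randn(3, 3) for _ in range(4)]  # toy filter bank
responses = filter_responses(image, filters)
print(responses.shape)  # (4, 30, 30): every pixel block, filtered by every filter
```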

Each data set could have millions of data fragments. And a deep learning network will run through the set thousands of times to fine-tune its analysis. Not surprisingly, the whole exercise requires vast amounts of computational power.

Adaptive Deep Reuse works by finding commonalities across similar data chunks: for example, a chunk showing part of a cloud in one image will often look like a cloud elsewhere in the same image, or in another image in the data set. By spotting these similarities, a deep learning network can apply its filters to one chunk of cloud and reuse the result for all the similar chunks it identifies throughout the data set, saving the computer a great deal of time and work.
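The sketch below illustrates that reuse idea under simplified assumptions, and is not the paper's implementation: chunks that look alike (here, chunks that match after coarse rounding, standing in for the paper's similarity detection) share a single filter computation instead of each being filtered separately.

```python
# Minimal sketch of computation reuse across similar data chunks (illustrative only).
import numpy as np

def reuse_filter(chunks, filt, decimals=0):
    """Apply filt once per group of similar chunks and reuse the cached result."""
    cache = {}
    results = np.empty(len(chunks))
    for idx, chunk in enumerate(chunks):
        # Coarse rounding stands in for a real similarity test between chunks.
        key = tuple(np.round(chunk, decimals))
        if key not in cache:                  # first chunk of this "cloud"
            cache[key] = float(chunk @ filt)  # compute the filter once
        results[idx] = cache[key]             # reuse it for look-alike chunks
    return results, len(cache)

chunks = np.random.rand(10_000, 8)  # toy data chunks
filt = np.random.randn(8)           # toy filter weights
out, unique = reuse_filter(chunks, filt)
print(f"filtered {len(chunks)} chunks with only {unique} filter evaluations")
```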

“We were not only able to demonstrate that these similarities exist, but that we can find these similarities for intermediate results at every step of the process,” said Lin Ning, Ph.D. student and lead author of the paper.

The researchers began by looking at fairly large chunks of data using a low threshold for determining similarity. With each repetition of the data set, they made the chunks smaller and raised the similarity threshold incrementally using an adaptive algorithm.
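The short sketch below shows what such a schedule might look like; the specific chunk sizes, thresholds and linear interpolation are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of an adaptive schedule: chunks shrink and the similarity
# threshold rises as training progresses (illustrative values only).
def adaptive_schedule(epochs, start_chunk=16, end_chunk=2,
                      start_threshold=0.5, end_threshold=0.95):
    """Yield a (chunk size, similarity threshold) pair for each pass over the data."""
    for epoch in range(epochs):
        t = epoch / max(epochs - 1, 1)  # progress through training, in [0, 1]
        chunk = round(start_chunk + t * (end_chunk - start_chunk))
        threshold = start_threshold + t * (end_threshold - start_threshold)
        yield epoch, chunk, threshold

for epoch, chunk, threshold in adaptive_schedule(5):
    print(f"epoch {epoch}: chunk size {chunk}, similarity threshold {threshold:.2f}")
```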

The team tested the method on three deep learning networks and data sets, and found that Adaptive Deep Reuse cut training time by 63 to 69 per cent without any loss of accuracy.

“We think Adaptive Deep Reuse is a valuable tool, and look forward to working with industry and research partners to demonstrate how it can be used to advance AI,” said Shen.
