Neural Network Toolbox Comes to MATLAB and Simulink

Deep learning push from MathWorks is fueled by IoT.

In its recent R2017b release, MathWorks announced that it had added support for a collection of deep learning applications. In practice, this means new deep learning capabilities in the Neural Network Toolbox for MATLAB and Simulink.

The Neural Network Toolbox now supports complex architectures such as long short-term memory (LSTM) networks and directed acyclic graph (DAG) networks, as well as pretrained models like GoogLeNet.
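
As a rough illustration of what these architectures look like in MATLAB code, a sketch along the following lines is possible, assuming the R2017b-era functions googlenet, sequenceInputLayer and lstmLayer and the GoogLeNet support package; the layer sizes are made-up example values:

    % Load a pretrained GoogLeNet model (requires the GoogLeNet support package).
    net = googlenet;

    % Assemble a small LSTM network for sequence classification.
    layers = [
        sequenceInputLayer(12)          % 12 features per time step (example value)
        lstmLayer(100)                  % 100 hidden units (example value)
        fullyConnectedLayer(9)          % 9 output classes (example value)
        softmaxLayer
        classificationLayer];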

“With the growth of smart devices and IoT, design teams face the challenge of creating more intelligent products and applications by either developing deep learning skills themselves, or relying on other teams with deep learning expertise who may not understand the application context,” said David Rich, MATLAB marketing director, MathWorks.

Internet of Things (IoT) engineers, and others interested in deep learning, will also be able to use a new product called GPU Coder to convert deep learning models into CUDA code, allowing inference to run on NVIDIA GPUs. MathWorks reports that in its benchmarks, the generated GPU code ran up to seven times faster than TensorFlow and four-and-a-half times faster than Caffe2.
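
A minimal sketch of that workflow, assuming the GPU Coder functions coder.gpuConfig, coder.loadDeepLearningNetwork and codegen, plus a hypothetical entry-point function predict_digits that wraps a previously saved network digitsNet.mat:

    % predict_digits.m -- hypothetical entry-point function wrapping a saved, trained network.
    function out = predict_digits(in)
        persistent net;
        if isempty(net)
            net = coder.loadDeepLearningNetwork('digitsNet.mat');  % hypothetical file name
        end
        out = predict(net, in);
    end

    % From the command line: generate CUDA MEX code for the entry-point function.
    cfg = coder.gpuConfig('mex');
    codegen -config cfg predict_digits -args {ones(28,28,1,'single')}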

“With R2017b, engineering and system integration teams can extend the use of MATLAB for deep learning to better maintain control of the entire design process and achieve higher-quality designs faster,” said Rich.

Another deep learning addition comes in the form of the Image Labeler app in the Computer Vision System Toolbox. Users can now label ground truth data across a sequence of images for object detection workflows. The toolbox also supports semantic segmentation for deep learning, which classifies pixel regions of an image and can evaluate and visualize segmentation results.
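
In practice, that workflow might look something like the sketch below, assuming the imageLabeler, semanticseg and labeloverlay functions; the image I and the trained segmentation network net are placeholders:

    % Open the Image Labeler app to label ground truth interactively.
    imageLabeler

    % Apply a trained semantic segmentation network and visualize the result.
    C = semanticseg(I, net);      % I: input image, net: trained network (placeholders)
    B = labeloverlay(I, C);       % overlay the predicted pixel labels on the image
    imshow(B)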

“Using MATLAB can improve result quality while reducing model development time by automating ground truth labeling,” added Rich.

In combination with improvements in the R2017a release, users can now apply pretrained models for transfer learning. This works with convolutional neural networks (CNNs) such as AlexNet, VGG-16 and VGG-19, as well as Caffe models imported from the Caffe Model Zoo. Models can also be developed that use CNNs for image classification, object detection and regression.
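
As a sketch of such a transfer learning workflow, assuming the AlexNet support package, an imageDatastore of labeled training images called imdsTrain, and a hypothetical five-class problem:

    % Reuse AlexNet features for a new classification task.
    net = alexnet;                          % requires the AlexNet support package
    layersTransfer = net.Layers(1:end-3);   % drop the original classification layers
    numClasses = 5;                         % hypothetical number of new classes

    layers = [
        layersTransfer
        fullyConnectedLayer(numClasses)
        softmaxLayer
        classificationLayer];

    options = trainingOptions('sgdm', ...
        'InitialLearnRate', 1e-4, ...
        'MaxEpochs', 6);

    netTransfer = trainNetwork(imdsTrain, layers, options);

    % A Caffe Model Zoo network can be imported in a similar spirit:
    % cnet = importCaffeNetwork('deploy.prototxt', 'weights.caffemodel');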

This release also adds new analytics tools to MATLAB. The Text Analytics Toolbox offers an extensible data pool, new data plots and additional machine learning algorithms, and it now supports Microsoft Azure Blob storage.
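
A minimal sketch of text preprocessing with the toolbox, assuming the tokenizedDocument, bagOfWords and wordcloud functions and some made-up maintenance notes as input:

    % Tokenize a couple of example strings and build a bag-of-words model.
    str = ["The pump is leaking."; "Bearing noise reported on line 3."];
    documents = tokenizedDocument(str);   % split the text into tokens
    bag = bagOfWords(documents);          % count word frequencies
    wordcloud(bag);                       % visualize the most frequent words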

These additions to MATLAB and Simulink mean that engineers will be able to develop smarter IoT devices and bring deep learning into their designs more quickly, using software that finds patterns in data to improve designs and the user experience.

For more information on MATLAB and Simulink, visit the MathWorks website.

Written by

Shawn Wasserman

For over 10 years, Shawn Wasserman has informed, inspired and engaged the engineering community through online content. As a senior writer at WTWH media, he produces branded content to help engineers streamline their operations via new tools, technologies and software. While a senior editor at Engineering.com, Shawn wrote stories about CAE, simulation, PLM, CAD, IoT, AI and more. During his time as the blog manager at Ansys, Shawn produced content featuring stories, tips, tricks and interesting use cases for CAE technologies. Shawn holds a master’s degree in Bioengineering from the University of Guelph and an undergraduate degree in Chemical Engineering from the University of Waterloo.