NVIDIA Loses 50% of Its Value in Two Months but Keeps on Churning
Andrew Wheeler posted on December 02, 2018 |

Over the past few years, NVIDIA CEO Jensen Huang was beginning to look like one of the great geniuses in the history of the semiconductor industry. He predicted that the parallel processing capabilities of GPUs would mesh perfectly with the rise of narrow AI applications powering major services like Amazon Web Services, Google’s DeepMind and many others that rely on deep learning, reinforcement learning and high-performance computing (HPC), and that bet sent NVIDIA’s stock skyrocketing over the past 18 months.

How did NVIDIA’s value drop 50 percent in two months exactly? And will it happen again? Is it symptomatic of something amiss at the company?

After NVIDIA’s third-quarter results came out this year, investors became concerned that profit margins had dropped 290 basis points even though revenue was up 21 percent. The company’s fourth-quarter guidance also predicted a 7 percent drop in revenue. But was this combination of factors really enough to send the stock plummeting 50 percent in two months?

The semiconductor industry cycles through upturns and downturns in step with Moore’s Law, and the cycle may simply be going through a downturn at this particular point. NVIDIA blamed the drop in the value of cryptocurrencies and the chance that Ethereum—the second-largest cryptocurrency behind Bitcoin itself, and the driver of 6.7 percent of NVIDIA’s sales in the second quarter—might change its protocol this year so that GPUs are no longer needed to mine the currency.

NVIDIA Is Going Through Cyclical Downturns and Can Learn from Mismanaging Inventory for Crypto

When a cryptocurrency becomes big enough for companies like NVIDIA and rival AMD to build an application-specific integrated circuit (ASIC) for it, players with concentrated resources corner the mining market, and other participants in the crypto market respond by creating a new cryptocurrency to break that monopoly.

So, NVIDIA is getting hit with two cyclical downturns here: one naturally occurring as a result of being part of the semiconductor industry, and the other a result of spending time and resources building ASICs for successful cryptocurrencies. When another cryptocurrency becomes successful enough for GPU companies like AMD and NVIDIA to produce an ASIC for it, NVIDIA will again profit from the upturn in cryptocurrency valuations.

NVIDIA mismanaged the surge in GPU purchases from the cryptocurrency-mining community, leaving the company with a large oversupply of expensive GPUs. But since these downturns are cyclical, NVIDIA can recover from this short-term inventory mismanagement.

Will NVIDIA Rebound Despite the Downturns?

Two months after CES 2018, where NVIDIA CEO Jensen Huang announced what sounded like a technology partnership with Uber, the company denied that its technology was in any way culpable for the infamous self-driving Uber accident that resulted in a fatality. Later that year, NVIDIA posted phenomenal growth in the first quarter, with overall revenue up 68 percent year over year and data center revenue up 16 percent, adding to a 55 percent increase from 2017. Visualization revenue was up 22 percent, and investors were predicting that the third quarter of 2018 would be a big one with the release of a workstation-class Quadro series of GPUs: the Quadro RTX 8000, RTX 6000 and RTX 5000.

Announcing a workstation-class series first is unusual for NVIDIA, which generally releases new gaming-class GPUs before workstation-class parts, since the majority of its revenue comes from the gaming industry. The new Quadro RTX series GPUs are the first three products built on the company’s new Turing GPU architecture.

At SIGGRAPH 2018, Huang revealed the Turing GPU architecture for the first time and walked through many of its new features. Built for NVIDIA’s professional visualization customers, Turing’s key feature is hybrid rendering, a combination of ray tracing and rasterization that leverages the strengths of both to increase performance. Turing incorporates dedicated hardware for faster ray tracing acceleration in a new unit Huang referred to as an “RT Core.”

The RT Cores accelerate ray-tracing operations such as bounding-volume traversal and can cast 10 gigarays (10 billion rays) per second, which could make Turing more than 20 times faster at ray tracing than Pascal-based architectures. The Tensor Cores introduced with Volta have been upgraded for Turing; they not only help speed up ray tracing but also leverage AI to minimize the number of rays needed per scene. Beyond incorporating elements of NVIDIA’s neural-networking hardware, Turing lets the GPU operate at a wider array of precisions. For example, if a workload does not require high precision, it can run at a lower precision for a significant speedup.
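To see why dropping precision helps at all, consider memory traffic: halving the width of every value halves the bytes the hardware has to move. The NumPy sketch below (a generic illustration, not NVIDIA-specific code) compares the footprint and accuracy of float32 versus float16 data:

```python
import numpy as np

# A hypothetical batch of feature data, stored at full and half precision.
full = np.random.rand(1024, 1024).astype(np.float32)
half = full.astype(np.float16)

# Half precision cuts memory traffic in two, one reason reduced-precision
# workloads can run faster on bandwidth-bound hardware.
print(full.nbytes)  # 4194304 bytes (4 MiB)
print(half.nbytes)  # 2097152 bytes (2 MiB)

# The trade-off is accuracy: float16 keeps only 10 mantissa bits, so for
# values in [0, 1) the worst-case rounding error stays below about 5e-4.
max_err = np.max(np.abs(full - half.astype(np.float32)))
print(max_err < 1e-3)  # True
```

The same trade-off motivates running inference at FP16 or INT8 when a model tolerates the rounding error.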

NVIDIA Introduces the Quadro RTX 4000

Downturns aside, NVIDIA is still cooking. On November 13 at Autodesk University, NVIDIA unveiled the Quadro RTX 4000 GPU, giving developers and designers in manufacturing, engineering and architecture the ability to incorporate real-time ray tracing into their work for the first time. The single-slot design fits a wide range of workstation chassis and includes 8GB of GDDR6 graphics memory, increasing memory bandwidth by 40 percent compared to the Quadro P4000.

At an estimated cost of around USD 900, professional users get the RTX 4000’s 36 RT Cores, which enable real-time ray tracing of environments and objects with reflections, refractions, global illumination and shadows. Users also get 288 Turing Tensor Cores for AI-integrated rendering, neural-network training, and inference-based services and applications. (Image courtesy of NVIDIA.)

The Quadro RTX 4000 is going to be released this month and will be a prime indicator of the company’s health in the professional visualization market.

NVIDIA Unveils TITAN RTX GPU for AI Researchers
The TITAN RTX GPU has 4,608 CUDA cores and 72 Turing RT Cores, but it isn’t really a successor to the previous TITAN Xp. It uses the same TU102 GPU as the Quadro RTX 6000 (USD 6,300), a workstation card for the professional visualization sector. (Image courtesy of NVIDIA.)

After a series of well-placed leaks, NVIDIA released the TITAN RTX GPU to a mixed reaction. Some think the GPU is overpriced at USD 2,500 and too similar in spec to the Quadro RTX 6000.

Two TITAN RTX cards can be linked by the TITAN RTX NVLink Bridge (USD 80), which will come in handy for AI applications and research, while the Tensor Cores let professional visualization users produce real-time ray tracing in their graphics. For media and entertainment creators, it could be a good deal, since it costs a bit more than a third of one Quadro RTX 6000. Scientists and others running simulations with larger 3D data sets might also benefit from the TITAN RTX. With NVIDIA covering so much ground in AI research, professional visualization and simulation, the company will surely bounce back from a bad couple of months.

Bottom Line: NVIDIA Will Rebound Quickly Next Year

At GTC Japan, Huang announced that the NVIDIA DRIVE AGX Xavier Developer Kit had arrived on the market. The software and hardware allow users to develop and test their own autonomous driving applications. (Image courtesy of NVIDIA.)

NVIDIA is still incredibly strong in autonomous driving, and NVIDIA DRIVE AGX Xavier was recently selected at the GPU Technology Conference in China by top suppliers, trucking companies and a few new automotive companies. China currently buys one-third of all vehicles sold worldwide each year while producing 24.5 million cars annually. One of the biggest truck manufacturers, FAW Group, is teaming up with autonomous driving startup PlusAI and logistics company Full Truck Alliance to develop a driverless trucking fleet set to begin operation in 2021. They will use NVIDIA DRIVE AGX Pegasus to operate the vehicles for China’s long-haul transportation market.

Also in China, NVIDIA announced that Baidu and Tencent are adopting the company’s HGX-2 server platform for high-performance computing, machine learning and deep learning. The mega-companies are using HGX-2 servers both internally and for cloud customers. In the US, Oracle recently announced that it will incorporate the NVIDIA HGX-2 platform into Oracle Cloud Infrastructure to compete with AWS and give its customers a single AI and HPC computing architecture.

The NVIDIA Turing architecture also powers the NVIDIA T4 Cloud GPU, which a huge swath of the biggest tech companies in the world are using to expand their hyperscale data center operations. In China, these companies include Lenovo, Huawei and H3C; in the US, they include IBM, Hewlett Packard Enterprise and Dell EMC. Google Cloud recently announced T4 availability for Google Cloud Platform customers. Major companies are adopting the Turing-based T4 because it fits into any standard server or Open Compute Project hyperscale server design (scalable up to 20 GPUs in a single node), letting companies run a wide range of AI inference workloads at four levels of precision.
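The lowest of those precision levels, INT8, works by rescaling floating-point weights into an 8-bit integer range. A toy symmetric-quantization sketch in NumPy (a generic illustration of the idea, not NVIDIA’s actual implementation) shows the storage savings and the rounding error involved:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map floats to int8 with one scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # hypothetical layer weights

q, scale = quantize_int8(w)
restored = dequantize(q, scale)

# INT8 storage is a quarter of FP32, at the cost of a bounded rounding error.
print(q.nbytes, w.nbytes)  # 1000 4000
print(np.max(np.abs(w - restored)) <= scale / 2 + 1e-6)  # True
```

Whether the roughly half-a-scale-step error is acceptable depends on the model, which is why inference platforms expose several precision levels rather than forcing one.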

NVIDIA will learn from mismanaging its crypto-mining GPU inventory, a stumble that can almost be seen as an indulgence: the market value of a cryptocurrency reached a peak high enough to warrant dedicated silicon from major manufacturers like NVIDIA and AMD.


