Nvidia Introduces New Chips and Technology to Accelerate AI Processing

Nvidia on Tuesday debuted new chips and technology aimed at accelerating AI performance. Read on to learn more.

Nvidia launched a slew of new chips and technologies on Tuesday, claiming they will speed up the processing of increasingly complex artificial intelligence algorithms, ratcheting up the battle with other chipmakers fighting for the lucrative data center business. Nvidia’s GPUs, which were first used to render and improve the quality of video in the gaming business, have since become the most popular chips for AI applications.

According to the company, the latest GPU, known as the H100, can help reduce compute times for some tasks involving AI model training from weeks to days. The revelations were delivered during Nvidia’s AI developers conference, which was streamed live online.

Jensen Huang, Nvidia’s Chief Executive Officer, stated in a statement, “Data centers are evolving into artificial intelligence (AI) factories, analyzing and refining mountains of data to generate intelligence.” He referred to the H100 chip as the AI infrastructure’s “engine.”

Businesses have been utilizing AI and machine learning for everything from producing video recommendations to discovering new drugs, and the technology is quickly becoming a valuable commercial tool. Nvidia stated the H100 chip will be available in the third quarter and will be manufactured on Taiwan Semiconductor Manufacturing Company’s cutting-edge four-nanometer process with 80 billion transistors.

The H100 will also be used to construct Nvidia’s new “Eos” supercomputer, which Nvidia says will be the world’s fastest AI system when it goes live later this year. In January, Meta, the parent company of Facebook, said that it would create the world’s fastest AI supercomputer this year, with a performance of roughly 5 exaflops. Nvidia said on Tuesday that its supercomputer will have a performance of over 18 exaflops.

Exaflop performance is the ability to complete one quintillion (1,000,000,000,000,000,000) calculations per second.
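As a quick, illustrative sanity check on the figures quoted above (not Nvidia's own numbers beyond what the article states), the exaflop arithmetic can be sketched in Python:

```python
# Illustrative arithmetic only, based on the figures quoted in this article.
# 1 exaflop = 10**18 floating-point operations per second.
EXAFLOP = 10 ** 18

eos_exaflops = 18   # Nvidia's stated performance for Eos
meta_exaflops = 5   # Meta's stated target for its AI supercomputer

eos_ops_per_second = eos_exaflops * EXAFLOP
print(f"Eos: {eos_ops_per_second:.1e} operations per second")

# Ratio between the two quoted figures
print(f"Eos is {eos_exaflops / meta_exaflops:.1f}x Meta's stated target")
```

By the article's numbers, Eos would deliver 1.8 × 10^19 operations per second, roughly 3.6 times Meta's stated 5-exaflop target.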


Nvidia also unveiled the Grace CPU Superchip, a new processor based on Arm technology. It is Nvidia’s first new Arm-based chip since its planned acquisition of Arm Ltd collapsed last month owing to regulatory issues. The Grace CPU Superchip, due in the first half of next year, links two CPU chips and will focus on AI and other tasks that require intensive computing power.

More chipmakers are utilizing technology to connect chips, allowing faster data transfer between them. Apple Inc. debuted its M1 Ultra chip, which links two M1 Max chips, earlier this month. Nvidia said its two CPU chips are connected using its NVLink-C2C technology, which was also announced on Tuesday.

Nvidia, which has been developing self-driving technology and expanding its business, announced this month that it has begun shipping its autonomous vehicle computer “Drive Orin” and that Chinese electric vehicle maker BYD Co Ltd and luxury electric vehicle maker Lucid Motors will use Nvidia Drive in their next-generation fleets.

Nvidia’s vice president of automotive, Danny Shapiro, claimed the company has $11 billion in automotive business in the “pipeline” over the next six years, up from $8 billion last year. According to Shapiro, the expected revenue increases would come from hardware as well as increasing, recurring revenue from Nvidia software.

Stay tuned with OyPrice and join us on Telegram, Facebook, or Twitter to stay up to date with the latest happenings in the tech industry.
