Next-gen GPUs Thrust Nvidia Ahead in World of Simulation, Games


Nvidia made several announcements at its GTC event this week, highlighted by the GeForce RTX 40-series GPUs built on the new Ada Lovelace architecture. Nvidia CEO Jensen Huang said in his keynote that the GPUs would deliver a significant performance boost to developers of games and other simulated environments.

During the presentation, Huang put the new GPU through its paces with Racer RTX, a fully interactive simulation that is entirely ray-traced and in which all the action is physically modeled. Ada's advancements include a new streaming multiprocessor, an RT core with twice the ray-triangle intersection throughput, and a new Tensor core with the Hopper FP8 Transformer Engine and 1.4 petaflops of Tensor processing power.
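
The ray-triangle intersection test is the basic primitive those RT cores accelerate in hardware. As a rough software illustration only (not NVIDIA's implementation), the Möller–Trumbore routine below shows what a single such test looks like; the example ray and triangle are made up for demonstration.

```python
# A minimal CPU sketch of a ray-triangle intersection test (Moller-Trumbore),
# the operation that RT cores accelerate in hardware. Purely illustrative.
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray if it hits the triangle, else None."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:              # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None

# Example: a ray pointing down the -z axis hitting a triangle in the z = -1 plane.
hit = ray_triangle_intersect(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]),
    np.array([-1.0, -1.0, -1.0]), np.array([1.0, -1.0, -1.0]),
    np.array([0.0, 1.0, -1.0]),
)
print(hit)  # -> 1.0
```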

Ada also introduces the latest version of NVIDIA DLSS technology, DLSS 3, which uses AI to generate entirely new frames by comparing the current frame with previous frames to understand how the scene is changing. Nvidia says the feature can improve game performance by up to 4x over brute-force rendering.
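
DLSS 3's frame generation relies on a neural network plus dedicated optical-flow hardware, neither of which is reproduced here. The toy sketch below, with hypothetical array shapes and random stand-in data, only illustrates the underlying idea: synthesizing an intermediate frame from two rendered frames and per-pixel motion vectors.

```python
# Toy frame-interpolation sketch (NOT DLSS 3 itself): warp the previous frame
# along per-pixel motion vectors, then blend it with the next rendered frame.
import numpy as np

def generate_intermediate_frame(prev_frame, next_frame, motion_vectors, t=0.5):
    """Warp prev_frame part-way along its motion vectors and blend with next_frame."""
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample each pixel part of the way back along its motion vector (x, y order).
    src_x = np.clip((xs - t * motion_vectors[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip((ys - t * motion_vectors[..., 1]).astype(int), 0, h - 1)
    warped = prev_frame[src_y, src_x]
    # Blend the warped previous frame with the next rendered frame.
    return ((1 - t) * warped + t * next_frame).astype(prev_frame.dtype)

# Random stand-in data for two rendered frames and a static-scene motion field.
prev_frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
next_frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
motion = np.zeros((480, 640, 2), dtype=np.float32)
mid_frame = generate_intermediate_frame(prev_frame, next_frame, motion)
print(mid_frame.shape)  # (480, 640, 3)
```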

The GeForce RTX 40-series will be available in multiple configurations. The top-of-the-line RTX 4090, aimed at high-performance gaming, will arrive in mid-October for $1,599. The GeForce RTX 4080 will follow in November in two configurations: the $1,199 GeForce RTX 4080 16GB features 9,728 CUDA cores and 16GB of high-speed Micron GDDR6X memory.

Nvidia will also offer the GeForce RTX 4080 in a 12GB configuration with 7,680 CUDA cores for $899.

Omniverse Cloud SaaS

The company also announced new cloud services to support AI workflows. NVIDIA Omniverse™ Cloud, the company's first software- and infrastructure-as-a-service offering, empowers artists, developers and enterprise teams to design, publish, operate and experience metaverse applications anywhere. With Omniverse Cloud, individuals and teams can build 3D workflows and collaborate without requiring local computing power.
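
Omniverse scenes are described in Universal Scene Description (USD), the open format the platform is built on. As a minimal sketch, the snippet below uses the open-source pxr Python bindings (not an Omniverse Cloud API) to author a simple shared scene file; the file name and prims are hypothetical.

```python
# Minimal USD authoring sketch using the open-source pxr Python bindings.
# The file name and scene contents are hypothetical; Omniverse layers its
# collaboration services on top of USD scene descriptions like this one.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("racer_scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")

# Author a simple prim that another artist or tool could later layer over.
car_body = UsdGeom.Cube.Define(stage, "/World/CarBody")
car_body.GetSizeAttr().Set(2.0)
UsdGeom.XformCommonAPI(car_body.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 1.0, 0.0))

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```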

Omniverse Cloud Services run on the Omniverse Cloud Computer, a system consisting of NVIDIA OVX™ for graphics and physics simulation, NVIDIA HGX™ for advanced AI workloads and the NVIDIA Graphics Delivery Network (GDN), a globally distributed data center network for delivering high-performance, low-latency metaverse graphics at the edge.

Rise of LLMs

During his keynote speech, Huang also pointed to the growing role of large language models, or LLMs, in AI applications that power processing engines used in social media, digital advertising, e-commerce and search. He added that large language models based on the Transformer deep learning model, first introduced in 2017, are now driving leading AI research because of their ability to understand human language without supervision or labeled datasets.

To make it easier for researchers to apply this “incredible” technology to their work, Huang announced the NeMo LLM Service, a cloud service managed by NVIDIA for customizing pre-trained LLMs for specific tasks. For drug discovery and life sciences researchers, Huang also announced the BioNeMo LLM service, for building LLMs that understand chemicals, proteins, and DNA and RNA sequences.
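
The NeMo LLM Service API itself is not shown here. As a stand-in, the sketch below uses the open-source Hugging Face transformers library with a small public model (gpt2) to show the general pattern of steering a pre-trained LLM toward a specific task with a prompt; the model choice and prompt are assumptions for illustration.

```python
# Illustrative only: task-prompting a small pre-trained causal LM with the
# open-source Hugging Face transformers API (not the NeMo LLM Service API).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a much larger pre-trained LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A task-specific prompt prefix plays the role of lightweight customization.
prompt = "Summarize: NVIDIA announced new Ada GPUs at GTC.\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```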

Huang announced that NVIDIA is collaborating with The Broad Institute, the world’s largest producer of human genome information, to make NVIDIA Clara libraries such as NVIDIA Parabricks, Genome Analysis Toolkit and BioNeMo available on Broad’s Terra Cloud platform.

To power these AI applications, Nvidia will begin shipping its next-generation NVIDIA H100 Tensor Core GPU, powered by Hopper's Transformer Engine, in the coming weeks. According to the company, partners building H100-based systems include Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro. Additionally, Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will begin deploying H100-based instances in the cloud starting next year.

Powering AV systems

For autonomous vehicles, Huang introduced DRIVE Thor, a superchip that combines the Hopper Transformer Engine, the Ada GPU architecture, and the Grace CPU.

The Thor superchip delivers 2,000 teraflops of performance, replacing Atlan on the DRIVE roadmap and providing a seamless transition from DRIVE Orin, which delivers 254 TOPS and is currently used in production vehicles. According to Huang, the Thor processor will also power robotics, medical instruments, industrial automation and edge AI systems.

Spencer Chin is Senior Editor for Design News, covering the electronics beat. He has extensive experience covering components, semiconductors, subsystems, power supplies and other facets of electronics from both a business/supply-chain and a technology perspective. He can be reached at [email protected]


