Nvidia’s Ada Lovelace GPU generation: $1,599 for RTX 4090, $899 and up for 4080


Time to pull out the checkbook again, GPU lovers. The RTX 4090 is here (and it’s not alone).

NVIDIA

After weeks of teasing, Nvidia’s latest computer graphics cards, the “Ada Lovelace” generation of RTX 4000 GPUs, are here. Nvidia CEO Jensen Huang unveiled two new models Tuesday: the RTX 4090, which will start at a whopping $1,599, and the RTX 4080, which will launch in two configurations.

The more expensive card, due to launch on October 12, is in the same high-end category as Nvidia’s 2020 megaton RTX 3090 (formerly dubbed a “Titan” product by the company). The 4090’s increased physical size means it will take up three slots in your PC build of choice. Its specs point to a high-end GPU: 16,384 CUDA cores (versus the 3090’s 10,496) and a 2.52 GHz boost clock (versus 1.695 GHz on the 3090). Despite those gains, the card still runs within the same 450 W power envelope as the 3090 Ti, and its RAM allocation stays at 24GB of GDDR6X memory.

This leap in performance is fueled in part by Nvidia’s long-rumored move to TSMC’s “4N” process, a new generation of 5nm chips that offer a massive efficiency leap over the previous Ampere generation’s 8nm process.

The RTX 4080, which comes in two SKUs.

NVIDIA

The RTX 4080 will follow in November in two SKUs: a 12GB GDDR6X model (192-bit bus) starting at $899 and a 16GB GDDR6X model (256-bit bus) starting at $1,199. Based on how much the specs differ between these two models, however, Nvidia appears to be launching two quite different chipsets under the same “4080” banner; traditionally, that kind of major hardware differentiation has come with separate model names (e.g., last generation’s 3070 and 3080). We’re waiting on clearer confirmation from Nvidia as to whether the two 4080 models share a chipset.
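
Those differing bus widths matter because they feed directly into memory bandwidth: peak bandwidth is the bus width (in bytes) multiplied by the memory’s effective data rate. A rough back-of-the-envelope sketch, assuming a 21 Gbps effective GDDR6X data rate (our assumption for the sake of the math; Nvidia didn’t break out per-SKU memory clocks in the announcement):

```python
# Rough memory-bandwidth arithmetic for the two RTX 4080 SKUs. The 21 Gbps
# effective GDDR6X data rate is our illustrative assumption, not a figure
# confirmed in Nvidia's announcement.

def bandwidth_gb_per_s(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_per_s(192, 21))  # 12GB RTX 4080: ~504 GB/s
print(bandwidth_gb_per_s(256, 21))  # 16GB RTX 4080: ~672 GB/s
```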

The more expensive of the two will include more CUDA cores (9,728, versus the RTX 3080’s 8,704), a higher boost clock (2.51 GHz, versus the 3080’s 1.71 GHz), and higher power consumption (320 W, the same as the 3080 but more than the lower-memory 4080’s 285 W). At least for this generation, Nvidia is offering 12GB of memory as the 4080 baseline, effectively addressing a major criticism of the persistently memory-starved RTX 3000 GPU generation.

Micromaps, micromeshes

Both new models include iterative updates to Nvidia’s proprietary RTX chipset elements: RT cores and Tensor cores. Nvidia has also announced updated processes for handling real-time ray tracing in 3D graphics and for its Deep Learning Super Sampling (DLSS) upscaling system. The former is extended on Lovelace GPUs with two new types of hardware units: an “Opacity Micromap Engine,” designed to double raw ray tracing performance, and a “Micromesh Engine,” meant to increase the amount of geometric detail “without the memory cost” on the rendering front.
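
Nvidia didn’t detail how the Opacity Micromap Engine works under the hood, but the general idea behind opacity micromaps can be sketched: pre-classify the micro-triangles of an alpha-tested surface (think foliage or chain-link fences) so that ray traversal only falls back to a shader when a hit is genuinely ambiguous. The sketch below is our own conceptual illustration in Python, not Nvidia’s interface:

```python
# Conceptual sketch (our illustration, not Nvidia's API): each micro-triangle
# of an alpha-tested surface is pre-classified as opaque, transparent, or
# unknown, so traversal can resolve most hits without running a slow
# any-hit shader.

OPAQUE, TRANSPARENT, UNKNOWN = range(3)

def resolve_hit(opacity_state, run_any_hit_shader):
    """Return True if the ray hit counts, consulting the shader only
    when the micromap can't decide."""
    if opacity_state == OPAQUE:
        return True              # accept the hit immediately, no shader call
    if opacity_state == TRANSPARENT:
        return False             # ignore the hit immediately, no shader call
    return run_any_hit_shader()  # UNKNOWN: fall back to the alpha-test shader

# e.g., a leaf texture where most micro-triangles classify decisively:
micromap = [OPAQUE, OPAQUE, TRANSPARENT, UNKNOWN, TRANSPARENT, OPAQUE]
hits = [resolve_hit(state, lambda: True) for state in micromap]
print(hits)  # only the UNKNOWN entry needed the shader fallback
```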

The latter has now been bumped to a new version: DLSS 3, which appears to be exclusive to the RTX 4000 series. According to Huang, this system promises to “generate new frames [of gameplay] without involving the game, effectively boosting both CPU and GPU performance.” This real-time, frame-by-frame image generation aims to solve problems encountered by image reconstruction techniques that don’t necessarily capture the motion vectors of in-game elements, such as particles. This new method, combined with the techniques of previous DLSS generations, could be used to reconstruct a whopping 7/8ths of a scene’s pixels, thereby lowering the computational demands on GPUs and CPUs. If it works as promised, DLSS 3 could deliver a major improvement over the pixel-by-pixel process that has served both DLSS and its competitors, AMD’s FSR 2.0 and Intel’s forthcoming XeSS, so well.
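
That 7/8ths figure is straightforward to sanity-check: if the spatial upscaler shades roughly one in four pixels of each rendered frame, and frame generation synthesizes every other displayed frame outright, only one in eight displayed pixels is rendered natively. A minimal back-of-the-envelope sketch, with those two ratios as our own illustrative assumptions:

```python
# Back-of-the-envelope sketch of the "7/8ths of pixels" claim, assuming
# DLSS-style upscaling shades 1 in 4 pixels per rendered frame and frame
# generation synthesizes every other displayed frame. Both ratios are our
# illustrative assumptions, not figures confirmed by Nvidia.

spatial_fraction = 1 / 4  # pixels actually shaded per rendered frame
rendered_frames = 1 / 2   # fraction of displayed frames the game renders

rendered_pixels = spatial_fraction * rendered_frames  # 1/8
reconstructed = 1 - rendered_pixels                   # 7/8

print(f"Rendered natively: {rendered_pixels:.3f} of displayed pixels")  # 0.125
print(f"Reconstructed by DLSS: {reconstructed:.3f}")                    # 0.875
```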

In addition to these proprietary systems, Nvidia’s latest GPUs will apparently rely on a new process Huang calls “shader execution reordering.” While this will likely improve raw rasterization performance as well, Huang’s brief description of the system leaned largely on ray tracing’s computationally intensive workloads; Nvidia claims the system will boost ray tracing performance “2x to 3x” over the company’s previous Ampere-generation GPUs.
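
Nvidia hasn’t published implementation details, but the coherence problem that reordering targets is easy to illustrate: after rays bounce around a scene, neighboring GPU threads often land on unrelated materials and want to run different shader code, which stalls SIMD hardware. Grouping hits by shader before shading restores coherence. The sketch below is our own conceptual illustration in Python; the names and structure are not Nvidia’s API:

```python
# Conceptual illustration (our own, not Nvidia's API) of why reordering
# helps: adjacent threads that shade hits on the same material can share
# one code path, while an interleaved order forces divergent execution.

from collections import namedtuple

Hit = namedtuple("Hit", ["ray_id", "material_id"])

def reorder_for_shading(hits):
    # Sort hits by material so adjacent "threads" execute the same shader.
    return sorted(hits, key=lambda h: h.material_id)

hits = [Hit(0, "glass"), Hit(1, "metal"), Hit(2, "glass"), Hit(3, "skin")]
for h in reorder_for_shading(hits):
    print(h.ray_id, h.material_id)  # glass, glass, metal, skin: coherent runs
```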

As part of today’s announcements, Nvidia unveiled a few titles with upcoming RTX-specific tweaks, and arguably the biggest is Microsoft Flight Simulator 2020. In combination with DLSS 3’s image reconstruction system, MSFS was demonstrated with framerates easily reaching into the 100s in some of the game’s busiest scenes (although raw rasterization of the same scenes on an unnamed Lovelace GPU and PC also showed solid performance, considering how demanding this game and its massive urban landscapes can be). The gallery above was captured in 4K, so you may want to click and zoom in to see how DLSS 3 handles fine pixel detail compared to an apparent combination of raw pixels and default temporal anti-aliasing (TAA) on older systems.

Huang also tried to ingratiate himself with the PC modding scene by demonstrating how a new Nvidia-developed toolset called RTX Remix can be applied to a range of classic games whose modding hooks are wide open. The best results came from a before-and-after demonstration of Elder Scrolls III: Morrowind, which used RTX GPU Tensor cores to programmatically update the game’s textures and physical material properties. (See above, or visit Nvidia’s website for even more before-and-after comparisons.) Similar results should come from a free Portal DLC pack that Nvidia will release in November for fans to apply to that PC gaming classic. We’ll be excited to see how Nvidia’s auto-modding results compare head-to-head with the best work from over a decade of community development.


