September 22, 2022
By: Jon Peddie
POSITION: Huge, frequently updated data sets require the massively parallel, high-speed processing that GPUs offer.
When most people think of CAE simulation programs, they think of FEA or CFD because those are the most commonly used applications.
FEA and CFD programs have been around since the 1950s, first on (then) large mainframes, then on timeshare systems, then on minicomputers, and finally on x86-based workstations. The basic stiffness relation, F = kδ, and the formulas of the Rayleigh-Ritz method have not changed, but model resolution has increased by orders of magnitude, and accuracy and reliability have improved. As more performance, measured in millions of instructions per second (MIPS), became available, the resolution of the models increased and, as in computer graphics, the run time remained fairly constant. The additional MIPS also allowed a quick, low-resolution pass to get a feel for what was going on before a full solver run was performed.
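To make the scale of the computation concrete, here is a minimal sketch (ours, not any vendor's code) of the stiffness relation generalized to a system: the global stiffness matrix K for a short chain of 1D springs is assembled, and Ku = f is solved with NumPy. Production solvers do the same thing with sparse matrices and millions of unknowns, which is exactly where extra compute pays off.

```python
import numpy as np

def assemble_stiffness(k_vals):
    """Assemble the global stiffness matrix for springs in series.

    k_vals[i] is the stiffness of the spring joining nodes i and i+1;
    node 0 is clamped, so the reduced (free-node) system is returned.
    """
    n = len(k_vals)                      # number of springs
    K = np.zeros((n + 1, n + 1))
    for i, k in enumerate(k_vals):
        # Each spring contributes k * [[1, -1], [-1, 1]] to its two nodes.
        K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K[1:, 1:]                     # drop the clamped node's row/column

k_vals = [100.0, 200.0, 400.0]           # spring stiffnesses in N/m
f = np.array([0.0, 0.0, 10.0])           # a 10 N load on the free end
u = np.linalg.solve(assemble_stiffness(k_vals), f)
print(u)                                 # [0.1, 0.15, 0.175] m
```

The displacement of each node is exactly what F = kδ predicts spring by spring; refining the mesh makes K bigger, not different in kind.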
Once the domain of specialists with high-performance computers, CAE is evolving into a workstation application for designers. Earlier this year, Jon Peddie Research conducted a series of interviews with leading CAE software vendors including Altair, Ansys, Dassault Systèmes, Hexagon and Siemens Digital Industries Software. JPR also worked with Nvidia to understand how the industry is changing in response to GPU acceleration now available in many CAE applications and workflows. The results of these interviews are available in a free eBook called Accelerating and Advancing CAE.
The transition was not quick. GPUs were introduced in late 1999 in response to high demand from avid gamers who passionately embraced 3D games. The first GPUs were designed specifically for gaming, and game developers could simply write to the GPU’s built-in functions. As application programming interfaces (APIs) evolved, those capabilities multiplied. The effect was immediate: the number of games written for GPUs increased rapidly, games ran faster, and they became more beautiful. To learn more, see The History of Visual Magic in Computers: How Beautiful Images are Made in CAD, 3D, VR and AR.
With hindsight, we now know that the same transformation was on the way for engineering and scientific computing—but a lot had to happen before the design industry was ready. All software was designed for CPUs, and customers were used to running complex, resource-intensive simulation and analysis software whose speed depended on the performance of their systems’ CPUs. They were also used to analysis software being complex and taking a long time to run.
Everything goes faster
As the pace of innovation accelerates, each new generation of GPUs gains features useful for CAE, including hardware-accelerated matrix math and AI, faster memory, and higher bandwidth. Software tools for programming GPUs are also becoming more widespread. Nvidia’s CUDA was introduced in 2007 and enabled the development of specialized libraries used across the computing universe. AMD has been working on open software approaches, as has Intel, which is introducing powerful new GPUs to complement its CPUs.
In our interviews with the CAE companies, we were told that GPUs outperform CPUs by multiples that depend on the specific task. Despite this obvious advantage, we were also told that some engineers worried that GPU-accelerated applications would pay for the speed boost with accuracy. That concern has not been borne out. Instead, developers and their customers are finding that the results of GPU-accelerated calculations are just as accurate as those of CPU-based solvers.
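A hedged sketch of what that parity looks like in practice: using the CuPy library as a stand-in for whatever GPU backend a given solver actually ships (an assumption on our part, not any vendor's test), the same double-precision system can be solved on both devices and the answers compared.

```python
import numpy as np
# CuPy is assumed here purely for illustration; any GPU array library
# with a NumPy-compatible interface would make the same point.
import cupy as cp

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n)) + n * np.eye(n)        # well-conditioned system
b = rng.standard_normal(n)

x_cpu = np.linalg.solve(A, b)                          # float64 on the CPU
x_gpu = cp.linalg.solve(cp.asarray(A), cp.asarray(b))  # float64 on the GPU

# Both solves run in double precision, so they agree to roundoff.
print(np.allclose(x_cpu, cp.asnumpy(x_gpu), rtol=1e-10, atol=1e-12))
```

The point is not that GPUs are magically exact; it is that the same IEEE 754 double-precision arithmetic is available on both processors, so moving a solver to the GPU trades nothing away.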
GPU acceleration enables workstations to perform tasks previously reserved for high-performance computing (HPC) clusters. As a result, carrying out this work can be cheaper in terms of both energy consumption and money. The ability to run more iterations more cheaply and sustainably lets more designers take advantage of simulation earlier in the design process and have confidence in the results.
Industries don’t change overnight, and the CAE industry is a particularly good example. Some of the products are based on very old code originally written for CPUs in the 1960s and 1970s. How these companies use GPUs varies. The companies we interviewed told us that they are in the early stages of working with GPUs for CAE. They’ve long known the benefits of GPUs, but they’re even more aware of the pitfalls of moving too fast. Developers need to consider the hardware installed at their customers’ sites and the kinds of problems those customers are trying to solve.
Since their inception, GPUs have evolved to support all digital industries. They have gained more transistors, larger memory, and greater bandwidth. New types of accelerator cores, such as Nvidia’s CUDA cores and AMD’s Stream processors, were developed, and Tensor cores were added to speed up AI/ML applications. Real-time ray tracing cores were introduced, and a wealth of software tools and libraries was developed for developers. The net result is that the GPU has jumped ahead of the CPU in reducing the time it takes to process simulation meshes, as shown in the graph below.
As the number of cores increases, the computing time decreases; when used correctly, the GPU offers amazing acceleration.
Some ISVs are sticking with CPUs, while some newer companies and newer programs are moving to the GPU. The most sensible approach, in our opinion, is a hybrid one that lets users exploit whatever GPU capability their hardware offers and fall back to the CPU when none is available.
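In code, such a hybrid design often looks something like the sketch below (our illustration of the pattern, not something taken from any of the interviewed vendors): pick a GPU-capable array backend when one is present, and otherwise run the identical numerics on the CPU.

```python
import numpy as np

# Use a GPU backend when one is available, otherwise fall back to the CPU.
# CuPy is an assumption here, standing in for whatever GPU path a given
# application actually implements.
try:
    import cupy as xp
    ON_GPU = True
except ImportError:
    xp = np
    ON_GPU = False

def solve(A, b):
    """Solve A x = b on whichever device the active backend targets."""
    x = xp.linalg.solve(xp.asarray(A), xp.asarray(b))
    # Hand back a plain NumPy array either way, so callers never need
    # to know which processor did the work.
    return x.get() if ON_GPU else x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve(A, b))   # same call, CPU or GPU underneath
```

Because NumPy and CuPy share an interface, the solver code itself stays device-agnostic; the application decides only once, at startup, where the arithmetic will run.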
There are several clear takeaways from this project. The use of GPU acceleration in mainstream CAE products is increasing, and we’re also seeing new products that have been rewritten from the ground up to take advantage of GPUs. Sustainability has become an important consideration for developers and customers alike. Finally, CAE has the potential to become a more integrated part of the design process, leading to better designs and more sustainable products.
In a few years, this discussion will become an amusing historical footnote. Engineers, mathematicians and scientists will smirk over drinks and say, “Remember when we ran simulations on a single processor – boy, those were the bad old days.”