In latest benchmark test of AI, it’s mostly Nvidia competing against Nvidia

Slide 9 from Nvidia's MLPerf training and HPC presentation deck.

For lack of rich competition, Nvidia in the latest MLPerf competed against itself, comparing its latest GPU, the H100 "Hopper," to its existing product, the A100.

Nvidia

Although chip giant Nvidia tends to cast a long shadow over artificial intelligence, its ability to dominate the field's competition may be increasing, if the latest benchmark contest is any indication.

MLCommons, the industry consortium that oversees the popular machine learning benchmark MLPerf, released the latest numbers for "training" artificial neural networks. The benchmark showed the fewest competitors to Nvidia in three years, in fact just one: chip giant Intel.

In past rounds, including the most recent one in June, Nvidia had two or more competitors, including Google, with its TPU chips, British company Graphcore, and, in earlier rounds, Chinese telecom giant Huawei.

Also: Google and Nvidia split top scores in MLPerf AI training benchmark

For lack of competition, Nvidia swept the top scores, whereas in June the company shared the top ranks with Google. Nvidia submitted systems using its A100 GPU, which has been on the market for several years, and its newer H100, known as the "Hopper" GPU in honor of computing pioneer Grace Hopper. The H100 earned the top score in one of the eight benchmark tests, the so-called recommender systems test, for programs that suggest products to people online.

Intel submitted two systems using its Habana Gaudi2 chips, as well as systems labeled "preview" featuring its forthcoming Xeon processor, code-named "Sapphire Rapids."

The Intel systems were slower than Nvidia's.

In prepared remarks, Nvidia said, "H100 GPUs (aka Hopper) set world records for training models in all eight MLPerf enterprise workloads. They delivered up to 6.7x more performance than previous-generation GPUs when they were first submitted on MLPerf training. By the same comparison, today's A100 GPUs pack 2.5x more muscle, thanks to advances in software."

During a formal press briefing, Nvidia's senior product manager for AI and cloud, Salvatore, focused on Hopper's performance improvements and on software tweaks to the A100. Salvatore showed how Hopper speeds past the A100, which amounts to Nvidia testing against Nvidia, as well as against Intel's Gaudi2 and Sapphire Rapids.

Also: Graphcore brings new competition to Nvidia in latest MLPerf AI benchmarks

The absence of various vendors is not necessarily a sign of a trend, given that in past rounds of MLPerf, individual vendors have decided to skip the competition only to return in a later round.

Google did not respond to ZDNET's request for comment on why it did not take part.

Graphcore told ZDNET by email that it had decided it had better places to devote its engineers' time than the weeks or months it takes to prepare a submission for MLPerf.

There is "the issue of diminishing returns," Graphcore's head of communications, McKenzie, told ZDNET by email, "in the sense of inevitable leap-frogging."

McKenzie told ZDNET that Graphcore "may take part in future MLPerf rounds, but right now it doesn't reflect the areas of AI where we're seeing the most exciting growth." The MLPerf tasks, he said, are merely "table stakes."

Instead, he said, "We really want to focus our energies" on "unlocking new capabilities for AI." "You can expect to see some exciting developments soon," McKenzie said, "for example, in sparsity of models, as well as in GNNs," or graph neural networks.

Also: Nvidia CEO Jensen Huang announces "Hopper" GPU availability, cloud services for large AI language models

In addition to Nvidia's chips dominating, all of the computer systems that achieved top scores were built by Nvidia itself rather than by its partners. That is also a change from past benchmark rounds, when some system makers, such as Dell, would often claim top scores for systems they assembled using Nvidia chips. This time, no system vendor could beat Nvidia at using Nvidia's own chips.

The MLPerf training tests report how long it takes to adjust the "weights," or neural parameters, until a program achieves a required minimum accuracy on a given task, a process referred to as "training" a neural network. A shorter time is better.
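As a rough illustration of that metric, here is a minimal sketch in Python of a "time to train" measurement: it runs a training loop until a target accuracy is reached and reports the wall-clock time. It is not MLPerf's actual harness; the toy model, synthetic data, and target_accuracy threshold are all placeholder assumptions.

```python
# Minimal sketch of a "time to train" measurement, in the spirit of
# MLPerf training (not the actual MLPerf harness). The model, data,
# and target_accuracy below are placeholder assumptions.
import time
import numpy as np

def accuracy(weights, X, y):
    # Simple linear classifier: predict 1 when X @ weights > 0.
    preds = (X @ weights > 0).astype(int)
    return (preds == y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)       # synthetic, linearly separable labels

weights = np.zeros(20)
target_accuracy = 0.95                 # stand-in for MLPerf's quality target
lr = 0.01

start = time.perf_counter()
epochs = 0
while accuracy(weights, X, y) < target_accuracy and epochs < 1000:
    # One pass of perceptron-style updates over the data.
    for xi, yi in zip(X, y):
        pred = int(xi @ weights > 0)
        weights += lr * (yi - pred) * xi
    epochs += 1

elapsed = time.perf_counter() - start
print(f"Reached {accuracy(weights, X, y):.0%} accuracy in {epochs} epochs, "
      f"{elapsed:.3f} s wall-clock")   # shorter time-to-train is better
```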

Although the top scores often grab the headlines, and are played up in the media by vendors, the MLPerf results in fact comprise a wide range of systems and a wide range of scores.

In conversation with ZDNET, MLCommons' executive director, David Kanter, urged less focus on top scores. The value of the benchmark suite for companies evaluating AI hardware, said Kanter, is to have a broad set of systems of various sizes with various kinds of performance.

The hundreds of submissions range from machines with just a couple of ordinary microprocessors up to machines housing thousands of host processors and Nvidia GPUs, the kind of systems that achieve the top scores.

"When it comes to ML training and inference, there are a lot of different needs for different levels of performance," Kanter told ZDNET, "and part of the goal is to provide performance measures that can be used at all those different scales."

"There is as much value in the information about some of the smaller systems as about the large-scale systems," said Kanter. "All of these systems are relevant and equally important, but perhaps to different people."

Also: Benchmark test of AI's performance, MLPerf, continues to gain adherents

As for Graphcore's and Google's absence, Kanter said, "I would love to see more submissions," adding, "I understand that many companies may have to pick and choose how they invest their engineering resources."

"I think you will see these things ebb and flow over different rounds," Kanter said.

Intriguingly, the diminished competition for Nvidia meant that some of the top training scores not only failed to show improvement over prior rounds but actually fell back.

For example, on the ImageNet task, in which a neural network is trained to assign classifying labels to millions of images, the top result this time was an Nvidia submission that took 19 seconds to train. That result trailed June's top scores from Google's "TPU" chips, which came in at 11.5 seconds and 14 seconds.

When asked about repeating a prior submission, Nvidia told ZDNET in email that its focus at the moment was the H100 chip, not the A100. Nvidia also noted the progress made since the first A100 results in 2018. In that round of training, an eight-way Nvidia system took about 40 minutes to train ResNet-50. In this week's results, that time was cut to under thirty minutes.

Slide 11 from Nvidia's MLPerf training and HPC presentation deck.

Nvidia also made the case for its speed advantage versus Intel's Gaudi2 AI chips and the forthcoming Sapphire Rapids Xeon.

Nvidia

When asked about the diminished competition, Nvidia's Salvatore told reporters, "We are doing everything we can to encourage participation in industry-standard benchmarks."

Salvatore said, "It's our hope that as new solutions continue to come to market from others, they will want to show the benefits and goodness of those solutions in industry-standard benchmarks, as opposed to offering one-off performance claims of their own that are difficult to verify."

Salvatore said MLPerf's key elements were its peer-reviewed test practices and code, which keep testing clear and consistent across the hundreds of submissions from the many companies taking part.

At the same time as the MLPerf training scores were released on Wednesday, MLCommons also offered test results for HPC, meaning scientific computing and supercomputers. Those submissions included a mix of systems from Nvidia and its partners, as well as Fujitsu's Fugaku supercomputer, which runs Fujitsu's own chips.

Also: Neural Magic's sparsity, Nvidia's Hopper, and Alibaba's network among firsts in latest MLPerf AI benchmarks

A third competition, called TinyML, measures how well low-power and embedded chips perform inference, the part of machine learning in which a trained neural network makes predictions.

That competition, which Nvidia has so far not taken part in, featured chips and submissions from vendors Silicon Labs, OctoML, Syntiant, and GreenWaves Technologies.

In one of TinyML's tests, image recognition on the CIFAR dataset using the ResNet neural network, GreenWaves, headquartered in Grenoble, France, took the top score for the lowest latency in processing the data and producing a prediction. The company submitted its GAP9 AI accelerator paired with a RISC processor.
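As a rough illustration of that latency metric, the Python sketch below times single-sample predictions from a toy classifier. It is a minimal sketch, not the MLPerf Tiny harness; the model shape, CIFAR-sized input, and repetition count are all placeholder assumptions.

```python
# Minimal sketch of measuring inference latency, the figure of merit in
# TinyML-style tests (not the actual MLPerf Tiny harness). The model
# and input below are placeholder assumptions.
import time
import numpy as np

def tiny_model(x, w1, w2):
    # A toy two-layer network standing in for an embedded classifier.
    hidden = np.maximum(x @ w1, 0.0)     # ReLU
    return hidden @ w2                   # class scores

rng = np.random.default_rng(0)
w1 = rng.normal(size=(32 * 32 * 3, 64))  # CIFAR-sized input, as in the text
w2 = rng.normal(size=(64, 10))
x = rng.normal(size=(32 * 32 * 3,))      # one flattened image

# Warm up once, then time repeated single-sample predictions.
tiny_model(x, w1, w2)
runs = 100
start = time.perf_counter()
for _ in range(runs):
    scores = tiny_model(x, w1, w2)
latency_ms = (time.perf_counter() - start) / runs * 1000
print(f"Predicted class {scores.argmax()}, ~{latency_ms:.3f} ms per inference")
```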

In prepared remarks, GreenWaves said the GAP9 "delivers extremely low energy consumption on moderately complex neural networks such as the MobileNet series, in both classification and detection tasks, but also on recurrent neural networks such as our LSTM-based" models.


