Dedicated Processing: The New Wave of Acceleration

As big data grows exponentially, the CPU's ability to handle complex workloads is waning. Speedata co-founder and CEO Jonathan Friedmann shares the most common workloads run on CPUs and the hardware required to accelerate them.

Enormous volumes of data are created every day, and estimates suggest big data will continue to grow at roughly 23% annually. This trend has penetrated almost every part of the economy: from airlines, banks and insurance companies to government agencies, hospitals and telecommunications companies, organizations have adopted big data analytics to improve business intelligence, drive growth and streamline efficiency.

As big data continues to grow, the tools used to analyze it must scale with it. However, the computer chips currently used to handle large or complex workloads are not up to the task: so many chips are required that the costs outweigh the benefits and computing efficiency suffers.

So, despite all its benefits, the explosion of data presents real challenges to high-tech industries. The key to overcoming them is to strengthen processing power from every angle.

To this end, a wave of specialized, domain-specific accelerators has been developed to offload workloads from the CPU, the traditional workhorse of computer chips. These “alternative” accelerators are designed for specific tasks, trading away the flexibility of general-purpose CPU computing in exchange for better performance on those tasks.

Here is a brief guide to some of the key acceleration areas and their accelerators.

Hardware for AI and ML workloads

Artificial intelligence is changing the way we compute and the way we live. However, early AI analytics had to run on CPUs, chips far better suited to single-threaded tasks and not designed for the parallel multitasking AI requires.

Enter the graphics processing unit (GPU).

GPUs originated in the gaming industry to accelerate graphics workloads. A single GPU combines many specialized cores running in parallel, which makes it well suited to parallel programs with simple control flow. That profile fits graphics workloads, such as computer games, in which millions of pixels must be computed independently and in parallel. Handling those pixels also requires vectorized floating-point multiplication, which GPUs are designed to handle very well.
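To illustrate that data-parallel pattern, here is a minimal CPU-side NumPy sketch (the frame size and the gamma-correction operation are illustrative assumptions, not from the article): the same floating-point operation is applied to every pixel at once, which is exactly the per-element work a GPU spreads across its many cores.

```python
import numpy as np

# Illustrative only: a 1080p RGB frame, roughly 2 million pixels of float32 data.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)

# Per-pixel floating-point work (here, a simple gamma correction) expressed as one
# vectorized operation: every pixel is independent, so the same math can run across
# millions of elements in parallel, which is what GPU cores do natively.
corrected = np.power(frame, 1.0 / 2.2)

print(corrected.shape, corrected.dtype)  # (1080, 1920, 3) float32
```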

The discovery that GPUs could also handle AI workloads opened new horizons for how AI data is managed. Although the applications differ greatly from graphics workloads, AI/ML (machine learning) workloads have broadly similar computing requirements, centering on efficient floating-point matrix multiplication. As AI and ML workloads have proliferated over the past decade, GPUs have undergone significant improvements to keep pace with soaring demand.
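The same requirement is easy to see in a single neural-network layer, which at its core is a floating-point matrix multiplication. The sketch below is only an illustration under assumed layer and batch sizes, again using NumPy purely to show the math that GPUs and AI ASICs are built to accelerate.

```python
import numpy as np

# Illustrative sizes only: a batch of 64 inputs, each with 512 features,
# passed through a dense layer of 256 units.
batch = np.random.rand(64, 512).astype(np.float32)
weights = np.random.rand(512, 256).astype(np.float32)
bias = np.zeros(256, dtype=np.float32)

# One dense layer is essentially one matrix multiply plus a bias and activation.
# This matmul is the operation GPUs, TPUs and similar accelerators optimize for.
activations = np.maximum(batch @ weights + bias, 0.0)  # ReLU

print(activations.shape)  # (64, 256)
```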

Later, companies developed dedicated application-specific integrated circuits (ASICs) to address this critical workload, ushering in a second wave of AI acceleration. ASICs at the forefront of AI acceleration include Google’s Tensor Processing Unit (TPU), used primarily for inference; Graphcore’s Intelligence Processing Unit (IPU); and SambaNova’s Reconfigurable Dataflow Unit (RDU).

Hardware for data processing workloads

A data processing unit (DPU) is essentially a network interface controller (NIC), the hardware that connects a given device to a digital network. These ASICs are explicitly designed to offload networking protocol functions from the CPU, along with higher-layer processing such as encryption and storage-related tasks.

Companies have developed various DPUs, including those from Mellanox (acquired by Nvidia) and Pensando (acquired by AMD). Although their architectures differ and the exact networking protocols each offloads vary, all DPU variants share the same end goal: increasing data processing speed by offloading network protocols from the CPU.

Intel’s DPU is branded the IPU (Infrastructure Processing Unit) but belongs to the same DPU family. IPUs are designed to improve data center efficiency by offloading functions traditionally run on CPUs, such as networking control, storage management and security.

Hardware for big data analytics

Database and analytical data processing is where big data truly delivers actionable insights. As with the workloads above, CPUs have long been the standard for running them. However, as data analytics workloads have continued to grow in size, CPU efficiency on these functions has dropped sharply.

Big data analytics workloads have many distinctive characteristics, including their data structures and formats, data encodings, processing operator types, and intermediate storage, I/O and memory requirements. This opens the door to dedicated ASIC accelerators that optimize for these specific characteristics, providing significant acceleration at lower cost than traditional CPUs. Despite this potential, no chip has emerged in the last decade as a natural successor to the CPU for analytical workloads. As a result, until now, dedicated accelerators have not fully served big data analytics.
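One of those characteristics, columnar data formats, is easy to see in practice: formats such as Parquet store and encode each column separately, so an engine (or an accelerator) can scan only the columns a query needs. The sketch below is an assumed example using the pyarrow library; the table contents and file name are invented for illustration.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Invented sample data: a tiny fact table of the kind analytics queries scan.
table = pa.table({
    "customer_id": [101, 102, 101, 103],
    "region": ["EU", "US", "EU", "APAC"],
    "order_total": [250.0, 99.5, 410.0, 75.25],
})

# Parquet lays out each column contiguously and encodes/compresses it on its own,
# a layout that analytical engines and dedicated accelerators exploit heavily.
pq.write_table(table, "orders.parquet")

# A later scan can read just the columns a particular query touches.
subset = pq.read_table("orders.parquet", columns=["region", "order_total"])
print(subset.to_pydict())
```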

Analytical workloads are typically programmed in Structured Query Language (SQL), but other high-level languages are also very common. Analytics engines for these workloads are plentiful and include open-source engines such as Spark and Presto, as well as managed services such as Databricks, Redshift and BigQuery.
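As a concrete example of the kind of query these engines run, the sketch below uses PySpark’s DataFrame and SQL APIs to express one simple aggregation two ways. The file path, column names and query are illustrative assumptions (reusing the invented Parquet file from the earlier sketch), not anything from the article.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("analytics-sketch").getOrCreate()

# Illustrative input: the invented Parquet file written in the earlier sketch.
orders = spark.read.parquet("orders.parquet")

# The same analytical question expressed as DataFrame calls...
by_region = (
    orders.groupBy("region")
          .agg(F.sum("order_total").alias("revenue"))
          .orderBy(F.desc("revenue"))
)
by_region.show()

# ...and as SQL, the language most analytical workloads are written in.
orders.createOrReplaceTempView("orders")
spark.sql(
    "SELECT region, SUM(order_total) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).show()
```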

Speedata created an analytical processing unit (APU) to accelerate analytical workloads. As data explodes, the insights derived from these new tools have the potential to create incredible value across all industries.

Respect the process

There is no “one size fits all” solution to today’s computing needs.

Instead, the once-ubiquitous CPU is evolving into a “system controller” that delegates complex workloads (data analytics, AI/ML, graphics, video processing, etc.) to specialized devices and accelerators.

In turn, companies are equipping their data centers with processing units strategically matched to their workload requirements. This increased level of customization will not only improve the effectiveness and efficiency of data centers, but also minimize costs, reduce energy consumption and shrink real estate demand.

For analytics, faster processing will open up new opportunities as more insights can be gained from larger amounts of data. With more processing options and new opportunities, the big data era is just beginning.

How do you think dedicated processing can simplify the handling of complex data workloads? Share your thoughts with us on Facebook, Twitter and LinkedIn. We want to know!
