AI at the edge: 5 trends to watch


Edge AI offers opportunities for multiple applications. See what companies are doing now and in the future to integrate it.


AI at the edge is evolving, and its applications are diverse: Autonomous vehicles, art, healthcare, personalized advertising and customer service could all make use of it. Ideally, edge architecture offers low latency because computing happens closer to where the data is generated.

WATCH: Don’t curb your enthusiasm: Trends and challenges in edge computing (TechRepublic)

Astute Analytica forecasts that the edge AI market will grow from $1.4 billion in 2021 to $8 billion in 2027, at a CAGR of 29.8%. The firm believes much of that growth will come from AI for the Internet of Things, portable consumer devices, and the need for faster computing on 5G networks, among other things. These bring both opportunities and caveats, as edge AI’s real-time data is vulnerable to cyberattacks.

Take a look at five trends that will shape the edge AI space in the next year.

Top 5 Edge AI Trends

Disconnect AI from the cloud

One game-changing development today is the ability to run AI processing without a cloud connection. Arm recently released two new chip designs that can push computing power for IoT devices to the edge, skipping the remote server or the cloud entirely. Its current Cortex-M processor handles object recognition, and other capabilities like gesture or voice recognition come into play with the addition of Arm’s Ethos-U55. Google’s Coral, a toolkit for building products with local AI, also promises powerful AI processing “offline”.
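The idea can be sketched in a few lines: a device that prefers on-device inference and only needs the network when no local model exists. This is an illustrative sketch only; the function names and the toy "model" are assumptions, not part of any vendor SDK.

```python
# Hypothetical sketch: an edge device that runs inference locally so it
# keeps working with no cloud connection. All names are illustrative.

def local_object_recognition(frame):
    # Stand-in for an on-device model (e.g. one compiled for an NPU).
    return "person" if sum(frame) > 10 else "background"

def cloud_object_recognition(frame):
    # Stand-in for a remote API call; never reached when a local model exists.
    raise ConnectionError("no network available at the edge")

def classify(frame, has_local_model=True):
    """Prefer on-device inference; fall back to the cloud only if needed."""
    if has_local_model:
        return local_object_recognition(frame)
    return cloud_object_recognition(frame)

print(classify([5, 4, 3]))  # runs entirely on-device
```

The point of the sketch is the dispatch: once a model is deployed locally, the cloud path becomes optional rather than required.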


Machine Learning Operations

NVIDIA predicts that machine learning operations (MLOps) best practices will prove to be a valuable business process for edge AI. Edge AI needs a new production lifecycle for IT — or at least that’s the expectation as MLOps matures. MLOps could help organize data flow and bring it to the edge. A continuous refresh cycle can prove effective as more organizations discover what works best for them when it comes to edge AI.
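That "continuous refresh cycle" boils down to versioned models in a registry and edge nodes that pull whatever version is current. A minimal sketch, assuming hypothetical class names (this is the general pattern, not any specific MLOps product):

```python
# Illustrative MLOps refresh cycle: publish versioned models centrally,
# let edge nodes pull the current version. Names are assumptions.

class ModelRegistry:
    def __init__(self):
        self.versions = {}
        self.current = None

    def publish(self, version, weights):
        self.versions[version] = weights
        self.current = version          # promote the newly published model

class EdgeNode:
    def __init__(self, registry):
        self.registry = registry
        self.version = None

    def refresh(self):
        """Pull the latest model only if the registry has moved on."""
        if self.version != self.registry.current:
            self.version = self.registry.current
        return self.version

registry = ModelRegistry()
node = EdgeNode(registry)
registry.publish("v1", weights=[0.1, 0.2])
print(node.refresh())   # "v1"
registry.publish("v2", weights=[0.3, 0.4])
print(node.refresh())   # "v2"
```

In practice the registry would live in the cloud and the refresh would be an over-the-air update, but the lifecycle — publish, promote, pull — is the same.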

Data scientists working on algorithm development, model architecture selection, and day-to-day deployment and monitoring of ML can benefit from simplified ML methods.

One such simplification is automated machine learning (AutoML), which means “it’s possible for neural networks to design neural networks,” as Google CEO Sundar Pichai put it.

AutoML requires large amounts of memory and processing power, so deploying it at the edge must be weighed against other ongoing processing considerations.

Specialized chips

In order to provide more computing power at the edge, companies need custom chips that deliver sufficient performance. Last year, startup DeepVision made headlines with a $35 million Series B funding round for its video analytics and natural language processing chip for the edge.


“We expect to ship 1.9 billion edge devices with deep learning accelerators by 2025,” said Linley Gwennap, principal analyst at the Linley Group.

DeepVision’s AI accelerator chip is coupled with a software suite that essentially turns AI models into computational graphs. IBM released its first accelerator hardware in 2021, designed to fight fraud.
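Representing a model as a computational graph — nodes for operations, edges for data flow — is the general idea behind compiling AI models for accelerators. A minimal illustrative sketch (not DeepVision's actual software, whose internals are not public):

```python
# Minimal computational graph: operations as nodes, evaluated recursively.
# This illustrates the general compilation idea, not any vendor's toolchain.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op
        self.inputs = list(inputs)

def evaluate(node, feeds):
    if node.op == "input":
        return feeds[node]
    args = [evaluate(i, feeds) for i in node.inputs]
    if node.op == "add":
        return args[0] + args[1]
    if node.op == "mul":
        return args[0] * args[1]
    raise ValueError(f"unknown op: {node.op}")

x, w, b = Node("input"), Node("input"), Node("input")
y = Node("add", [Node("mul", [x, w]), b])   # y = x * w + b

print(evaluate(y, {x: 2.0, w: 3.0, b: 1.0}))  # 7.0
```

Once a model is in this form, a compiler can analyze, optimize and map the graph onto accelerator hardware instead of interpreting framework code directly.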

New use cases and capabilities for computer vision

Computer vision remains one of the most important applications for edge AI. NVIDIA’s partner network for its application framework and suite of developer tools now has over 1,000 members.

A key development in this area is multimodal AI, which draws on multiple data sources to go beyond natural language understanding, analyzing poses and performing inspections and visualizations. This could be useful for AI that interacts seamlessly with humans, such as shopping assistants.

Higher-order image processing algorithms can now classify objects using more granular features. Instead of merely recognizing a car, a model can dig deeper and identify its make and model.

It can be difficult to train a model to recognize which granular features are unique to each object. However, approaches such as fine-grained feature representations, segmentation to extract specific features, algorithms that normalize an object’s pose, and multi-layer convolutional neural networks are all current ways to make this possible.
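The car example above follows a coarse-to-fine pattern: a broad classifier picks the category, then a category-specific classifier examines granular cues. A toy sketch with illustrative, hypothetical feature checks (a real system would use learned visual features, not dictionary lookups):

```python
# Coarse-to-fine classification sketch. The feature checks are toy
# stand-ins for learned visual features; all labels are hypothetical.

def coarse_classifier(features):
    # Broad category first: is this a car at all?
    return "car" if features.get("wheels", 0) == 4 else "other"

def fine_car_classifier(features):
    # A per-category model can focus on subtle, category-specific cues
    # such as badge shape, grille pattern or body proportions.
    if features.get("badge") == "ring-logo":
        return "Hypothetical Motors Model A"
    return "unknown make/model"

def classify(features):
    category = coarse_classifier(features)
    if category == "car":
        return category, fine_car_classifier(features)
    return category, None

print(classify({"wheels": 4, "badge": "ring-logo"}))
```

Splitting the problem this way keeps each fine-grained model small, which matters on edge hardware with tight memory budgets.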


Enterprise use cases, which are still in their infancy, include quality control, live supply chain tracking, identifying an indoor location from a snapshot, and deepfake detection.

Increased growth of AI on 5G

5G and beyond are almost here, with satellite networks and 6G on the horizon for telecom providers. For the rest of us, there is still some time on 4G core networks that support some 5G services before we fully move to next-gen networks.

Where does AI come in? AI on 5G could bring more power and security to AI applications. It could provide some of the low latency that AI needs and open up new applications such as factory automation, tolling and vehicle telemetry, and smart supply chain projects. Mavenir introduced edge AI on 5G for video analytics in November 2021.

There are more emerging trends in edge AI than fit on one list. In particular, its spread could require changes on the human side as well. NVIDIA predicts that edge AI management will become a task for IT, likely handled with Kubernetes. Letting IT manage edge solutions, rather than individual lines of business, can optimize costs, Gartner reported.


