
Heterogeneous Computing for AI at the Edge

Many industries are pursuing artificial intelligence (AI) in the hope of transforming their business through higher levels of automation and machine learning. There are countless examples, including manufacturers experimenting with AI-enabled machine vision for defect classification and AI-enabled optical character recognition to extract data from legacy machines. However, AI is still in its infancy, and the complexity and diversity of hardware and software solutions can be overwhelming. To reach an optimised solution, system architects must first decide whether to run the bulk of their AI algorithms near the sensors (i.e., at the edge) or in the cloud. That decision then shapes the choice of hardware with respect to performance, size, weight, and power (SWaP) requirements. To maximise AI performance at the edge, an optimised solution will often employ a heterogeneous computing platform, meaning one with two or more different types of computing cores (a short sketch after the list below shows how software can discover these cores at runtime), such as:


•General-purpose CPU
•Field-programmable gate array (FPGA)
•Graphics processing unit (GPU)
•Application-specific integrated circuit (ASIC)
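
To make this concrete, the short Python sketch below uses ONNX Runtime (one common inference engine, chosen purely for illustration; the document does not prescribe a specific software stack) to list which execution back-ends, and therefore which core types, are available on a given platform. Which providers appear depends on the hardware and on how the runtime was built.

```python
# A minimal sketch: discover which compute back-ends an AI runtime can
# see on this platform. Assumes the onnxruntime package is installed.
import onnxruntime as ort

# Each execution provider maps workloads onto a different core type, e.g.:
#   CPUExecutionProvider                              -> general-purpose CPU
#   CUDAExecutionProvider / TensorrtExecutionProvider -> NVIDIA GPU
#   OpenVINOExecutionProvider                         -> Intel CPU/iGPU/VPU
for provider in ort.get_available_providers():
    print(provider)
```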


WHY AI AT THE EDGE

The Internet of Things is progressing from simple devices feeding data to the cloud for analysis, to smart devices performing sophisticated inferencing and pattern-matching themselves. Processing AI algorithms locally on a smart device in the field provides many benefits, including:

•Faster response: Minimise delay by eliminating the need to send data to the cloud for AI processing (a rough latency sketch follows this list).

•Enhanced security: Decrease the risk of data tampering by sending less data across networks.

•Improved mobility: Reduce reliance on inconsistent wireless networks (e.g., dead zones, service outages) by performing AI functions locally on the mobile system.

•Lower communications cost: Spend less on network services by transmitting less data.
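
As a rough illustration of the response-time point, the sketch below compares an assumed local inference time against an assumed cloud round trip for a 30 fps camera feed. All timing figures are illustrative assumptions, not measurements; real numbers depend on the model, hardware and network.

```python
# Back-of-the-envelope latency budget for a 30 fps video pipeline.
# All timing figures below are assumptions for illustration only.
CAMERA_FPS = 30
FRAME_BUDGET_MS = 1000 / CAMERA_FPS   # ~33 ms available per frame

EDGE_INFERENCE_MS = 15                # assumed on-device inference time
CLOUD_RTT_MS = 80                     # assumed WAN round-trip time
CLOUD_INFERENCE_MS = 5                # assumed server-side inference time

edge_total = EDGE_INFERENCE_MS
cloud_total = CLOUD_RTT_MS + CLOUD_INFERENCE_MS

print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms")
print(f"edge path:  {edge_total} ms ({'meets' if edge_total <= FRAME_BUDGET_MS else 'misses'} budget)")
print(f"cloud path: {cloud_total} ms ({'meets' if cloud_total <= FRAME_BUDGET_MS else 'misses'} budget)")
```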


AI DESIGN CHALLENGES

The field of AI is incredibly diverse. System architects are applying AI workloads to a wide range of inputs, such as video, text, voice, images and sensor data, with the goal of improving a system's decision-making. They must choose from a range of decision-making processes that implement various deep learning frameworks and neural networks (e.g., recurrent and convolutional) with different numbers of layers. Particular combinations of neural networks and frameworks running on specialised computing cores are ideal for specific tasks such as image processing, character recognition and object classification. Many AI workloads require large amounts of memory, parallel computing, and low-precision computation. The challenge for system architects is to define an optimised AI platform that cost-effectively delivers these computing resources while satisfying their speed and accuracy requirements. For platforms deployed at the edge, system architects must address additional requirements, such as environmental hardening and stringent SWaP constraints.
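
Low-precision computation is commonly reached through quantisation. The sketch below shows one way to do this with ONNX Runtime's dynamic INT8 quantisation tooling, again as an illustrative example; "model.onnx" is a placeholder path, and the accuracy impact depends on the model.

```python
# A minimal sketch: convert an FP32 model to INT8 weights so it can run
# with low-precision arithmetic. Paths are placeholders.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",        # original FP32 model (placeholder)
    model_output="model.int8.onnx",  # quantised output model
    weight_type=QuantType.QInt8,     # store weights as 8-bit integers
)
```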


AI DESIGN SOLUTIONS

When designing an AI platform, system architects should consider using a heterogeneous computing architecture containing multiple core types, including CPU, GPU, FPGA and ASIC. The goal is to run each AI workload on the best-suited core, resulting in faster computation and lower power consumption for a particular function compared to a homogeneous platform. Although developing a heterogeneous platform is more complex than developing a homogeneous one, ADLINK simplifies the design process by offering heterogeneous platforms that provide a mix of core types. System architects can configure ADLINK platforms according to their AI computing needs, reduce their development effort and benefit from a scalable solution.
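
In software terms, running a workload on the best-suited core can be as simple as giving the inference runtime an ordered list of preferred back-ends. The sketch below again uses ONNX Runtime as an illustrative example: graph nodes are assigned to the first listed provider that supports them, falling back to the CPU. The model path and the float32 input type are assumptions for the sketch.

```python
import numpy as np
import onnxruntime as ort

# Preference order: try the GPU/accelerator back-ends first, then CPU.
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

# "model.onnx" is a placeholder path; replace with a real model.
session = ort.InferenceSession("model.onnx", providers=providers)

# Run one inference with dummy data shaped like the model's first input
# (assumes a float32 input; dynamic dimensions are resolved to 1).
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})

print("providers in use:", session.get_providers())
```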


COMPARISON OF CORE TYPES USED IN AI APPLICATIONS


AI APPLICATION EXAMPLES

ADLINK is committed to helping system architects bring AI to the edge on heterogeneous computing platforms.

