
SK hynix Presents Upgraded AiMX Solution at AI Hardware & Edge AI Summit 2024

September 13, 2024

A glimpse of SK hynix’s booth at the AI Hardware & Edge AI Summit 2024

SK hynix unveiled an enhanced Accelerator-in-Memory based Accelerator (AiMX) card at the AI Hardware & Edge AI Summit 2024 held September 9–12 in San Jose, California. Organized annually by Kisaco Research, the summit brings together representatives from the AI and machine learning ecosystem to share industry breakthroughs and developments. This year’s event focused on exploring cost and energy efficiency across the entire technology stack.

Marking its fourth appearance at the summit, SK hynix highlighted how its AiM¹ products can boost AI performance across data centers and edge devices².

¹ Accelerator in Memory (AiM): SK hynix’s PIM semiconductor product name, which includes GDDR6-AiM.
² Edge device: Hardware that controls the flow of data at the boundary between two networks. While they fulfill numerous roles, edge devices essentially serve as the entry or exit point to a network.

Attendees gather to learn more about the upgraded AiMX card


Booth Highlights: Meet the Upgraded AiMX

In the AI era, high-performance memory products are vital for the smooth operation of LLMs³. However, as these LLMs are trained on increasingly larger datasets and continue to expand, there is a growing need for more efficient solutions. SK hynix addresses this demand with its PIM⁴ product AiMX, an AI accelerator card that combines multiple GDDR6-AiMs to provide high bandwidth and outstanding energy efficiency.

³ Large language model (LLM): An advanced AI system trained on extensive datasets to understand and generate human-like language, enabling applications such as natural language processing and translation.
⁴ Processing-in-Memory (PIM): A next-generation technology that embeds processing capabilities within memory, minimizing data transfer between the processor and memory. This boosts efficiency and speed, especially for data-intensive tasks like LLMs, where quick data access and processing are essential.

The 32 GB AiMX prototype card was shown publicly for the first time at the event


At the AI Hardware & Edge AI Summit 2024, SK hynix presented its updated 32 GB AiMX prototype, which offers double the capacity of the original card featured at last year’s event. To highlight the new AiMX’s advanced processing capabilities in a multi-batch⁵ environment, SK hynix demonstrated the prototype card with Llama 3⁶ 70B, an open-source LLM. In particular, the demonstration underlined AiMX’s ability to serve as a highly effective attention⁷ accelerator in data centers.

⁵ Multi-batch: A processing method in which the system groups multiple tasks (batches) together and processes them at once.
⁶ Llama 3: An open-source LLM developed by Meta, featuring pretrained and instruction-fine-tuned language models.
⁷ Attention: A mechanism that gives LLMs context about the text they process, reducing the chance of misunderstandings and allowing the model to generate more accurate and contextually relevant outputs.

The upgraded AiMX was demonstrated with the Llama 3 70B model to highlight its processing capabilities


AiMX addresses the cost, performance, and power consumption challenges associated with LLMs not only in data centers but also in edge devices and on-device AI applications. For example, when applied to mobile on-device AI, AiMX improves LLM speed three-fold compared to mobile DRAM while maintaining the same power consumption.

Featured Presentation: Accelerating LLM Services from Data Centers to Edge Devices

Euicheol Lim presenting on how the AiMX system accelerates LLM services


On the final day of the summit, SK hynix gave a presentation detailing how AiMX is an optimal solution for accelerating LLM services in data centers and edge devices. Euicheol Lim, research fellow and head of the Solution Advanced Technology team, shared the company’s plans to develop AiM products for on-device AI based on mobile DRAM and revealed the future vision for AiM. In closing, Lim emphasized the importance of close collaboration with companies involved in developing and managing data centers and edge systems to further advance AiMX products.

Looking Ahead: SK hynix’s Vision for AiMX in the AI Era

The AI Hardware & Edge AI Summit 2024 provided a platform for SK hynix to demonstrate AiMX’s applications in LLMs across data centers and edge devices. As a low-power, high-speed memory solution able to handle large amounts of data, AiMX is set to play a key role in the advancement of LLMs and AI applications.