Opinion

The prospect of Processing In Memory (PIM) in memory systems for AI applications

October 19, 2021

The growth in AI applications is causing the semiconductor industry to rethink memory architecture. Most computer systems today follow what’s called the Von Neumann architecture: a small set of frequently used data is stored in caches that the CPU can access quickly, while larger data sets are kept in a separate main memory device.

This architecture has served the industry well for years, but AI is putting new demands on systems for faster analysis of higher volumes of data. Under the Von Neumann architecture, bottlenecks arise when the CPU has to frequently access large amounts of data in the separate storage device.

In a recent EE Times column, Dae-han Kwon, project leader of custom design at SK hynix, explains why new memory architectures are being explored for computer DRAM memory. Processing In Memory (PIM) is one architecture that could provide a powerful solution. Data can be processed within the storage device and results delivered to the CPU, creating a more efficient system for handling large amounts of data.
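The data-movement saving behind PIM can be illustrated with a toy model. The sketch below is purely illustrative (the function names and the transfer-counting scheme are assumptions for this example, not an SK hynix interface): in the conventional pattern every element crosses the memory bus to the CPU, while in the PIM-style pattern the reduction happens inside the memory device and only the result is returned.

```python
# Toy model contrasting a Von Neumann fetch-to-CPU access pattern
# with a PIM-style in-memory reduction. Illustrative only; not an
# actual PIM programming interface.

def cpu_side_sum(memory):
    """Von Neumann style: every word crosses the bus to the CPU."""
    transferred = 0
    total = 0
    for word in memory:        # each access counts as one bus transfer
        transferred += 1
        total += word
    return total, transferred

def pim_style_sum(memory):
    """PIM style: the reduction runs inside the memory device,
    so only the single result crosses the bus to the CPU."""
    total = sum(memory)        # computed "in place" by the memory module
    transferred = 1            # only the final result is sent back
    return total, transferred

if __name__ == "__main__":
    data = list(range(1_000_000))
    cpu_total, cpu_moves = cpu_side_sum(data)
    pim_total, pim_moves = pim_style_sum(data)
    assert cpu_total == pim_total
    print(f"CPU-side sum moved {cpu_moves:,} words; PIM-style sum moved {pim_moves}")
```

Both approaches produce the same result, but the in-memory version moves a single value across the bus instead of a million, which is the efficiency (and energy) argument the column makes.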

SK hynix is already exploring PIM DRAM as a way to increase the speed of data analysis and lower energy consumption by reducing back-and-forth communication. The result could be a powerful new model for memory architecture that enables more AI applications.

For more on this topic, please read the full column at this link – The prospect of Processing In Memory (PIM) in memory systems for AI applications.


By Dae-han Kwon, Ph.D.

PL (Project Leader) of Custom Design at SK hynix Inc.