[We Do Future Technology] Become a Semiconductor Expert with SK hynix – AI Semiconductors

May 9, 2023 (updated December 5, 2023)

AI semiconductors are ultra-fast, low-power chips that efficiently process the big data and algorithms behind AI services. In the video above, the exhibition “Today’s Record” shows how humans have recorded information in a variety of ways, including drawing and writing, for thousands of years. Today, people record information in the form of data at an ever-increasing rate. As this large volume of data is used to create new data, we call this the era of big data.

It is now believed that the total amount of data created up until the early 2000s can be generated in a single day. As ICT and AI technologies advance and take on a bigger role in our lives, the amount of data will only continue to grow exponentially. This is because, in addition to recording and processing data, AI technologies learn from existing data and create large amounts of new data. To process this massive volume of data, memory chips and processors need to operate constantly and work together.

In the Von Neumann architecture1 used in most modern computers, the processor and memory communicate through I/O2 pins mounted on a motherboard. This creates a bottleneck when transferring data, and moving data this way consumes about 1,000 times more power than the computing operations themselves. The role of memory solutions in facilitating fast, efficient data transfer is therefore crucial to the proper function of AI semiconductors and AI services.

1Von Neumann architecture: A computing structure that sequentially processes commands through three stages: memory, CPU, and I/O device.

2Input/Output (I/O): An information processing system designed to send and receive data from a computer hardware component, device, or network.
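The bottleneck described above can be sketched with a toy energy model. This is an illustration only: the function names and the per-operation energy figures below are assumptions chosen to echo the article's claim that shuttling data between separate processor and memory chips can cost on the order of 1,000 times more than the arithmetic itself, and to show why moving computation closer to memory helps.

```python
# Toy energy model of the Von Neumann bottleneck (illustrative only).
# The energy figures are assumptions for demonstration, not measured values.

E_COMPUTE_PJ = 1.0      # assumed energy per arithmetic operation (picojoules)
E_TRANSFER_PJ = 1000.0  # assumed energy per operand moved across the I/O pins

def energy_von_neumann(n_ops: int, operands_per_op: int = 2) -> float:
    """Processor and memory are separate chips: every operand
    crosses the I/O pins before it can be computed on."""
    return n_ops * (E_COMPUTE_PJ + operands_per_op * E_TRANSFER_PJ)

def energy_in_memory(n_ops: int) -> float:
    """CIM-style model: computation happens where the data resides,
    so the per-operand transfer cost disappears."""
    return n_ops * E_COMPUTE_PJ

ops = 1_000_000
vn = energy_von_neumann(ops)
cim = energy_in_memory(ops)
print(f"Von Neumann: {vn / 1e6:.0f} uJ, in-memory: {cim / 1e6:.0f} uJ, "
      f"ratio: {vn / cim:.0f}x")
# → Von Neumann: 2001 uJ, in-memory: 1 uJ, ratio: 2001x
```

Under these assumed numbers, data movement dominates total energy, which is the motivation for the computing-in-memory approach discussed later in the article.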

Ultimately, AI semiconductors need to combine the functions of a processor and memory while delivering better performance than the Von Neumann architecture. SAPEON Korea, an AI startup jointly founded by SK hynix, recently developed an AI semiconductor for data centers named after the company. The SAPEON AI processor offers deep learning computation speeds 1.5 times faster than those of conventional GPUs while using 80% less power. In the future, SAPEON will expand to other areas such as autonomous cars and mobile devices. SK hynix’s commitment to developing technologies that support AI is further highlighted by its establishment of the SK ICT Alliance alongside SK Telecom and SK Square. The alliance invests in and develops diverse ICT areas such as semiconductors and AI to secure global competitiveness. Furthermore, SK hynix is also developing a next-generation CIM3 that uses neuromorphic semiconductor4 devices.

3Computing-in-memory (CIM): The next generation of intelligent memory that combines the processor and semiconductor memory on a single chip.

4Neuromorphic semiconductor: A semiconductor for computing that can simultaneously compute and store data like a human brain, reducing power consumption while increasing computational speed.

As AI technology and services continue to develop rapidly, SK hynix’s semiconductors for AI will also evolve to meet market and consumer needs. The company’s chips will be the backbone of key AI services in the big data era and beyond.


<Other articles from this series>
[We Do Future Technology] Become a Semiconductor Expert with SK hynix – HBM

[We Do Future Technology] Become a Semiconductor Expert with SK hynix – UFS