
SK hynix at the 2024 OCP Global Summit: Leading the Future of AI & Data Center Memory Solutions

October 17, 2024


SK hynix is showcasing its leading AI and data center memory products at the 2024 Open Compute Project (OCP) Global Summit, held October 15–17 in San Jose, California. The annual summit brings together industry leaders to discuss advancements in open source hardware and data center technologies. This year’s theme, “From Ideas to Impact,” reflects the goal of turning theoretical concepts into real-world technologies.

SK hynix’s booth at the 2024 OCP Global Summit


In addition to presenting its advanced memory products at the summit, SK hynix is also strengthening key industry partnerships and sharing its AI memory expertise through insightful presentations. This year, the company is holding eight sessions—up from five in 2023—on topics including HBM¹ and CMS².

¹High Bandwidth Memory (HBM): A high-value, high-performance product that revolutionizes data processing speeds by connecting multiple DRAM chips with through-silicon vias (TSVs).
²Compute Memory Solution (CMS): A memory solution optimized for AI, HPC, and data centers, enhancing speed, efficiency, and scalability for compute-heavy workloads.

Displays & Demonstrations at the Booth: Transforming AI & Data Centers

Visitors to SK hynix’s booth can see a range of the company’s cutting-edge AI and data center solutions and view demonstrations running on high-performance customer systems.

SK hynix’s groundbreaking AI and data center products on display


Among the products being demonstrated is CMM-Ax³, formerly known as CMS 2.0, which is being shown under its new name for the first time. For the demonstration, SK hynix is highlighting the product’s role as next-generation compute memory for AI infrastructure, particularly for multi-modal applications. Meanwhile, a heterogeneous memory management solution combining CMM (CXL Memory Module)-DDR5⁴ and HMSDK⁵ is also being demonstrated, along with Niagara 2.0⁶, which illustrates the efficient utilization of pooled memory resources. Another demonstration highlights the AI storage capabilities of the computational storage drive (CSD) for large-scale training systems.

³CXL Memory Module-Ax (CMM-Ax): A high-performance memory module optimized for computational workloads, improving AI and data center efficiency.
⁴CXL Memory Module-DDR5 (CMM-DDR5): A next-gen DDR5 memory module based on CXL that enhances system bandwidth, speed, and performance for AI, cloud, and high-performance computing.
⁵Heterogeneous Memory Software Development Kit (HMSDK): A software development kit specially designed to support CXL memory, a next-generation memory system based on the CXL open industry standard.
⁶Niagara 2.0: An integrated HW/SW solution for pooled memory that allows multiple hosts (CPUs and GPUs) to efficiently share large memory pools, minimizing unused or underutilized memory known as stranded memory. By supporting optimal data placement through hot and cold data detection, it can significantly improve system performance.
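
To make the hot/cold placement idea concrete, here is a minimal Python sketch of the concept, assuming a simple access-count threshold. It is an illustration only, not SK hynix’s Niagara 2.0 or HMSDK code; the threshold, tier names, and page IDs are all hypothetical.

```python
from collections import Counter

# Minimal sketch of hot/cold page placement for tiered memory. Purely
# illustrative: the threshold, tier names, and page IDs are hypothetical,
# and this is not SK hynix's Niagara 2.0 or HMSDK implementation.

HOT_THRESHOLD = 4  # accesses per sampling window (arbitrary value)

def classify_pages(access_log):
    """Split pages into hot and cold sets by access count in the window."""
    counts = Counter(access_log)
    hot = {page for page, n in counts.items() if n >= HOT_THRESHOLD}
    cold = set(access_log) - hot
    return hot, cold

def place_pages(hot, cold):
    """Keep hot pages in local DRAM; demote cold pages to the CXL pool."""
    placement = {page: "local_dram" for page in hot}
    placement.update({page: "cxl_pool" for page in cold})
    return placement

# One sampling window of page accesses (page IDs)
log = [1, 1, 1, 1, 2, 3, 1, 2, 4, 1]
hot, cold = classify_pages(log)
print(place_pages(hot, cold))  # page 1 stays local; 2, 3, and 4 are pooled
```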

SK hynix is showcasing CXL-based solutions including CMM-Ax and Niagara 2.0


In addition, a live demonstration of the GDDR6-AiM⁷-based accelerator card AiMX is being held using Meta’s latest large language model (LLM), Llama3 70B, which has 70 billion parameters. The demonstration shows AiMX’s capability to tackle industry challenges. For example, data centers’ LLM services improve GPU efficiency by processing requests from multiple users simultaneously. However, as the length of the generated token sequence increases, the computation required by the attention layer⁸ grows, lowering GPU efficiency. Through the demonstration, AiMX shows it can overcome this issue by handling large amounts of data while offering greater efficiency and lower power consumption than the latest accelerators.

⁷Accelerator-in-Memory (AiM): SK hynix’s PIM semiconductor product. PIM (processing in memory) is a next-generation technology that adds computational functions to memory semiconductors, solving the data transfer bottleneck in AI and big data processing.
⁸Attention layer: A mechanism that allows a model to determine the importance of input data and focus on the most relevant information.
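
The scaling issue is easy to see with rough numbers: during decoding, each new token attends over all previously generated tokens, so attention work per step grows with sequence length while other per-token costs stay flat. The short Python sketch below illustrates this with hypothetical figures (the model width and FLOP formulas are illustrative approximations, not measurements of Llama3 70B or AiMX):

```python
# Illustrative decode-time cost model for an LLM. Each generated token
# attends over the whole key/value cache, so per-token attention FLOPs grow
# linearly with sequence length while the feed-forward (FFN) cost is flat.
# All numbers are rough assumptions, not Llama3 70B or AiMX measurements.

D_MODEL = 8192                      # illustrative model width
FFN_FLOPS = 16 * D_MODEL ** 2       # two d -> 4d -> d matmuls per token

def attention_flops(seq_len):
    # QK^T scores plus the weighted value sum over seq_len cached tokens
    return 4 * seq_len * D_MODEL

for seq_len in (1_000, 4_000, 16_000):
    att = attention_flops(seq_len)
    share = att / (att + FFN_FLOPS)
    print(f"seq_len={seq_len:>6}: attention share of per-token FLOPs ~ {share:.0%}")
```

Even in this toy model, attention goes from a small fraction of per-token work at short lengths to roughly a third at 16K tokens, which is the growth the AiMX demonstration targets by offloading that layer to memory.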

The AiMX card is being demonstrated with Meta’s latest LLM, Llama3 70B


The company is also displaying a range of its industry-leading AI memory and data center products. HBM3E is shown alongside the NVIDIA H200 Tensor Core GPU and NVIDIA GB200 Grace Blackwell Superchip. The booth also features SK hynix’s DDR5 RDIMM and MCR DIMM server DRAM, as well as enterprise SSDs (eSSDs), running in Supermicro servers. The DDR5 products on display include the world’s first DDR5 DRAM built using the 1c node, the sixth generation of the 10nm process technology. Designed to meet the growing computational and energy demands of AI-driven data centers, the 16 Gb 1c DDR5 offers improved performance and energy efficiency over the previous generation. Additionally, SK hynix is unveiling the 1bnm 96 GB DDR5, which can reach speeds of up to 7,200 megabits per second (Mbps), making it ideal for supporting massive data flows in data centers.
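
As a rough sanity check on what a 7,200 Mbps per-pin rate implies at module level, assuming a standard 64-bit DDR5 data path (an assumption for illustration, not an official SK hynix module specification):

```python
# Back-of-the-envelope module bandwidth from the per-pin data rate.
# Assumes a standard 64-bit-wide DDR5 module; illustrative only.

data_rate_mtps = 7200          # per-pin transfers per second, in millions
module_width_bits = 64         # total data width of a typical RDIMM

bytes_per_second = data_rate_mtps * 1e6 * module_width_bits / 8
print(f"Peak module bandwidth ~ {bytes_per_second / 1e9:.1f} GB/s")  # ~57.6 GB/s
```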

HBM3E is presented alongside NVIDIA’s H200 and GB200, while DDR5 RDIMM and MCR DIMM products are shown with Supermicro servers

In the SSD section, the Gen5 eSSDs PS1010 and PS1030 and the Gen4 PE9010 are among those on display. Offering ultra-fast read/write speeds, these SSD solutions are vital for accelerating AI training and inference in large-scale environments. Through all these innovations, SK hynix is continuing to lead the way in AI memory and storage solutions, fueling the future of AI and transforming data center operations.

Gen5 eSSDs showcased at the summit


Expanded Presentation Sessions: Sharing AI Memory Expertise

Reflecting SK hynix’s growing influence as the leading AI memory provider, the company is holding eight insightful sessions—three more than in 2023—on the future of AI memory solutions.

SK hynix is sharing its industry expertise through eight presentations


  • Youngpyo Joo, head of Software Solution: Joo discussed how AI-driven technological advancements are reshaping computing architecture, focusing on innovations such as CXL®⁹-based memory and computational memory solutions for AI workloads.
  • Technical Leader Jungmin Choi, Composable System: In his talk, Choi explored how memory disaggregation can improve system efficiency.
  • Technical Leader Honggyu Kim, System SW: Kim explained how SK hynix’s HMSDK solution optimizes memory pooling and sharing in AI workloads, improving performance and reducing network congestion.
  • Technical Leader Younsoo Kim, DRAM Technology Planning: Kim addressed the future of HBM and its pivotal role in AI applications in his presentation.
  • Euicheol Lim, head of Solution Advanced Technology (Solution AT): Lim discussed the role of SK hynix’s AiMX card in reducing operational costs in LLM processing in data centers and on-device services.
  • Technical Leader Kevin Tang and Team Leader Jongryool Kim, AI System Infra: Tang and Kim delivered a joint presentation about improving LLM training efficiency and scalability through checkpointing¹⁰.
  • Hoshik Kim, head of SOLAB: As part of a panel discussion, Kim discussed the issues, solutions, and visions that near-data computing¹¹ must address before it can be applied to real systems.
  • Technical Leader Myoungseo Kim, AI Open Innovation: Kim will deliver the final talk with Vikrant Soman, Solution Architect at Uber, and Jackrabbit Labs CTO Grant Mackey on CMS composable memory architecture and the need for scalable, interoperable memory solutions. The presentation will reveal the results of SK hynix’s CXL pooled memory prototype integration with Jackrabbit Labs’ open source cluster orchestration software to address the pain point of stranded memory in Uber’s data center Kubernetes cluster environment.

⁹Compute Express Link (CXL): A PCIe-based next-generation interconnect protocol on which high-performance computing systems are based.
¹⁰Checkpoint: A technology that stores model parameters and related key data at specific points during training, enabling the process to restart from the last saved point in case of a system failure (see the sketch below).
¹¹Near-data computing: A computing method aimed at addressing the bottleneck between memory and processor, a limitation of the von Neumann architecture. It delivers only refined data processed in memory to the processor, minimizing data movement within the system and improving performance and cost efficiency.
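
For readers unfamiliar with the mechanism, the following minimal Python sketch illustrates the general checkpointing pattern described in footnote 10. It is a generic illustration with hypothetical names (file path, interval, and the stand-in training update), not the training stack discussed in the session.

```python
import os
import pickle

# Generic checkpointing sketch: persist training state periodically so a
# failed run can resume from the last saved step. The file name, interval,
# and "training" update are hypothetical placeholders.

CKPT_PATH = "checkpoint.pkl"

def save_checkpoint(step, params):
    with open(CKPT_PATH, "wb") as f:
        pickle.dump({"step": step, "params": params}, f)

def load_checkpoint():
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "params": {"w": 0.0}}  # no checkpoint: fresh start

state = load_checkpoint()
for step in range(state["step"], 100):
    state["params"]["w"] += 0.01            # stand-in for a training update
    if step % 10 == 0:                      # save every 10 steps
        save_checkpoint(step + 1, state["params"])  # resume from next step
```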

Shaping the Future: Advancing AI Memory & Data Center Innovation

At the 2024 OCP Global Summit, SK hynix is reinforcing its industry leadership by not only showcasing its advanced AI memory and data center solutions but also sharing its expertise. Looking ahead, the company is dedicated to pioneering breakthroughs that will define the next generation of AI and data center technologies.