
[Rulebreakers’ Revolutions] Innovative Design Scheme Helps HBM3E Reach New Heights

September 30, 2024

Challenging convention, defying limits, and aiming for the skies, rulebreakers remake the rules in their quest to come up with groundbreaking solutions to problems. Following on from SK hynix’s “Who Are the Rulebreakers?” brand film, this series showcases the company’s various “rulebreaking” innovations that have reshaped technology and redefined industry standards. This third episode covers the application of the 6-phase RDQS design scheme to HBM3E.

 

“Design is not just what it looks and feels like. Design is how it works.” These words of Apple Co-Founder Steve Jobs emphasize the crucial role design plays in the functionality of products. This is particularly true in the semiconductor industry, where design involves defining the chip’s architecture, purpose, and circuit layout to ultimately enable its smooth performance.

The chip design scheme can also play a key role in overcoming challenges. When faced with scaling and data transmission limitations while developing HBM3E1, SK hynix introduced a pioneering 6-phase read-data-strobe (RDQS) design scheme. This world-first application enabled HBM3E to make huge strides in performance from its predecessor while maintaining the same packaging size.

This episode of Rulebreakers’ Revolutions reveals how SK hynix made the groundbreaking leap from the previous 4-phase to the 6-phase RDQS scheme, allowing the company to develop the world’s best-performing HBM3E with enhanced capacity and increased reliability.

1HBM3E: The fifth-generation and latest High Bandwidth Memory (HBM) product. HBM is a high-value, high-performance product that revolutionizes data processing speeds by connecting multiple DRAM chips with through-silicon via (TSV).


Overcoming Scaling & Data Transmission Limitations in HBM3E

While there are challenges when developing any semiconductor product, the development and manufacturing process for HBM solutions comes with its own set of specific issues. For example, there are difficulties when mass-producing HBM due to its use of through-silicon via (TSV) for chip stacking. When developing HBM3E, SK hynix found that TSV presented obstacles to its goal of maintaining the same packaging size as the previous generation HBM3 while increasing capacity.

Originally applied in SK hynix’s first-generation HBM in 2013, TSV involves drilling microscopic holes in a DRAM chip to connect the electrodes that vertically penetrate the holes of the chip’s upper and lower layers. Due to these holes, TSV signals occupy a significant amount of space in peripheral circuits2. As these circuits typically account for 20-30% of the total area in a memory product, the large number of TSV signals in HBM products hinders scaling efforts—resulting in a need for TSV area optimization.

SK hynix also aimed to advance HBM3E’s data transmission characteristics during development to ensure the product could meet the heightened demands of the AI era. To achieve this, the company focused on the CAS-to-CAS delay for reads (tCCDR) operation—the minimum time delay required for memory to read data consecutively from cells in different ranks3. In particular, SK hynix aimed to secure an increased tCCDR margin. This margin allows for deviations in timing to ensure that data can be transmitted accurately, ultimately improving system reliability.

2Peripheral circuit: A logic circuit that is responsible for selecting and controlling the cells that store data.
3Rank: A collection of basic data transmission units sent to the CPU from the DRAM module. A rank typically refers to 64 bytes of data to be transferred to the CPU as a bundle.

For HBM3E, the issue was that it becomes increasingly difficult to secure the minimum margin required for reliable data transmission during high-speed operation. This means that conflicts can occur when reading data across ranks at high speed, leading to potential read failures and a reduction in operational reliability.
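As a hedged illustration only, the short Python sketch below models these two points as a toy timing budget: the tCCDR margin is the slack left after the minimum rank-to-rank read delay is met, and at higher speeds that slack shrinks until timing deviations can no longer be absorbed. Every number in the sketch is a hypothetical placeholder, not an HBM3E specification.

def tccdr_margin_ps(scheduled_gap_ps: float, required_tccdr_ps: float) -> float:
    """Slack left once the minimum rank-to-rank read delay is satisfied."""
    return scheduled_gap_ps - required_tccdr_ps


def read_is_reliable(margin_ps: float, worst_case_skew_ps: float) -> bool:
    """Cross-rank reads stay reliable only while timing deviations fit in the margin."""
    return worst_case_skew_ps <= margin_ps


REQUIRED_TCCDR_PS = 600.0    # hypothetical minimum delay required by the device
WORST_CASE_SKEW_PS = 150.0   # hypothetical worst-case timing deviation across ranks

for label, gap_ps in (("moderate speed", 900.0), ("high speed", 650.0)):
    margin = tccdr_margin_ps(gap_ps, REQUIRED_TCCDR_PS)
    print(label, margin, read_is_reliable(margin, WORST_CASE_SKEW_PS))
# moderate speed 300.0 True   -> deviations are absorbed
# high speed 50.0 False       -> margin too small, a read can fail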

Tasked with reducing the peripheral circuit size and improving the tCCDR margin, SK hynix turned its attention to developing a pioneering new design scheme which would open the door to the next-generation HBM3E.

A New Design: Leaping Forward With the World’s First 6-Phase RDQS Scheme

SK hynix overcame scaling and data transmission limitations in HBM3E by introducing the 6-phase RDQS scheme


 

Although SK hynix implemented several new design schemes and features in HBM3E, the world-first application of the 6-phase RDQS scheme was particularly notable. SK hynix had used the 4-phase RDQS scheme for HBM3, but the company saw an opportunity to push technical boundaries once again for HBM3E. This would ultimately enable the company to expand the memory capacity and improve the reliability of HBM3E.

Before looking at the advancements of the 6-phase RDQS scheme, it is prudent to consider the scheme’s role in HBM. The RDQS scheme is a circuit that produces the RDQS signals required for transmitting data from the HBM’s core dies, which contain the cells, to the base die, which contains the peripheral circuit. Overall, the RDQS scheme aims to minimize data skew4 from different ranks to avoid read failures.

4Data skew: The misalignment in arrival timing between data signals, such as those read from different ranks. If the skew exceeds the available timing margin, data can be read incorrectly.


Schematic diagrams showing the structural differences between the 4-phase and 6-phase RDQS schemes (upper diagrams) and a comparison of the schemes’ tCCDR margin (lower diagrams) (Source: Jinhyung Lee et al., High-Density Memories and High-Speed Interfaces, ISSCC 2024)

 

So how did the introduction of the 6-phase RDQS scheme reduce the size of the peripheral circuit? In the 4-phase RDQS scheme, multiple sets of FIFO-out data strobes5 (FDQS) and RDQS TSVs are required, which inevitably increases the peripheral area. The 6-phase RDQS scheme can shrink the peripheral circuit by cutting the number of FDQS and RDQS TSVs in half. With fewer TSV signals traveling back and forth between ranks, the peripheral circuit height can also be reduced.

5FIFO: A data structure that holds elements in the order they are received and provides access to them on a first-in, first-out basis.
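As a back-of-the-envelope illustration of that area argument, the Python snippet below counts strobe-related TSV signals under both schemes. The only figure taken from the article is the halving itself; the absolute counts are hypothetical placeholders.

def strobe_tsv_signals(fdqs_tsvs: int, rdqs_tsvs: int) -> int:
    """Total FDQS and RDQS TSV signals routed through the peripheral area."""
    return fdqs_tsvs + rdqs_tsvs


# Hypothetical counts: the article states only that the 6-phase scheme halves them.
four_phase = strobe_tsv_signals(fdqs_tsvs=8, rdqs_tsvs=8)
six_phase = strobe_tsv_signals(fdqs_tsvs=4, rdqs_tsvs=4)

print(four_phase, six_phase)  # 16 -> 8: fewer signals to route, so a smaller peripheral area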

Furthermore, the 6-phase RDQS scheme improved the tCCDR margin during the high-speed operation of HBM3E. Because the new scheme leaves more space between signals, there is a larger margin for data transmission across ranks, or tCCDR operation. By securing this larger margin, the system becomes more tolerant of deviations in timing, reducing the likelihood of read failures and therefore increasing system reliability.
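One way to picture this gain is the toy Python model below: with more RDQS phases, any individual strobe phase is reused less often, leaving a wider gap before it must carry the next burst of data. The unit interval and occupancy figures are assumptions chosen for illustration, not values from the article or the ISSCC paper.

def reuse_gap_ps(num_phases: int, unit_interval_ps: float) -> float:
    """Time between two consecutive uses of the same strobe phase."""
    return num_phases * unit_interval_ps


UNIT_INTERVAL_PS = 125.0   # hypothetical unit interval at a high data rate
OCCUPIED_PS = 450.0        # hypothetical time one phase is tied up by a single read

for phases in (4, 6):
    gap = reuse_gap_ps(phases, UNIT_INTERVAL_PS)
    print(f"{phases}-phase: reuse gap {gap:.0f} ps, leftover margin {gap - OCCUPIED_PS:+.0f} ps")
# 4-phase: reuse gap 500 ps, leftover margin +50 ps  (tight at high speed)
# 6-phase: reuse gap 750 ps, leftover margin +300 ps (comfortable)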

Power in a Small Package: 6-phase RDQS Scheme Unlocks HBM3E

The application of the 6-phase RDQS scheme enabled HBM3E to maintain the same packaging size as HBM3 while offering improved density


 

The application of the 6-phase RDQS scheme contributed to the significant advancements in HBM3E’s key characteristics. First, the scheme reduced the peripheral circuit height in the base die by 31%. Crucially, this reduction helped ensure that HBM3E has the same packaging size as HBM3 while offering an increased per-die capacity from 16 Gb to 24 Gb.

In addition, the increased tCCDR margin stabilized the data transmission characteristics, which contributed to the enhancement in HBM3E’s data processing speed compared to its predecessor. While the 8-layer HBM3 can process up to 819 GB of data per second, the 8-layer HBM3E offers industry-leading data processing speeds of 1.18 terabytes (TB) per second. This rapid processing speed coupled with its vast capacity ensures HBM3E is optimized to meet the requirements of today’s AI applications.
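For reference, both bandwidth figures can be reproduced with simple arithmetic, as in the Python check below. The 1,024-bit interface width and the per-pin data rates of 6.4 Gbps (HBM3) and 9.2 Gbps (HBM3E) are commonly cited specifications rather than figures quoted in this article.

def hbm_bandwidth_gb_per_s(interface_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given interface width and per-pin data rate."""
    return interface_bits * gbps_per_pin / 8  # bits per second -> bytes per second


print(hbm_bandwidth_gb_per_s(1024, 6.4))  # 819.2  -> ~819 GB/s for 8-layer HBM3
print(hbm_bandwidth_gb_per_s(1024, 9.2))  # 1177.6 -> ~1.18 TB/s for 8-layer HBM3E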

Rulebreaker Interview: Youngjun Ku, Leading HBM Design

Technical Leader (TL) Youngjun Ku of Leading HBM Design

To find out more about the rulebreaking approach which led to the application of the 6-phase RDQS scheme to HBM3E, the SK hynix newsroom interviewed Technical Leader (TL) Youngjun Ku of Leading HBM Design. Ku discussed the significance of the new design scheme, as well as the challenges faced during the application process.

What were the main challenges when applying the 6-phase RDQS scheme to HBM3E?

“The 6-phase RDQS scheme significantly increases the overall design difficulty due to its complexity.

“The most straightforward way to address the issues with HBM3E was to improve transistor performance. However, when transistor performance seemed to reach its limits with no room for further improvement, we resolved the issues by turning our attention to the design scheme instead, which was particularly challenging when working with HBM.

“As HBM products have a short gap between generations while realizing huge leaps in performance, the circuits require multiple changes. However, we tackled any problems through collaboration with numerous departments within the DRAM Design division.”

Technical Leader (TL) Youngjun Ku of Leading HBM Design

 

Why is the 6-phase RDQS scheme significant for HBM?

“To meet the demands of the AI era, HBM products need to increase their data processing speed. This requires stable data transmission by securing HBM’s timing margin. Therefore, schemes such as 6-phase RDQS, which help secure data transmission characteristics through TSV between the base and core die, will be essential in the age of AI.

“We believe that collaboration with our customers and the foundry industry will become even more important in the future as HBM products advance with the development of HBM4 and HBM4E, which will double the data bandwidth and introduce customized requirements. To maintain our leadership, we need to design products appropriate to our customers’ needs.”

 

How did your team’s rulebreaking approach ensure the development of the 6-phase RDQS scheme?

“When designing HBM products, there are a lot of challenges. The members of Leading HBM Design were constantly brainstorming to find solutions to these problems. Rather than fearing changes to the circuit design, we achieved great results by trusting in our ability to overcome challenges.”

 

<Other articles from this series>

[Rulebreakers’ Revolutions] How MR-MUF’s Heat Control Breakthrough Elevated HBM to New Heights

[Rulebreakers’ Revolutions] How SK hynix Broke Barriers in Mobile DRAM Scaling With World-First HKMG Application