
When SK hynix initially announced its HBM3 memory portfolio in late 2021, the company stated that it was developing both 8-Hi 16GB memory stacks as well as even more technically complex 12-Hi 24GB memory stacks. Now, almost 18 months after that initial announcement, SK hynix has finally begun sampling its 24GB HBM3 stacks to a number of customers, with an aim towards going into mass production and market availability in the second half of the year. All of which should be a very welcome development for SK hynix’s downstream customers, many of whom are chomping at the bit for more memory capacity to meet the needs of large language models and other high-end computing uses.
Based on the same technology as SK hynix’s existing 16GB HBM3 memory modules, the 24GB stacks are designed to further improve on the density of the overall HBM3 memory module by increasing the number of DRAM layers from 8 to 12 – adding 50% more layers for 50% more capacity. This is something that’s been part of the HBM specification for quite some time, but it’s proven difficult to pull off, as it requires making the extremely thin DRAM dies in a stack even thinner in order to squeeze more in.
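The capacity math here is straightforward; as a quick illustration, assuming the 16Gb (2GB) DRAM dies implied by SK hynix’s current 8-Hi 16GB stacks:

```python
# Back-of-the-envelope HBM3 stack capacity, assuming 16Gb (2GB) DRAM dies
# (the die density implied by SK hynix's current 8-Hi 16GB stacks).
DIE_CAPACITY_GB = 16 / 8  # a 16Gb die holds 2GB

for layers in (8, 12):
    stack_gb = layers * DIE_CAPACITY_GB
    print(f"{layers}-Hi stack: {stack_gb:.0f} GB")

# 8-Hi stack: 16 GB
# 12-Hi stack: 24 GB
```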
Standard HBM DRAM packages are typically 700 – 800 microns high (Samsung claims its 8-Hi and 12-Hi HBM2E are 720 microns high), and, ideally, that height needs to be maintained in order for these denser stacks to be physically compatible with existing product designs, and to a lesser extent to avoid towering over the processors they’re paired with. As a result, to pack 12 memory devices into a standard KGSD, memory manufacturers must either shrink the thickness of each DRAM layer without compromising performance or yield, reduce the space between layers, trim the base layer, or introduce a combination of all three measures.
While SK hynix’s latest press release offers limited details, the company has apparently gone for scaling down the DRAM dies and the space between them with an improved underfill material. For the DRAM dies themselves, SK hynix has previously stated that they’ve been able to shave their die thickness down to 30 microns. Meanwhile, the improved underfill material on their 12-Hi stacks is being offered as part of the company’s new Mass Reflow Molded Underfill (MR-MUF) packaging technology. This technique involves bonding the DRAM dies together via the reflow process, while simultaneously filling the gaps between the dies with the underfill material.
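To get a rough sense of how tight the height budget is, here is a back-of-the-envelope sketch; the 30 micron die thickness is SK hynix’s own figure and the 720 micron package height is Samsung’s HBM2E reference point from above, but the base die thickness is purely an assumption for illustration, as the press release gives no figure:

```python
# Rough height budget for a 12-Hi stack in a ~720 micron package.
# The 30-micron die thickness is from SK hynix; the base-die thickness
# is an assumed placeholder -- the press release gives no figure.
PACKAGE_UM = 720
DIE_UM = 30
DRAM_LAYERS = 12
BASE_DIE_UM = 100   # assumption for illustration only

dram_total = DRAM_LAYERS * DIE_UM                 # 360 um of DRAM silicon
remaining = PACKAGE_UM - BASE_DIE_UM - dram_total
per_gap = remaining / (DRAM_LAYERS - 1)           # 11 underfill gaps
print(f"~{per_gap:.0f} microns left per inter-die gap")  # ~24 microns
```

Whatever the exact figures, the takeaway is that only a couple dozen microns remain per gap for bonding and underfill, which is why the underfill material itself matters so much.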
SK hynix calls their improved underfill material “liquid Epoxy Molding Compound”, or “liquid EMC”, which replaces the non-conductive film (NCF) used in older generations of HBM. Of particular interest here, besides the thinner layers this allows, according to SK hynix liquid EMC offers twice the thermal conductivity of NCF. Keeping the lower layers of stacked chips reasonably cool has been one of the biggest challenges with chip stacking technology of all kinds, so doubling the thermal conductivity of their fill material marks a significant improvement for SK hynix. It should go a long way towards making 12-Hi stacks more viable by better dissipating heat from the well-buried lowest-level dies.
Assembly aside, the performance specifications for SK hynix’s 24GB HBM3 stacks are identical to their existing 16GB stacks. That means a maximum data transfer rate of 6.4Gbps/pin operating over a 1024-bit interface, providing a total bandwidth of 819.2 GB/s per stack.
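That headline bandwidth figure falls directly out of the pin speed and bus width:

```python
# HBM3 per-stack bandwidth from the figures above.
PIN_RATE_GBPS = 6.4      # Gb/s per pin
INTERFACE_BITS = 1024    # bus width per stack

bandwidth_gbs = PIN_RATE_GBPS * INTERFACE_BITS / 8  # bits -> bytes
print(f"{bandwidth_gbs:.1f} GB/s per stack")        # 819.2 GB/s
```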
Ultimately, all of the assembly difficulties with 12-Hi HBM3 stacks should be more than justified by the benefits that the additional memory capacity brings. SK hynix’s major customers are already employing 6+ HBM3 stacks on a single product in order to deliver the total bandwidth and memory capacities they deem necessary. A 50% increase in memory capacity, in turn, will be a significant boon to products such as GPUs and other forms of AI accelerators, especially as this era of large language models has seen memory capacity become a bottlenecking factor in model training. NVIDIA is already pushing the envelope on memory capacity with their H100 NVL – a specialized, 96GB H100 SKU that enables previously-reserved memory – so it’s easy to see how they’d be eager to be able to offer 120GB/144GB H100 parts using 24GB HBM3 stacks, as tallied below.
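In those terms, the math for a six-stack part is simple:

```python
# Per-product capacity when moving from 16GB to 24GB stacks,
# using the 6-stack configuration the article mentions.
STACKS = 6
for stack_gb in (16, 24):
    print(f"{STACKS} x {stack_gb}GB stacks = {STACKS * stack_gb} GB")

# 6 x 16GB stacks = 96 GB
# 6 x 24GB stacks = 144 GB
```

The 120GB figure would presumably correspond to five of the six stacks being enabled, mirroring how today’s 80GB H100 parts ship with five active 16GB stacks.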
Source: SK hynix