
CES 2023: AMD Instinct MI300 Data Center APU Silicon In Hand

Alongside AMD’s widely anticipated consumer product announcements this evening for desktop CPUs, mobile CPUs, and mobile GPUs, AMD’s CEO Dr. Lisa Su also had a surprise up her sleeve for the large crowd gathered for her CES keynote: a sneak peek at MI300, AMD’s next-generation data center APU that is currently under development. With silicon literally in hand, the quick teaser laid out the basic specs of the part, along with reiterating AMD’s intentions of taking leadership in the HPC market.

First unveiled by AMD during their Financial Analyst Day back in June of 2022, MI300 is AMD’s first shot at building a true data center/HPC-class APU, combining the best of AMD’s CPU and GPU technologies. As was laid out at the time, MI300 will be a disaggregated design, using multiple chiplets built on TSMC’s 5nm process, and using 3D die stacking to place them over a base die, all of which in turn will be paired with on-package HBM memory to maximize AMD’s available memory bandwidth.

AMD for its part is no stranger to combining the abilities of its CPUs and GPUs – one only needs to look at their laptop CPUs/APUs – but to date they’ve never done so at a large scale. AMD’s current best-in-class HPC hardware pairs the discrete AMD Instinct MI250X (a GPU-only product) with AMD’s EPYC CPUs, which is exactly what’s been done for the Frontier supercomputer and other HPC projects. MI300, in turn, is the next step in that process, bringing the two processor types together on to a single package, and not just wiring them up in an MCM fashion, but going the full chiplet route with TSV-stacked dies to enable extremely high bandwidth connections between the various parts.

The key point of tonight’s reveal was to show off the MI300 silicon, which has reached initial production and is now in AMD’s labs for bring-up. AMD had previously promised a 2023 launch for MI300, and having the silicon back from the fabs and assembled is a strong sign that AMD is on track to make that delivery date.

Up Close With MI300 (Image Courtesy Tom’s Hardware)

Along with a chance to see the titanic chip in person (or at least, over a video stream), the brief teaser from Dr. Su also offered a few tantalizing new details about the hardware. At 146 billion transistors, MI300 is the biggest and most complex chip AMD has ever built – and easily so. Though we can only compare it to current chip designs, that is significantly more transistors than either Intel’s 100B transistor Xeon Max GPU (Ponte Vecchio), or NVIDIA’s 80B transistor GH100 GPU. Though in fairness to both, AMD is stuffing both a GPU and a CPU into this part.

The CPU side of MI300 has been confirmed to use 24 of AMD’s Zen 4 CPU cores, finally giving us a basic idea of what to expect with regards to CPU throughput. Meanwhile the GPU side is (still) using an undisclosed number of CDNA 3 architecture CUs. All of this, in turn, is paired with 128GB of HBM3 memory.

According to AMD, MI300 is comprised of 9 5nm chiplets, sitting on top of 4 6nm chiplets. The 5nm chiplets are undoubtedly the compute logic chiplets – i.e. the CPU and GPU chiplets – though a precise breakdown of what’s what is not available. A reasonable guess at this point would be 3 CPU chiplets (8 Zen 4 cores each) paired with possibly 6 GPU chiplets; though there are still some cache chiplets unaccounted for. Meanwhile, taking AMD’s “on top of” statement literally, the 6nm chiplets would then be the base dies all of this sits on top of. Based on AMD’s renders, it looks like there are 8 HBM3 memory stacks in play, which implies around 5TB/second of memory bandwidth, if not more.
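For the curious, that ~5TB/second figure falls out of simple arithmetic. As a sketch – assuming 8 stacks, the standard 1024-bit HBM interface per stack, and a roughly 5.2 Gb/s-per-pin HBM3 data rate, none of which AMD has confirmed for MI300:

```python
# Back-of-the-envelope HBM3 bandwidth estimate (assumed figures, not AMD specs)
stacks = 8                 # stacks visible in AMD's renders
bus_width_bits = 1024      # standard HBM interface width per stack
pin_rate_gbps = 5.2        # assumed HBM3 data rate per pin, in Gb/s

# Per-stack bandwidth in GB/s: (bits per transfer / 8 bits-per-byte) * pin rate
per_stack_gbs = bus_width_bits / 8 * pin_rate_gbps

# Total across all stacks, converted to TB/s
total_tbs = stacks * per_stack_gbs / 1000

print(f"{total_tbs:.1f} TB/s")  # → 5.3 TB/s
```

Slower-binned HBM3 would land closer to an even 5TB/s; faster bins would push the total higher, hence the "if not more."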

With regards to performance expectations, AMD isn’t saying anything new at this time. Earlier claims were for a >5x improvement in AI performance-per-watt versus the MI250X, and an overall >8x improvement in AI training performance, and that is still what AMD is claiming as of CES.

The key advantage of AMD’s design, besides the operational simplicity of putting CPU cores and GPU cores in the same design, is that it will allow both processor types to share a high-speed, low-latency unified memory space. This will make it fast and easy to pass data between the CPU and GPU cores, letting each handle the aspects of computing that they do best. As well, it will significantly simplify HPC programming at a socket level by giving both processor types direct access to the same memory pool – not just a unified virtual memory space with copies to hide the physical differences, but a truly shared and physically unified memory space.
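To give a rough sense of why that physical unification matters, here is an illustrative (entirely made-up numbers) comparison of handing a working set to the GPU via an explicit host-to-device copy versus simply passing a pointer into shared HBM:

```python
# Illustrative only: working-set size and link speed are assumptions,
# not MI300 or MI250X specifications.
working_set_gb = 16        # assumed size of data the GPU needs
host_link_gbs = 32         # assumed practical CPU-to-GPU copy bandwidth, GB/s

# Discrete GPU with its own memory pool: the data must be copied over first
copy_seconds = working_set_gb / host_link_gbs

# Physically unified memory: both processor types address the same HBM,
# so the "transfer" is just handing over a pointer
unified_seconds = 0.0

print(f"explicit copy: {copy_seconds:.2f}s, unified pool: {unified_seconds:.2f}s")
# → explicit copy: 0.50s, unified pool: 0.00s
```

In practice a unified pool still pays coherency and contention costs, but it removes the bulk data movement entirely – which is the simplification AMD is after.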

AMD FAD 2022 Slide

When it launches in the latter half of 2023, AMD’s MI300 is expected to be going up against a few competing products. The most notable of these is likely NVIDIA’s Grace Hopper superchip, which combines an NVIDIA Armv9 Grace CPU with a Hopper GPU. NVIDIA has not gone for quite the same level of integration as AMD has, which arguably makes MI300 a more ambitious project, though NVIDIA’s decision to maintain a split memory pool is not without merit (e.g. capacity). Meanwhile, AMD’s schedule would have them coming in well ahead of arch rival Intel’s Falcon Shores XPU, which isn’t due until 2024.

Expect to hear a great deal more from AMD about Instinct MI300 in the coming months, as the company will be eager to show off their most ambitious processor to date.
