Up to 56 Cores and 112 PCIe 5.0 Lanes


For all of the singular focus that Intel has placed on its consumer Core desktop CPU parts over the past few years, you could be forgiven for thinking that Intel has forgotten about its premium Xeon processor lineups for workstations. Between the de facto retirement of Intel's desktop-grade Xeon W-1x00-series lineup, and the repeated delays of Intel's current-generation big silicon for servers, the Sapphire Rapids-based 4th Generation Xeon Scalable series, there hasn't been much noise from Intel in the workstation space in the past few years. But now that Sapphire Rapids for servers has finally launched, the logjam in Intel's product roadmap has cleared out, and Intel is at last able to resume cascading its newest silicon into new workstation parts.

This morning Intel is announcing its first top-to-bottom refresh of workstation parts, the Xeon W-3400 and Xeon W-2400 series. Aimed at what Intel broadly classifies as the Expert Workstation and Mainstream Workstation markets respectively, these chip lineups are intended for high-performance desktop workstation builds, particularly those that require more CPU cores, more PCIe lanes, more memory bandwidth, or a mix of all three. Based on the same Sapphire Rapids silicon as Intel's recently launched server parts, the new Xeon W SKUs bring down many (but not all) of the features that have come to define Intel's contemporary server silicon, along with a new chipset (W790) and motherboards that are better suited for use in high-performance workstations.

As with the new Xeon Scalable parts, the big three additions here are the shift to Intel's Golden Cove CPU architecture – with all of the IPC and clockspeed benefits that brings – along with the addition of support for DDR5 memory and PCIe 5.0 for I/O connectivity. All of which is a sizable upgrade over the mix of Cascade Lake and Ice Lake parts that make up Intel's previous product stack. Meanwhile, compared to Intel's current desktop processor lineup, these are all features that were pioneered on Alder Lake (12th Gen Core) back in late 2021, but the workstation-focused Xeon W parts build them out to a much larger degree.

Starting at the top, the Xeon W-3400 series (Sapphire Rapids-112L) will range from 12 to 56 cores, and all parts will include 112 PCIe 5.0 lanes, support for up to 4 TB of DDR5-4800 memory across eight memory channels, ECC memory (RDIMM-only), Intel vPro, and Intel Standard Manageability (ISM). Four of the seven W-3400 SKUs (the X-series) benefit from unlocked multipliers and, as such, officially support overclocking. Meanwhile, a step down from that, the Xeon W-2400 series (Sapphire Rapids-64L) will offer between 6 and 24 CPU cores, paired with a pared-down 64 lanes of PCIe 5.0 connectivity, support for up to 2 TB of DDR5-4800 memory across four memory channels, and the rest of the Xeon W trimmings such as ECC memory.


Intel Xeon Workstation Desktop Platforms

AnandTech                2021                               2022                2023
Expert Workstation       Xeon W-3300 (Ice Lake-64L)                             Xeon W-3400
                         & Xeon W-3200 (Cascade Lake-64L)                       (Sapphire Rapids-112L)
Mainstream Workstation   Xeon W-2200 (Cascade Lake)                             Xeon W-2400
                                                                                (Sapphire Rapids-64L)
Entry Workstation        Xeon W-1200 (Rocket Lake-S)        12th Gen Core       13th Gen Core
                                                            (Alder Lake-S)      (Raptor Lake-S)
                                                            + W680 Chipset      + W680 Chipset

The new Xeon W parts will be replacing a mish-mash of different Xeon generations from Intel. While Intel did launch some Ice Lake-based Xeon parts in 2021 – the Xeon W-3300 family – those parts were a supplemental update of sorts for Intel's Xeon lineup, aimed at specific customers who needed the extra CPU cores or PCIe bandwidth. For everyone else, the outgoing Xeon W product stack, the circa-2019 W-3200 and W-2200 families, was based on Intel's Cascade Lake silicon – which itself was a modest update to Intel's Skylake parts. So the significance of the Xeon W-3400/2400 launch to Intel's workstation lineup is hard to overstate: it is a major overhaul and upgrade of Intel's Xeon workstation stack.

The new Xeon W parts, in turn, will be going up against AMD's Threadripper Pro 5000 WX parts, which are based on AMD's Zen 3 architecture. The latest Threadripper Pro parts launched last spring, and AMD has essentially had the run of the market in terms of CPU performance since then, thanks to a sizable advantage in core counts and IPC. Even with its new parts, Intel technically still isn't fully closing that core count gap, but the boost in IPC, core counts, and clockspeeds should help to level the playing field in terms of overall CPU performance – though by how much remains to be seen.

Intel Xeon W-3400 Series: 'Expert' Platform with Up to 56 Cores, 112 PCIe 5.0 Lanes, and 8-Channel Memory

Intel's Xeon W-3400 and W-2400 series workstation processors are based on Intel's Golden Cove CPU architecture, the same architecture used in Intel's Alder Lake (12th Gen) desktop processors. Representing the premier lineup among Intel's Sapphire Rapids-based workstation offerings, the W-3400 family has seven SKUs in total, ranging from a modest 12-core/24-thread part (w5-3425) up to the highly anticipated 56-core/112-thread flagship, the w9-3495X.

Intel Xeon W-3400 Series (Sapphire Rapids-112L)

SKU        Cores/   Base Freq  Turbo Freq  Turbo Freq  PCIe Lanes  L3 Cache  Unlocked       TDP  Price
           Threads  (GHz)      (TB 2.0)    (TBM 3.0)   (Gen 5)     (MB)      (Perf Tuning)  (W)  (1KU)
w9-3495X   56/112   1.9        4.6         4.8         112         105       Y              350  $5889
w9-3475X   36/72    2.2        4.6         4.8         112         82.5      Y              300  $3739

w7-3465X   28/56    2.5        4.6         4.8         112         75        Y              300  $2889
w7-3455    24/48    2.5        4.6         4.8         112         67.5      N              270  $2489
w7-3445    20/40    2.6        4.6         4.8         112         52.5      N              270  $1989

w5-3435X   16/32    3.1        4.5         4.7         112         45        Y              270  $1589
w5-3425    12/24    3.2        4.4         4.6         112         30        N              270  $1189

For the Xeon W-3400 series in particular, these parts are based on Intel's Sapphire Rapids Extreme Core Count (XCC) silicon, which is currently used in Intel's higher-end Xeon server parts. The XCC silicon relies on four compute tiles, bound together using Intel's latest EMIB interconnect – a first for a Xeon workstation processor.

The individual tiles of a Sapphire Rapids XCC chip are all identical and symmetrical, so each tile provides a quarter of the CPU cores, I/O, and memory channels of the complete chip. As such, each tile can provide up to 32 PCIe 5.0 lanes (112 total on the w9-3495X), while each tile also includes up to two memory controllers, providing eight-channel memory across the W-3400 series.

Focusing on the top-end SKU of the Xeon W-3400 family, the Intel Xeon w9-3495X gives off similar vibes to Intel's previous behemoth, the Xeon W-3175X, which launched in 2019 and came with official support for overclocking. Like the Skylake-based Xeon W-3175X, the new Xeon w9-3495X also has an unlocked multiplier for overclocking.

The Intel Xeon w9-3495X has 56 cores (for 112 threads), and unlike Intel's desktop parts, every last one of them is a Performance (P) core. Also present is a total of 105 MB of Intel's Smart L3 Cache, along with official support for eight channels of DDR5-4800 ECC RDIMM memory, with a maximum capacity of up to 4 TB.

Like the server part it's based on, the w9-3495X has a rather toasty TDP rating, coming in at 350 Watts. And in practice, peak power consumption is likely to be much higher under full load with Intel's Turbo Boost and Turbo Boost Max 3.0 technologies enabled, especially on 56 unlocked cores. Although the 56 Golden Cove cores have a base frequency of 1.9 GHz, the chip can turbo to 4.6 GHz, and thanks to Turbo Boost Max 3.0 (Intel's favored core technology), a handful of cores can boost further to 4.8 GHz.

The other SKUs in the Xeon W-3400 family range from 36-core down to 12-core options, such as the w9-3475X (36C/72T) and the w5-3425 (12C/24T). Ultimately, all of the Xeon W-3400 parts offer the same number of DDR5 memory channels and PCIe lanes, so what separates the different SKUs is CPU core counts, maximum memory clockspeeds, L3 cache, and, of course, price.

Meanwhile, as previously noted, four of the Xeon W-3400 SKUs – the w9-3495X, w9-3475X, w7-3465X, and the w5-3435X – are "unlocked" processors. This is something Intel hasn't offered on a Xeon W part in several years, and it comes with some interesting ramifications. Besides the basic ability to change the CPU's clockspeed multipliers, unlocked processors can also have their AVX and AMX offsets adjusted to keep the processors from dropping quite so far in clockspeed under heavy SIMD loads. Finally, all of these parts also offer some tuning options for their mesh interconnects, though Intel hasn't said what precisely can be tweaked there.

Prices for the Intel Xeon W-3400 family start at $1189, with Intel quoting prices on a per-1000-unit (tray) basis rather than for individually sold retail SKUs. The Xeon w9-3495X has a 1KU price of $5889, which makes the top SKU and every subsequent W-3400 SKU more expensive than the previous generation of Xeon W-3300 chips, but they do come with higher core counts, faster turbo frequencies, more L3 cache, and support for DDR5-4800.

It's worth pointing out that all of Intel's W-3400 SKUs feature support for up to 4 TB of eight-channel DDR5-4800 ECC memory, even the bottom SKU, the w5-3425 (12C/24T). So there are options in the Xeon product stack for systems that need a whole lot of DRAM, but not necessarily a ton of CPU cores. Do note, however, that actually hitting 4 TB requires running 2 DIMMs per channel (DPC) – sixteen 256 GB RDIMMs in total – which in turn requires backing off to DDR5-4400 memory speeds.

With 112 PCIe 5.0 lanes available from the CPU (and yet more from the chipset), the Xeon W-3400 chips can host a rather sizable number of I/O devices. This works out to seven discrete x16 graphics cards, or up to 28 x4 high-speed storage devices. This, along with core counts and memory channels, is one of the primary differentiators from the lower-tier Xeon W-2400 series – and it should be a welcome development for Intel platform users who were stuck with a fraction of this I/O bandwidth on Intel's previous Xeon W parts.

Interestingly, 112 PCIe 5.0 lanes is actually more than Intel offers in its Sapphire Rapids server parts; the Xeon Scalable lineup tops out at just 80 lanes. The discrepancy comes from the fact that Intel only enabled 5 of the 7 root ports on its server parts, leaving a further 2 ports (32 lanes) unused. But since the workstation Sapphire Rapids parts don't need to allocate any pins to supporting Intel's multi-socket UPI links, it would seem that Intel has instead allotted those pins to carrying the extra PCIe lanes on the workstation parts. It's worth noting that Intel is using the same socket for both server and workstation chips here – LGA 4677 – but with the pin changes, I wouldn't expect them to be compatible.

Meanwhile, in another first for Intel, the company has said that it will support DDR5 XMP 3.0 memory overclocking profiles for RDIMMs. The details of this announcement are very scant, but at a high level it will give unlocked-processor owners running on W790 the option of trying to squeeze more out of their memory if they can. Generally speaking, memory overclocking and the rock-solid stability of RDIMMs are diametrically opposed goals, so it will be interesting to see how this plays out in the market. The DRAM itself may be able to clock higher than DDR5-4800, but can the registered clock drivers (RCDs)?

As an aside, all of this talk specifically about RDIMMs is intentional: in a big change from previous Xeon W platforms, the Sapphire Rapids Xeon workstation platforms will not support UDIMMs. This is a limitation of the DDR5 specification, which calls for different voltages for UDIMMs and RDIMMs respectively. Whereas UDIMMs take 5 volts, RDIMMs take 12 volts, rendering them incompatible. If you've ever had the chance to see a DDR5 RDIMM in person, you may have noticed that they're even keyed differently from UDIMMs, so the two are both physically and electrically incompatible.

Ultimately, this means that buyers will need to pair these processors and W790 motherboards with more expensive, albeit higher-quality, ECC-enabled DDR5 RDIMMs. For dyed-in-the-wool workstation users this is unlikely to be an issue (or even a difference that gets noticed), but anyone hoping to build an HEDT-style system or low-end workstation on the cheap is going to find that the final price tag for a Xeon W system will be higher than what could be pulled off with the W-3200/2200 series.

Accelerated Computing: AMX Makes the Cut, Most Domain-Specific Accelerators Do Not

For its Sapphire Rapids Xeon silicon and the resulting server parts, Intel introduced a slew of acceleration blocks and other accelerator-related features. Between matrix extensions (AMX), various domain-specific hardware acceleration blocks, and support for Compute Express Link (CXL) for external accelerators, Intel ended up devoting a fair bit of silicon to non-CPU tasks. For its Xeon Scalable server parts in particular, Intel has opted (if not needed) to lean on these accelerator features – rather than raw x86 CPU performance alone – to differentiate the hardware from its predecessors and its competition. For the Xeon W chips, as we'll get into below, one DSA engine is enabled across all of the chips, while QAT, DLB, and IAA are not supported.

But for the workstation parts, things are a little more straightforward, for better and for worse. In short, not all of Intel's accelerated computing features are being made available in the Xeon W-3400/2400 families. So let's do a quick rundown of which of Sapphire Rapids' more esoteric features made the cut for Xeon W.

Perhaps most critically of all, Intel's Advanced Matrix Extensions (AMX) did make the cut, and support for them is fully present and enabled on the Xeon W-3400/2400 families. AMX is Intel's matrix math execution block; similar to tensor cores and other kinds of matrix accelerators, these are ultra-high-density blocks for efficiently executing matrix math. AMX isn't a dedicated accelerator; rather, it's part of the CPU cores, with each core getting a block. That allows AMX code to be mixed with x86 (and AVX) code, and it's also why Sapphire Rapids carries negative clockspeed offsets for running the ultra-dense code.
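
As a rough illustration of how software can confirm this at runtime, here's a minimal C sketch of our own (not Intel sample code), assuming a GCC/Clang toolchain on x86-64, that reads CPUID leaf 7 to check the AMX feature bits:

    // amx_check.c: minimal sketch for detecting AMX support via CPUID leaf 7.
    // Assumes a GCC/Clang toolchain on x86-64; bit positions follow Intel's
    // documentation (EDX bit 24 = AMX-TILE, bit 22 = AMX-BF16, bit 25 = AMX-INT8).
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 7 not available\n");
            return 1;
        }

        int amx_tile = (edx >> 24) & 1;  /* tile registers + tile load/store  */
        int amx_bf16 = (edx >> 22) & 1;  /* BF16 tile dot-products            */
        int amx_int8 = (edx >> 25) & 1;  /* INT8 tile dot-products            */

        printf("AMX-TILE: %d  AMX-BF16: %d  AMX-INT8: %d\n",
               amx_tile, amx_bf16, amx_int8);
        return 0;
    }

On Linux, the same information shows up as the amx_tile, amx_bf16, and amx_int8 flags in /proc/cpuinfo.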

AMX is Intel's play for the deep learning market, going above and beyond the throughput the company can achieve today with AVX-512 by using even denser data structures. And while Intel has GPUs with their own, beefier matrix hardware (the Data Center GPU Max series) that go beyond even this, for Sapphire Rapids Intel is looking to address the customer segment that needs AI inference taking place very close to the CPU cores, rather than in a less flexible, more dedicated accelerator. The new AMX units also support BFloat16, ensuring that every tier of Intel's accelerated computing blocks (AVX and AMX) supports this common mid-precision floating point format for deep learning.
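
To give a sense of what AMX code actually looks like, below is a short, hedged C sketch of a single BF16 tile multiply using the AMX intrinsics from immintrin.h. It assumes a Sapphire Rapids-class CPU, Linux 5.16 or newer (which gates AMX tile state behind an arch_prctl permission request), and a build along the lines of gcc -O2 -mamx-tile -mamx-bf16; the 16x32 BF16 tile shape is just one valid configuration, not anything specific to Xeon W:

    // amx_bf16_tile.c: minimal sketch of one AMX BF16 tile multiply (C += A x B).
    #define _GNU_SOURCE
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ARCH_REQ_XCOMP_PERM 0x1023   /* arch_prctl: request extended feature */
    #define XFEATURE_XTILEDATA  18       /* the AMX tile data state component    */

    /* 64-byte tile configuration blob, laid out per the AMX specification. */
    struct __attribute__((aligned(64))) tile_config {
        uint8_t  palette_id;
        uint8_t  start_row;
        uint8_t  reserved[14];
        uint16_t colsb[16];              /* bytes per row for each tile register */
        uint8_t  rows[16];               /* row count for each tile register     */
    };

    static uint16_t f32_to_bf16(float f) /* truncating convert; fine for a demo */
    {
        uint32_t u;
        memcpy(&u, &f, sizeof(u));
        return (uint16_t)(u >> 16);
    }

    int main(void)
    {
        /* The kernel keeps AMX tile state off by default; ask for permission. */
        if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
            perror("AMX permission request failed");
            return 1;
        }

        /* Tile 0 holds the FP32 result; tiles 1 and 2 hold the BF16 inputs.
           16 rows x 64 bytes per row = 16x16 FP32 or 16x32 BF16 per tile. */
        struct tile_config cfg = {0};
        cfg.palette_id = 1;
        for (int t = 0; t < 3; t++) {
            cfg.rows[t]  = 16;
            cfg.colsb[t] = 64;
        }
        _tile_loadconfig(&cfg);

        uint16_t a[16][32], b[16][32];   /* BF16 inputs, all set to 1.0 */
        float    c[16][16];
        for (int i = 0; i < 16; i++)
            for (int j = 0; j < 32; j++)
                a[i][j] = b[i][j] = f32_to_bf16(1.0f);

        _tile_zero(0);
        _tile_loadd(1, a, 64);           /* 64-byte row stride */
        _tile_loadd(2, b, 64);
        _tile_dpbf16ps(0, 1, 2);         /* accumulate BF16 dot products into FP32 */
        _tile_stored(0, c, 64);
        _tile_release();

        printf("c[0][0] = %.1f (expect 32.0)\n", c[0][0]);
        return 0;
    }

With every input set to 1.0, each element of the result tile should read back as 32.0, since each output accumulates 32 BF16 products across the K dimension.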

One of Sapphire Rapids' new domain-specific hardware accelerator blocks, the Data Streaming Accelerator (DSA), also made the cut. This block is for offloading/accelerating certain operations, such as data copies and simple computations like calculating CRC32s. The DSA block is available across all of the Xeon W SKUs.

However, you won't find any mention of the rest of Intel's accelerator blocks, such as the Intel Dynamic Load Balancer (DLB), Intel In-Memory Analytics Accelerator (IAA), and Intel QuickAssist Technology (QAT). This is despite the fact that these accelerators are all part of the same functional block on the Sapphire Rapids silicon. These other accelerator blocks are all primarily aimed at servers, so it isn't surprising not to see them included, but it does mean that anyone prototyping code for servers will need to test on an actual Xeon Scalable part if they're using those features.

Finally, CXL support is absent from Intel's Xeon W spec sheets, but Intel has confirmed to us that CXL is supported on both families. The built-on-top-of-PCIe standard for host-to-device connectivity has been waiting in the wings for several years now, and Sapphire Rapids is the first Intel CPU platform to support the technology. Like some of these other features, it's primarily intended for servers, so there's less of an impetus to bring it to workstations. Nonetheless, Intel has enabled it for customers looking to leverage its functionality.

Intel Xeon W-2400 Series: Up to 24 Cores and 64 PCIe 5.0 Lanes For Mainstream Workstations

Dropping down a tier, we have the Xeon W-2400 series (Sapphire Rapids-64L), which is designed as a 'Mainstream' workstation platform. Xeon W-2400 offers a bit more than half as many PCIe lanes as the W-3400 SKUs, with 64 PCIe 5.0 lanes available, and the number of memory channels is cut in half as well, to four channels. Accordingly, prices are lower on the W-2400 series than on its beefier W-3400 counterparts, going as low as $359 for the entry-level Xeon w3-2423.

Intel Xeon W-2400 Series (Sapphire Rapids-64L)

SKU        Cores/   Base Freq  Turbo Freq  Turbo Freq  PCIe Lanes  L3 Cache  Unlocked       TDP  Price
           Threads  (GHz)      (1T)        (TBM 3.0)   (Gen 5)     (MB)      (Perf Tuning)  (W)  (1KU)
w7-2495X   24/48    2.5        4.6         4.8         64          45        Y              225  $2189
w7-2475X   20/40    2.6        4.6         4.8         64          37.5      Y              225  $1789

w5-2465X   16/32    3.1        4.5         4.7         64          33.75     Y              200  $1389
w5-2455X   12/24    3.2        4.4         4.6         64          30        Y              200  $1039
w5-2445    10/20    3.1        4.4         4.6         64          26.25     N              175  $839

w3-2435    8/16     3.1        4.3         4.5         64          22.5      N              165  $669
w3-2425    6/12     3.0        4.2         4.4         64          15        N              130  $529
w3-2423    6/12     2.1        4.0         4.2         64          15        N              120  $359

Overall, the Xeon W-2400 series will range from 6 cores up to 24 cores. Intel is using its Sapphire Rapids Medium Core Count (MCC) silicon here, which, unlike the XCC silicon, is a traditional monolithic die. This means no fancy EMIB packaging is required to build the chip – instead, Intel only has to fab one rather large die.

At the top end of the Xeon W-2400 lineup is the w7-2495X, which features 24 cores/48 threads, 45 MB of Intel Smart L3 cache, and a TDP of 225 Watts. Intel also has three w5-series SKUs, and finally a trio of w3 SKUs.

Like its expert-tier counterpart, the Xeon W-2400 series offers a consistent memory and I/O configuration across the entire lineup. This means 64 lanes of PCIe 5.0 coming from the CPU, and four channels of DDR5 memory, allowing for a maximum of 2 TB of memory overall. It is also worth pointing out that only the w5 and w7 SKUs offer full DDR5-4800 memory speeds; the w3 parts are all capped at DDR5-4400. The silver lining? All SKUs drop to that speed in a 2 DPC configuration anyhow, so if you were looking to build a 2 TB system for whatever reason, you won't be further penalized.

Like the Xeon W-3400 series, the W-2400 family also has several unlocked X SKUs in its arsenal, including the top-tier w7-2495X. The other SKUs with unlocked multipliers are the w7-2475X, with 20 cores and 37.5 MB of L3 cache, and two w5 SKUs (the w5-2465X at 16C/32T and the w5-2455X at 12C/24T). You won't find any unlocked w3 parts, however, as all three entry-level w3 SKUs are fully locked down.

Intel W790 Chipset: Supports Both Xeon W-3400 and W-2400 Platforms

All of Intel's Xeon W-3400 and W-2400 series SKUs benefit from Intel vPro and Intel Standard Manageability (ISM) technologies. Both the Xeon W-2400 and W-3400 families are supported by the accompanying W790 chipset, although CPU-specific features such as the number of memory channels and PCIe lanes available depend on the processor itself.

Some of the main features of the W790 chipset include a Direct Media Interface (DMI) 4.0 x8 link between the processor and the chipset itself, as well as up to 16 PCIe 4.0 lanes and support for up to eight SATA 3.0 ports. W790 also supports up to five USB 3.2 Gen 2x2 (20 Gbps) ports, includes an Intel Wi-Fi 6E PHY, and can support 2.5 GbE controllers natively.

Although there's no word yet on specific new motherboards, Intel W790 boards are expected from vendors such as ASUS, GIGABYTE, Supermicro, and ASRock. System integrators such as Dell, Lenovo, and Supermicro are expected to take precedence in delivering solutions and systems before DIY builders can get their hands on the boards.



The ASRock W790 WS motherboard

ASRock emailed us just before the launch to outline its W790 WS model, which comes with a 20+2-phase power delivery design, dual 10 GbE controllers, and support for up to 2 TB of DDR5-4800 ECC RDIMMs across eight slots. Although the board supports both Xeon W-3400 and W-2400 processors, it is only wired for quad-channel memory.

Something worth mentioning about the newest generation of motherboards is that W790 boards are likely to cost more than the C621A-based boards that were used to support the Xeon W-3300 series (Ice Lake). This is because W790 boards have four more channels of DDR5 memory and 48 more PCIe 5.0 lanes to account for. While we expect to see different tiers of board designs with different slot and I/O configurations at some point, Intel hasn't specified whether some of these motherboards will support both families, or whether vendors will design specific boards around the individual Xeon W-3400 and W-2400 series.

Intel's Xeon W-3400 and W-2400 processors are available to pre-order from industry partners, while system deployments are expected sometime in early March. Intel's expected and recommended pricing starts at $359 for the Xeon w3-2423 and goes up to $5889 for the Xeon w9-3495X.
