Intel Just Launched Its All-New 48-Core Monstrous CPU

A month after unveiling its first ninth-generation Core processors, Intel has announced a new line of server chips, and with it an all-new 48-core monstrous CPU: the Xeon Cascade Lake Advanced Performance (or Xeon Cascade Lake AP), as it was named, packs up to 48 cores in a single unit.

It is an impressive feat: currently, the company’s most powerful server chips are the Xeon Scalable Processors (also known as Xeon SP), which top out at 28 cores and 56 threads. In the new line, the manufacturing process remains 14 nanometers; Intel chips built on 10 nanometers are only expected by the end of 2019.

But what did Intel do to increase the core count so significantly? The “Cascade” in the name gives a clue: instead of building all the cores into one large die, Intel combined several dies into a single “package.” It is as if the Xeon Cascade Lake AP were made up of several chips in one. This construction approach is called a Multi-Chip Package (MCP).

It is not an unprecedented approach, it’s worth pointing out. Rival AMD already uses a similar technique: Epyc processors are composed of four dies, each with eight cores, giving the whole chip 32 cores.

Interestingly, at the time Epyc launched, Intel commented that AMD’s multi-die approach could lead to performance inconsistencies or implementation difficulties in data centers.

In fact, the approach tends to reduce manufacturing risk. The larger a die, the more likely it is to contain defects, simply because of the greater number of transistors; using several smaller dies together tends to reduce such problems. This is probably one of the factors that led Intel to bet on the MCP design.

Performance remains an important factor, and Intel claims that Xeon Cascade Lake AP processors can be up to 20% faster than the Xeon SP chips. Compared to AMD’s Epyc processors, performance is up to 3.4 times higher, still according to Intel.

Image source: newsroom.intel.com

The new chips are built to handle large volumes of data, as one would expect, which is why they can work with up to 12 channels of DDR4 memory. It is not yet clear whether Hyper-Threading is supported, which would bring the thread count to 96 per chip. The first Xeon Cascade Lake chips are expected to hit the market in the first half of 2019, when we should have more details about them.

Another recent Intel release for the segment is the Xeon E-2100. It is far more modest than the Cascade Lake chips, yet it brings enough firepower for servers running less demanding applications: the model has six cores and 12 threads and supports ECC memory. Unlike the Cascade Lake line, the Intel Xeon E-2100 is already available to manufacturers. So, what do you think about this? Share your views and thoughts in the comment section below.
