Dark Posted October 22, 2020

The graphics card market for enthusiasts is now dominated by a duopoly of AMD and NVIDIA, but Intel does not want to be left behind and is keen to enter that market in full force. Its bet is the Intel Xe architecture, which so far we have only seen in the DG1-based Intel Lakefield, but Intel plans to enter the dedicated graphics card market with its DG2 architecture. What do the latest rumors say about it?

Intel is not synonymous with graphics power: its last two attempts to enter as a serious competitor ended in failure, first with the Intel i740 of the late 90s and later with Intel Larrabee, which evolved into the Xeon Phi before falling completely into oblivion. But the third time is said to be the charm, and Intel has a huge strategic interest, especially as a defense against AMD, in having competitive graphics hardware.

Intel DG2, the architecture of the Xe-HPG

Intel has not yet officially presented its DG2 architecture, which will be focused on the enthusiast gamer market and whose objective would be to go head to head against AMD's RX 6000 and NVIDIA's RTX 3000. The GPU would share features with its rivals, such as hardware-accelerated ray tracing and Variable Rate Shading. Beyond that, we know from driver leaks that the entire DG2-based Intel Xe-HP line could have integrated neural processors in the style of NVIDIA's Tensor Cores.

Otherwise, it would be an evolution of the Intel DG1 we have already seen in Intel Lakefield, but on a larger scale: a much wider GPU with a greater number of processing units.

New rumors about the Intel Xe-HPG contradict previous ones

New rumors have recently appeared about the GPU with which Intel wants to stand up to AMD and NVIDIA in high-end graphics cards. The first concerns the manufacturing process, which would be TSMC's N6, meaning Intel would not manufacture its own GPU. TSMC's 6 nm process is an EUV node conceived as an evolution of its 7 nm process, which means designs for N7 can be carried over to N6 and gain roughly an additional 20% in transistors per mm².

The second rumor tells us that the Intel Xe-HPG graphics card would carry about 16 GB of GDDR6.

The surprise comes with the third rumor: contrary to what was initially rumored, we would not be facing a GPU with an MCM configuration composed of several chiplets, or tiles as Intel calls them, but a monolithic chip. The decision to use a monolithic chip would have cut the planned Intel Xe-HPG configuration from 960 EUs to 512, which is 4,096 "Stream Processors", equivalent to 64 Compute Units (the conversion is sketched below). Leaving aside differences in efficiency between the architectures, that would make it the equivalent of AMD's Navi 21 XL GPU, so it would compete against AMD's RX 6800 and NVIDIA's RTX 3070, all with a power consumption between 150 and 200 W.

As for the price at which these cards would launch, we will not know until Intel confirms it, but it is expected to be between $400 and $500.
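To make the EU-to-Compute-Unit comparison concrete, here is a minimal sketch of the arithmetic behind the rumor. It assumes 8 FP32 ALUs per Intel Xe EU and 64 "Stream Processors" per AMD CU, which matches the figures quoted above but is a marketing-level simplification that ignores real architectural differences:

```python
# Back-of-the-envelope conversion between vendor shader-count conventions.
# Assumption: each Intel Xe EU packs 8 FP32 ALUs, and each AMD Compute Unit
# groups 64 "Stream Processors" -- the same figures the article's math uses.

FP32_ALUS_PER_EU = 8      # Intel Xe execution unit width (assumed)
STREAM_PROCS_PER_CU = 64  # AMD "Stream Processors" per Compute Unit

def eus_to_equivalents(eus: int) -> tuple[int, int]:
    """Convert an EU count to AMD-style shader and CU counts."""
    shaders = eus * FP32_ALUS_PER_EU
    cus = shaders // STREAM_PROCS_PER_CU
    return shaders, cus

for eu_count in (960, 512):  # rumored original vs. cut-down configuration
    shaders, cus = eus_to_equivalents(eu_count)
    print(f"{eu_count} EUs -> {shaders} 'Stream Processors' -> {cus} CU equivalent")
```

With 512 EUs the arithmetic lands exactly on the 4,096 shaders and 64 CUs the rumor quotes, which is why the chip gets pitched against Navi 21 XL class parts.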
Why would Intel have canceled the design with multiple tiles or chiplets?

Not long ago, Raja Koduri showed us prototypes of Intel Xe GPUs with 1, 2 and 4 tiles or chiplets, and rumors began to speculate that these could be the chips with which Intel would compete against NVIDIA and AMD. There are two reasons why, for a dedicated graphics card aimed at the enthusiast gamer market, Intel would instead have opted for a monolithic design, that is, a GPU made up of a single chip.

The first is that splitting a single GPU into several chiplets means the bandwidth between the different parts has to be immense; the problem with the external interfaces of processors is that they consume a lot of power, and it becomes necessary to resort to exotic solutions such as interposers and TSV wiring that make the product extremely expensive (a rough power estimate is sketched below). The second option involves placing two or more complete graphics chips on the same interposer; each is independent, but consumer software only uses one GPU and ignores the rest, so a dual or quad GPU would be a waste that no game would take advantage of. It is therefore better for Intel to make a monolithic chip.

But there is a reason why Intel had initially decided on a tiles/chiplets configuration, which is none other than that its own 10 nm process performs poorly with large chips.
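To give a sense of why inter-tile bandwidth is so costly, here is a minimal back-of-the-envelope sketch. The bandwidth and energy-per-bit figures are illustrative assumptions, not Intel specifications: off-die links are commonly cited around an order of magnitude more expensive per bit than on-die wiring:

```python
# Rough estimate of the power cost of moving data between GPU tiles.
# All numbers are illustrative assumptions, not Intel specifications:
# off-die links are often cited in the ~1 pJ/bit range, versus roughly
# ~0.1 pJ/bit for on-die wiring.

BITS_PER_BYTE = 8

def link_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Power (W) to sustain bandwidth_gbps GB/s at energy_pj_per_bit pJ/bit."""
    bits_per_second = bandwidth_gbps * 1e9 * BITS_PER_BYTE
    return bits_per_second * energy_pj_per_bit * 1e-12

bandwidth = 2000  # GB/s of aggregate inter-tile traffic (assumed)

on_die = link_power_watts(bandwidth, 0.1)   # hypothetical on-die cost
off_die = link_power_watts(bandwidth, 1.0)  # hypothetical interposer link cost

print(f"on-die : {on_die:.1f} W")   # ~1.6 W
print(f"off-die: {off_die:.1f} W")  # ~16 W
```

On a card budgeted at 150 to 200 W, burning on the order of ten extra watts just moving data between tiles, before even counting the cost of the interposer itself, helps explain the appeal of a monolithic die.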