Comments on: Competition Finally Comes To Datacenter GPU Compute
https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/

By: David https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108383 Wed, 14 Nov 2018 10:34:44 +0000 In reply to BaronMatrix.

“12nm is not twice 7nm…”

Traditionally, you would have been wrong, as a 10nm part should have 2x the density of a 14nm part ((10/14)² ≈ 0.5, i.e. half the area for the same transistor count).

But with all foundries turning such numbers into marketing, it doesn’t mean much anymore.

According to the figures above, 7nm doesn’t offer 2x the density of 14nm either:
14nm = 12.5 billion transistors, 500 mm² (~25 million transistors/mm²)
7nm = 13.2 billion transistors, 331 mm² (~40 million transistors/mm²)

That is only about a 1.6x density gain, whereas traditionally 7nm should have 4x the density of 14nm.
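For what it’s worth, here is that arithmetic spelled out in a short Python sketch. It uses only the rounded figures quoted above, not official foundry numbers, and the variable names are just labels:

# Traditional scaling says density goes with 1/(feature size)^2,
# so 14nm -> 7nm "should" give ~4x; the quoted parts show ~1.6x.
part_14nm = {"transistors": 12.5e9, "area_mm2": 500.0}
part_7nm = {"transistors": 13.2e9, "area_mm2": 331.0}

traditional_gain = (14 / 7) ** 2                                   # 4.0x
actual_gain = (part_7nm["transistors"] / part_7nm["area_mm2"]) / \
              (part_14nm["transistors"] / part_14nm["area_mm2"])   # ~1.6x

print(f"14nm part: {part_14nm['transistors'] / part_14nm['area_mm2'] / 1e6:.1f} million transistors/mm²")
print(f"7nm part:  {part_7nm['transistors'] / part_7nm['area_mm2'] / 1e6:.1f} million transistors/mm²")
print(f"Traditional expectation: {traditional_gain:.1f}x, actual: {actual_gain:.2f}x")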

By: nnunn https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108244 Sun, 11 Nov 2018 22:52:08 +0000

Then again, some of us (HPC physics) are concerned only with FP64 and memory bandwidth, so AMD has our attention. Price wars? Good times?

By: Matt https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108177 Sat, 10 Nov 2018 16:36:32 +0000 In reply to sam lebon.

We must consider how many transistors the 3 NVLink controllers use.

But since the AMD chip has so much less utility, due to an inferior toolchain, inferior application optimization, and the lack of Tensor Cores, the higher manufacturing cost of NVIDIA’s chip is not that big of a deal.

By: sam lebon https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108084 Thu, 08 Nov 2018 20:33:10 +0000 In reply to David.

No. You should compare by die area and the number of transistors used: 13 billion vs 21 billion, and 331 mm² vs 815 mm². Do the math. Not to mention price.
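Doing that math with the rounded figures quoted here (and taking the 331 mm² die to be the 7nm Vega 20 and the 815 mm² die to be the 12nm GV100, as discussed elsewhere in this thread), a quick Python sketch:

# Transistor density from the rounded numbers in this comment.
vega20_density = 13e9 / 331   # transistors per mm² on the 7nm die
gv100_density = 21e9 / 815    # transistors per mm² on the 12nm die

print(f"Vega 20: {vega20_density / 1e6:.1f} million transistors/mm²")
print(f"GV100:   {gv100_density / 1e6:.1f} million transistors/mm²")
print(f"Density ratio: {vega20_density / gv100_density:.2f}x")   # roughly 1.5x despite the full node jump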

By: BaronMatrix https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108077 Thu, 08 Nov 2018 18:02:59 +0000 In reply to David.

Volta is so big it’s dangerous… It is at the optical limit for 12nm (815 mm²), so even a full shrink would still leave it at around 500 mm² (12nm is not twice 7nm…).

By: Matt https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108050 Thu, 08 Nov 2018 03:09:39 +0000 In reply to David.

It’s a much smaller chip than the GV100. However, AMD is probably also depending on the thermal advantage of 7 nm to compete with the GV100, which suggests an architectural issue. And it still doesn’t match NVIDIA, because it has nothing to answer the Tensor Cores.

This chip isn’t really practical competition; it is only the first step in an attempt to enter the competition, assuming AMD keeps investing in hardware and software in this space. The MI60 probably won’t have good availability for months, and then it will have to be validated. By the time it could show up for widespread use, NVIDIA’s next-generation architecture on 7 nm will be knocking on the door. Then there is also the big gap in software optimization and support to consider.

By: Muhammad https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108035 Wed, 07 Nov 2018 22:47:15 +0000

The ResNet-50 numbers from AMD are bogus: they avoided FP16 precision, so Volta didn’t get to use its powerful Tensor Cores, which would have made it much faster than Vega 20.

Also, AMD used the PCIe V100, which is the slowest V100 available. There is also the NVLink V100, which is at least 10% faster than the PCIe V100.
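To make the FP16 point concrete, here is a minimal, hypothetical PyTorch sketch of the kind of run that would let Volta’s Tensor Cores engage. It is not AMD’s or NVIDIA’s benchmark harness, it measures inference rather than training, and the batch size and iteration counts are arbitrary assumptions:

# ResNet-50 throughput in FP32 vs FP16 on a single GPU. On Volta, casting the
# model and inputs to FP16 lets cuDNN route the convolutions through Tensor
# Cores; the FP32 path does not use them.
import time
import torch
import torchvision.models as models

def images_per_second(use_fp16, batch=64, iters=50):
    model = models.resnet50().cuda().eval()
    x = torch.randn(batch, 3, 224, 224, device="cuda")
    if use_fp16:
        model, x = model.half(), x.half()   # FP16 weights and activations
    with torch.no_grad():
        for _ in range(5):                  # warm-up
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return batch * iters / (time.time() - start)

print("FP32:", images_per_second(use_fp16=False))
print("FP16:", images_per_second(use_fp16=True))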

By: David https://www.nextplatform.com/2018/11/07/competition-finally-comes-to-datacenter-gpu-compute/#comment-108028 Wed, 07 Nov 2018 20:02:15 +0000

@Author,

If AMD needs 7nm just to match NVIDIA, does AMD have architecture problems?
