Comments on: AMD Gives Nvidia Some Serious Heat In GPU Compute
https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/
In-depth coverage of high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds.
Last updated: Sat, 26 Oct 2024 00:26:42 +0000

By: tufttugger https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-238420 (Sat, 26 Oct 2024 00:26:42 +0000)
In reply to Eric Olson.

From a share-growth and opportunity perspective, does it matter that ROCm is most useful only for the biggest hyperscale customers and the smaller AI startups? Growing into that market buys AMD time to expand support from both ends to cover more of its devices, and to unify the stack across CPU, GPU, and FPGA. With the Silo AI acquisition and others, AMD may find ways to use HSA to drive even better performance than the GPU alone can deliver, letting it expand from supercomputing HPC into more of AI.

By: Jlagreen https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-237296 (Fri, 11 Oct 2024 07:00:13 +0000)
So now we take a manufacturer's slides comparing its products to the competition at face value, without any doubt?

AMD showed us similar slides last year, and MLPerf later proved that AMD misrepresented the comparison in that presentation. Even Nvidia had to comment on the slides, because AMD had not shown the full potential performance of Nvidia's products.

In MLPerf, the H200 is 40 percent faster than the MI300, so based on these slides the MI325 would have to be about 2.2x as fast as the MI300. That is hard to believe. We shall wait and see what MLPerf says about the MI325, but I suspect we will wait at least another year, or may never see results at all, since AMD is known for showing slides but rarely posting much of anything to MLPerf.
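The chained-ratio arithmetic behind that 2.2x figure can be sketched as follows; the MI325-vs-H200 ratio here is an assumption about what the slides would have to claim, not a number from this comment:

```python
# Sketch of the chained-ratio arithmetic in the comment above.
# mi325_vs_h200 is an assumption: roughly what AMD's slides would have to
# claim for the implied MI325-vs-MI300 ratio to land near 2.2x.
h200_vs_mi300 = 1.40    # MLPerf: H200 about 40% faster than MI300 (per the comment)
mi325_vs_h200 = 1.57    # assumed slide claim, not a measured number
mi325_vs_mi300 = h200_vs_mi300 * mi325_vs_h200
print(f"Implied MI325 vs MI300: {mi325_vs_mi300:.2f}x")  # ~2.20x
```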

For gaming and the data center, Nvidia only compares its own products against prior generations at new releases and shows the relative performance improvement. That is far more believable than what AMD shows here. AMD is obviously desperate to show how much better the MI325 is than the H200, because judging by MLPerf, people would rather choose the H200 over the MI300 on TCO grounds.

But yeah, while people distrust Nvidia's presentation slides yet are totally sure AMD's are right, the sales numbers speak the real truth of the market out there.

By: Eric Olson https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-237284 (Fri, 11 Oct 2024 05:07:11 +0000)
In reply to Timothy Prickett Morgan.

NVIDIA supports CUDA across its whole product line, from commodity graphics cards and enthusiast gaming hardware all the way to advanced GPU accelerators. As a result, the software ecosystem is huge: lots of developers creating and debugging things at multiple levels and in different ways.

On the other hand, ROCm does not officially support any graphics card except the old Radeon VII and a few workstation cards. Generally you need a supercomputer with an Instinct accelerator, plus three assistants to maintain the development tools, just to write code. In particular, ROCm does not officially run on the commodity hardware needed to support a robust developer community.

Yes, I know that ROCm sort of works on Polaris and newer graphics cards. But no official support means a whole class of software (game physics, media editing, local AI inference) cannot depend on ROCm. Fewer developers means less polish and lower quality.
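For what it's worth, the "sort of works" path usually relies on an unofficial environment override. A minimal sketch, assuming a ROCm install built with the gfx1030 (RDNA2) target; the right version string depends on the card and the ROCm build, and this will not help every unsupported card:

```shell
# Unofficial, unsupported workaround: spoof the GPU target so the ROCm runtime
# treats an unsupported consumer card as a supported one. 10.3.0 maps to the
# gfx1030 (RDNA2) target; other cards need other values, and some have none.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
rocminfo | grep -i gfx   # check which gfx target the runtime now reports
```

This is exactly the kind of workaround shipping software cannot depend on, which is the commenter's point.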

I think NVIDIA also spends a lot more money on CUDA and its libraries than AMD does on ROCm, but that’s a different story.

By: HuMo https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-237274 (Fri, 11 Oct 2024 01:00:25 +0000)
Well, I have to second that motion for the 512 GB of HBM3e MI355X, with chiplets laid on their (skinny) side so as to pack more of them per package, like sticks of gum really (no one in their right mind would pack gum sticks flat like a bar of chocolate squares, imho, or even Chiclets). It would be logical, innovative, fabulous, and tasty, and it would look wonderful as a limited-edition LEGO model kit! 8^p ( https://www.servethehome.com/the-4th-gen-amd-epyc-lego-model-you-have-dreamed-of/ )

By: Timothy Prickett Morgan https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-237266 (Thu, 10 Oct 2024 23:24:20 +0000)
In reply to Mickey Pearson.

That wasn’t the point of the story, but I will be taking a look into ROCm 6.2, which is a hell of a long way better than the first couple of releases.

By: Mickey Pearson https://www.nextplatform.com/2024/10/10/amd-gives-nvidia-some-serious-heat-in-gpu-compute/#comment-237264 (Thu, 10 Oct 2024 23:16:17 +0000)
"Moreover, AMD is catching up with CUDA with its ROCm stack"

This looks like a talking point being parroted. Some proof would be nice.
