
[Software] Nvidia bans using translation layers for CUDA software — previously the prohibition was only listed in the online EULA, now included in installed files [Updated]



[Edit 3/4/24 11:30am PT: Clarified article to reflect that this clause appears in the online listing of Nvidia's EULA but had not been included in the EULA text file bundled with the downloaded software. The warning text was added to the installed CUDA documentation in version 11.6 and newer.]

Nvidia's licensing terms, listed online since 2021, have banned running CUDA-based software on other hardware platforms using translation layers, but the warning previously wasn't included in the documentation placed on a host system during installation. That language has now been added to the EULA that ships with CUDA 11.6 and newer versions.

The restriction appears designed to prevent initiatives like ZLUDA, in which both Intel and AMD have recently participated, and, perhaps more critically, some Chinese GPU makers from utilizing CUDA code through translation layers. We've pinged Nvidia for comment and will update this story with additional details or clarifications when we get a response.

Longhorn, a software engineer, noticed the terms. "You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform," reads a clause in the installed EULA text file.

The clause was absent from the EULA documentation installed with the CUDA 11.4 and 11.5 releases, and presumably from all versions before that. However, it is present in the documentation installed with version 11.6 and newer.

Being a leader has a good side and a bad side. On the one hand, everyone depends on you; on the other hand, everyone wants to stand on your shoulders. The latter is apparently what has happened with CUDA. Because the combination of CUDA and Nvidia hardware has proven to be incredibly efficient, tons of programs rely on it. However, as more competitive hardware enters the market, more users are inclined to run their CUDA programs on competing platforms. There are two ways to do it: recompile the code (available to developers of the respective programs) or use a translation layer.

For obvious reasons, using a translation layer like ZLUDA is the easiest way to run a CUDA program on non-Nvidia hardware. All one has to do is take already-compiled binaries and run them using ZLUDA or other translation layers. ZLUDA appears to be floundering now, with both AMD and Intel having passed on the opportunity to develop it further, but that doesn't mean translation isn't viable.
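To make the mechanics concrete, here is a minimal sketch of an ordinary CUDA runtime program, written for illustration rather than taken from ZLUDA. Everything it does on the GPU goes through calls into the CUDA runtime/driver libraries, and that is the seam a translation layer exploits: supply compatible replacement libraries, and the same unmodified binary can end up driving non-Nvidia hardware (the kernel itself ships inside the binary as Nvidia-targeted intermediate code, which the layer also has to translate, and that is where most of the hard work lies).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: element-wise addition of two vectors.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Each of these calls is resolved against the CUDA runtime/driver
    // libraries at run time. A translation layer ships drop-in replacements
    // for those libraries, so the already-compiled binary never needs to be
    // modified for its API calls to land on a different backend.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```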

Several Chinese GPU makers, including one funded by the Chinese government, claim to run CUDA code. Denglin Technology designs processors featuring a "computing architecture compatible with programming models like CUDA/OpenCL." Given that reverse engineering an Nvidia GPU is hard (unless one already somehow has all the low-level details of Nvidia GPU architectures), we are probably dealing with some sort of translation layer here, too.

One of the largest Chinese GPU makers, Moore Threads, also offers MUSIFY, a translation tool designed to let CUDA code work with its GPUs. However, whether MUSIFY qualifies as a complete translation layer remains to be seen, since some aspects of it may involve porting code rather than translating it. As such, it isn't entirely clear whether Nvidia's ban on translation layers is a direct response to these initiatives or a pre-emptive strike against future developments.

Understandably, translation layers threaten Nvidia's hegemony in the accelerated computing space, particularly in AI applications. This is probably the impetus behind Nvidia's decision to ban running CUDA applications on other hardware platforms via translation layers.

Recompiling existing CUDA programs remains perfectly legal. To simplify this, both AMD and Intel have tools to port CUDA programs to their ROCm and oneAPI platforms, respectively.
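By way of illustration, here is roughly what that porting route looks like: a small CUDA snippet with comments noting the HIP names that AMD's HIPIFY tools would substitute (Intel's SYCLomatic performs a similar source-to-source conversion for oneAPI). The mapping follows the commonly documented CUDA-to-HIP renames, but the snippet is a hand-written sketch rather than actual tool output.

```cuda
#include <cuda_runtime.h>   // -> hip/hip_runtime.h after porting

// Kernel code is largely untouched by porting; mostly the host-side
// API prefixes change (HIP equivalents noted in the comments).
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // identical in HIP
    if (i < n) data[i] *= factor;
}

void scaleOnDevice(float* host, int n) {
    float* dev = nullptr;
    const size_t bytes = n * sizeof(float);

    cudaMalloc(&dev, bytes);                               // -> hipMalloc
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // -> hipMemcpy, hipMemcpyHostToDevice
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);         // the <<<>>> launch syntax also works in HIP
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // -> hipMemcpy, hipMemcpyDeviceToHost
    cudaFree(dev);                                         // -> hipFree
}
```

The result is a native ROCm (or oneAPI) program that the developer rebuilds and ships, which is why this route remains permitted, while running unmodified binaries through a translation layer is what the new EULA language targets.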

As AMD, Intel, Tenstorrent, and other companies develop better hardware, more software developers will be inclined to design for these platforms, and Nvidia's CUDA dominance could ease over time. Furthermore, programs specifically developed and compiled for particular processors will inevitably work better than software run via translation layers, which means better competitive positioning for AMD, Intel, Tenstorrent, and others against Nvidia, provided they can get software developers on board. GPGPU remains an important and highly competitive arena, and we'll be keeping an eye on how the situation progresses.


https://www.tomshardware.com/pc-components/gpus/nvidia-bans-using-translation-layers-for-cuda-software-to-run-on-other-chips-new-restriction-apparently-targets-zluda-and-some-chinese-gpu-makers

 

