Nvidia released a key update, dramatically increasing AI performance for millions of computers


  • Nvidia has optimized TensorRT, dramatically increasing generative AI performance on more than 100 million computers
  • The new TensorRT update gives users more control over AI outputs and increases the speed and accuracy of models
  • Nvidia has also released an open-source library for optimizing inference of large language models (LLMs), making advanced AI tools available to a wide range of users

Nvidia has taken another major step forward in generative artificial intelligence by optimizing Stable Video Diffusion for its TensorRT software development kit, significantly increasing performance on more than 100 million Windows PCs and workstations equipped with RTX graphics cards.

Better user control

The latest update to the TensorRT Stable Diffusion WebUI extension introduces support for ControlNets, which increase user control over AI-generated output by using additional images to guide generation. The update not only simplifies the workflow, but also improves the speed and accuracy of AI models: internal tests show up to a 50% performance increase on the GeForce RTX 4080 SUPER graphics card compared to previous implementations without TensorRT.
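For illustration, here is a minimal sketch of ControlNet-guided image generation using the open-source Hugging Face diffusers library rather than Nvidia's TensorRT WebUI extension itself; the model identifiers and the Canny edge conditioning are illustrative assumptions, but the principle is the same: an additional image steers the composition while the text prompt controls the content.

    # Illustrative ControlNet sketch with Hugging Face diffusers (not Nvidia's
    # TensorRT extension); model IDs and Canny conditioning are assumptions.
    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Turn a guide photo into a Canny edge map that will constrain the layout.
    guide = np.array(Image.open("guide.png").convert("RGB"))
    edges = cv2.Canny(guide, 100, 200)
    edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # Attach a Canny-trained ControlNet to a Stable Diffusion pipeline.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # The edge map guides composition; the prompt controls content and style.
    image = pipe("a futuristic city at dusk", image=edges,
                 num_inference_steps=30).images[0]
    image.save("output.png")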

Higher performance in other applications as well

The integration of TensorRT into popular applications extends its benefits beyond Nvidia's own tools. Blackmagic Design's DaVinci Resolve and Topaz Labs' Photo AI and Video AI saw significant performance boosts, with RTX graphics cards seeing speed increases of over 50%, 100% and up to 60%, respectively. These improvements demonstrate the power of TensorRT in accelerating generative AI models such as Stable Diffusion and SDXL, making advanced AI tools more accessible and effective for a wide range of users.

Nvidia has also released TensorRT-LLM, an open-source library designed to optimize inference of large language models (LLMs). The release offers out-of-the-box support for popular community models and makes it easier for developers, creators and regular users to take advantage of optimized performance on RTX graphics cards. By working with the open-source community, Nvidia aims to streamline the integration of TensorRT-LLM with popular application frameworks.
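As a rough sketch of what local LLM inference with TensorRT-LLM can look like, the snippet below uses the high-level Python LLM API that recent TensorRT-LLM releases provide (earlier versions required building an engine with the trtllm-build command first); the model name is only an example.

    # Minimal TensorRT-LLM sketch using the high-level LLM API; the model name
    # is illustrative, and the checkpoint is compiled into a TensorRT engine
    # for the local RTX GPU on first use.
    from tensorrt_llm import LLM, SamplingParams

    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    sampling = SamplingParams(temperature=0.8, top_p=0.95)

    outputs = llm.generate(["Explain what TensorRT-LLM does."], sampling)
    for output in outputs:
        # Each result carries the prompt and one or more generated completions.
        print(output.outputs[0].text)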

Is the future of AI local?

Generative AI continues to transform entire industries, and Nvidia's push with TensorRT on RTX PCs and workstations opens up significant possibilities for developers and regular users alike. Because AI applications run locally, Nvidia's solution can offer lower latency, lower costs and better data privacy than the cloud, whose advantage, of course, is that your local hardware does not matter.

Author of the article

Adam Homola

New technologies have fascinated me since an early age. Over time, my long-term interest in games and the gaming industry has naturally been joined by hardware, software, internet services and, since 2022, artificial intelligence.


