Dean Ambrose™ Posted February 17

A couple of days ago, NVIDIA surprised us all by presenting Chat with RTX, its own chatbot that lets you run a local ChatGPT-style assistant on your PC, as long as you have an NVIDIA RTX 30 or 40 Series graphics card. In this article we will tell you how to install and configure it, and share our first-hand impressions of how it works.

That NVIDIA is focusing a large part of its efforts on Artificial Intelligence is well known; apart from selling hardware for AI development, the company has been using AI for almost everything lately, and it has long offered technologies such as RTX Voice. Now, with Chat with RTX, NVIDIA goes a step further and puts AI directly in the hands of users to try for themselves.

What do you need to try NVIDIA Chat with RTX?

As mentioned, one fundamental requirement is owning an NVIDIA RTX graphics card from the Ampere (RTX 30) or Ada Lovelace (RTX 40) generation. In addition, you will need Windows 11, 16 GB of RAM or more, and GeForce driver 535.11 or newer installed on the computer.

Once you have verified that your PC meets all the requirements, go to the NVIDIA website and download the tool completely free of charge. The download is a ZIP archive that, as NVIDIA makes clear, is a demo, not the full version of the tool. Be careful, because the compressed file takes up a whopping 35.1 GB of space.

When the download finishes, simply unzip it to a directory on your storage drive. Inside the folder you will find a "Setup.exe" that you must run to install the application. The installation process is very standard: basically keep clicking Next until it finishes.
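Before downloading, it can save a failed install to check the basics yourself. Here is a minimal, hypothetical Python sketch (not part of NVIDIA's tooling) that checks for enough free disk space to hold the 35.1 GB archive plus room to unpack it; the 75 GB threshold is our assumption, not an official NVIDIA figure:

```python
import shutil

def enough_disk_space(path=".", required_gb=75.0):
    """Check free space at `path`: ~35 GB for the ZIP plus room to unpack it.
    The 75 GB default is an assumption, not an official NVIDIA requirement."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

if enough_disk_space():
    print("Enough free space to download and unpack Chat with RTX.")
else:
    print("Free up disk space before grabbing the 35.1 GB archive.")
```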
Of course, give it time, because installation takes a LONG time; there is a "building the Llama 13B INT4 Engine" step that consumes all the RAM on the computer, and everything slows down considerably for about a minute. Be patient.

Once the process is complete, the wizard itself will prompt you to launch the application, although it will also have created a shortcut on your desktop. When you run it, a command-prompt window like the one below will appear, along with a message asking whether you want to allow Python to run on the PC (you must answer yes). When loading finishes, a local web page will open in your browser where you can configure what we describe next, so let's get to it.

How this local AI works on your PC

Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software, and NVIDIA RTX acceleration to bring generative AI to your PC. Users can quickly and easily connect local PC files as a data set to an open-source language model such as Mistral or Llama 2, allowing queries to be answered quickly.

Instead of searching through notes or content saved on the PC's drive, with NVIDIA's Chat with RTX you can simply write questions. For example, you can ask things like "What was the restaurant that my friend Rubén recommended to me the other day?" and the AI will scan the local files you point it at to find the answer. This means you must tell the application where to look for those files (.txt, .pdf, .doc/.docx and .xml), right in the web interface that opens, as mentioned before.
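To picture what the retrieval side of RAG does with your documents, here is a toy Python sketch. It only covers the "retrieval" half, using naive word overlap as the score; Chat with RTX additionally feeds the retrieved text to Mistral or Llama 2 to compose the answer, and real RAG systems score with vector embeddings. The file names and contents below are purely illustrative:

```python
def retrieve(question, docs):
    """Return the (name, text) of the document sharing the most words with
    the question. Real RAG uses embeddings, not raw word overlap."""
    q_words = set(question.lower().split())
    best, best_score = None, -1
    for name, text in docs.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best, best_score = name, score
    return best, docs[best]

# Illustrative local "notes" standing in for your .txt/.pdf/.docx files.
docs = {
    "notes.txt": "Ruben recommended the restaurant Casa Lola for tapas.",
    "todo.txt": "Buy a new SSD and update the GPU drivers.",
}
name, snippet = retrieve("What restaurant did Ruben recommend?", docs)
print(name, "->", snippet)  # picks notes.txt, the note about the restaurant
```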
If you change the path where the application should search for files, the command-prompt window that remains open starts logging everything being processed. Be warned: it takes a long time to process any change; just changing the path can take 2-3 minutes of processing (with an RTX 4080 OC), and during that time the GPU runs at 100%, greatly increasing power consumption and temperature.

It is in this local web interface where you interact with the AI, in the text box at the bottom, just to the left of the green Send button. Of course, at the moment you can only ask your questions in English.

From what we have been testing, this NVIDIA chatbot is currently too basic, archaic and slow. It is not slow at answering your questions, which it does almost instantly, but it consumes a lot of system resources, and any change you want to make takes a long time to process. Additionally, once you have opened the application and told it where to look for resources, it does not automatically pick up changes to that folder; you have to either restart Chat with RTX or select the folder again.

The conclusion we draw after trying it is that... it is not worth it, at least for now. You need a lot of system resources, and you need all your information organized in a single directory on the PC so the chatbot can work with it, something not everyone has set up beforehand. Just to try this tool, you will have to spend a lot of your own time only to have it answer questions you already know. Now, if you are a user who works with data "at a glance" and you have it well organized and defined, then Chat with RTX could be a powerful tool for you, because just by "chatting" with the bot it could give you the answers you need without you having to search through your files.
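Since the chatbot does not notice edits to the indexed folder on its own, you can tell when a re-scan is due with a simple snapshot comparison. This is a hypothetical helper of our own, not part of Chat with RTX; it covers only the file types the tool supports:

```python
import os

def snapshot(folder, exts=(".txt", ".pdf", ".doc", ".docx", ".xml")):
    """Map each supported file in `folder` to its last-modified time."""
    return {
        name: os.path.getmtime(os.path.join(folder, name))
        for name in os.listdir(folder)
        if name.lower().endswith(exts)
    }

def needs_rescan(before, after):
    """True if files were added, removed, or modified between snapshots."""
    return before != after
```

If `needs_rescan` returns True, restart Chat with RTX or re-select the folder so it rebuilds its index over the new contents.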
https://hardzone.es/noticias/tarjetas-graficas/nvidia-chat-with-rtx-guia/