Nvidia Releases AI Chatbot Chat With RTX That Runs Locally on Windows PC


Nvidia has released an artificial intelligence (AI)-powered chatbot called Chat with RTX that runs locally on a PC and does not need to connect to the Internet. The GPU maker has been at the forefront of the AI industry since the generative AI boom, with its advanced AI chips powering AI products and services. Nvidia also has an AI platform that provides end-to-end solutions for enterprises. The company is now building its own chatbots, and Chat with RTX is its first offering. The Nvidia chatbot is currently a demo app available for free.

Calling it a personalised AI chatbot, Nvidia released the tool on Tuesday (February 13). To download the software, users will need a Windows PC or workstation with an RTX 30- or 40-series GPU and a minimum of 8GB of VRAM. Once downloaded, the app can be installed with a few clicks and used right away.

Since it is a local chatbot, Chat with RTX has no knowledge of the outside world. However, users can feed it their own personal data, such as documents and files, and customise it to run queries on them. One use case is feeding it large volumes of work-related documents and asking it to summarise or analyse them, or to answer a specific question that would otherwise take hours to find manually. Similarly, it can be an effective research tool for skimming through multiple studies and papers. It supports the text (.txt), .pdf, .doc/.docx, and .xml file formats. The bot also accepts YouTube video and playlist URLs and, using the videos' transcriptions, can answer queries about them or summarise them; this functionality requires internet access.
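The workflow described above, indexing a folder of files and then answering questions against them, boils down to local retrieval. Below is a minimal sketch of that idea using simple keyword-overlap scoring; this is illustrative only and not Chat with RTX's actual pipeline, and all names are made up for the example:

```python
# Minimal local-document retrieval sketch (illustrative, not Nvidia's code).
# Scores each document by how many query words it contains and returns
# the best matches, which a chatbot would then summarise or quote.

def score(query: str, text: str) -> int:
    # Count distinct query words that also appear in the document.
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    # Rank document names by overlap score, highest first.
    ranked = sorted(documents, key=lambda name: score(query, documents[name]),
                    reverse=True)
    return ranked[:top_k]

docs = {
    "notes.txt": "quarterly revenue grew due to data center demand",
    "todo.txt": "buy groceries and call the dentist",
}
print(retrieve("why did revenue grow", docs))  # → ['notes.txt', 'todo.txt']
```

A production system would score semantic similarity with embeddings rather than raw word overlap, but the retrieve-then-answer shape is the same.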

As per the demo video, Chat with RTX is essentially a local web server with a Python instance, and it does not contain large language model (LLM) weights when freshly downloaded. Users can pick between the Mistral and Llama 2 models, and then run queries against their own data. The company states that the chatbot leverages retrieval-augmented generation (RAG), the open-source TensorRT-LLM library, and RTX acceleration for its functionality.
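Retrieval-augmented generation, which Nvidia says underpins the chatbot, follows a simple pattern: fetch the most relevant snippets from the user's data, prepend them to the question, and hand the combined prompt to the local model. A rough sketch of that prompt assembly follows; the `run_local_llm` function is a placeholder of my own, not a real Nvidia or TensorRT-LLM API:

```python
# RAG prompt-assembly sketch (illustrative). run_local_llm is a stand-in
# for whatever local Mistral/Llama 2 inference call the app actually uses.

def build_prompt(question: str, snippets: list[str]) -> str:
    # Bundle retrieved snippets into a context block ahead of the question.
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

def run_local_llm(prompt: str) -> str:
    # Placeholder: a real implementation would invoke a GPU inference
    # engine here; this stub only reports the prompt size.
    return f"(model would answer here; prompt was {len(prompt)} chars)"

snippets = ["Chat with RTX needs an RTX 30/40-series GPU with 8GB VRAM."]
print(run_local_llm(build_prompt("What GPU do I need?", snippets)))
```

Because the retrieved context travels inside the prompt rather than into the model's weights, no retraining is needed when the user's documents change.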

According to a report by The Verge, the app is approximately 40GB in size and the Python instance can occupy up to 3GB of RAM. One particular issue pointed out by the publication is that the chatbot creates JSON files inside the folders you ask it to index. So, feeding it your entire document folder or a large parent folder might be troublesome.


