Nvidia CEO Jensen Huang in his usual leather jacket.
Getty
Nvidia on Tuesday announced new software that helps software makers prevent AI models from stating incorrect facts, talking about harmful topics or opening up security holes.
The software, called NeMo Guardrails, is one example of how the artificial intelligence industry is scrambling to address the problem of “hallucinations” in the latest generation of large language models, a major roadblock for companies.
Large language models such as GPT, from Microsoft-backed OpenAI, and Google’s LaMDA are trained on terabytes of data to create programs that can spit out blocks of text that read as if a human wrote them. But they also have a tendency to make things up, which practitioners often call “hallucination,” and applications built on these models need to keep hallucinations to a minimum.
Nvidia’s new software can do this by adding guardrails that prevent the software from addressing topics it shouldn’t. NeMo Guardrails can keep an LLM chatbot on specific topics, head off toxic content, and prevent LLM systems from executing harmful commands on a computer.
Jonathan Cohen, Nvidia’s vice president of applied research, said: “You don’t have to trust the language model to follow prompts or instructions. What actually happens is hard-coded into the execution logic of the guardrail system.”
The announcement also highlights Nvidia’s strategy to maintain its market lead in AI chips by simultaneously developing critical software for machine learning.
Nvidia provides the graphics processors that are needed by the thousands to train and deploy software like ChatGPT. Analysts say Nvidia holds more than 95% of the AI chip market, but competition is fierce.
Usage
NeMo Guardrails is a layer of software that sits between the user and a large language model, or other AI tools. It heads off bad prompts and bad outputs before the model spits them out.
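In code, that layer is loaded as a guardrails configuration that wraps the model, so every prompt and response passes through it. Below is a minimal sketch using the open-source nemoguardrails Python package; the configuration path and the sample question are illustrative assumptions, not Nvidia’s own examples.

```python
# pip install nemoguardrails
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration (Colang flows plus a YAML model config)
# from a local directory -- "./config" is a placeholder path.
config = RailsConfig.from_path("./config")

# Wrap the underlying large language model with the guardrails layer.
rails = LLMRails(config)

# Every request now passes through the rails before and after the model call.
response = rails.generate(messages=[
    {"role": "user", "content": "How do I reset my account password?"}
])
print(response["content"])
```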
Nvidia suggested a customer service chatbot as one possible use case. Developers can use Nvidia’s software to keep it from talking about off-topic subjects or “digressing.”
“If you have a customer service chatbot designed to talk about your products, you probably don’t want it answering questions about your competitors,” said Nvidia’s Cohen. “When that happens, you steer the conversation back to the topics you prefer.”
Nvidia offered another example of a chatbot that answers internal HR questions. In this case, Nvidia was able to add guardrails so the ChatGPT-based bot wouldn’t answer questions about the example company’s financial performance or access private data about other employees.
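Rules like the ones in these two examples are written as Colang flows inside the guardrails configuration. The sketch below, which embeds the Colang as a string, is a hedged illustration: the message names, phrasings and model settings are assumptions rather than Nvidia’s published samples.

```python
from nemoguardrails import LLMRails, RailsConfig

# Illustrative Colang flows: recognize competitor questions and steer the
# conversation back to the bot's own products.
colang_content = """
define user ask about competitors
  "What do you think of your competitors?"
  "Is their product better than yours?"

define bot deflect competitor question
  "I can only help with questions about our own products."

define flow competitor questions
  user ask about competitors
  bot deflect competitor question
"""

# Minimal model configuration; the engine and model name are placeholders.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)
```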
The software can also help detect hallucinations by asking a second LLM to fact-check the first LLM’s answer. If the two models don’t come up with matching answers, the system returns “I don’t know.”
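Independent of any particular library, the underlying pattern is a second model call that judges the first answer. The sketch below illustrates only that idea, with a hypothetical call_llm callable standing in for whatever LLM client an application already uses; it is not NeMo Guardrails’ internal implementation.

```python
from typing import Callable

def answer_with_fact_check(
    question: str,
    evidence: str,
    call_llm: Callable[[str], str],  # any function that sends a prompt to an LLM
) -> str:
    # First call: draft an answer grounded in the provided evidence.
    draft = call_llm(
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer concisely:"
    )

    # Second call: a separate check that only verifies the draft against the evidence.
    verdict = call_llm(
        "Does the evidence support the answer? Reply 'yes' or 'no'.\n"
        f"Evidence:\n{evidence}\n\nAnswer: {draft}"
    )

    # If the checker disagrees, fall back to an explicit refusal.
    return draft if verdict.strip().lower().startswith("yes") else "I don't know."
```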
Nvidia also said the guardrail software helps with security by allowing LLM models to interact only with third-party software on an allow list.
NeMo Guardrails is open source, is offered through Nvidia services and can be used in commercial applications. According to Nvidia, programmers use the Colang programming language to write custom rules for AI models.
Other AI companies, including Google and OpenAI, use a method called reinforcement learning from human feedback to prevent harmful output from LLM applications. In this method, human testers create data about which answers are acceptable and which are not, and that data is then used to train the AI model.
Nvidia has increasingly turned its attention to AI as it currently dominates the market for the chips used to build the technology. Riding the AI wave has made it the biggest gainer in the S&P 500 so far in 2023, with the stock up 85% as of Monday.
Correction: Programmers use the Colang programming language to create custom rules for AI models, Nvidia says. An earlier version of this story misstated the name of the language.