
How does ChatGPT decide what to say next? Here’s a quick explainer.

  • AI chatbots like ChatGPT are based on large language models that are fed a ton of information.
  • They’re also trained by humans who help the system “learn” what’s an appropriate response. 
  • Here’s how computer science experts explain how the bots know what words to say next.


ChatGPT and other chatbots driven by artificial intelligence can speak in fluent, grammatically sound sentences that may even have a natural rhythm to them.

But don’t be lulled into mistaking that well-executed speech for thought, emotion, or even intent, experts say. 

A chatbot works much more like a machine performing mathematical calculations and statistical analysis to call up the right words and sentences for a given context, experts said. A lot of training happens on the back end, including feedback from human annotators, to help the system simulate functional conversations.

Bots like ChatGPT are also trained on large volumes of conversation that teach machines how to interact with human users. OpenAI, the company behind ChatGPT, says on its website that its models are trained on information from a range of sources, including its users and material it has licensed.

Here’s how these chatbots work: 

AI chatbots like OpenAI’s ChatGPT are based on large language models, or LLMs, which are programs trained on volumes of text obtained from published writing and information online — generally content produced by humans. 

The systems are trained on sequences of words and learn the importance of the words in those sequences, experts said. So all of that ingested text not only trains large language models on factual information, it also helps them pick up patterns of speech and learn how words are typically used and grouped together.
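
To make that concrete, here is a toy sketch in Python. It is nothing like a production system, and the training sentence is invented for illustration, but it shows the core idea of learning which words tend to follow which from example text.

```python
# Toy illustration (not any real chatbot's code): count which word tends
# to follow which in a scrap of text, then "predict" the most common
# follower. Real LLMs learn far richer patterns with neural networks.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

# "Predict" by looking up the most frequent followers seen in training.
print(follows["the"].most_common(3))  # [('cat', 2), ('dog', 2), ('mat', 1)]
print(follows["sat"].most_common(1))  # [('on', 2)]
```

Even this crude counter picks up that “sat” is usually followed by “on”; scaled up to billions of sentences and a neural network, the same instinct gives a model a statistical feel for how language fits together.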

Chatbots are also trained further by humans on how to provide appropriate responses and limit harmful messages. 

“You can say, ‘This is toxic, this is too political, this is opinion,’ and frame it not to generate those things,” said Kristian Hammond, a computer science professor at Northwestern University. Hammond is also the director of the university’s Center for Advancing Safety of Machine Intelligence.
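
As a very loose illustration of how that kind of feedback might gate a bot's output, here is a toy Python filter. Everything in it, including the labels and the review() helper, is invented for the example; real systems learn from human labels with reward models and fine-tuning, not string matching.

```python
# Toy illustration of feedback-based filtering (nothing like OpenAI's
# actual pipeline): human labels mark bad examples, and candidate replies
# resembling them are screened out before a user sees anything.
labeled_examples = [
    ("you are an idiot", "toxic"),
    ("that party is always wrong", "too political"),
]

def review(reply: str) -> str:
    # Stand-in for a learned classifier; real systems do not string-match.
    for text, label in labeled_examples:
        if text in reply.lower():
            return label
    return "ok"

def respond(candidates: list[str]) -> str:
    # Return the first candidate reply the filter accepts.
    for reply in candidates:
        if review(reply) == "ok":
            return reply
    return "Sorry, I can't help with that."

print(respond(["You are an idiot.", "Here is a neutral summary."]))
# -> "Here is a neutral summary."
```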

When you ask a chatbot a simple factual question, the recall process can be straightforward: it deploys a set of algorithms to choose the most likely sentence to respond with. It selects its best candidate responses within milliseconds and, from those top choices, presents one at random. (That’s why asking the same question repeatedly can generate slightly different answers.)
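
A rough Python sketch of that step, assuming made-up candidate words and scores:

```python
# Minimal sketch of the selection step, with made-up candidate words and
# scores: convert scores to probabilities, keep the top few, and sample
# one at random in proportion to its probability.
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words.
candidates = ["Paris", "France", "Lyon", "banana"]
scores = [4.0, 2.5, 1.0, -3.0]

# Keep the 3 highest-scoring candidates, renormalize, then sample.
top = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)[:3]
probs = softmax([s for _, s in top])
print(random.choices([w for w, _ in top], weights=probs)[0])
# Usually "Paris", occasionally "France" or "Lyon"; never "banana".
```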

It can also break a question down into multiple parts, answer each part in sequence, and use its earlier answers to help it finish answering.

Say you asked the bot to name a US president who shares a first name with the male lead actor of the movie “Camelot.” The bot might answer first that the actor in question is Richard Harris, and then use that answer to give you Richard Nixon as the answer to your original question, Hammond said.

“Its own answers earlier on become part of the prompt,” Hammond said. 
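
A minimal sketch of that chaining, with a hypothetical ask() function standing in for the model call and canned replies mirroring the Camelot example:

```python
# Sketch of multi-step answering: a hypothetical ask() stands in for a
# call to the model, and the first answer is folded into the next prompt.
def ask(prompt: str) -> str:
    # Placeholder with canned replies mirroring the Camelot example;
    # a real system would send the prompt to a language model here.
    if "Camelot" in prompt:
        return "Richard Harris"
    return "Richard Nixon"

step1 = ask("Who played the male lead in the movie Camelot?")
# The earlier answer becomes part of the next prompt.
step2 = ask(f"{step1} is an actor. Which US president shares his first name?")
print(step1, "->", step2)  # Richard Harris -> Richard Nixon
```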

But watch out for what chatbots don’t know

What happens when you ask a question it doesn’t know the answer to? That’s where chatbots create the most trouble, because of an inherent trait: they don’t know what they don’t know. So they extrapolate based on what they do know; that is, they make a guess.

But they don’t tell you they’re guessing; they may simply present information as fact. When a chatbot invents information and presents it to a user as factual, it’s called a “hallucination.”
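
One way to see the mechanics behind a hallucination, sketched in Python with invented numbers: sampling always produces some output, and a nearly flat probability distribution (the model effectively guessing) yields an answer just as readily as a sharply peaked one.

```python
# Sketch with invented numbers: sampling always emits some answer,
# whether the model's probabilities are sharply peaked (confident) or
# nearly flat (essentially guessing).
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

confident = softmax([5.0, 0.5, 0.2, 0.1])   # ~[0.97, 0.01, 0.01, 0.01]
guessing = softmax([1.1, 1.0, 1.0, 0.9])    # ~[0.28, 0.25, 0.25, 0.23]

for name, probs in [("confident", confident), ("guessing", guessing)]:
    pick = random.choices(["A", "B", "C", "D"], weights=probs)[0]
    print(name, "->", pick)
# In both cases a token comes out; nothing forces the model to say
# "I don't know" when its best option barely beats the rest.
```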

“This is what we call knowledge of knowledge or metacognition,” said William Wang, an associate professor of computer science at the University of California, Santa Barbara. He is also a co-director of the university’s natural language processing group.

“The model doesn’t really understand the known unknowns very well,” he said. 
