
Google developers are teaching artificial intelligence to explain jokes, a task that, however banal it sounds, represents a profound technological advance in how these systems learn to analyze and respond to human language.
The goal is to push the frontiers of Natural Language Processing (NLP), the technology behind large language models (LLMs) such as GPT-3 that allow, for example, chatbots to reproduce human communication with increasing accuracy, to the point where, in the most advanced cases, it is difficult to distinguish whether the interlocutor is a human being or a machine.
Now, in a recently published paper, Google's research team claims to have trained a language model called PaLM that is capable not only of generating realistic text, but also of interpreting and explaining jokes told by humans.
In the examples that accompany the paper, Google's artificial intelligence team demonstrates the model's ability to perform logical reasoning and other complex, highly context-dependent language tasks, for example by using a technique called chain-of-thought prompting, which greatly improves the system's ability to work through multi-step logical problems by simulating the reasoning process of a human being.
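To make the idea concrete, here is a minimal sketch of chain-of-thought prompting in Python. This is illustrative, not Google's code: `generate` is a hypothetical stand-in for any text-completion call, and the worked arithmetic example mirrors the style used in the research literature on this technique.

```python
# Chain-of-thought prompting: the prompt includes a worked example whose
# answer spells out intermediate reasoning steps, so the model imitates
# that step-by-step style when answering the new question.

COT_EXAMPLE = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

def chain_of_thought_prompt(question: str) -> str:
    """Prepend a worked, step-by-step example to the new question."""
    return f"{COT_EXAMPLE}\nQ: {question}\nA:"

prompt = chain_of_thought_prompt(
    "A juggler has 16 balls. Half are golf balls, and half of the "
    "golf balls are blue. How many blue golf balls are there?"
)
# response = generate(prompt)  # hypothetical LLM call; the model is now
#                              # biased toward answering in explicit steps
print(prompt)
```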
By “explaining the jokes” the system shows that it understands them: it can identify the narrative twist, the play on words, or the sarcastic turn in the punchline, as can be seen in this example.
Joke: What is the difference between a zebra and an umbrella? One is a striped animal related to horses, another is a device that you use to prevent rain from falling on you.
AI Explanation: This joke is an anti-joke. The joke is that the answer is obvious, and the joke is that you expected a funny answer.
Behind PaLM's ability to analyze these prompts is one of the largest language models ever built, with 540 billion parameters. Parameters are the elements of the model that are adjusted during training each time the system receives sample data. By comparison, OpenAI's GPT-3 has 175 billion parameters.
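A back-of-the-envelope calculation gives a sense of what these parameter counts mean in raw storage. The precisions below are illustrative assumptions, not details of either model's actual training setup.

```python
# Rough storage footprint of the model weights alone, at two common
# numeric precisions. Figures are illustrative approximations.

PALM_PARAMS = 540e9   # 540 billion
GPT3_PARAMS = 175e9   # 175 billion

for name, params in [("PaLM", PALM_PARAMS), ("GPT-3", GPT3_PARAMS)]:
    for precision, bytes_per_param in [("float32", 4), ("float16", 2)]:
        size_tb = params * bytes_per_param / 1e12
        print(f"{name} at {precision}: ~{size_tb:.2f} TB of weights")

# PaLM at float32: ~2.16 TB of weights
# PaLM at float16: ~1.08 TB of weights
```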
The growing number of parameters has allowed researchers to produce a wide range of high-quality results without having to spend time training the model for individual scenarios. In other words, the performance of a language model often scales with its parameter count, and larger models are capable of what is known as “few-shot learning”: the ability of a system to learn a wide variety of complex tasks from relatively few training examples.
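Few-shot learning can likewise be sketched in a few lines. In the hypothetical example below, the task (sentiment labeling) and its examples are invented for illustration; the point is that the model infers the task from a handful of inline examples, with no retraining.

```python
# Few-shot prompting: a handful of labeled examples are placed in the
# prompt itself, and the model completes the pattern for the new input.

FEW_SHOT_EXAMPLES = [
    ("This movie was a waste of time.", "negative"),
    ("Absolutely loved the soundtrack!", "positive"),
    ("The plot made no sense at all.", "negative"),
]

def few_shot_prompt(new_input: str) -> str:
    """Format the labeled examples followed by the unlabeled new case."""
    lines = [f"Review: {text}\nSentiment: {label}"
             for text, label in FEW_SHOT_EXAMPLES]
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt("A stunning, heartfelt film.")
# response = generate(prompt)  # hypothetical LLM call; the model completes
#                              # the pattern with a label like "positive"
print(prompt)
```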