Language Models in Artificial Intelligence

Language models are computer programs designed to understand and generate human text, based on word patterns they have learned from being exposed to many texts.

Browse research content

These are samples of research projects, papers or blogs written by our researchers.

More about Language Models

These days, a lot of our knowledge and information is communicated through digital text. We read the news on websites. If we want to know something, we type a query into a search engine to find out. When traveling in another country, we may use a service that translates text from a foreign language into our own. All of this has resulted in enormous amounts of digital text being available on the internet and in the databases of companies, as well as in a demand for AI systems that are able to process such texts.

These two developments come together in language models. Language models are a type of AI technology that store linguistic patterns derived from enormous amounts of digital text in order to process and generate text. They have recently been made famous by their use in digital assistants such as ChatGPT, but are also fundamental to search engines, automatic translators and other language technology. These models are able to learn from all this digital text by playing a guessing game. During training, the AI system hides some of the words in every sentence and then tries to guess what word was supposed to be there. This is something that we as humans can also do quite easily when we miss or mishear a word. If the model guesses correctly, it strengthens the association between the guessed word and the words around it; if it guesses wrongly, it updates its patterns towards the correct word.
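To make the guessing game concrete, here is a minimal, illustrative sketch in Python. It is not how real language models are implemented: they use neural networks with learned weights, whereas this toy stands in with simple word counts, and the three-sentence corpus is made up for the example.

```python
from collections import Counter, defaultdict

# A made-up toy corpus; real models are trained on trillions of words.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the dog chased the cat",
]

# For each word, count which words tended to follow it during training.
# Real models store such patterns in neural network weights instead of counts.
next_word_counts = defaultdict(Counter)

for sentence in corpus:
    words = sentence.split()
    for context_word, hidden_word in zip(words, words[1:]):
        # The "guessing game": the next word is hidden and must be guessed
        # from its context; the association between the context and the
        # correct word is then strengthened.
        next_word_counts[context_word][hidden_word] += 1

def guess_next(context_word):
    """Guess the hidden word: the one most strongly associated with the context."""
    guesses = next_word_counts[context_word]
    return guesses.most_common(1)[0][0] if guesses else None

print(guess_next("the"))  # -> 'cat' (it followed 'the' three times in this corpus)
print(guess_next("sat"))  # -> 'on'  (it followed 'sat' both times)
```

Real models play this game over billions of sentences and use much richer context than a single preceding word, but the principle of strengthening associations between words is the same.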

This guessing game can be played without any human intervention, so it scales up efficiently to training what we call large language models, with billions of parameters learned from processing trillions of words in this way. This process results in language models that are able to correctly predict how words should be used to form grammatical sentences and even whole stories. They are also able to predict which words have similar meanings, even across different languages, and which documents (or messages) are about similar topics. However, the fact that these models learn all sorts of subtle associations between words also means they have a tendency to pick up harmful biases, such as gender stereotypes, from their training data.
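One way to see how such a model can tell that words have similar meanings: during training, every word ends up associated with a list of numbers (a vector), and words that appear in similar contexts end up with similar vectors. The sketch below illustrates the comparison step with made-up three-dimensional vectors; real models learn vectors with hundreds or thousands of dimensions.

```python
import math

# Hypothetical word vectors; the numbers here are made up for illustration.
vectors = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "table": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Similarity between two word vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))    # high: used in similar contexts
print(cosine_similarity(vectors["cat"], vectors["table"]))  # low: used in different contexts
```

The same kind of vector comparison is also one way in which researchers probe models for the unwanted associations mentioned above.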

While generative large language models have become excellent at producing fluent text in many different human languages, various limitations of this language modeling technology have become clear as their use becomes more common in society. Besides the aforementioned biases, there is a common misconception that these models learn facts or truths. That is not the case. They will generate text that plausibly resembles the data that they were trained on (i.e., played the guessing game with), but that does not guarantee that the informational content of the text is true, or even the same as in the training data. These models also do not have any direct access to the documents they were trained on, but only to the patterns between words that they learned from these documents. So, it is not possible to trace the source of information in texts produced by a generative language model.

Despite these limitations, language models have many useful applications, such as text translation or chatbots. Currently, research efforts are focused on better understanding the representations learned by such models, as well as on models combining textual data with images or sound (so-called multimodal language models).

Projects within this research theme

Improving Language Model Bias Measures

Many researchers develop tools for measuring how biased language models are; in this project we work on improving these tools.
Language models · Bias · Theory-driven · Ongoing

From Learning to Meaning

In this project we explore whether the generic sentences produced by language models can teach us something about how people express stereotypes.
Language models · Bias · Ongoing

Can NLP bias measures be trusted?

van der Wal, O., Bachmann, D., Leidinger, A., van Maanen, L., Zuidema, W. & Schulz, K.


Papers within this research theme

Quantifying Context Mixing in Transformers
Reclaiming AI as a theoretical tool for cognitive science
Dealing with semantic underspecification in multimodal NLP
How robust and reliable can we expect language models to be?
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
Can NLP bias measures be trusted?