Reclaiming AI as a theoretical tool for cognitive science

Paper details

Tags: Language models, Algorithm, Theory-driven

This paper provides theoretical results that indicate limits on what machine learning methods can achieve, even with all the data and computing power in the world.

The field of artificial intelligence has, especially in its early days, been interwoven with the study of human cognition. An important basic assumption tying the two together is that cognition can be understood as a form of computation. Nowadays this theoretical possibility is often taken, in media coverage especially, to imply that recreating human cognition in a computational system is also practically feasible, and even that this will inevitably happen in the short term.

In this paper, we show that this inevitability claim is unfounded. Using recent breakthrough results in theoretical computer science, we show that there are learning tasks on which it is impossible for any machine learning method to perform significantly better than chance. This holds unless there is a major mathematical breakthrough on one of the most important and elusive open problems in mathematics and computer science (the P-versus-NP problem), which has resisted solution for more than half a century.
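To convey the logical shape of this result, here is a schematic rendering in LaTeX. This is our illustrative paraphrase, not the paper's exact formalization: the symbols ε and δ, the agreement measure, and the chance level of ½ are expository assumptions.

\[
\neg \exists\, A \in \mathrm{Poly}\ \ \forall\, B \in \mathcal{B}_{\mathrm{humanlike}}:\quad
\Pr\big[\, \mathrm{Agree}\big(A(D_B),\, B\big) \;\ge\; \tfrac{1}{2} + \varepsilon \,\big] \;\ge\; 1 - \delta
\qquad \text{(unless } \mathrm{P} = \mathrm{NP}\text{)}
\]

Read: no learning algorithm A running with feasible (polynomially bounded) resources can, for every humanlike behavior B, take training data D_B drawn from that behavior and, with high probability (at least 1 − δ), output a model that agrees with B on new situations non-negligibly better than chance (½ + ε), unless the open problem above is resolved in a surprising direction.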

As a result, we argue, it is important for the study of human cognition not to get sidetracked by the claim that new AI methods will magically decipher human cognition for us. AI is an important tool in cognitive science, but it is only one of many tools.

Reference: Iris van Rooij, Olivia Guest, Federico G. Adolfi, Ronald de Haan, Antonina Kolokolova, and Patricia Rich (2023). Reclaiming AI as a theoretical tool for cognitive science. Preprint, under submission.
