Algorithms in Artificial Intelligence


Browse research content

These are samples of research projects, papers, and blog posts written by our researchers.

More about Algorithms

Increasingly, we are confronted in our daily lives with decisions made by machine learning models, and there is growing concern about these systems' lack of transparency, accountability, and fairness, as well as the potential biases embedded in them. One may wonder how such models can be biased at all, given that they are algorithms, and algorithms are simply sets of rules or instructions. In this post, we disentangle the relationship between algorithms and machine learning models to answer this question. We also discuss the different ways in which algorithms play a role in machine learning.

What is an algorithm?

An algorithm is traditionally defined as a set of instructions for performing a task (such as sorting a sequence of numbers, multiplying two matrices of numbers, or finding the shortest path from A to B on a map). A traditional methodology in software engineering consists of (i) formalizing the problem at hand, (ii) finding a solution involving suitable data structures and algorithms, and (iii) implementing the solution in a suitable programming language. Thus, an algorithm is a recipe for solving a problem, and computer programs are implementations of these recipes.


One simple example of an algorithm is a decision tree. Here is a toy example of a decision tree for deciding whether a loan application should be approved or denied (a code sketch follows the list):

1. Does the applicant have a credit score above 690?

  • If yes, approve the loan.
  • If no, move to the next question.

2. Is the applicant’s annual income above $100,000?

  • If yes, approve the loan.
  • If no, move to the next question.

3. Has the applicant been continuously employed for the last two years?

  • If yes, approve the loan.
  • If no, deny the loan.
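
To make this concrete, here is one way to render the toy decision tree above as a small Python function. This is purely illustrative: the function name, signature, and types are our own; only the three questions and their thresholds come from the example.

```python
# Toy decision tree from the example above, rendered as a Python function.
# The function name and signature are illustrative, not from the original.

def approve_loan(credit_score: int,
                 annual_income: float,
                 employed_last_two_years: bool) -> bool:
    """Return True to approve the loan, False to deny it."""
    if credit_score > 690:            # Question 1
        return True
    if annual_income > 100_000:       # Question 2
        return True
    return employed_last_two_years    # Question 3

# Example run: low score, low income, and an employment gap -> denied
print(approve_loan(credit_score=640, annual_income=55_000,
                   employed_last_two_years=False))  # False
```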


Typical properties that may come to mind when thinking of algorithms include the following:

    • Unambiguous (the instructions do not leave room for different interpretations)
    • Deterministic (if an algorithm is applied repeatedly on the same input, it yields the same output each time)
    • Objective (since algorithms are fully specified by a set of unambiguous instructions, the output cannot be affected by subjective factors such as the intuition or bias of a human decision-maker)
    • Explainable (one can explain the output of an algorithm, on a given input, by pointing to the sequence of instructions that were performed)
    • Machine-Executable (algorithms consist of instructions that can be performed by a physical machine)
    • Resource-Efficient (algorithms are typically designed to use a minimal amount of resources such as time, space, or network bandwidth)


In practice, many algorithms do not fulfill all these properties. For instance, algorithms are often only partially specified and/or may involve non-determinism (e.g., randomness or even quantum computation). Algorithms may be performed by humans or institutions rather than physical machines and/or may involve interaction with such outside entities (think of "social algorithms" such as voting schemes or strategies in a game). And algorithms may not be "correct", or it may not even be clear what would qualify as "correct". Therefore, the above list of properties should be seen not as defining features of algorithms, but as guidance for when the use of algorithms is feasible and appropriate.
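
As a minimal illustration of non-determinism, the sketch below (our own example, not from the text) is a perfectly respectable algorithm that is nevertheless not deterministic: applied twice to the same input, it generally returns different outputs.

```python
import random

# A deliberately non-deterministic "algorithm": Monte Carlo estimation
# of pi. We sample random points in the unit square and count the
# fraction that fall inside the quarter circle of radius 1.

def estimate_pi(num_samples: int) -> float:
    inside = sum(
        1
        for _ in range(num_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / num_samples

print(estimate_pi(100_000))  # e.g. 3.1448
print(estimate_pi(100_000))  # e.g. 3.1394 -- same input, different output
```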

Machine Learning

Machine learning (henceforth ML) refers to a collection of techniques for making predictions based on data. We will focus here on supervised ML, where a "model" is trained from labeled data (e.g., in the case of models for predicting recidivism, data about past convictions, including relevant features such as criminal history, socio-economic factors, education, employment history, and mental health indicators, together with the label "yes" or "no" indicating whether the person went on to commit a new offense). Once a model has been trained, it can be applied to predict the label for other, previously unseen inputs (e.g., to predict recidivism based on the aforementioned features). Schematically, the process works as follows:

The ML techniques operate on labeled datasets and output an ML model. This model can then itself be executed on inputs to predict the corresponding outputs. An ML model can take many shapes, ranging from simple ones, such as decision trees, to complicated ones, such as deep neural networks.
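
As a minimal sketch of this pipeline, the snippet below uses scikit-learn to learn a decision tree from a tiny labeled dataset. The features, values, and labels are invented purely for illustration; this is not a real loan or recidivism dataset.

```python
from sklearn.tree import DecisionTreeClassifier

# Tiny synthetic labeled dataset (invented for illustration):
# features = [credit_score, annual_income], label = 1 (repaid) / 0 (defaulted)
X_train = [[720, 60_000], [650, 120_000], [600, 40_000],
           [700, 90_000], [580, 30_000], [690, 110_000]]
y_train = [1, 1, 0, 1, 0, 1]

# The "ML technique" (the learning algorithm) ingests the labeled data...
model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# ...and outputs an ML model that predicts labels for previously unseen inputs.
print(model.predict([[640, 55_000]]))  # e.g. [0]
```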


What is the relationship between ML and algorithms? 

One can view machine learning as an alternative to traditional algorithmic approaches: instead of constructing an algorithm that maps inputs to outputs, we apply ML techniques to "learn" an ML model that maps inputs to outputs. In this sense, ML models provide an alternative to (manually designed) algorithms. What makes ML attractive is that it can take advantage of (possibly large amounts of) existing labeled data, and that it can overcome some traditional limitations of algorithm design, such as requiring a complete understanding of the problem and having to think through all possible circumstances and corner cases. On the other hand, one downside of ML is that it tends not to be resource-conscious: depending on the techniques used, the resulting model may be very resource-intensive.

Furthermore, while simple ML models such as decision trees may offer the same kind of explainability as manually designed algorithms, more complicated ones do not: their inner workings are too complicated for us to analyze. This leads to interesting problems, both conceptual and technical, such as: how can we tell whether such a complicated ML model "implements" (in some approximate way) an algorithm? In addition, while an ML model is objective in a local sense (the output it produces for a given input is fully determined and does not depend on human intuition or bias), the larger picture is different: the labeled data from which the model was trained may be, and often is, affected by exactly those undesirable human factors, which can in turn affect the model and its predictions.


This provides one angle on the relationship between ML and algorithms, but in fact the two are intertwined in many different ways, of which we mention a few here. First, the "ML techniques" we referred to earlier (which include techniques such as backpropagation and stochastic gradient descent) are themselves algorithms, and some of the remarkable progress we see in modern AI technology stems from improvements to these algorithms. It is worth noting that, although we can think of machine learning (i.e., the process of learning an ML model from data) as an application of algorithms, this does not mean that the traditional attractive features of algorithms described above, such as explainability, apply here: in this case the input to the algorithm is not a single input (think: one loan application) but the entire training dataset. Second, modern large language models (LLMs) can produce code as output (in different programming languages). This leads to a whole new set of challenges, for instance: how can we check that code is correct when it was not constructed by implementing a carefully designed algorithmic solution, but was produced by an LLM? Finally, a big part of the daily work of a machine learning engineer involves tweaking the specifics of ML architectures and techniques (their "hyperparameters"). For example, in an attention-based LLM, we may change the number of layers, the number of attention heads, the attention temperature, and so on. Algorithmic approaches are being proposed and used for the meta-problem of efficiently exploring all these tweaking options (sometimes called automated machine learning), yet another way in which algorithms and ML interact.
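
As a small sketch of this meta-problem, the snippet below uses scikit-learn's GridSearchCV to search algorithmically over decision-tree hyperparameters via cross-validation. The parameter grid is an arbitrary illustrative choice, and the iris dataset merely stands in for real training data.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# An algorithmic approach to "exploring the tweaking options":
# exhaustive grid search over hyperparameters, scored by
# cross-validation. (The grid itself is an illustrative choice.)
param_grid = {"max_depth": [2, 3, 5, None],
              "min_samples_leaf": [1, 5, 10]}

search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)  # e.g. {'max_depth': 3, 'min_samples_leaf': 1}
print(search.best_score_)   # mean cross-validated accuracy of the best model
```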

Projects within this research theme

Automated Reasoning for Economics

In this project, we develop algorithms to support economists who try to design new mechanisms for group decision making.
Tags: Responsible, Fairness, Algorithm, Theory-driven
Status: Ongoing

Explainability in Collective Decision Making

In this project, we develop methods for automatically generating explanations for why a given compromise (between disagreeing people) is the best available option.
Tags: Fairness, Explainable, Algorithm
Status: Ongoing

Can NLP bias measures be trusted?

van der Wall, O., Bachmann, D., Leidinger, A., van Maanden, L., Zuidema, W. & Schulz, K.

Automating the Analysis of Matching Algorithms

Endriss, U.

Participatory budgeting

Improving Language Model bias measures

Explainability in Collective Decision Making

Papers within this research theme

Reclaiming AI as a theoretical tool for cognitive science
Automating the Analysis of Matching Algorithms