Friday 09 August 2019

Forms of ‘artificial intelligence’ or ‘learning machines’ are really nothing more than optimisation devices

The recommendation algorithms developed by Facebook are designed to maximise income from the platform. The credit evaluation models used by BNP are designed to minimise the risk of payment default. The facial recognition systems deployed by the police are designed to minimise cases of mistaken identity. What is distinctive about these digital devices is that they present themselves from the outset as optimisation devices: they lead engineers (and those who employ them) to treat problems we experience in everyday life as mathematical problems of minimisation or maximisation.

Let’s take the case of recommendation algorithms. Imagine the scene. Senior managers, gathered in a large, plain office, see in the recommendation procedure a unique opportunity to improve ‘user engagement’. The technical managers are then instructed to define a set of ‘metrics’ that give these objectives a measurable form, both quantitative and empirical. The engineers, restless in their open-plan office, get on with developing a handful of recommendation ‘algorithms’, retaining those which improve the specified metrics.

The algorithms therefore exploit the collected data to recommend, from among the mass of available content, the content with which you are ‘most’ likely to interact. In this field, the ‘click-through rate’ offers a standard procedure for measuring an algorithm’s capacity to produce such interactions. The metric defines the ‘click’ as the measure of a successful interaction and the ‘number of clicks’ as the measure of the algorithm’s performance. The matter is settled: an algorithm is optimal once its recommendations generate the maximum number of clicks.
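The selection procedure just described can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the algorithm names, trial counts, and click figures below are invented for the sketch, not drawn from any real platform. Each candidate recommender is scored by its click-through rate, and the one with the highest rate is retained.

```python
# Hypothetical illustration: choosing among candidate recommendation
# algorithms by comparing their click-through rates (CTR).
# All names and numbers are invented for the sketch.

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: the fraction of recommendations that produced a click."""
    return clicks / impressions if impressions else 0.0

# Observed interactions for each candidate algorithm during a trial run.
trials = {
    "algorithm_a": {"clicks": 120, "impressions": 4000},
    "algorithm_b": {"clicks": 310, "impressions": 5000},
    "algorithm_c": {"clicks": 95,  "impressions": 2500},
}

# The metric defines 'success': the candidate with the highest CTR wins.
scores = {name: click_through_rate(t["clicks"], t["impressions"])
          for name, t in trials.items()}
best = max(scores, key=scores.get)

print(best, round(scores[best], 3))  # algorithm_b 0.062
```

The point of the sketch is that the entire evaluation collapses into a single number per algorithm: whatever the metric leaves out (the quality, veracity, or effects of the recommended content) plays no part in the decision.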

Contemporary optimisation devices, which people speak of as ‘learning machines’ or ‘forms of artificial intelligence’, largely extend the undertakings to mathematise human affairs launched half a century ago by engineers in operations research, decision statistics and mathematical economics. The very word ‘optimisation’, which entered our languages at the end of World War II to name these transformations, now imposes itself with the force and obviousness of a watchword in every situation that arises or can be imagined.

The concept of optimisation thus points to two promising avenues for submitting these optimisation techniques to a properly political discussion.

The first is the possibility of holding senior managers to account for the metrics which their devices are instructed to optimise. Granted, metrics are technical, in that they make it possible to know precisely what effects a given device produces (e.g. the number of clicks generated by one algorithm or another). But they are just as much political, in that they presuppose a division between what matters (e.g. Facebook’s business income) and what does not (e.g. the effects of misinformation). Optimisation invites us to discuss these purposes and to devise metrics likely to serve other objectives.

The second is the possibility of discussing the capacity of these devices to treat the problems that concern us. Optimisation devices (and therein lie both their power and their limitation) simultaneously open and close the range of situations of which they enable us to give an account. This involves a fact which current discussions most often fail to note: mathematical formalism sometimes runs aground where other procedures of investigation (e.g. discussions or interviews) or of intervention (e.g. demonstrations or controls) succeed. Optimisation invites us to question the capacity of a given mathematical device to enrich or impoverish our ways of thinking and acting in the situations that matter to us.

The concept of optimisation, if one hears the ethical (bonus) or political (optimum) significance inscribed in the very letter of its Latin root, therefore offers a singular resource for resisting discourses that would make mathematically tractable problems the only problems worth addressing in practice.

Jérémy Grosman, PhD candidate in Philosophy,

Centre de Recherche Information, Droit et Société (CRIDS), University of Namur (UNamur)