THE NEW INDUSTRIAL WORLD FORUM 2017
“Artificial Stupidity”
19-20 December 2017, Grande salle du Centre Pompidou
Within the framework of the European project NextLeap, the ANR project Epistémè and Plaine Commune’s Chaire de Recherche Contributive
This tenth edition of the New Industrial World Forum is part of a broader reflection on a new articulation between, on the one hand, data processing within the data economy (reticular artificial intelligence, deep learning and machine learning in general, intensive computing) and, on the other hand, the interpretation of such data and processes, both in the current scientific context and in the exercise of citizenship and, more generally, of responsibility. It sets out to analyse the impact of scientific instruments on the constitution of academic knowledge at a time when technologies stemming from mathematics, as applied to computer science and networks, tend to establish themselves in the scientific world on the basis of efficiency criteria prescribed by the markets.
The result is an extreme and highly paradoxical threat to the possibility of practising, developing and cultivating scientific knowledge, if it is true that such knowledge must not submit to the processes of proletarianization arising from the “black boxes” that instruments and apparatuses are now becoming, for scientists as much as for ordinary people.
This year’s edition will analyse the problems raised by the use of digital scientific instruments insofar as their functioning, and the categorization processes they involve, are becoming inaccessible, opaque and impossible to formalize theoretically. Against Ian Hacking’s claim that “a knowledge of the microscope” is “useless”, and against Chris Anderson, who in 2008 announced “the end of theory” in the age of big data, the aim is to analyse, question and criticize the phenomenon of black boxes in instruments and apparatuses in general, and in scientific instruments in particular; to evaluate their epistemological cost, as well as the benefits that could be expected from replacing this state of fact, which is incompatible with the state of law without which science is impossible; and to prescribe, as far as possible in the scientific fields concerned, instrumental models and practices that would allow this replacement to take place.
These inquiries will be conducted with reference to Gaston Bachelard’s analyses of phenomenotechnics and Gilbert Simondon’s mechanology, as well as through the questions and concepts of Edmund Husserl, Alfred North Whitehead, Karl Popper and Jack Goody, among other references. They have a generic value with respect to the questions raised by the expansion of reticular artificial intelligence into every dimension of human activity, which is why they will be conducted with a view to a more general reflection on what is at stake in so-called artificial intelligence.
In the more specific case of the social sciences, the question bears directly on the practices of daily life: the implementation of intensive computing through the data economy and platform capitalism generalizes these issues, while highlighting the performative processes that stem from the speed of information processing, as algorithms outpace all deliberative processes, both individual and collective. These developments, which still lack theorization, have penetrated the markets through “linguistic capitalism” (as described by Frédéric Kaplan) and the logistics of online sales, and have now reached so-called “medicine 3.0”, also known as infomedicine, as well as urban management and, of course, robotised design and production.
The main question raised by all these layers of problems, however heterogeneous they may seem, is that of the function of calculation. What benefits can be expected from it? Under what conditions can it serve deliberation (scientific, civic or both) and social inventiveness? What overdeterminations arise from data formats and architectures?
Provisional program
Tuesday 19 December 2017
10h-13h
Artificial Intelligence, Artificial Stupidity and the Function of Calculation
– Bernard Stiegler, philosophy (IRI)
– David Bates, history of science (Berkeley)
– Giuseppe Longo, mathematics and biology (ENS)
– Yuk Hui, computer science and philosophy (Leuphana University)
14h30-18h
Data Architectures and the Production of Knowledge
– André Spicer and Mats Alvesson
– Bruno Bachimont (UTC)
– Benjamin Bratton (UC San Diego)
– Christian Fauré (Octo Technology)
– Aurélien Galateau (Besançon University)
Wednesday 20 December 2017
10h-13h
Opacity of Scientific Instruments and Its Epistemological Consequences
– Vincent Bontems, epistemology (CEA)
– Cédric Mattews, biology (CNRS)
– Peter Lemmens, philosophy (ISIS)
– Laurence Devillers, robotics (LIMSI/CNRS)
– Maël Montevil, biology (ENS)
– Laurent Alexandre
14h30-18h
Data Processing and Civic Contribution
– Paul-Emile Geoffroy, philosophy and social sciences (IRI)
– Jean-Pierre Girard, archeology (MOM)
– Thibaut d’Orso (Spideo)
– Johan Mathé (Bay Labs, via Skype)
– Warren Sack, artist, software studies (UC Santa Cruz, via Skype)