AI: An extraordinary destiny that began in the 13th century

Artificial intelligence here, artificial intelligence there… The term crops up almost everywhere, but do you know where this scientific discipline comes from and how it has evolved into what it is today? A recent conference organized by the French Association for Artificial Intelligence (AFIA) retraced this extraordinary destiny.

Most of us first heard of artificial intelligence in some sci-fi movie, but it is a discipline in its own right, with a history stretching back some sixty years. Indeed, artificial intelligence has been part of contemporary culture at least since Asimov’s Robot stories and HAL’s misdeeds in 2001: A Space Odyssey!

A genuine source of fantasy, AI also inspires fear, to the point that “even Bill Gates or Stephen Hawking have spoken of their fears of seeing an artificial super-intelligence become the source of indisputable decisions”, explains Joël Quinteton, professor emeritus at the University of Montpellier III, in the magazine Sciences et Avenir.

One could argue that the first “dream” of artificial intelligence dates back much further than the last century:

“In the 13th century, already, the philosopher Raymond Lully aspired to build a logical machine capable of answering philosophical problems, which he named the Ars magna. In the 18th century, the book L’Homme Machine by Julien Offray de La Mettrie was published, which would influence an entire era, as evidenced by Vaucanson’s automatons, such as his famous digesting duck.”

This “digesting duck” by Vaucanson faithfully reproduced the appearance of the animal, but inside, a device mechanically simulated organic digestion (see below).

If today’s AI rests on deep learning and neural networks, these techniques did not arrive ready-made. To get there, the field first had to pass through stages such as pattern recognition: handwriting, images, speech. Examples include ELIZA, the “first conversational agent”, designed in 1966, and game-playing programs for chess, Go, and checkers, which long formed part of the basics of research.
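To give a sense of how simple those early conversational agents were, here is a toy sketch in the spirit of ELIZA’s keyword-and-template approach. The patterns and replies below are invented for illustration; they are not Weizenbaum’s original scripts.

```python
import re

# Ordered list of (pattern, reply template) pairs, checked top to bottom.
# "{0}" in a template is filled with the text captured by the pattern.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    """Return the reply of the first matching rule, or a stock fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword matches

print(respond("I am tired of work"))  # → Why do you say you are tired of work?
```

There is no understanding here at all: the program merely reflects fragments of the user’s own words back, which is precisely why ELIZA’s apparent “intelligence” surprised its creator.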

“As early as 1956, two researchers designed a General Problem Solver dedicated to demonstrating mathematical theorems. It is also the ancestor of formal mathematical calculation software! This approach also gave rise to expert systems in the 1970s,” notes Joël Quinteton.

Their originality? Drawing conclusions from a base of facts and rules. The approach survives to this day in business information systems built on what are called “business rules”.
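The “facts and rules” idea behind expert systems can be sketched as a forward-chaining loop: keep firing any rule whose premises are all known facts until nothing new can be derived. The facts and rules below are invented business-rule examples, not drawn from any historical system.

```python
# Starting facts and a small rule base: (set of premises, conclusion).
facts = {"customer_is_loyal", "order_over_100"}
rules = [
    ({"customer_is_loyal"}, "apply_discount"),
    ({"order_over_100"}, "free_shipping"),
    ({"apply_discount", "free_shipping"}, "flag_vip_order"),
]

# Forward chaining: fire rules until a full pass adds no new fact.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Note that the third rule only fires on the second pass, once the first two rules have added their conclusions; that chaining of intermediate conclusions is what distinguishes an inference engine from a flat lookup table.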

The discipline seemed to fade away in the 1990s, even as science-fiction films kept it in the public eye. The reason was above all economic, according to Jean Rohmer, director of research and partnerships at the École Supérieure d’Ingénieurs Léonard de Vinci: “AI research was expensive and brought little improvement to the user experience.”

Since then, researchers have sought to design AIs closer to humans and strongly tied to language. The most important challenge lies in not depending too heavily on the mass statistical processing of machine learning.

“Intelligence is not only problem solving, it is also the ability to handle exceptions well”, explains Jean-Louis Dessalles, professor at Télécom ParisTech, before continuing:

“It is on the basis of exceptions to a rule that humans assess the relevance of a fact, whereas current AI systems mostly tie the notion of relevance to a statistical calculation. The primary difference? The simplicity of the solutions humans deploy… quite a challenge for AIs, which often reach a good result, but by very complex means.”

This observation takes on its full meaning when we consider the arrival of self-driving cars and other applications where a poor assessment of a situation could seriously endanger human lives.

Sources: Sciences et Avenir, Slate
