The history of AI in 33 breakthroughs: the first “thinking machine”

Many AI stories begin with Homer and his description of how the crippled blacksmith god Hephaestus made himself self-propelled tripods on wheels and golden helpers, "in appearance like living young women," whom the immortal gods had taught to do their work.

I prefer to stay as close as possible to the notion of "artificial intelligence" in the sense of intelligent human beings actually creating, and not just imagining, tools, mechanisms and concepts that assist, automate, or imitate our cognitive processes.

In 1308, the Catalan poet and theologian Ramon Llull completed Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

Llull devised a system of thought that he wanted to pass on to others to aid them in theological debates, among other intellectual pursuits. He wanted to create a universal language using a logical combination of terms. The tool Llull created consisted of seven paper discs, or circles, on which concepts were listed (for example, the attributes of God: goodness, greatness, eternity, power, wisdom, love, virtue, truth and glory); the discs could be rotated to create combinations of concepts and produce answers to theological questions.
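The mechanics of Llull's rotating paper discs can be sketched as elementary combinatorics. The attribute list below comes from the text above; the pairing scheme (aligning two discs to yield every unordered pair of distinct concepts) is an illustrative assumption, not a reconstruction of Llull's actual figures.

```python
from itertools import combinations

# The nine divine attributes Llull listed on his discs.
attributes = ["goodness", "greatness", "eternity", "power",
              "wisdom", "love", "virtue", "truth", "glory"]

# Rotating one disc against another aligns every attribute with every
# other, producing all unordered pairs of distinct concepts.
pairs = list(combinations(attributes, 2))

print(len(pairs))  # 9 choose 2 = 36 combinations
```

With more discs, or ordered combinations, the number of generated concept-pairings grows rapidly, which is precisely what made the device seem to "create new knowledge" mechanically.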

Llull’s system was based on the belief that only a limited number of undeniable truths exist in all fields of knowledge and that by studying all combinations of these elemental truths, humanity could reach the ultimate truth. His art could be used to “banish all erroneous opinions” and achieve “true intellectual certainty free from all doubt”.

Early in 1666, the 19-year-old Gottfried Leibniz wrote De Arte Combinatoria (On the Combinatorial Art), an extended version of his doctoral thesis in philosophy. Influenced by the work of earlier philosophers, including Ramon Llull, Leibniz proposed an alphabet of human thought. All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters, he argued. All truths can be expressed as appropriate combinations of concepts, which in turn can be broken down into simple ideas.

Leibniz wrote, “Thomas Hobbes, everywhere a profound examiner of principles, rightly declared that everything our mind does is a calculation.” He believed that such calculations could resolve differences of opinion: “The only way to rectify our reasonings is to make them as tangible as those of mathematicians, so that we can find our error at a glance, and when there are disputes between people, we can simply say: let’s calculate, without further ado, to see who is right” (The art of discovery, 1685). In addition to settling disputes, combinatorial art could provide the means to compose new ideas and inventions.

In modern times, "thinking machines" has been the common label for new mechanical incarnations of these early cognitive aids. Already in the 1820s, for example, Charles Babbage's Difference Engine, a mechanical calculator, was referred to by his contemporaries as his "thinking machine".

More than a century later, computer pioneer Edmund Berkeley wrote in his 1949 book Giant Brains, or Machines That Think: "These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

And so on, down to today's gullible media, over-promising artificial intelligence researchers, very smart scientists and commentators, and some very wealthy people, all assuming that the human brain is nothing but a "meat machine" (in the words of AI pioneer Marvin Minsky) and that calculations and similar computational operations amount to thought and intelligence.

On the other hand, Leibniz – and Llull before him – were anti-materialists. Leibniz rejected the idea that perception and consciousness can be given mechanical or physical explanations. Perception and consciousness cannot possibly be explained mechanically, he argued, and therefore could not be physical processes.

In Monadology (1714), Leibniz writes: "One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, as into a windmill. Supposing this, one would find, on visiting within it, only parts that push one another, and never anything by which to explain a perception. It is therefore in the simple substance, and not in the composite or in the machine, that we must seek perception."

For Leibniz, no matter how complex the inner workings of a "thinking machine," nothing about them reveals that what is being observed is the inner workings of a conscious being. Two and a half centuries later, the founders of the new discipline of "artificial intelligence," all materialists, assumed that the human brain is a machine and could therefore be reproduced with physical components, with computer hardware and software. They believed they were well on their way to finding the basic calculations, the universal language of "intelligence," that would yield a machine able to think, decide and act like humans, or even better than humans.

This is when being rational was replaced by being digital.

The discipline's founding document, the 1955 proposal for the first AI workshop, was based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". Twenty years later, Herbert Simon and Allen Newell, in their Turing Award lecture, formalized the field's goals and beliefs as the Physical Symbol System Hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."

Soon after, however, AI began a paradigm shift, from symbolism to connectionism: from defining (and programming) all aspects of learning and thinking to statistical inference, looking for connections or correlations and learning from observations or experience.

With the advent of the web and the creation of vast amounts of data in which to find correlations, underpinned by advances in computing power and the invention of sophisticated statistical analysis methods, we have arrived at the triumph of "deep learning" and its contribution to very large improvements in the ability of computers to perform tasks such as image identification, question answering and text analysis.

Recently, new tweaks to deep learning have produced AI programs that can write ("it's like alchemy!" said one of the creators of such a program), engage in conversations ("I felt the ground shift under my feet… more and more, it felt like I was talking to something intelligent," said another AI developer), and create images, even videos, from text input.

In 1726, Jonathan Swift published Gulliver's Travels, in which he describes (perhaps as a parody of Llull's system) a device that randomly generates permutations of sets of words. The professor responsible for this invention "showed me several volumes in large Folio already collected, of broken sentences, which he intended to piece together, and out of those rich materials to give the world a complete body of all the Arts and Sciences."
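Swift's word engine can be sketched as a trivial random generator. The vocabulary and sentence length below are invented for illustration; the point, as in Swift's satire, is that almost every "crank" of the frame yields nonsense, with the occasional broken fragment worth collecting.

```python
import random

# A toy vocabulary standing in for the word-bearing blocks of the
# Lagado engine (the actual word list is, of course, hypothetical).
vocabulary = ["the", "machine", "thinks", "truth", "wisdom",
              "golden", "turns", "of", "knowledge", "speaks"]

def crank(rng, length=5):
    """One turn of the engine: a random sequence of words from the frame."""
    return " ".join(rng.choice(vocabulary) for _ in range(length))

rng = random.Random(42)  # fixed seed so the "engine" is reproducible
for _ in range(3):
    print(crank(rng))
```

Nothing in the device distinguishes a meaningful sentence from gibberish; that judgment is supplied entirely by the professor's students reading the output.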

Brute-force deep learning, in other words, in the 18th century. More than a decade ago, when the new discipline of "data science" emerged, bringing to the fore the sophisticated statistical analysis that underpins deep learning, some observers and participants reminded us that "correlation does not imply causation". A Swift of today would probably add: "Correlation does not imply creativity."
