
A BRIEF HISTORY OF AI: WHERE WE COME FROM, WHERE WE ARE, WHERE WE ARE GOING!

I was given the mission of telling the story of Artificial Intelligence (AI), from its origins to current trends. Even after working with AI as a researcher for 25 years and as an entrepreneur for 16, the question of how to frame this narrative always arises. I chose to bring our readers the history of AI through some of the field’s relevant milestones. Of course, this narrative will leave important parts out (the idea is not to write a treatise on AI), but I hope to highlight events and comments that help us understand how the field evolved, why some things happened and where it is heading. So, let’s go!

EPISODE 01 – WHERE WE COME FROM: 1940 to 2000

The brain remains the most fascinating biological structure known. The human brain’s ability to create and transform is superior to that of any other, but we are still unable to explain many of its basic functions, such as emotions and the emergence of ideas. The way the brain works and makes decisions was also the main inspiration for the emergence and development of AI. In the early days of the field, initiatives followed one of two paths. One line of work simply tried to imitate our way of making decisions using condition-action rules, while another sought to build mathematical-computational structures inspired by the brain and other natural systems. We call the first line top-down approaches, focused on programming our decision-making process based on how we believe it works; the second is bottom-up, that is, it aims to build an AI capable of learning, in an emergent way, how to make decisions. A toy contrast between the two is sketched below.
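To illustrate the distinction, here is a minimal Python sketch (the thermostat scenario, the function names and the numbers are all invented for this example): the top-down controller encodes its decision rule by hand, while the bottom-up one derives a decision threshold from labelled examples.

```python
# Top-down: behaviour is hand-coded as a condition-action rule.
def thermostat_rule(temperature_c):
    if temperature_c < 18:
        return "heat on"
    return "heat off"

# Bottom-up: behaviour emerges from data. Here the decision threshold
# is learned as the midpoint between the labelled examples.
def learn_threshold(samples):
    # samples: list of (temperature_c, heating_wanted) pairs
    on = [t for t, wanted in samples if wanted]
    off = [t for t, wanted in samples if not wanted]
    return (max(on) + min(off)) / 2

threshold = learn_threshold([(5, True), (12, True), (20, False), (25, False)])
print(thermostat_rule(15), "| learned threshold:", threshold)
```

In the second case, the behaviour emerges from the data rather than from the programmer’s belief about how the decision should be made.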

In 1943, researchers Warren McCulloch and Walter Pitts proposed the first mathematical model of a biological neuron. Each unit performs only elementary logical operations, but networks of such units are sufficient to build a universal computer, that is, a machine capable of performing any processing that can be expressed as a sequence of steps, an algorithm. This notion of universal computability had been proposed by Alan Turing a few years earlier, in 1936, in his seminal work on Universal Turing Machines. In 1949, Donald Hebb proposed a learning rule applicable to these brain-inspired processing structures, and in 1950 Turing published another relevant work on machine intelligence. In the article entitled Computing Machinery and Intelligence, Turing introduced what is now called the Turing Test, which proposes that for a machine to be considered intelligent, a human must not be able to distinguish the machine from a person when both are inside closed rooms, communicating in natural language with an interrogator outside.
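To make this concrete, here is a minimal Python sketch of such a threshold unit (a common simplification of the 1943 model; the weights and thresholds are illustrative choices, not values from the paper). The unit fires, outputting 1, whenever the weighted sum of its binary inputs reaches a threshold, which is enough to implement elementary logic gates:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Elementary logic gates as threshold units (illustrative parameters):
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
NOT = lambda a: mcculloch_pitts([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Wired together, gates like these can compute any Boolean function, which is the building block behind the universality claim.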

In the late 1940s and early 1950s, Norbert Wiener coined the term cybernetics for the study of control and communication in animals and machines, Claude Shannon analyzed the game of chess as a search problem, and Isaac Asimov published the famous three laws of robotics (1. a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. a robot must obey orders given by humans, except where such orders conflict with the first law; and 3. a robot must protect its own existence, as long as doing so does not conflict with the previous laws).

Of all the events that contributed to the birth of the field, the 1956 AI summer workshop at Dartmouth College was remarkable and is considered by many to be its cornerstone. The workshop brought together several pioneering researchers, such as John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon. In the years that followed, the pioneers fostered an over-optimism, with bold promises of general problem solvers and humanoid robots. However, the lack of abundant computational resources, the absence of data and the lack of effective algorithms for building AI models ended up discrediting the field, in a period that became known as the AI winter. During this period, funding for AI research became scarce and the credibility of the field’s real innovative potential was shaken.

Despite this, important advances occurred, such as Arthur Samuel’s pioneering self-learning checkers program in the early 1960s, the creation of the Lisp programming language by John McCarthy in 1958, and the publication of the book Perceptrons by Marvin Minsky and Seymour Papert in 1969.

The period of AI stagnation lasted until the early 1980s, when new developments marked the field. The two volumes entitled Parallel Distributed Processing, organized by James McClelland and David Rumelhart, brought together many works, mainly involving neural networks, that allowed the field to take a great leap. Among these we can highlight the rediscovery of the algorithm for training multilayer feedforward networks, popularly known as the backpropagation algorithm, and algorithms for training recurrent and unsupervised networks. These volumes also contained the works that paved the way for networks capable of generating distributed representations of words and for recent deep network models.
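As a rough modern illustration of the idea (a minimal NumPy sketch on a toy XOR problem, with arbitrary hyperparameters, not the notation of the PDP volumes), backpropagation applies the chain rule to push the output error backwards through the layers, producing the gradients used to update every weight:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy problem: learn XOR, which a single-layer perceptron cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units (sizes and learning rate are arbitrary choices).
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 1.0

for _ in range(5000):
    # Forward pass through the feedforward network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule propagates the error layer by layer.
    d_out = (out - y) * out * (1 - out)  # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient at the hidden layer

    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically approaches [[0], [1], [1], [0]]
```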

(It always strikes me how some ‘pearls of science’ lie hidden in the literature and are only discovered or developed years later.)

Between the mid and late 1980s, AI went through a new downturn, which became known as the second AI winter, mainly due to the decline in industrial use of the Lisp language and the failure of expert systems (top-down approaches) to live up to their promise.

In the years leading up to the turn of the millennium there was an explosion in AI research, particularly in the techniques that came to be known as computational intelligence, which include artificial neural networks, evolutionary algorithms and fuzzy systems. At the same time, machine learning emerged, characterized by algorithms focused on learning from data. Coincidentally, both areas officially emerged in 1994, with the publication of books and the creation of major technical-scientific events. But the difference between the two nomenclatures is a subject for another post.

Let’s close this first episode by listing a few more significant advances in AI during the period: virtual reality, behavior-based robotics, AI applied to games, Deep Blue’s victory over Garry Kasparov in 1997, the creation of RoboCup (the robot soccer Olympics) and, of course, the emergence of web crawlers as tools for information retrieval on the web.

See you in the next episode. See you soon!
