Ageing · 27 July 2021

What is Artificial Intelligence? Origin of intelligent systems

When we hear about Artificial Intelligence, we tend to automatically think of virtual personal assistants, driverless cars or algorithms that decide what advertising to bombard us with from our electronic devices. However, beyond these headline-grabbing and sometimes controversial examples, do we really know what Artificial Intelligence is?

The Encyclopaedia Britannica tries to synthesise this broad reality in the following definition: "Artificial Intelligence (AI) is the ability of a computer or computer-controlled robot to perform tasks commonly associated with intelligent beings".

From a linguistic point of view, it does the job perfectly. However, it presupposes a clear boundary between an "intelligent being" and one that is not. One way to explore this boundary is to look at how computers traditionally considered "unintelligent" have behaved, from their beginnings until the recent emergence of AI, when the line into the definitively "intelligent" finally seems to be crossed.

Nearly two centuries ago, in 1843, the British mathematician and writer Ada Lovelace first postulated the concept of a "computer program". Since that revealing moment, the underlying logic of computer programming has remained virtually unchanged until almost the present day. According to Ada Lovelace's idea, programming a computer, reduced to its essence, is nothing more than providing it with a set of instructions to be executed in a mathematically precise way.

The human programmer devises a program with a clear objective and writes it in a language that the machine understands, known as a programming language. This program instructs the computer to execute a series of logical actions in a certain order and under certain conditions. This, no more and no less, is an algorithm. Any intelligence that this set of instructions can distil originates, unless proven otherwise, in the mind of its creator, usually known as a developer or computer programmer. The computer merely obeys its commands.

An example of an algorithm is the set of instructions used by the computer of a space shuttle to execute, in order, the various steps necessary to climb into orbit above the Earth. But it can also be the set of instructions that the CPU of our dishwasher executes to first wet, then lather, and finally rinse and dry the dishes, along with the time spent on each of these tasks.
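To make this concrete, here is a minimal sketch in Python of what such a dishwasher "recipe" might look like. The step names and durations are invented for illustration; a real appliance controller would, of course, look quite different.

```python
# A toy dishwasher "algorithm": a fixed sequence of steps, each run for
# a predetermined time, with no decisions beyond those the programmer
# wrote down. Step names and durations are purely illustrative.
import time

CYCLE = [
    ("wet", 5),     # (step name, duration in seconds; toy values)
    ("lather", 10),
    ("rinse", 8),
    ("dry", 12),
]

def run_cycle(cycle):
    """Obediently execute each step, in order, for its allotted time."""
    for step, duration in cycle:
        print(f"Starting: {step} ({duration}s)")
        time.sleep(duration)  # stand-in for driving pumps and heaters
        print(f"Finished: {step}")

run_cycle(CYCLE)
```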

The first algorithm executed on an electronic computer ran on the ENIAC, built at the University of Pennsylvania and widely considered the first non-mechanical, or in other words electronic, computer. Construction of the massive ENIAC began in 1943, a century after Ada Lovelace first hinted at the conception of a computer program! And it executed, in effect, a set of logical instructions previously thought up by a human being.

From this 27-tonne behemoth to the first iPhone, unveiled by Steve Jobs in 2007 and weighing just 135 grams, advances in component technology have enabled a dramatic reduction in size, while computing power has increased exponentially and relentlessly year after year, with no sign of slowing down for the time being.

However, the vast majority of devices that have existed in the time span between the introduction of the giant ENIAC and the tiny iPhone have followed the same internal operating philosophy. We could say that these devices play the role of a cook who merely executes recipes at the stove, while the programmers are the chefs who design and refine those recipes over time. The intelligent being, the one who creates, is the human.

That is why these obedient devices were never considered intelligent, even though, thanks to their electronic nature and great computational capacity, they could perform millions of operations in a matter of seconds, far beyond what any human can do. After all, their decision-making was limited to following a protocol, however complex it might be.

Although AI has the aura of a novel and cutting-edge concept, as early as 1950 Alan Turing, considered the father of modern computing, posed the question "Can machines think?". In fact, in order to provide an objective answer, he also designed what is known as the Turing Test, which attempts to assess the ability of a machine to exhibit behaviour indistinguishable from that of a human.

By way of temporal context, the first machine reported to have passed the Turing Test was a conversational assistant that tried to imitate a Ukrainian teenager called Eugene Goostman. This happened in 2014, 64 years after the test was first formulated. Despite this, Eugene's level of intelligence still seemed to creak under the discerning human eye, as can be seen in the interview Time magazine conducted with him in 2014.

There are numerous examples of Artificial Intelligence systems over the last few decades that have made technological headlines, such as the well-known Deep Blue program, developed by the American multinational IBM, which beat the then world chess champion Garry Kasparov in 1997. However, this program, like so many others at the time, still relied on traditional programming: numerous logical rules and conditions, together with the ability to choose moves by traversing complex decision trees, all executed with enormous computational power. This is called "using brute force" in computer jargon: the machine calculates as many possible future moves as its hardware allows, and a decision is made based on the expected outcome of each option.
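To give a flavour of what "brute force" means in practice, here is a minimal, purely illustrative minimax search in Python. Deep Blue's actual engine was vastly more sophisticated and ran on dedicated hardware; the functions legal_moves, apply_move and evaluate below are hypothetical stand-ins for a game's rules and for the scoring heuristics its developers would hand-craft.

```python
# Illustrative brute-force game search (minimax): explore every legal
# move up to a fixed depth and choose the one with the best guaranteed
# outcome. `legal_moves`, `apply_move` and `evaluate` are hypothetical
# stand-ins for the game rules and the developers' scoring heuristics.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None  # score the position; no move to suggest
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

The deeper the search goes, the more positions have to be scored, which is why this approach depends so heavily on raw computing power.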

Although we know that this system uses brute force, the boundary between "intelligent" and "obedient" is not clearly defined in the field of Artificial Intelligence. Is a computer that can beat a human being at a complex game intelligent? Or is it simply a great piece of work by its developers? In other words, was it simply a good execution of a succulent recipe designed by a great chef?

At the moment in which we find ourselves, this vagueness is beginning to fade, and the developments that emerge every day fall with apparent clarity into the territory of the intelligent. Today, dystopian as it may seem, computers have the ability to learn by themselves. They no longer need a recipe calculated down to the last detail. Instead, it is enough for them to see many different recipes until they acquire an abstract understanding of the concept of a recipe, at which point they can create their own. A chess-playing AI does not have an instruction manual like Deep Blue did. It has simply watched thousands or even millions of games until it has figured out the best way to play. It is this that has marked a turning point in the technology and applications of the broad field of Artificial Intelligence.

The main direction in which today's AI research and development is heading is underpinned by the discoveries made by a number of great minds over the last two centuries. 

First and foremost, the expansion of the frontiers of our knowledge of the brain provided by Nobel laureates Santiago Ramón y Cajal and Charles Scott Sherrington. One of the main contributions of these two eminent doctors was the description of the functioning and structure of the nervous system and, more specifically, the identification of the neuron as its structural and functional unit. Their studies showed that the neurons of our nervous system receive a series of chemical and electrical impulses, process them internally, and generate an output impulse that is communicated to the surrounding neurons.

Building on this, in 1958 the American psychologist Frank Rosenblatt theorised about the possibility of defining a mathematical structure that would mimic the behaviour of a human neuron in order to perform complex calculations just as the brain does. This artificial neuron was christened the Perceptron. Rosenblatt also imagined connecting many of these perceptrons together, building a complex perceptron network, or as it is known today, a neural network. After all, our brains are structured as numerous interconnected neural networks built on the structural unit of the neuron. Deep Learning algorithms, so called because of the great depth of these neural networks, are thus inspired by the structures of the human brain, which nature has perfected over millions of years of evolution.
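As a rough illustration of Rosenblatt's idea, here is a minimal perceptron in Python: each input is weighted, the weighted inputs are summed, and the artificial neuron "fires" only if the sum crosses a threshold. The learning rule and the toy task (the logical AND function) are standard textbook choices for illustration, not a reconstruction of Rosenblatt's original system.

```python
# A minimal perceptron: weighted inputs are summed and the "neuron"
# fires (outputs 1) only when the sum exceeds a threshold.

def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0  # the output impulse

def train(samples, epochs=25, lr=0.1):
    """Classic perceptron learning rule on labelled samples."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # nudge the weights in the direction that reduces the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: the logical AND function, which one perceptron can learn.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

A single perceptron can only learn very simple patterns; the power Rosenblatt anticipated comes from wiring many of them together into networks.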

The computational capacity needed to run these algorithms was not available until the second decade of the 21st century, when theory moved into practice at breakneck speed. We will go deeper into how this type of algorithm works in future posts. For now, it is enough to understand that these intelligent systems learn to perform specific tasks on their own. If we provide a neural network with a sufficient amount of labelled data, for example images of fish with their corresponding species, it will be able to learn to distinguish fish species by itself, as the sketch below illustrates. The programmer will not have to explain to the neural network what a fish is, or the differences between species; they will only have to design a neural network powerful enough to do the job on its own. Neural networks have the ability to abstract complex knowledge from large amounts of data. In this way, they can automate tasks and even surpass the accuracy of humans performing them.

One example is the AlphaGo system, developed by the company DeepMind under the umbrella of Google. This AI was able to beat the world champion at Go, considered one of the most complex games in existence. To do so, its intricate web of neural networks needed only to witness thousands of past games before beating the best player alive at the time.
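Here is that sketch: a hedged toy version of the fish workflow in Python with NumPy. Real species classifiers are deep convolutional networks trained on thousands of photographs; below, randomly generated vectors stand in for images and a tiny two-layer network stands in for the model. The point is the shape of the process: labelled data goes in, the weights are adjusted to reduce the error, and at no point does anyone write a rule describing what a fish looks like.

```python
# A toy "learn from labelled data" loop with NumPy. Random vectors
# stand in for fish images; the labels are the (made-up) species.
# A tiny two-layer network learns the mapping by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_species = 200, 16, 3

# Synthetic stand-in data: each species has a different feature mean,
# so there is a real pattern for the network to discover.
labels = rng.integers(0, n_species, n_samples)
X = rng.normal(loc=labels[:, None], scale=0.5, size=(n_samples, n_features))
Y = np.eye(n_species)[labels]                # one-hot targets

W1 = rng.normal(0, 0.1, (n_features, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, n_species)); b2 = np.zeros(n_species)

for step in range(500):
    H = np.maximum(0, X @ W1 + b1)           # hidden layer (ReLU)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)        # softmax probabilities
    dlogits = (P - Y) / n_samples            # cross-entropy gradient
    dW2 = H.T @ dlogits; db2 = dlogits.sum(axis=0)
    dH = (dlogits @ W2.T) * (H > 0)          # backprop through ReLU
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad                  # gradient-descent update

print(f"training accuracy: {(P.argmax(axis=1) == labels).mean():.0%}")
```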

Currently, AI experts distinguish between two types of intelligence: general and specific. All the AIs we have so far are of the specific type. This means that they are focused on learning to fulfil a goal autonomously and precisely, as well as or better than humans. The key characteristic of this goal is that it is concrete and specific. Whether it is driving autonomously, detecting atherosclerosis in a medical image or conversing with a human about the train tickets they are struggling to buy, today's Artificial Intelligences based on neural networks are focused on very well-defined tasks and on achieving excellence in them.

On the other hand, humans have a general intelligence. We can perform more than one task, even simultaneously. In a single day we can cook, drive, work, hold conversations and read the newspaper without any of these tasks negatively affecting the performance of the others. At present, an AI as versatile as a human being exists only in the dreams of developers and entrepreneurs around the world. Certain experts, such as Ramón López de Mántaras, director of AI research at the CSIC, believe that "however intelligent future artificial intelligences become, including those of a general nature, they will never be equal to human intelligences".

In short, we currently have systems that are clearly considered intelligent because they are capable of performing specific tasks with great precision and have managed to learn by themselves. They therefore meet the Encyclopaedia Britannica's definition of Artificial Intelligence. They are no longer constrained by an algorithm that lists the steps to be performed like a recipe. We have obtained adaptive systems that, sharing the same structure based on neural networks, can learn to drive as well as to study X-rays with no more information than a large amount of labelled data, although not both at the same time, at least for now. Although these intelligences still have a long way to go to come even close to the capabilities of a human being, they have already proven to be of great help in a multitude of complex tasks, and everything points to ever-closer collaboration between humans and Artificial Intelligence to facilitate, accelerate and ultimately improve countless day-to-day processes.

Although in this entry we have limited the examples to chatty teenagers and systems that learn to play more or less complex board games, these are by no means the full extent of the real-world utility of Artificial Intelligences and neural networks. Now that there is more light on what Artificial Intelligence is and what it is not, we will be able to address in future posts how it is currently being applied in practically every economic sector, the forecasts for its development in the short term and, in short, how it can help to improve everyone's lives.
