Introduction to AI

This chapter introduces the concept of Artificial Intelligence, its historical context, and its current state.

What is Artificial Intelligence?

Even though Artificial Intelligence is a field with a relatively long history, it still lacks an exact definition. The main difficulty in defining Artificial Intelligence is that the term implies the existence of a Natural Intelligence. When talking about humans, we can claim to have this characteristic because we can manipulate our reality; but animals can do that too, in a different and more passive way. This is a question we must keep wondering about, because there is no exact answer.

One person who kept it simple was Alan Turing, who proposed the Turing Test as a way to decide whether a computer is intelligent. The test consists of a human conversing with an entity in another room; the entity can be either another human or a machine. If the person conducting the test cannot tell, while talking with the machine, whether the conversation is with a human, then we can conclude that the machine is intelligent.

To be intelligent, a machine must have the ability to recognize the way a human talks, to learn over time, and to represent abstract concepts; most importantly, it must be able to adapt itself to its environment. How can a computer learn? It must be able to store information, retrieve it, give meaning to it, label it, and understand what it refers to.
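The store-retrieve-label cycle described above can be illustrated with a toy sketch. The class and data below are invented purely for the example; this is not a real learning algorithm, just the simplest possible picture of a system that stores information under labels and retrieves it later.

```python
# Toy illustration of the store/retrieve/label idea: a "memory" that
# stores items under labels and retrieves them by label. All names and
# data here are invented for the example.

class SimpleMemory:
    def __init__(self):
        self.facts = {}  # maps a label to the list of items stored under it

    def store(self, label, item):
        """Store an item under a label (give the information a meaning)."""
        self.facts.setdefault(label, []).append(item)

    def retrieve(self, label):
        """Retrieve everything the memory associates with a label."""
        return self.facts.get(label, [])

memory = SimpleMemory()
memory.store("animal", "dog")
memory.store("animal", "cat")
memory.store("vehicle", "car")

print(memory.retrieve("animal"))   # ['dog', 'cat']
print(memory.retrieve("unknown"))  # []
```

A real system would of course need far richer representations than string labels, but the same cycle of storing, labelling, and retrieving underlies them.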

Other scientists and engineers claim that true artificial intelligence is achieved when a computer is rational. But this suggests a computer that is efficient rather than human-like. We humans are not necessarily rational or logical, and understanding consciousness is still far beyond our knowledge. In any case, this paradigm focuses on creating agents that make decisions based on limited time and knowledge, with a very human characteristic: collaboration. These are called multiagent systems.
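The idea of agents with limited knowledge that collaborate can be sketched minimally as follows. The scenario, agent names, and queries are all invented for the illustration; real multiagent systems involve far richer protocols for negotiation and coordination.

```python
# Minimal sketch of collaborating agents, each with limited local
# knowledge: an agent that cannot answer a query asks its peers.
# All names and facts here are invented for illustration.

class Agent:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge  # the agent's limited local knowledge
        self.peers = []             # other agents it can collaborate with

    def answer(self, query, asked=None):
        """Answer from local knowledge, or collaborate by asking peers."""
        asked = asked or set()
        asked.add(self.name)  # remember who has been asked, to avoid loops
        if query in self.knowledge:
            return self.knowledge[query]
        for peer in self.peers:
            if peer.name not in asked:
                result = peer.answer(query, asked)
                if result is not None:
                    return result
        return None  # no agent in the system knows the answer

alice = Agent("alice", {"capital of France": "Paris"})
bob = Agent("bob", {"capital of Japan": "Tokyo"})
alice.peers = [bob]
bob.peers = [alice]

print(alice.answer("capital of Japan"))  # Tokyo (obtained via bob)
```

No single agent here can answer every query, yet the system as a whole can, which is the essence of the collaborative paradigm described above.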

In the end, whatever algorithm we choose to implement artificial intelligence, we take human behaviour and try to emulate it.

Historical Perspective

Warren McCulloch and Walter Pitts kicked off this young science in 1943 with a paper proposing the first model of an artificial neural network.

AI was not widely recognized until 1950, when Alan Turing published his article Computing Machinery and Intelligence. Years later, Turing would co-author the first program capable of playing chess.

The term Artificial Intelligence was coined by John McCarthy, who also built the programming language LISP, making him one of the pioneers of the modern AI age.

After some ups and downs during which the field fell in and out of favour among computer scientists, in the 1980s it was discovered that AI could be applied to industry, and the field was naturally brought back to life. The lack of interest during the so-called AI winter had many causes; one of them may have been the high expectations placed on the field: people were hoping for thinking machines, not systems that were experts at a single task. Thanks to the efforts of different communities, neural networks were reborn, and after that the field opened up to many more applications and has not stopped growing ever since.

Present and Future

Currently, many universities do research in this field, and more and more companies such as Google, governments, and other institutions invest large amounts of money in it. Part of the reason lies in the internet: its vast amount of information both facilitates access to large quantities of analysable data and, at the same time, demands new techniques to handle information at that scale.