You have probably heard about Artificial Intelligence (AI). It seems to be everywhere nowadays. Most technology companies use the term frequently in their marketing, giving it a sheen of science fiction, as if it were completely different from everything that came before. So what do these companies really mean when they say they use AI? Should we really be concerned that the Matrix or Skynet is just around the corner?
In science, AI is a well-defined field of computer science that has been around for a very long time. The name is, perhaps, unfortunate, as it implies the creation of intelligent machines. Although that might still come about someday, what most companies mean when they talk about using AI is more prosaic. They typically refer to machine learning techniques. At its most basic, machine learning classifies items based on data the computer has seen before. Imagine we show different pictures to a computer program. When a picture shows a cat, we tell the program that the image contains a cat. If it shows a dog, or a car, we tell it that there is no cat. We do this for many images. This is our “training set”: a collection of known examples that we use to “teach” the program to differentiate and recognize the image of a cat. Then, we show the computer images it has not seen before, and it should be able to say whether there is a cat or not. In this manner, we have trained an AI to classify pictures of cats.
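The train-then-predict pattern described above can be sketched in a few lines of Python. This is only an illustration: the numeric “features” below are invented stand-ins for measurements a real system might extract from an image, and the classifier is the simplest possible one (label a new example with the label of its most similar known example), not the method any particular company uses.

```python
import math

# Hypothetical training set: each example is (features, label).
# The feature numbers are invented for illustration; a real system
# would derive thousands of features from each image's pixels.
training_set = [
    ((0.9, 0.8), "cat"),      # e.g. pointy ears, long whiskers
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "not cat"),  # e.g. a car
    ((0.3, 0.2), "not cat"),  # e.g. a dog
]

def classify(features):
    """Label an unseen example with the label of its nearest known example."""
    nearest = min(training_set,
                  key=lambda example: math.dist(features, example[0]))
    return nearest[1]

# Unseen examples: the program answers based on what it was "taught".
print(classify((0.85, 0.75)))  # near the cat examples -> "cat"
print(classify((0.25, 0.15)))  # near the others -> "not cat"
```

However simple, this captures the essential shape of the idea: a set of labelled examples goes in, and the program uses them to make a judgement about data it has never seen.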
The cat example is a simple one, but the basics do not change much. To build any AI you need a set of previously classified examples, and its quality will only be as good as the data it is given. Do you remember every time you were annoyed when a webpage asked you to type the numbers shown in a captcha before you could register? In some cases, the image had already been classified and it was a genuine test to check against bots. In many cases, however, the company is using you to classify the image. In similar ways, technology companies have been able to acquire large volumes of photos, audio and video. These are usually uploaded through social media, and therefore come with a large amount of context. They can then be fed to the companies’ programs, making them more powerful and better able to say what an image, audio clip or video is about.
One set of techniques that has been gaining quite a bit of traction in recent years, and is behind the current push for AI, is “deep learning”. There is still some discussion about what counts as deep learning, and many of the techniques it uses are not new. However, everyone agrees it has become feasible thanks to the large amounts of data now available, and it has proven very powerful at recognizing images and audio. For example, deep learning powers the intelligence of Siri, which has become better both at recognizing what you say and at understanding what you want. Google and Facebook also jumped on the deep learning bandwagon early, and have since made their programs available to everyone. Whenever you see a recommendation or advertisement on a webpage, it is likely this technique is working behind the scenes.
If the current push towards AI scares you, don’t worry, you are not alone. Even technology figures like Elon Musk, of Tesla and SpaceX fame, have been public about their fears of AI. However, AI’s power is still limited. It is improving at recognizing images, understanding speech, translating documents and classifying videos. It might soon be good at driving a car. But the era of killer AI robots is far off. Just look at this video from a robotics competition in 2015. The robots are so clumsy, you almost feel bad for them. So, to end this rambling piece, let us go back to the title. Is AI really a revolution, or is it only hype? In my opinion, it is a bit of a revolution with a lot of hype, and the problem lies in the name. Decades of science fiction have given us the impression of AI as an all-powerful computer entity. The current state of affairs is rather more modest. These technologies are real improvements; however, marketing makes them seem more impressive than they really are. The future will probably bring amazing things made possible by these techniques. But that future is still far away.
By Dr. Antonio de la Vega de Leon. Postdoctoral Research Associate at the University of Sheffield. SRUK Yorkshire Constituency.