The Story Behind the All-pervasive AI

The invention of the digital computer has been one of the defining moments of the modern era. It all started with a machine that could follow commands to the letter, and we have come a long way since then. This article looks at a few other technological developments that can be termed defining moments in human history – artificial intelligence, machine learning and deep learning.

In 1950, Alan Turing posed the question, ‘Can machines think?’ in his paper ‘Computing Machinery and Intelligence’, and the world has never been the same since. The general consensus is that this was the first step into the world of artificial intelligence (AI). It was in this paper that Turing proposed his now famous Turing Test, also known as the Imitation Game (there is now a popular movie by that title). The term ‘artificial intelligence’, however, was yet to be coined and widely used.

Time rolled by, and the C language was developed at Bell Labs between 1969 and 1973. This led to a new kind of revolution. We could now give machines a step-by-step list of instructions, which they would faithfully carry out. This was also the period during which the Internet was born and nurtured. These events led the programming profession to evolve into what it is today.

The task of a programmer is to understand a real-world situation, define the inputs to a program, and then write that program out in some programming language. As long as you can write down a list of instructions in the sequence in which the tasks need to be performed, a computer can follow those instructions. John McCarthy, who had coined the term ‘artificial intelligence’ in the mid-1950s, gave us Lisp, a different kind of programming language altogether. Readers who have the time should read up more about this language.
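To make this concrete before moving on, here is a tiny, made-up sketch (in Python, purely for illustration; the languages named above are C and Lisp) of this conventional style of programming, in which the programmer writes out every step explicitly:

# A toy illustration of conventional programming: the programmer spells out
# every step, and the machine follows them exactly.

def total_bill(prices, tax_rate):
    """Add up item prices, then apply tax -- each step written out explicitly."""
    subtotal = 0.0
    for price in prices:          # step 1: accumulate the item prices
        subtotal += price
    tax = subtotal * tax_rate     # step 2: compute the tax
    return subtotal + tax         # step 3: return the grand total

print(total_bill([120.0, 80.0, 45.5], 0.18))  # prints 289.69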

Soon, people began investigating the kinds of problems that programming could solve. Problems that needed intelligent decisions to arrive at a solution came to be grouped under AI. The field grew, incorporating tasks like search, planning, pattern recognition, classification, causal inference and so on. AI thus came to be a field of study in its own right, whose implementation on digital computers would accomplish tasks considered ‘intelligent’. The evolution of this branch of technology was remarkable; it was programmability and reliability that helped put humans into space (alongside other technological developments that played major roles).

The problem arose when the task to be accomplished by a machine was something that humans did, but which did not seem to follow a step-by-step procedure. Take sight, for example; a normal human being is able to detect objects, identify them and locate them, all without ever being ‘taught’ how to ‘see’. The challenge of getting machines to see has evolved into the field of computer vision. Tasks for which the steps to completion were not obvious were difficult for machines to perform. This was because of the nature of programming: programmers needed to break a task down into a series of sequential steps before they could write instructions, in a language like C, for the computer to follow. Machine learning (ML) was a big break from conventional programming. You no longer needed to know the steps to solve a problem. All you needed were examples of the task being done, and the system would develop its own steps. This was amazing! As long as you could find the right inputs to feed to the system, it would discover a way to produce the target outputs.
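As a small illustration of that workflow, the sketch below uses scikit-learn (an assumed choice of library, not one named above) and a made-up toy dataset; the point is simply that we hand over labelled examples rather than explicit steps:

# A minimal sketch of the machine learning workflow described above.
# Instead of coding a rule such as "long and heavy means class 1", we only
# supply labelled examples and let the model work out its own decision rule.
from sklearn.linear_model import LogisticRegression

# Toy, made-up examples: [length_cm, weight_g] -> label (0 or 1)
X = [[2.0, 10.0], [2.5, 12.0], [9.0, 80.0], [10.0, 95.0]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)                       # learn a rule from the examples
print(model.predict([[8.5, 70.0]]))   # likely [1] -- the learned rule generalises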

This model was applied in many places over time. Rule-based systems became popular in medicine. Bayes’ theorem dominated the Internet sales business. Support vector machines (SVMs) were beautiful constructs that worked like magic. Hidden Markov models were very useful for stock markets and other time-series-like problems. All in all, the world benefited from these developments.
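For readers who want a feel for the Bayesian idea mentioned above, here is a toy calculation of Bayes’ theorem with made-up numbers, of the kind a naive Bayes spam filter performs for every word it sees:

# A toy, made-up illustration of Bayes' theorem:
# P(spam | word) = P(word | spam) * P(spam) / P(word)

p_spam = 0.2               # prior: 20% of mail is spam (assumed figure)
p_word_given_spam = 0.6    # "offer" appears in 60% of spam (assumed)
p_word_given_ham = 0.05    # "offer" appears in 5% of legitimate mail (assumed)

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75 -- seeing the word raises our suspicion from 0.2 to 0.75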

There was still a problem, though: the advances were few and far between. Take, for example, the task of recognising objects in images. Before ML, all the effort went into developing a better sequence of steps for recognising objects in images. When AlexNet burst onto the scene, the best performing algorithms had error rates of approximately 26 per cent. AlexNet, however, had an error rate of roughly 16 per cent, which was a major leap forward. Object recognition from images has since reached superhuman performance levels. What changed was that instead of asking a computer to follow a given list of steps (a program), the computer was asked to find its own steps (a neural network, an SVM or a random forest is, in a sense, a program) after being given a lot of examples of what it was supposed to do. The remaining problem was finding the right set of inputs.

Continuing with the image recognition task, people were feeding hand-crafted features to classifiers like SVMs and logistic regression models. But these features, designed by humans, were not good enough for the task at hand. Feature extractors like SIFT, HOG and Canny edge detection were developed to work around this, but even these fell short. What AlexNet introduced was the ability to learn the right representations for a given task from the most basic input available, namely, the pixels. This is deep learning – the ability to build representations and use them to build further representations. Deep learning is not limited to neural networks, as many people believe; deep SVMs (the arc-cosine kernel) have been developed, along with deep random forests (gcForest). If you need to employ deep learning for a task, first ask yourself whether there is a low-level input you can provide. For language-based tasks, it is words; for images, it is pixels; for audio, it is the raw signal; and so on.
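As an illustrative sketch of learning representations from raw pixels, the snippet below builds a small convolutional network in PyTorch (both the framework and the architecture are assumptions made for illustration, not something prescribed above); each layer builds a representation on top of the one before it:

# A minimal sketch of a network that starts from raw pixels and learns its own
# representations layer by layer, instead of relying on hand-crafted features
# such as SIFT or HOG.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features from raw pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features built on the first
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # map the learned representation to 10 classes
)

x = torch.randn(1, 3, 32, 32)   # a dummy 32x32 RGB image stands in for a real one
print(model(x).shape)           # torch.Size([1, 10])

The point is not this particular architecture but the input: the network is handed raw pixels and left to discover its own features, which is exactly what the hand-crafted descriptors could not do well enough.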

There are still many misconceptions about these fields in the public mind, largely because of misinterpretations by the popular media. One of the main reasons is that reporters typically either do not bother to read the technical papers before reporting on them, or fail to understand them fully. This leads to unfortunate rumours like Facebook’s AI scare (over the ‘Deal or No Deal’ negotiation paper). Moreover, we only hear of the major breakthroughs from news channels, and not of the slow build-up towards them. Sometimes, the people brought in to discuss these issues as ‘experts’ have not kept up with developments in the field, and cover for their outdated knowledge with needless romanticisation of whatever is being discussed. This further hampers the public’s ability to grasp the real developments in AI.

Most professionals in the AI field have started out by working with the various tools even before they have had a chance to understand the algorithms behind AI or ML. This has led to misleading ideas taking root and, in many cases, outright wrong practices. The Indian industry suffers from the classic case of resume-building in this field: people who have used the tools once or twice claim proficiency, whereas it takes much more than that to master even the basics. There is no doubt about the advantages AI, ML and DL bring to the table. What is in doubt is our ability to train people who can use these technologies well.

As of this writing, the best place to start is Coursera’s AI/ML courses; most, if not all, of the content there is world class. If that does not slake your thirst, MIT OpenCourseWare on YouTube is also a wonderful place to learn. Then there is our very own NPTEL, available on YouTube, which offers courses on AI. All things considered, if you get the opportunity to learn about AI from the people who invented it, grab it with both hands.
