Artificial intelligence (AI) and machine learning are currently the subject of much discussion in the media, and there is a lot of talk about deep learning as well. But what do these terms actually mean, and how do they differ? Where is AI already being applied, and what developments are on the horizon? Will robots take over our jobs?
Last Thursday (May 11, 2017), the Rise of AI conference took place at the German Museum of Technology in Berlin. In a place that preserves so much history of technical development, this day was dedicated to the future.
Experts from a range of fields were invited to provide insights into the complex topic of "artificial intelligence" and to stimulate discussion among the numerous visitors from a wide variety of industries.
Artificial intelligence does not yet exist
The presentations covered different aspects of AI as it exists today and how it might develop in the future, but they all agreed on one thing: artificial intelligence in the true sense of the word (artificial general intelligence), i.e. a machine with human-level or superhuman intelligence, does not yet exist. One can only speculate whether and when it will arrive.
Nevertheless, there are machines that have long since surpassed humans at certain tasks (narrow artificial intelligence). This sounds like a contradiction, but an important milestone has not yet been reached: transferring the knowledge gained from solving one problem to a new problem in order to solve that one as well. This is the decisive step which - if achieved - could lead to a superintelligence, the so-called singularity. And that could happen much faster than we think.
Machine beats human
Artificial intelligence that outperforms humans at a particular task has been around for some time. Certain disciplines had long been deemed far too complex for a machine to compete with humans, yet it is precisely in such tasks that artificial intelligence has repeatedly achieved impressive results. IBM's "Deep Blue" won individual games against then world chess champion Garry Kasparov in 1996 and defeated him in a full match in 1997. "Watson" (also from IBM) beat two of the most successful champions of the American quiz show "Jeopardy!" in 2011.
And in 2016, Google's "AlphaGo" defeated one of the world's best players of the board game Go - a feat experts had deemed impossible for at least another ten years. Google solved the task using large neural networks, a branch of machine learning known as "deep learning". The program uses this technology to teach itself how the game works and how to win it. In fact, the task would be impossible to solve with fixed, hand-coded rules, since the number of possible moves in Go is far too large to manage without "intuition".
In an exciting talk, Dr. Damian Borth recounted milestones in the development of deep learning and gave an outlook on where it is heading. He cited the application of deep learning to image recognition as one of the most decisive successes in the field - a breakthrough with which a team of researchers clearly outperformed all other methods. The networks of that time were much smaller; today, companies like Google and Tesla are using prodigious amounts of data to tackle tasks like autonomous driving.
As computing power has increased in recent years, deep neural networks have emerged as a tool for solving a myriad of problems with astounding accuracy. These networks are loosely inspired by the human brain: they are made up of nodes, called neurons, that are connected to each other so that information can flow through the network. Connections that improve the end result are strengthened; connections that worsen it are weakened.
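This strengthen-and-weaken cycle is essentially what training by gradient descent does. As a minimal sketch (not code from any of the talks), here is a tiny two-layer network that learns the XOR function; the layer sizes, learning rate, and number of training steps are illustrative choices, not prescribed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a single layer of connections cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden connections
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # forward pass: information flows through the weighted connections
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: measure how each connection contributed to the error
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # update: connections that worsened the result are weakened, the rest strengthened
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]
```

The same update rule, scaled up to millions of connections and vastly more data, is what drives the image-recognition and Go-playing systems described above.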
The interesting thing is that the system is able to learn and improve by trying different ways of solving the problem and "modifying" itself until it can no longer outperform itself. In the case of AlphaGo, somewhere along this journey of self-improvement the software surpassed the best human players.
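To make this self-play loop concrete, here is a deliberately simplified toy version - a tabular learner for the take-away game Nim, not AlphaGo's actual method, which combines deep neural networks with Monte Carlo tree search. The game, the constants, and the value table are all illustrative assumptions:

```python
import random

random.seed(1)
N, EPS, LR = 7, 0.2, 0.5   # 7 stones; take 1-3 per turn; last stone wins
value = {}  # value[(stones, move)] -> estimated chance this move leads to a win

def choose(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPS:
        return random.choice(moves)  # explore a new line of play
    return max(moves, key=lambda m: value.get((stones, m), 0.5))  # exploit

for episode in range(20000):
    stones, player, history = N, 0, {0: [], 1: []}
    while stones > 0:            # the policy plays against itself
        m = choose(stones)
        history[player].append((stones, m))
        stones -= m
        player = 1 - player
    winner = 1 - player          # whoever took the last stone just moved

    # strengthen the winner's moves, weaken the loser's
    for p in (0, 1):
        reward = 1.0 if p == winner else 0.0
        for s, m in history[p]:
            old = value.get((s, m), 0.5)
            value[(s, m)] = old + LR * (reward - old)

# optimal play is to leave a multiple of 4 stones: from 7, take 3
print(max((1, 2, 3), key=lambda m: value.get((7, m), 0.5)))  # prints 3
```

Both sides of every game are played by the same, gradually improving policy - which is exactly why, at some point, there is no stronger opponent left to learn from.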
But to emphasize it once more: the system becomes good only at the problem it has been trained on. When it encounters a new problem, it has to start from scratch.
Implications for the working world
Trent McConaghy raised an interesting point in his presentation: an artificial intelligence does not have to reach the status of artificial general intelligence or the singularity to have a comparable impact on our lives.
Point one: many of today's jobs do not require superintelligence to be automated. They are already under threat now, with significant economic consequences.
Point two: by coupling AI with a blockchain - a kind of distributed database whose entries cannot be modified retrospectively - it is already possible today to create a system that keeps improving and reconfiguring itself, that cannot be stopped, and that no one owns. If the task of this machine is to make money in a certain way (e.g. create images and then sell them), the system will do so tirelessly and at lightning speed, accumulating the profit. So, provided the market allows it, the first AI millionaires are in sight.
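The "cannot be modified retrospectively" property comes from chaining blocks together by their hashes. Here is a minimal sketch of that core idea - a real blockchain adds consensus, signatures, and networking on top, and the block fields and sample entries below are invented for illustration:

```python
import hashlib, json

def block_hash(block):
    # deterministic hash of a block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# each new block records the hash of its predecessor
chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for data in ["image sold for 5 coins", "image sold for 8 coins"]:
    chain.append({"index": len(chain), "data": data, "prev": block_hash(chain[-1])})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                            # True: the chain is intact
chain[1]["data"] = "image sold for 500 coins"   # tamper with history
print(verify(chain))                            # False: every later link breaks
```

Because each block stores the hash of its predecessor, changing any historical entry invalidates every link after it - which is what makes the ledger effectively append-only.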
However, the media is also creating a lot of hype around the topic of artificial intelligence, and the rising expectations placed on these systems will not necessarily be fulfilled. In his presentation, Dr. Danko Nikolic warned that this can lead to a new "AI winter", i.e. the abandonment or weakening of research efforts in this direction. It would not be the first time something like this has happened: the technologies underlying deep learning have existed since the nineties, but back then they did not live up to expectations, and it took a long time for them to come back into mainstream focus.
Ethical considerations
The question of whether artificial intelligence will influence our future negatively or positively is rather philosophical, but important. In today's world, which is driven by information and in which we are flooded with data, AI represents a tool without which we would hardly be able to cope with this "flood". The automation of our jobs is also nothing new, but the continuation of a process that has been going on for a long time. If we look at modern agriculture or production lines in the automotive sector, it quickly becomes apparent that machines have long since taken the heavy and "boring" work off our hands. We are now reaching a stage in our history where even analytical and creative work can be done by machines.
Some see this as a danger; others see it as a new opportunity and a future with new kinds of jobs that celebrate our "humanity". One thing is certain: this is no longer science fiction. We all need to join the discussion to steer the inevitable evolution of artificial intelligence in a positive direction.