The important evolution of artificial intelligence

Deep-learning networks are showing us not only improved ways of doing things, but entirely different ones, too

By Hans Albrecht, vice-president, portfolio manager and options strategist, Horizons ETFs

When I was a youngster, I loved the movie WarGames, an early 1980s cold-war thriller starring Matthew Broderick. At the time, the premise that a bright but unmotivated teenager could hack into the school database to change his grades was thrilling to me. The story turns much more serious when the protagonist believes he’s playing a computer game called Global Thermonuclear War, not realizing he has actually found a back door into a NORAD supercomputer that is running these war simulations.

Military personnel at NORAD misconstrue the simulations as real attacks, and chaos ensues. I won’t ruin the rest, but I will say the movie was well ahead of its time in depicting many of the processes and the overall philosophy behind artificial intelligence ("A.I.") today. The computer in the movie learns and draws conclusions from self-play, the kind of bottom-up, deep-learning neural-net process that the best A.I. agents now employ.

Combining A.I. with game-playing – computer-based or otherwise – has become a go-to method for what is called reinforcement learning. Video games in particular offer countless complex scenarios that serve as a great training ground for A.I. So adept are today’s A.I. agents that machines are quite literally learning by playing simulation games on their own. The rate at which A.I. systems are learning to play well, even in games like poker (which involve hidden information and behavioural subtleties such as bluffing), is startling onlookers. Google DeepMind’s Deep Q-Network (DQN) mastered 49 old Atari video games without being given any instructions on how to play them – it learned from the raw screen pixels and the score alone. But let’s dial back the clock before continuing.
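To make the reinforcement-learning idea concrete, here is a minimal sketch of tabular Q-learning on a made-up “corridor” game. Everything in it – the game, the reward, the hyper-parameters – is an illustrative assumption of mine; a system like DQN replaces the value table with a deep neural network, but the learn-by-playing loop is the same in spirit.

```python
# A minimal, illustrative reinforcement-learning sketch: tabular
# Q-learning on a toy "corridor" game. The game, reward and
# hyper-parameters are invented for illustration only.
import random

N_STATES = 6          # positions 0..5; reaching position 5 wins
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action]: the agent's learned estimate of future reward.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def play_episode():
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The core update: nudge Q toward reward + discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

for _ in range(500):   # the agent "plays the game on its own"
    play_episode()

print([round(max(q), 2) for q in Q])  # learned values rise toward the goal
```

The agent is told only which moves are legal; the winning behaviour emerges entirely from the reward signal, which is the essence of the bottom-up approach.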

In 1997, IBM’s Deep Blue supercomputer beat the world’s greatest chess player, Garry Kasparov, and Garry was none too happy about it. It was hyped as a man-versus-machine encounter that seemed to put the weight of humanity itself on Kasparov’s shoulders. It was, in fact, a rematch of a battle a year earlier in which Kasparov was the victor. The world then waited to see if Deep Blue’s developers could improve the system’s A.I. sufficiently in a year’s time. Indeed, the computer “took home the hardware”. It was impressive, to say the least, but it represented an earlier form of A.I. – one that I refer to as a top-down approach. The system was hand-coded by strong chess players and developers with the rules of the game and expert evaluation heuristics. There was no learning involved – just a ruthless and emotionless chess-playing engine that could evaluate some 200 million positions per second and explore up to 40 moves ahead.
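The top-down philosophy can be caricatured in a few lines of code: enumerate the legal moves, score the outcomes, pick the best – and learn nothing. The toy game below (players alternately take one to three stones; whoever takes the last stone wins) is a stand-in of my own choosing, and Deep Blue’s real search and hand-tuned evaluation were incomparably more sophisticated, but the principle is the same.

```python
# A toy "top-down" engine: no learning, just rules plus brute-force
# search. The game (take 1-3 stones; taking the last stone wins) is a
# deliberately simple stand-in for chess.
def minimax(stones, is_my_turn):
    """Return the best achievable score: +1 if I win, -1 if I lose."""
    if stones == 0:
        # Whoever took the last stone won, so the player now to move lost.
        return -1 if is_my_turn else +1
    scores = [minimax(stones - take, not is_my_turn)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if is_my_turn else min(scores)

def best_move(stones):
    # Enumerate every legal move and keep the one with the best outcome.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, is_my_turn=False))

print(best_move(10))  # prints 2: leaving 8 stones is a loss for the opponent
```

All of the “knowledge” here was typed in by a human; the program will play no better tomorrow than it does today.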

Fast-forward to 2016, when Google DeepMind’s AlphaGo beat Lee Sedol, the world’s second-ranked professional player of Go – an ancient Chinese strategy board game. The fundamental difference in AlphaGo is that it uses a bottom-up approach. It was fed a great deal of data in the form of roughly a hundred thousand human Go games, but the crucial difference that set it apart from earlier A.I. was its incredible ability to learn: AlphaGo honed its skills by playing thousands of further games against itself. Before 2016, no one had thought it remotely possible for a computer to beat a top player at a game as complex as Go, but it happened.

The amazing thing about this approach is that the A.I. can learn to adapt to unexpected situations and can therefore potentially learn beyond what a human can accomplish. Deep-learning networks are showing us not only improved ways of doing things, but entirely different ones, too. Go players have carefully studied the strategies AlphaGo employed in the tournament, and they are marvelling at moves they had never even considered. Some of those players have vastly improved their Go rankings as a result. A.I. is now teaching us to perform better. It’s hard to believe there could be moves that a Kasparov or a Sedol hasn’t contemplated, but I believe we only think this because we’re human and inherently skewed by our own humanness.

We are emotional creatures of habit, seeking the familiar even at a subconscious level. As much as we resist, we do tend to repeat behaviour, good or bad, effective or not. But AlphaGo doesn’t forget what it has learned. It doesn’t conform to prevailing popular methods. It doesn’t fear hurting its reputation should it misstep. It only cares about improving and winning. A generalist approach to A.I. is what will ultimately lead to improvements on a level that Deep Blue’s framework could never have achieved. In fact, the newest iteration of AlphaGo, known as AlphaGo Zero, uses no previous dataset at all – it starts from random play and improves over time purely through self-play. Learning from experience is a game-changer of the utmost importance to A.I. It will forever change the way we work and live our lives.
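As a rough illustration of learning with no previous dataset, here is a toy self-play learner for tic-tac-toe. It is a sketch under my own assumptions – a simple lookup table and update rule standing in, very loosely, for the deep networks and tree search of a system like AlphaGo Zero – but it is given no example games and no strategy: it starts from random play and improves only by playing against itself.

```python
# A toy self-play learner for tic-tac-toe: no example games, no
# built-in strategy. V[board] estimates the outcome for the player
# who just moved; the table is filled in purely from self-play.
import random

V = {}
ALPHA, EPSILON = 0.2, 0.1
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != '.' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == '.']

def choose(b, mark):
    # Epsilon-greedy: usually pick the move whose resulting board we value most.
    if random.random() < EPSILON:
        return random.choice(moves(b))
    def my_value(m):
        nb = b[:m] + mark + b[m+1:]
        return 1.0 if winner(nb) else V.get(nb, 0.0)
    return max(moves(b), key=my_value)

def self_play():
    b, mark, history = '.' * 9, 'X', []
    while moves(b) and not winner(b):
        m = choose(b, mark)
        b = b[:m] + mark + b[m+1:]
        history.append(b)
        mark = 'O' if mark == 'X' else 'X'
    result = 1.0 if winner(b) else 0.0   # the last mover won, or it was a draw
    for pos in reversed(history):        # credit every position the game visited
        V[pos] = V.get(pos, 0.0) + ALPHA * (result - V.get(pos, 0.0))
        result = -result                 # flip perspective: movers alternate

for _ in range(20000):
    self_play()

print(len(V), "positions valued, learned purely from self-play")
```

Nothing about good play is programmed in; whatever skill the agent ends up with is discovered entirely through its own games.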

In 2017, Garry Kasparov, in a 180-degree change of heart, wrote that we should now embrace the A.I. revolution and the progress that it promises. I say we should invest in it too. FOUR and RBOT invest in the leading companies in the space, as well as in related beneficiaries of the theme – big data, smart robotics, autonomous vehicles, cybersecurity, healthcare and e-commerce applications. The next A.I. revolution has begun, and investors should consider being a part of it.

This is a special promotional feature produced in partnership with Horizons ETFs.

The views/opinions expressed herein may not necessarily be the views of Horizons ETFs Management (Canada) Inc. All comments, opinions and views expressed are of a general nature and should not be considered as advice to purchase or to sell mentioned securities. Before making any investment decision, please consult your investment advisor or advisors.
