Essay by Ramon Hurtado
“[A.I.] could be terrible and it could be great – it is unclear. But one thing is for sure: we will not control it.”
– Elon Musk
Artificial intelligence (A.I.) has been at the forefront of futurists’ discussions for many decades. The term was originally coined in 1956 by the American scientist John McCarthy, and has since been defined as: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Science fiction has depicted a full range of potentially positive and negative outcomes of machines with A.I. The positive examples resemble the friendly and assistive droids R2-D2 and C-3PO (Star Wars); the negative examples resemble the killer T-800 cyborg (The Terminator). Both examples of A.I. require a high level of sophistication known as artificial general intelligence, or A.G.I., which can be defined as: the intelligence of a machine that could successfully perform any intellectual task that a human being can. A successful machine would possess the ability to examine a dataset, carry out an accurate analysis of the data, then integrate and apply that information to its assigned tasks. Essentially, the machine would need the ability to learn. This is where much of the current work in A.I. is being focused: the sub-field known as machine learning.
Google’s DeepMind project is one of the more prominent A.I. systems known today. Using its AlphaGo program, the A.I. system famously competed in a game of Go, an ancient Chinese board game with 10^170 possible moves, against one of the world’s best players and came out victorious. AlphaGo accomplished this by using machine learning to simulate possible moves and apply them to situations throughout the game. More recently, one of DeepMind’s A.I. programs taught itself how to walk in a virtual simulation. Multiple avatars were created, including two-legged and four-legged models. The program was given an incentive to go from point A to point B, and left to figure the rest out on its own. Eventually, the program learned to move and found its way to the finish line. The simulations also included obstacles like hurdles and gaps, which led to the A.I. learning to jump and step up onto objects. While the movements were not always graceful, they were effective in accomplishing the task – a small but important feat on the pathway to A.G.I.
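The incentive-driven setup described above can be illustrated with a toy reward-learning loop. This is a minimal sketch, not DeepMind’s actual method: every detail here (the five-cell world, the reward value, the learning constants) is invented for illustration. An agent starts at "point A" (cell 0) and receives a reward only upon reaching "point B" (cell 4); through trial and error it learns which action to take in each cell.

```python
import random

random.seed(0)

# Toy reward-driven learning (tabular Q-learning), loosely analogous to
# giving an agent an incentive to travel from point A to point B.
# All parameters below are invented for this sketch.
N, GOAL = 5, 4
ACTIONS = (-1, +1)            # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):          # training episodes
    s = 0                     # start at point A
    while s != GOAL:
        # Mostly act greedily, but sometimes explore a random action.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)       # move, staying in bounds
        r = 1.0 if s2 == GOAL else 0.0       # reward only at point B
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

The agent is never told *how* to reach the goal; the reward signal alone shapes its behavior, which is the essence of the incentive scheme the essay describes.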
At the consumer level, simplified forms of machine learning are present in our everyday lives. Online retailers and music streaming services are two of the most commonly seen examples. Online retailers collect data on your purchasing habits in order to recommend products you may be interested in buying. Likewise, music streaming services use algorithms based on your music preferences to recommend new artists and curate personalized playlists. These applications of machine learning all amount to pattern recognition, lacking a true understanding of the data they are analyzing. Although these algorithms are relatively simple on their own, in combination they are able to take on more complicated tasks.
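The pattern recognition behind such recommendations can be sketched very simply. The example below is an assumption about the general approach, not any particular service’s algorithm: each listener is represented as a vector of listening counts per genre, and similarity between listeners is measured by the cosine of the angle between their vectors; recommendations then flow from the most similar listener. All names and numbers are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two preference vectors (1.0 = identical taste)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical listening counts per genre: [rock, jazz, classical]
alice = [10, 2, 0]
bob   = [8, 3, 1]
carol = [0, 1, 12]

# Alice's taste matches Bob's far more closely than Carol's, so a
# recommender would suggest artists from Bob's library to Alice.
print(cosine_similarity(alice, bob) > cosine_similarity(alice, carol))
```

Nothing here "understands" music; the system only detects that two columns of numbers point in roughly the same direction, which is exactly the pattern-recognition point made above.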
The human brain is a complex system of neural networks capable of multi-tasking and rewiring itself as it learns new information. Most of today’s A.I. programs, by contrast, are task-specific, focused applications. Therefore, while AlphaGo may be able to best human players in a game of Go, it would not have the same success in another game it was not initially programmed to learn. In the future, an A.I. system that has achieved artificial general intelligence will theoretically be able to learn the rules and master the strategies of any game without specifically being programmed to do so. Current efforts to mimic the human brain involve creating artificial neural networks, which layer multiple algorithms capable of handling more complex tasks. One of DeepMind’s recent accomplishments involved its AlphaStar program, an A.I. system programmed to play the popular computer game StarCraft, which contains 10^1685 possible moves. AlphaStar studied a large dataset of moves made by players in previous games, then simulated 200 years of StarCraft matches against itself. While it was still a focused, task-specific application, it demonstrated that it was capable of taking on a more complex game and mastering it.
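The idea of layering simple units to handle a harder task can be shown in miniature. The sketch below is only an illustration of the layering principle, not DeepMind’s architecture: real networks learn their weights from data, whereas here the weights are set by hand. A single threshold unit cannot compute XOR, but two layers of the same simple units can.

```python
def step(x):
    """Threshold activation: fire (1) if the input is positive, else 0."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One simple unit: a weighted sum of inputs followed by a threshold."""
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor(a, b):
    # Hidden layer: two simple units computing OR and AND of the inputs
    # (weights and biases chosen by hand for this illustration).
    h1 = neuron([a, b], [1, 1], -0.5)   # a OR b
    h2 = neuron([a, b], [1, 1], -1.5)   # a AND b
    # Output layer combines the hidden units: (a OR b) AND NOT (a AND b) = XOR.
    return neuron([h1, h2], [1, -1], -0.5)

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Each unit on its own is trivial; stacking them in layers is what produces behavior no single unit could achieve, which is the intuition behind the deeper networks used by systems like AlphaStar.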
We are in a crucial phase on the pathway to artificial general intelligence. Without sophisticated machine learning, the self-aware A.I. of science fiction lore will not be possible. Though machine learning is still in a state of relative infancy, ambitious projects like Google’s DeepMind are showing tremendous promise as they create artificial neural networks capable of completing increasingly difficult tasks. The results of current A.I. experiments demonstrate highly effective problem-solving capabilities. It is difficult to estimate when we will reach A.G.I., but progress is being made at a consistent rate, and moving forward, A.I. will play an increasingly important role as it becomes ever more integrated into our everyday lives.
Full article published in PLASMA magazine 5