In 2018, at SXSW, Elon Musk called artificial intelligence more dangerous than nuclear weapons and bemoaned the lack of governmental oversight. In response, Mark Zuckerberg said Musk's doomsday scenarios for AI were irresponsible, as they could act as an obstacle to life-saving developments in tech. On one point, though, the two agree: artificial intelligence is the future, and the future is coming fast.
So what are we talking about when we talk about artificial intelligence? Is it the AI that can beat a human at chess, Go, or Jeopardy? Is it the AI that aids the early diagnosis of cancer and powers self-driving cars? Or is it like HAL from 2001: A Space Odyssey, quietly calculating the cruel means to its own end?
It’s all of that and a little more.
This is the frontier of both tech and philosophy, and it comes with many definitions. On a base level, artificial intelligence means a computer system whose functions mimic or surpass human capabilities. But even basic terms get slippery when applied to something as complex as artificial intelligence. Broken down by functionality, AI generally falls into four categories: reactive, limited memory, theory of mind, and self-aware.
The oldest and simplest form of artificial intelligence is the reactive system. While these systems emulate the human mind in responding to certain stimuli, they have a singular purpose, no memory, and no understanding of the world outside their function.
Deep Blue, the AI program that beat Grandmaster Garry Kasparov at chess in 1997, is an example of a reactive system. It understands the pieces on the board. It predicts possible next moves. And, based on a finite set of rules, it chooses the move it calculates to be strongest. But it's not learning from the past. It's only seeing what's in front of it, and reacting.
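The core idea behind a reactive system like Deep Blue can be sketched in miniature. This is not Deep Blue's actual algorithm or code, just an illustrative minimax player for tic-tac-toe: it evaluates the board in front of it with fixed rules, picks the strongest move, and stores nothing between turns.

```python
# A toy reactive system: minimax for tic-tac-toe.
# Like Deep Blue (at vastly smaller scale), it applies fixed rules
# to the current position only -- no memory carries over between moves.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == player:
        return 1, None
    if w is not None:
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    other = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        opp_score, _ = minimax(board, other)  # opponent replies optimally
        board[m] = " "
        if -opp_score > best_score:
            best_score, best_move = -opp_score, m
    return best_score, best_move

# X holds squares 0 and 1; square 2 completes the top row.
board = list("XX O O   ")
score, move = minimax(board, "X")  # finds the winning move at index 2
```

Each call starts from scratch: give the system the same board twice and it repeats the same computation, which is precisely what makes it reactive.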
Reactive systems have no concept of the wider world, and some AI researchers, like Rodney Brooks, believe that this is the only type of AI that should be pursued, due to people’s inability to accurately represent the world in simulations. Reactive systems may not deliver the science fiction future that we both desire and dread, but they will be reliable and trustworthy applications that reduce the burden on humanity (when they aren’t schooling us at our own favorite games).
Take a reactive system and teach it how to learn from the past. Now you’ve got a limited memory system—a category that encompasses practically all of today’s working AI tools. By using deep learning, AI developers are able to train a limited memory system with large volumes of data that the system can then store for reference and use when problem-solving.
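The limited-memory idea, stripped to its bare essentials, looks something like the following. This is a deliberately tiny, hypothetical sketch, not a real deep learning system: a one-variable linear model is fit to past observations by gradient descent, and the learned parameters become the stored "memory" the system consults when making new predictions.

```python
# Toy "limited memory" sketch: learn from past observations, store what
# was learned as parameters, and reuse those parameters on new inputs.
# The data is made up for illustration; it follows y = 2x + 1.

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0        # the model's stored knowledge, initially blank
lr = 0.05              # learning rate

for _ in range(2000):                # training: absorb the past examples
    for x, y in data:
        err = (w * x + b) - y        # prediction error on this example
        w -= lr * err * x            # gradient step for the weight
        b -= lr * err                # gradient step for the bias

# Inference: no more learning, just reuse of the stored parameters.
prediction = w * 4.0 + b             # should land near 9.0
```

Real systems swap the single weight for millions of parameters and the four data points for enormous datasets, but the shape of the process, train on stored experience, then apply it, is the same.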
Self-driving vehicles are one iteration of a limited memory AI system. They utilize both observational and preprogrammed knowledge to react quickly in extremely dynamic environments. Before the introduction of limited memory systems, self-driving vehicles could take as long as 100 seconds to make a decision; today’s decisions are nearly instantaneous.
Systems that can draw on vast pools of knowledge are faster and better at calculating complex positions than humans are. This is the type of AI that Mark Zuckerberg is excited about, an AI that can reduce traffic deaths or improve cancer diagnoses.
But limited memory AI is unable to take into account factors that it hasn't been trained on. The overeager use of AI as a sentencing tool in the criminal justice system, for example, has met fierce criticism, as any algorithm for justice is bound to be flawed and thus capable of unfair and potentially irreversible consequences. Meanwhile, in Arkansas, AI-powered navigation systems have been directing truck drivers to routes that are quicker, but also inherently unsafe for vehicles of that size, leading to an increase in accidents.
To advance, AI needs to develop a broader understanding of the world around it.
We’re now dipping into the future, as true theory of mind systems have not been developed yet. Theory of mind systems will have a basic understanding of other entities and the forces that guide their behavior: their motives, their emotions, and their expectations.
Right now, we see theory of mind being mimicked, but not truly realized, in the semi-social interactions of chatbots and voice assistants like Siri. These systems remain a long way from true theory of mind.
Sophia, an AI-powered robot developed by Hanson Robotics, is an approximation of a theory of mind system. She's able to imitate human gestures, facial expressions, and conversational tropes. Running on software that constantly retrains itself based on prior interactions, she has given interviews to 60 Minutes and Forbes.
The creators of Sophia believe this form of AI can have applications in healthcare, customer service, education, and therapy. But while there’s plenty of AI at work in Sophia (emotion recognition, facial tracking, deep neural networks), it’s a long way from true theory of mind AI.
Welcome to the rabbit hole.
Self-aware AI systems are the conceptual end point of all AI development. Where theory of mind systems would be able to comprehend the factors related to human behavior, self-aware systems would recognize those factors in themselves. They’d have their own independent thoughts, desires, and emotions. Consider the difference between wanting an object and knowing that you want an object.
With an understanding of both itself and the outside world, this meta-level cognition would allow self-aware AI systems to interpret and predict the behavior of other people (and other systems). This is the kind of AI that Elon Musk considers more dangerous than nuclear weapons. A self-aware AI system would presumably also have thoughts of self-preservation, which, combined with a god-like computational ability, could mean that humanity would be deemed irrelevant or even a hostile threat in such an entity's metaphorical eyes. At the least, a self-aware system might not be too pleased with the way we've been treating our phones, our cars, and our planet so far.
If self-aware AI systems connected to each other, their combined capability would far surpass anything humanity could muster. Think SkyNet from the Terminator series. Think full-blown Singularity. While plenty of possible outcomes are not nearly as dire as that (some are even utopian), the fact that they remain possible at all is enough to present a credible existential threat to the human race.
There are two antidotes to this fear of super-intelligent self-aware robots taking over the planet. First, some of humanity’s best and brightest minds are working together, through projects like OpenAI, to take such threats into consideration and focus on ways to bring about a safe path to artificial intelligence. Second, and perhaps most reassuringly, self-aware systems are, by most experts’ opinions, still a long way away.