The Mathematics Behind an Intelligent Robot
We call our species Homo sapiens (Latin for "wise human") because our minds are the most powerful form of intelligence known so far. Surprisingly, even with all that power bestowed on us by nature, we still wonder how a tissue composed of billions of simple cells, each interacting according to basic physical and chemical rules, can think, behave rationally, and develop consciousness.
The Greek philosopher Aristotle was among the first to attempt to codify our process of reasoning. He developed a system of syllogisms that allowed conclusions to be generated mechanically, which suggested that the workings of the mind are governed by laws and, therefore, that it might be possible to build mechanical artifacts capable of thinking and acting on their own. The field of Artificial Intelligence seeks to build such artifacts; its ideas and viewpoints draw on several disciplines: the existence of a physical mind and the logical connection between goals and actions are matters of Philosophy; explanations of why humans and animals behave the way they do come from Psychology; and the conclusion that a collection of simple cells can give rise to thought, action, and consciousness rests on major advances in Neuroscience. To formalize these ideas, Artificial Intelligence builds its algorithms on four fundamental areas of mathematics: Logic, Computation, Probability, and Economics.
The logic developed by George Boole in 1847 underpins today's intelligent systems; this area of knowledge provides the reasoning mechanisms that, in theory, can solve any problem described in formal terms. However, there is a big difference between solving a problem in theory and solving it in practice, so it is important to understand how far logic can take us in real life.
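To make "mechanically generating conclusions" concrete, here is a minimal Python sketch of forward chaining over propositional if-then rules. The robot-flavored facts and rules are hypothetical, invented purely for illustration; the point is only that new conclusions follow from old ones without any human judgement.

```python
# Forward chaining: repeatedly apply rules of the form (premises -> conclusion)
# until no new fact can be derived. Facts and rules below are made up.

def forward_chain(facts: set[str], rules: list[tuple[set[str], str]]) -> set[str]:
    """Return every fact that follows from the initial facts and the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"obstacle_ahead", "battery_low"}
rules = [
    ({"obstacle_ahead"}, "must_turn"),
    ({"battery_low"}, "must_recharge"),
    ({"must_turn", "must_recharge"}, "return_to_base"),
]

print(forward_chain(facts, rules))
# {'obstacle_ahead', 'battery_low', 'must_turn', 'must_recharge', 'return_to_base'}
```

Each pass applies every rule whose premises are already known, so the conclusions are produced mechanically, exactly in the spirit of Aristotle's syllogisms.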
To find the limits of logic, another area of knowledge comes into play: the theory of computation. Figures like Kurt Gödel and Alan Turing helped us identify which problems can be solved by a machine at all, and estimate how many computational resources solving a given problem requires.
Despite all these advances, building intelligent machines remained a significant challenge, because we live in a world that changes constantly and unexpectedly. For this reason, Artificial Intelligence borrows many tools from probability theory. Bayes' theorem, for example (yes, the one you learned in high school and thought was only good for calculating how likely you are to draw an ace of hearts when playing poker with a full deck), is the mechanism robots use today to integrate evidence and build "models" of the world around them!
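As a deliberately simplified illustration, the Python sketch below applies Bayes' theorem repeatedly to fuse noisy sensor readings into a belief about the world. The two states ("door open" vs. "door closed") and the 80% sensor accuracy are hypothetical values chosen only for the example.

```python
# prior[state] = current belief; likelihood[state] = P(this sensor reading | state)
def bayes_update(prior, likelihood):
    """Posterior is proportional to likelihood * prior, renormalized to sum to 1."""
    unnormalized = {state: likelihood[state] * prior[state] for state in prior}
    total = sum(unnormalized.values())
    return {state: p / total for state, p in unnormalized.items()}

belief = {"door_open": 0.5, "door_closed": 0.5}      # start completely undecided

# Probability that the sensor reports "open" in each true state (right ~80% of the time)
sensor_says_open = {"door_open": 0.8, "door_closed": 0.2}

for _ in range(3):                                   # three consistent noisy readings
    belief = bayes_update(belief, sensor_says_open)
    print(belief)
# The belief in "door_open" climbs toward 1 as the evidence accumulates.
```

This repeated prior-to-posterior update is the same pattern, scaled up, that lets a robot turn a stream of imperfect measurements into an increasingly confident model of its surroundings.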
But wait, there's more! For a robot to act on its own, we must study how to make decisions that lead to acceptable outcomes. For this reason, Artificial Intelligence also draws on economics (how to maximize reward functions), game theory (how to act when other parties can alter the course of events), and operations research (how to make decisions when the expected benefits are not immediate).
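To give a flavor of how "benefits that are not immediate" can be handled, the Python sketch below runs value iteration on a tiny, invented Markov decision process for a hypothetical robot that must trade off recharging against working. All states, actions, rewards, and transition probabilities are made up for illustration; the technique (value iteration) is standard, but this is only a sketch.

```python
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "charging": {
        "stay":     [(1.0, "charging", 0.0)],
        "explore":  [(0.9, "working", 1.0), (0.1, "charging", 0.0)],
    },
    "working": {
        "work":     [(0.8, "working", 2.0), (0.2, "charging", 0.0)],
        "recharge": [(1.0, "charging", 0.5)],
    },
}
gamma = 0.9   # discount factor: future rewards matter, but less than immediate ones

values = {state: 0.0 for state in transitions}
for _ in range(100):                      # enough sweeps to converge on this tiny MDP
    new_values = {}
    for state, acts in transitions.items():
        new_values[state] = max(
            sum(p * (r + gamma * values[nxt]) for p, nxt, r in outcomes)
            for outcomes in acts.values()
        )
    values = new_values

# The best action in each state is the one with the highest expected long-run value.
policy = {
    state: max(acts, key=lambda a: sum(p * (r + gamma * values[nxt])
                                       for p, nxt, r in acts[a]))
    for state, acts in transitions.items()
}
print(values)
print(policy)
```

The discount factor gamma is what encodes "benefits that are not immediate": the robot weighs rewards it will only collect several steps in the future against what it can get right now.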
There is still a long way to go before computers can mimic every skill of the human brain. However, significant strides have been made in the last decade: advances in Deep Reinforcement Learning (algorithms that use massive amounts of data and let robots discover, through sheer experimentation, how to perform tasks that no programmer could teach them explicitly) are achieving feats once thought impossible, such as defeating a world champion at Go.
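The trial-and-error idea at the heart of reinforcement learning can be shown in a few lines without any deep networks. The Python sketch below uses the classic tabular Q-learning update on an invented toy environment where moving right eventually pays off; everything about the environment is hypothetical, and the deep variants mentioned above replace the table with a neural network and vastly more data.

```python
import random

n_states = 5                        # states 0..4; reaching the far end pays off
actions = ["left", "right"]
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Toy dynamics: 'right' moves forward, 'left' moves back; the far end pays 1."""
    nxt = state + 1 if action == "right" else max(state - 1, 0)
    if nxt >= n_states - 1:
        return 0, 1.0               # goal reached: reward 1, restart at state 0
    return nxt, 0.0

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in actions)
    return random.choice([a for a in actions if q[(state, a)] == best])

state = 0
for _ in range(5000):
    action = random.choice(actions) if random.random() < epsilon else greedy(state)
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, a)] for a in actions)
    # Q-learning update: nudge the estimate toward reward + discounted future value
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = nxt

print({s: greedy(s) for s in range(n_states - 1)})
# After training, "right" should be preferred in every visited state.
```

No one tells the agent that "right" is the correct answer; it discovers this purely by experimenting and updating its estimates, which is the same principle, on a vastly larger scale, behind systems that learned to beat a Go world champion.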