Does AI make mistakes?
An old saying goes: to err is human.
Well, if erring is human, and we’re building “brains” that imitate the human brain, what could be more natural than those “brains” making mistakes too?
In fact, we’re going to have to update that saying to “to err is human and AI.”
But, speaking seriously, here’s what happens:
An automation isn’t supposed to make mistakes.
When an automation gets something wrong, it has genuinely failed at its purpose: a bug occurred, something went very wrong.
It’s like a calculator getting a math problem wrong.
That simply shouldn’t happen, and if it is happening, there is a serious defect somewhere.
But an AI does make mistakes, and it makes them much more commonly.
In a way, we can even expect it to happen eventually.
Because thinking, as an exercise, involves reasoning about things you don’t know for certain.
It’s an exercise in inexactness.
Remember the chess example I mentioned?
When there are few pieces left on the board, we already have automations that can map every possible move and will never be defeated.
But in the initial stages of the game, there are so many possibilities that we can’t map them and build automations that will never make mistakes.
So we have to use the human brain, or build AIs that think the way humans do: they don’t map every possibility, and they eventually make some kind of mistake, but they play creatively.
And that’s enough to generate a good result.
With tic-tac-toe, the story is different.
Tic-tac-toe is a simpler game.
In it, we can build an automation that can never be beaten, because it maps every possible position.
These are traditional computers, they are exact.
But complex problems, like playing chess or drawing a dog, cannot be solved by this exactness, because the possibilities are huge and impossible to map.
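The tic-tac-toe claim above can be made concrete. The standard technique for “mapping all possibilities” in a small game is an exhaustive game-tree search, often called minimax; the sketch below is illustrative (the function names are mine, not from the book), but it really does explore every reachable position, which is why the resulting player can never be beaten:

```python
# Exhaustive search of the tic-tac-toe game tree (minimax).
# The board is a list of 9 cells: 'X', 'O', or None.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player`:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # opponent's best reply
        board[m] = None
        if -score > best[0]:  # the opponent's loss is our gain
            best = (-score, m)
    return best
```

Run from an empty board, the search proves tic-tac-toe is a draw under perfect play: every line of play has been mapped, so the automation is exact. Chess does not yield to this approach simply because its game tree is astronomically larger.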
But when you set up an artificial intelligence, that is, when you build a machine that replicates human thought, something interesting happens.
It manages to solve problems that traditional machines couldn’t, by thinking the way humans do.
But, just as with the human brain, there are tactics to correct and prevent errors in AI.
For example, when we ask an airplane pilot to run through a checklist on a clipboard before takeoff, we are acknowledging that humans make mistakes.
The clipboard works as a kind of analog automation: it helps the pilot follow every necessary step, reducing the chance of errors caused by forgetfulness or distraction.
So, instead of simply hoping the pilot doesn’t make a mistake, we create a system that works around the limitations of human cognition.
The same should be done with AI.
You can’t expect AI not to make mistakes.
What you have to do is adopt ways to mitigate AI errors and tactics to work around them, already expecting that they will happen.
These tactics include redundancy, peer review, automation mechanisms, training, better-organized systems, environments less prone to errors, and choosing the right AI for the job, among other techniques we will discuss in detail in chapter 3.
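To make one of these tactics tangible, here is a minimal sketch of redundancy combined with a checklist-style sanity check: instead of trusting a single AI answer, we query several times, discard replies that fail a simple validity test, and keep the majority of what remains. The replies below are simulated stand-ins for real AI outputs, and the function names are my own illustration, not the book’s:

```python
from collections import Counter

def majority_answer(replies):
    """Redundancy: keep the answer most of the replies agree on."""
    return Counter(replies).most_common(1)[0][0]

def checked_answer(replies, is_valid):
    """Checklist-style mitigation: discard replies that fail a sanity
    check, then take the majority vote of the valid ones."""
    valid = [r for r in replies if is_valid(r)]
    if not valid:
        # No reply passed the check: escalate instead of guessing.
        raise ValueError("no reply passed the check; escalate to a human")
    return majority_answer(valid)

# Simulated replies from seven redundant queries to an AI that,
# as expected, occasionally gets it wrong:
replies = ["4", "4", "five", "4", "3", "4", "4"]
answer = checked_answer(replies, is_valid=str.isdigit)  # -> "4"
```

The point is the same as the pilot’s clipboard: we don’t hope the AI never errs; we build a system around it that catches the errors we already expect.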