Chapter 3 — Department of Artificial Resources

Failures and errors

There’s a famous saying: to err is human.

Now that we’re beginning to replicate parts of human reasoning in electronic components, maybe it’s time to update that saying.

To err is human. And so it is with AI.

Artificial intelligence makes mistakes. And it will keep making them.

Just as people err, AI agents err. This is not a detail; it’s part of the nature of the system.

The important point here is that the mature manager doesn’t work from the fantasy of finding someone who never makes mistakes.

No experienced manager builds a team expecting absolute perfection from every employee. Management work was never about that.

Management work is about dealing with the imperfect. It’s about extracting results from imperfect systems. It’s about reducing the chance of error, minimizing the impact of error, and creating mechanisms so certain errors don’t pass through.

With people, we do this all the time.

We create processes. We create reviews. We create checklists. We create redundancies. We create dual approvals. We create validations for more critical tasks.

In low-risk activities, we accept a larger margin for error. In high-risk activities, we increase control, supervision, and verification.

That’s how we deal with interns, analysts, coordinators, managers, and directors. Not because they’re incapable, but because all human work carries potential failure.

With AI, the logic is the same.

You don’t need an AI that never makes mistakes. You need to understand how it makes mistakes, where it makes mistakes, how often it makes mistakes, and what the cost of that error is.

From there, you build the system around it.

In some cases, this means letting the AI work alone. In others, it means running an automatic validation after the response. In others, it means having a person review the output. And in still others, it means simply not using AI in that step.

That’s the central point.

The problem is not that the AI makes mistakes. The problem is building a system that doesn’t know how to deal with the AI’s error.

Just as a manager reviews an intern’s work on more delicate tasks, you can also create review layers for agents.

This review can be:

  • An automation
  • Another agent
  • A system rule
  • Or a human

It all depends on the risk involved.
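The routing idea above can be sketched in a few lines of code. This is a minimal illustration, not a prescription: the function name, the risk categories, and the thresholds are all hypothetical, stand-ins for whatever calibration a real team would do against its own risk tolerance.

```python
from enum import Enum

class Review(Enum):
    NONE = "autonomous"        # low risk: the agent acts alone
    AUTOMATED = "automated"    # medium risk: a rule or second agent validates
    HUMAN = "human"            # high risk: a person signs off

def review_layer(error_cost: float, reversible: bool) -> Review:
    """Pick a review layer from the cost of an error and whether it can be undone.

    The thresholds are illustrative only; each team sets its own.
    """
    if error_cost < 10 and reversible:
        return Review.NONE       # cheap and reversible: full autonomy
    if error_cost < 1000:
        return Review.AUTOMATED  # moderate: automatic validation after the response
    return Review.HUMAN          # critical: a person reviews before anything ships

# A cheap, reversible draft can go out unreviewed; a payment instruction cannot.
print(review_layer(1, True))        # → Review.NONE
print(review_layer(50000, False))   # → Review.HUMAN
```

The point of the sketch is that the review mechanism is a property of the task’s risk, not of the agent: the same agent gets different supervision depending on what its error would cost.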

The more critical the error, the less you rely on a single layer. The cheaper and more reversible the error, the more autonomy you can give.

Managing people is managing the imperfect. Managing AI is too.

The logic doesn’t change. What changes is the nature of the imperfection.

In a human being, error can come from distraction, tiredness, ego, hurry, fear, disorganization, or lack of knowledge. In AI, error can come from missing context, a biased pattern, misinterpretation, model limitations, or simply incorrect generation.

But in both cases, the management response is similar.

You don’t combat error with hope. You combat error with process.

