Chapter 3 — Department of Artificial Resources

Objectives

When we’re managing a team of people, one of our main tasks is to define and communicate clear objectives.

We establish goals and make sure that each team member understands what needs to be achieved.

But how do we do this?

Usually through conversations, meetings, written documents, and other channels.

The key point here is communication.

We need to be able to articulate these objectives in a way that is clear and understandable to everyone.

Now, when it comes to AI Agents, the process of defining objectives is similar, but with some important differences.

With AI Agents, you also need to make sure they “understood” the objective.

But how do you do that?

After all, you can’t simply sit down and have a conversation with an AI Agent like you would with a team member.

Actually, you define the objective of an AI Agent at the moment you choose it for a specific activity.

Depending on the structure you’re using to build that agent, there will be a specific place where you define that objective.

For example, if you’re creating an AI Agent to do research, you need to define that as its objective.

If you want an AI Agent that writes good emails, that’s the objective you should set.

If your AI Agent’s objective is to satisfy customers in customer support, or to do analyses based on knowledge from a specific area of the company, you need to make that clear.
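To make this concrete, here is a minimal sketch of what that “specific place” might look like in code. The AgentConfig class and the objective texts below are hypothetical, invented just for illustration; depending on the framework you use, the objective usually lives in a system prompt or in a dedicated goal field.

```python
from dataclasses import dataclass


@dataclass
class AgentConfig:
    """Hypothetical agent configuration; real frameworks typically hold
    the objective in a system prompt or a 'goal' field."""
    name: str
    objective: str


# The objective is written down explicitly, at the moment the agent
# is created for a specific activity.
research_agent = AgentConfig(
    name="research-assistant",
    objective=(
        "Research a given topic, collect recent and reliable sources, "
        "and summarize the findings in plain language."
    ),
)

support_agent = AgentConfig(
    name="customer-support",
    objective=(
        "Answer customer questions about our product politely and "
        "accurately, escalating to a human whenever you are not sure."
    ),
)
```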

But how do you make sure that the AI Agent really “understood” these objectives?

Here are two tips that work for both humans and AI Agents:

  1. Redundancy and Clarity: When you give an objective to an AI Agent, try to explain that objective in more than one way (as in the sketch after this list).

Be redundant.

This helps eliminate ambiguity and ensures that the objective is clear.

  2. Supervised Test: Watch the agent perform the task for the first time.

This will give you a sense of whether the AI Agent understood the objective correctly, or whether it got confused and is interpreting the objective as something else.
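As a sketch of the first tip, compare a terse objective with a redundant one. The task and wording below are hypothetical; the point is only to show the technique of stating the same objective in more than one way.

```python
# A terse objective leaves a lot of room for interpretation:
vague_objective = "Write good emails."

# The same objective, stated redundantly in more than one way:
clear_objective = (
    "Write follow-up emails to customers after a support ticket is closed. "
    "In other words, once a ticket is marked as resolved, you draft one "
    "short, friendly email to that customer summarizing the resolution. "
    "Put another way: your only job is post-resolution follow-up emails, "
    "one per closed ticket, nothing else."
)
```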

Clearly defined objectives are crucial for both humans and AI Agents.

Without a well-defined objective, both can get lost, confused, or end up working in the wrong direction.

I’ll tell you a story from my college days to illustrate the importance of defining clear objectives, especially when it comes to AIs.

At that time, a team was training a robot with a small artificial intelligence brain, much more primitive than today’s brains, to navigate through a maze.

The objective the team wanted for the robot was to reach the end of the maze.

But because they were dealing with much simpler brains, they couldn’t just write that objective down in Portuguese.

They had to encode the objective in a more direct way, since these less complex AIs didn’t understand natural language the way today’s LLMs do.

Their first attempt was to tell the robot that its objective was to not hit the wall.

They thought that, if it didn’t hit the wall, eventually it would leave the maze.

But what happened?

The robot didn’t move!

It learned that the surest way to never hit the wall was to not move at all.

In the second attempt, they told the robot: “You must keep moving and you cannot hit the wall.”

They thought that, if it had to walk and couldn’t hit, eventually it would leave the maze.

But, again, it went wrong.

The robot began to spin in circles.

That way, it didn’t hit any wall, but it also didn’t leave the maze.

Only when the team specified that the objective was to keep moving, without hitting the walls and without repeating movements, did the robot finally complete the task.
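In reinforcement-learning terms, the team was rewriting the robot’s reward function at each attempt. The sketch below is a loose reconstruction of the three attempts, assuming a simple grid maze; the function names and reward values are invented for illustration.

```python
def reward_attempt_1(hit_wall: bool) -> float:
    # "Do not hit the wall": standing still scores perfectly.
    return -1.0 if hit_wall else 0.0


def reward_attempt_2(hit_wall: bool, moved: bool) -> float:
    # "You must keep moving and cannot hit": spinning in circles
    # scores perfectly, since the robot moves without ever hitting.
    if hit_wall:
        return -1.0
    return 0.1 if moved else -0.5


def reward_attempt_3(hit_wall: bool, moved: bool, new_cell: bool) -> float:
    # "Keep moving, don't hit, don't repeat movements": only visiting
    # cells the robot hasn't seen before is rewarded, which finally
    # pushes it toward the exit.
    if hit_wall:
        return -1.0
    if not moved:
        return -0.5
    return 0.5 if new_cell else -0.1
```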

Today, with an LLM brain, it would probably understand better.

But the lesson remains: the more explained and clear the objective, the better.

It is our responsibility, as managers of AIs, to ensure that we are conveying these objectives in the clearest and most unambiguous way possible.

If we’re not specific and explicit enough, they may interpret our objectives in unexpected and even counterintuitive ways.

This problem of communicating objectives clearly is illustrated, in exaggerated form, by jokes about genies in lamps.

When the protagonist asks for “a million” and receives “big corn” (a pun that works in Portuguese, where “milhão” can be read as “milho” plus an augmentative), the failure in communication is obvious.

The mix-up could have been avoided if the protagonist had said, “Genie, I want a million dollars, because I need that money to spend and to buy things.”

By giving more details and explaining the purpose behind the request, he would have been more specific and redundant, reducing the room for misinterpretation.

The same principle applies when we’re defining objectives for AI Agents.

The more details, context, and clarity about purpose we provide, the less chance there is for misinterpretations.

This might mean not just describing the final result we want, but also specifying why we want it and what we don’t want.
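As a sketch, an objective written with those three parts (the result, the purpose behind it, and what to avoid) might look like the string below; the scenario and wording are hypothetical.

```python
objective = (
    # The final result we want:
    "Produce a one-page weekly summary of open support tickets. "
    # Why we want it (the purpose behind the request):
    "The summary is read by managers who need to spot recurring problems, "
    "so group the tickets by theme instead of listing them one by one. "
    # What we do NOT want:
    "Do not include customer names or any personal data, and do not "
    "speculate about causes that are not mentioned in the tickets."
)
```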

And this exercise of communicating objectives better applies not just to AIs, but also to human teams.

How many times have we seen work teams spend weeks or months on a project, only to discover at the end that the result wasn’t quite what the boss or client wanted?

Often, this happens because the objectives were not communicated with clarity, specificity, and purpose from the start.

Therefore, whether defining objectives for AIs or humans, we should always ask ourselves:

  • Am I being clear and specific enough?
  • Is there any ambiguity in the way I’m defining the objective?
  • Am I providing enough context, details, and clarity about purpose to avoid misinterpretations?
  • Is there any undesired behavior that I should explicitly rule out?

By answering these questions, we can ensure that we’re defining objectives in the most effective way possible, whether for our AIs or our human teams.

But remember, clarity in objectives is only part of the equation.

To really motivate our AIs (and our teams) to achieve these objectives, we also need to understand what drives them.

And that’s what I’m going to talk about in the next topic.


→ Next: 3.1.2 Motivation

↑ Contents