Chapter 3 — Department of Artificial Resources

Attention, Context, and Depth of Thinking

In the movie The Social Network, there is a scene in which the character Mark Zuckerberg is questioned by a lawyer about whether he is paying attention to the conversation.

He responds something like: “Yes, I was paying attention, but not 100%. Just enough for this conversation.”

This scene illustrates something we all experience in our daily lives: not every task requires the maximum of our mental capacity.

There are simple tasks that we can solve with just a part of our attention, and complex tasks that require total focus.

Here a first difference appears: AI always pays attention to what is in front of it; what varies is a different kind of parameter.

In the case of humans, there is an additional factor: we get distracted.

We switch tasks, we interrupt what we are doing, and this context switching has a cost.

AI does not get distracted, nor does it switch between tasks on its own.

And task switching for AI does not carry the same attention-switching cost that it does for humans.

It is estimated that a human being can take several minutes to regain full focus after an interruption.

It is not just a matter of attention, it is a matter of continuity of thought.

When you give a task to the AI, it is, at that moment, using all available capacity.

But this does not mean that every “thought” of the AI has the same depth.

We can think of two types of AI thinking:

  1. Shallow thoughts: quick responses, little elaboration, superficial analysis.
  2. Deep thoughts: detailed responses, well-structured, with greater development.

What determines this depth is not only the intelligence of the AI model, but also two factors: how much information the AI can consider and how much space it has to develop reasoning.

This is where two important concepts come in: context and tokens.

AI works with a “context window”, which is everything that fits in the AI’s “head” at that moment: the question you asked, the instructions you gave, the conversation history, the documents you provided.

All of this takes up space, and that space is limited.

This limit is measured in tokens, which are the pieces of text that AI uses to process information.

That is, tokens represent the available space, while context is how that space is used.
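To make the idea of tokens as "available space" concrete, here is a minimal sketch. It uses a common back-of-the-envelope heuristic (roughly four characters per token for English text); the real count depends on each model's tokenizer, and the window size of 8,000 tokens is an illustrative assumption, not a fixed property of any particular AI.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # The exact figure depends on the model's tokenizer.
    return max(1, round(len(text) / 4))

prompt = "Classify this customer feedback as positive or negative."
history = "Customer: The delivery was late and the box was damaged."

used = estimate_tokens(prompt) + estimate_tokens(history)
window = 8_000  # hypothetical context window, in tokens
remaining = window - used  # space left for the AI to "think" and answer

print(f"Estimated tokens used: {used}; remaining in the window: {remaining}")
```

The point of the sketch is the bookkeeping, not the precision: everything you put into the conversation consumes part of the same fixed budget that the answer itself will need.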

Now, the most important point for you as a manager: when you use AI, you are not just choosing intelligence.

You are also, in practice, deciding how much space that AI will have to “think” and how much it will develop the answer.

It is as if you were buying “thoughts”.

You can do many simple tasks, with quick answers, or invest in deeper, more elaborate answers.

Let us think of two examples:

  1. A simple task: classify whether a customer feedback is positive or negative.

This is a low-complexity task.

In this case, you do not need an extremely intelligent model, nor much context, nor long answers.

  2. A complex task: analyze a legal, financial, or operational strategy.

Here you need more context, more elaboration, more space for reasoning.

The depth needs to match the complexity of the problem.
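The matching of depth to complexity can be sketched as a simple routing rule. Everything here is a hypothetical illustration: the model names, the token limits, and the idea of flagging a task as "needs reasoning" are assumptions for the example, not a real API.

```python
def route_task(description: str, needs_reasoning: bool) -> dict:
    """Pick a hypothetical model and response budget for a task.

    Low-complexity tasks go to a small, cheap model with a short answer;
    high-complexity tasks get an advanced model and room to elaborate.
    """
    if needs_reasoning:
        return {"model": "advanced-model", "max_output_tokens": 4_000}
    return {"model": "small-model", "max_output_tokens": 100}

# A classification task needs almost no depth:
print(route_task("classify feedback as positive/negative", needs_reasoning=False))

# A strategy analysis needs space to develop reasoning:
print(route_task("analyze our legal and financial strategy", needs_reasoning=True))
```

In practice the routing decision can itself be automated, but the principle is the one stated above: the budget for thinking should scale with the complexity of the problem.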

However, even for complex tasks, the best solution is not always to use a single very intelligent agent with a lot of attention.

In many cases, it may be more efficient to divide the task among several agents, assembling multidisciplinary AI teams, as we will see later in this book.

It is the old idea that two heads (or several) are better than one.

Here comes an important strategic point.

Just as you do not hire only doctors for all functions of your company, you also do not always need to use the most expensive intelligence for all tasks.

Many times, you can use simpler intelligences, with less context, for specific tasks, and reserve more advanced intelligences, with more context and more elaboration, for critical tasks.

This is a game of optimization.

You are balancing cost, quality, and speed, and this varies greatly depending on the process.

In recurring tasks, you can test, measure, and optimize.

You can discover which level of intelligence to use, how much context to provide, how much response development is necessary, and adjust this over time.

There is no single rule.

It is practice, testing, and adjusting.

Another important point: even the most expensive AIs, in many situations, are still extremely cheap when compared to human costs.
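A back-of-the-envelope calculation makes this comparison concrete. The prices below are illustrative assumptions (an advanced model billed at $10 per million tokens, a human analyst at $50 per hour), not quotes for any real service; the point is the order of magnitude, not the exact numbers.

```python
# Hypothetical prices, for illustration only:
PRICE_PER_MILLION_TOKENS = 10.00  # advanced-model rate, USD
HUMAN_HOURLY_RATE = 50.00         # analyst rate, USD

report_tokens = 20_000  # a long, deeply elaborated answer

ai_cost = report_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
human_cost = 2 * HUMAN_HOURLY_RATE  # two hours of human work on the same report

print(f"AI cost: ${ai_cost:.2f} vs. human cost: ${human_cost:.2f}")
```

Even under generous assumptions for the human side, the gap is typically two or three orders of magnitude, which is why "expensive" AI is often still the cheap option for tasks it can actually do well.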

But there are also tasks where AI does not yet replace humans.

For this reason, more important than trying to find a fixed rule is to understand the mental model.

You are working with intelligence, context, and depth of thought, and your role as a manager is to orchestrate these elements in the most efficient way possible.

Think of this as managing water flow.

You have different qualities of water (intelligence levels) and different quantities of water (tokens, space for thinking).

Your mission is to direct the right water, in the right quantity, for each task.

Sometimes, you will need a large volume of medium-quality water (many tokens of a medium AI).

Other times, you will need little water, but of high purity (few tokens of an advanced AI).

And in some cases, you will need a lot of high-quality water (many tokens of an advanced AI).

The secret lies in understanding the needs of each task and allocating resources strategically.

