Biases and prejudices
Humans don’t see the world in a neutral way.
In different cultures, different physical characteristics have been seen as signs of beauty, status, or value.
In the Bodi tribe in Ethiopia, there’s a ritual called Ka’el. In it, men compete to achieve the largest possible body mass.
For months, they follow a specific diet to gain weight. The bigger the belly, the greater the prestige.
There, what many cultures would call excess is a sign of beauty and status.
Among the Kayan Lahwi people of Southeast Asia, there’s another standard.
Women wear metal rings around their necks from a young age. Over time, these rings create the appearance of a longer neck.
This elongation is associated with beauty and cultural identity.
Another historical example comes from China, with the practice of foot binding.
Girls had their feet bound from a young age to prevent natural growth. The goal was to keep the feet small, considered more delicate and attractive.
This caused permanent deformities. But within that context, it was seen as beauty.
The point here is simple.
These standards don’t arise out of nowhere. They are taught, repeated, and reinforced over time.
A child who grows up in these contexts learns to see it as natural.
That’s how biases are formed.
With AI agents, the logic is similar.
AI agents learn from data produced by humans. That is, they learn about the world from content that already carries human interpretations, distortions, and patterns.
That’s why AI agents also carry biases.
These biases don’t come from feelings. AI agents don’t feel, don’t prefer, and don’t choose.
But they reproduce learned patterns.
And these patterns can be biased.
The lesson here is not to try to eliminate bias completely.
That doesn’t happen even with humans.
When you hire an employee, you can’t guarantee they don’t carry any bias. What you do is create a system where those biases can’t manifest.
You define culture. You define rules. You define clear limits on behavior.
For example, you don’t completely control what a person thinks. But you can make it clear that certain behaviors, like racism, are not acceptable within the organization.
That is, you don’t trust the absence of bias in the individual. You trust the system that prevents that bias from having an impact.
With AI agents, the logic is the same.
You don’t need to guarantee that the agent is unbiased. You need to build a system immune to biases.
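One way to picture such a system is a guardrail layer that sits between the agent and the user: the agent's output only reaches the user after passing explicit, auditable rules. The sketch below is a minimal illustration of that idea; the `BLOCKED_PATTERNS` list, the `guard` function, and the example phrases are all hypothetical placeholders, not a real moderation policy or library.

```python
import re

# Explicit, auditable rules: the system, not the model, decides what is allowed.
# These patterns are illustrative placeholders, not a real policy.
BLOCKED_PATTERNS = [
    re.compile(r"\b(only|never) hire (men|women)\b", re.IGNORECASE),
    re.compile(r"\bbased on (race|gender|age)\b", re.IGNORECASE),
]

def guard(agent_output: str) -> tuple[bool, str]:
    """Check an agent's raw output against the policy before releasing it.

    Returns (allowed, text): the original text if it passes every rule,
    or a refusal message if any rule matches.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(agent_output):
            return False, "Blocked: output violates the organization's policy."
    return True, agent_output

# The agent may be biased; the guardrail decides what reaches the user.
ok, text = guard("Rank candidates based on gender.")
print(ok)  # False

ok, text = guard("Rank candidates by years of relevant experience.")
print(ok)  # True
```

The point of the design is the same as with people: you don't inspect the agent's "mind" for bias, you constrain what its behavior is allowed to do. In practice such a layer would combine many checks (rules, classifiers, human review), but the principle is the one above.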