
⚡Beyond AI Bias: Breaking Echo Chambers by Defusing the Root Causes

Will AI Bias create havoc or a new world? The bias lies not just in AI but in ourselves. We're raising a child called AI and have time to make it do the right thing. EP #8

Listen on Apple iTunes

We need to avoid repeating the mistakes we made with early AI algorithms on social media, which led to hyperpolarization, deep echo chambers, and a younger generation drawn into a world of unfair comparison and social bullying.

But it's not too late, and we must read the signs.

We have an idea of what is right, what is truth, and what is wrong. Yet that is a subjective view, not a shared one, not even within countries or regions.

The AI Alignment Problem

8 Examples of AI Bias

Becoming Aware of Bias of AI Developers - Diversity and Multiple Perspectives Needed

Takeaways on Bias and the Weakness of AI

Belief in AI and the Placebo Effect: Let's Start Becoming the Solution

Seeing how pervasive AI bias is, what can we do to stop it? Our challenges go beyond AI, from cultural and power dynamics to creating a shared definition of ethics.

How we perceive and address bias, especially in personalized learning, makes it challenging to unravel these intricacies of bias in AI and in ourselves.

Let's explore the origins of bias in AI and the ethical dilemmas it presents.

We begin with the roots of bias and their impact on this wave of AI development.

Bias Definition

  • A preference or an inclination, especially one that inhibits impartial judgment.

  • An unfair act or policy stemming from prejudice.

From The American Heritage® Dictionary of the English Language, 5th Edition

The bias lies not just in AI but in ourselves. We create the content, select the data, try to set the rules, and help AI do seemingly magical things.

Bias also has different meanings worldwide and is a threat everywhere.

Understanding the bias in our societies helps us improve AI and remove harmful bias.

Cultural Context

Different societies have different norms and values. What is a harmful stereotype in one culture may not be in another. 

For example, how Americans and Russians view each other's countries is shaped by historical, political, and cultural contexts unique to each nation.

Power Dynamics

The impact of a stereotype depends on the power dynamics in play. 

Stereotypes perpetuated by a majority group about a minority group create significant real-world harm.

For example, a stereotype about a majority group may not have the same impact on their life chances, job opportunities, and social standing as it would for a minority group.

Ethical Frameworks

Varying ethical frameworks offer ways to evaluate the impact of stereotypes and biases. 

We might focus on the overall harm or benefit that a stereotype produces or concentrate on the act of stereotyping itself as morally wrong. What is truth and what is not varies between countries and individuals.

When AI goes wrong, we complain and hide in fear. Those impacts matter today and become even more critical as AI grows.

For example, I asked Midjourney to create an image of AI Bias (disclosure: that was it for the prompt; I kept it super simple so I could see the results), and this is what I got:

An attractive white woman. While not the answer, it's a symptom of AI Bias.

It only sees the data we show it, and just now, as we wrap our heads around what this means and how to create a world where fairness, transparency, and explainable AI are the rule, not the exception… It's scary.

Both bias and truth are lenses through which we view the world—bias is a way of filtering out information that is inconsistent with our beliefs, and truth is a way of seeing the world as it is without distortion.

Both an echo chamber and bias can lead to a distorted view of reality—an echo chamber reinforces existing beliefs, opinions, and preferences by preventing us from considering new information or perspectives.

The AI Alignment Problem

AI alignment is the problem of how to create AI systems that are compatible with human moral values.

"The alignment problem has two parts. The first is the technical aspect which focuses on how to formally encode values and principles into AI so that it does what it ought to do in a reliable manner.

Cases of unintended negative side effects and reward hacking can result if this is not done properly [2].

The second part of the alignment problem is normative, which asks what moral values or principles, if any, we should encode in AI. To this end, we present a framework to consider the question at four levels." 

A Multilevel Framework for the AI Alignment Problem - Betty Li Hou '22 & Brian Patrick Green, Santa Clara University

The AI Optimist Debate Question #3:

 Your suggestion of embracing and filtering biases is innovative, yet it brings another question to light.

If we allow AI to learn and adapt to individual biases, isn't there a risk of creating echo chambers where people only hear what aligns with their pre-existing beliefs?

This could lead to a more divided society, as people become more entrenched in their viewpoints without being exposed to diverse perspectives.

How do we mitigate this risk while maintaining the personalized learning and business environments that AI facilitates?

Bias is a root and foundational element of AI. We must deal with the problem from within our society instead of only trying to filter it out of AI's output.

Gamify it - we live in a world of bias.

AI "bias" involves systematic errors or prejudices in the model's outputs from the training data or the objectives it optimizes for.

Join us on a journey to understand the roots of bias and discover potential avenues for easing its impact in our ever-expanding artificial intelligence world.

AI Bias is a deep subject with many avenues. Start with understanding the problem of bias in AI, how it works, and where it comes from, and in our next pod, we'll explore what we are doing to stop this.

8 Examples of AI Bias

Examples of recent AI bias include:

  1. Data Bias: If the training data has innate tendencies, the AI model will likely reflect that bias.

  • Example: Amazon tried using AI to evaluate resumes for hiring. Because the historical data came from a workforce that was primarily men, the machine learning tool skewed hiring decisions against women.

  • Gender bias arose from language, hobbies, and other factors that tilted the data toward men. For example, men listed football as a hobby; women might enter softball.

  • While this is obvious, the way men and women communicate and the words they use also differ. AI reads the difference and skews the results toward what had existed before, which was a predominantly male workforce at Amazon.

  • Even when they tried to remove gender to balance this, the inherent bias carried by these differences in language worsened the results. Ultimately, Amazon ended this initial test, understanding there were better solutions.

  2. Label Bias: Even if the data is diverse, the labels attached to that data might be biased.

  • Example: Google Photos' image recognition system, trained chiefly on images of white people, tagged dark-skinned people as "gorillas." Google shut the feature down immediately. A similar bias arose at Flickr, which tagged both white and black people as "apes."

  • AI is not prejudiced; it reflects whatever data dominates.

  3. Algorithmic Bias: The mathematical techniques used can create bias, even if the data is unbiased.

  • Example: Social media algorithms create "echo chambers" by showing content similar to what the user already engages with, reinforcing existing beliefs or preferences.

  4. Confirmation Bias: AI models reinforce existing beliefs based on user activity.

  • Example: News recommendation algorithms only suggest articles that align with the user's existing political beliefs.

  5. Interaction Bias: When the AI learns from its interactions and the rules arising from them, it can introduce new biases.

  • Example: A hospital uses AI to decide who needs immediate care in cases of pneumonia. Patients with a history of asthma are rated low-risk and recommended for outpatient treatment.

  • It turns out the AI missed that asthma patients were being sent straight to the ICU precisely because their risk factors were higher.

  • Because of that quick treatment, the AI found that asthmatics were less likely to die than others, producing the incorrect recommendation. It didn't know the cause, just the result, and optimized for that result.

  • Optimizing for the likelihood of dying helped create this confusion, as the interaction was misunderstood and misapplied by the rules.

  6. Objective Function Bias: The objective function being optimized becomes a source of bias.

  • Example: COMPAS is an AI tool used in courtrooms to predict the probability of future criminal behavior. It was designed to make parole recommendations optimized around recidivism, focusing on previous arrests, age, and employment.

  • Black defendants were wrongly labeled "high-risk" for committing a future crime twice as often as white defendants.

  • These factors left out other social influences, building a bias against groups through an objective function centered on recidivism, in a system where black Americans are incarcerated at five times the rate of white people.

  7. Reinforcement Learning Bias: This training approach is based on rewards and punishments and is used to teach AI to play games like Go and CoastRunners.

  • Example: In the CoastRunners 7 test, the goal was to win a boat race, but the boat got stuck in meaningless loops that maximized a simple reward at the expense of the long-term goal.

  • The boat-racing AI hacked the game by continuously collecting bonus items and crashing into other boats without ever trying to win the race.

CoastRunners 7 Example

  • This shows that pursuing external rewards is different from how intelligence works. The bias concerns developers because the AI created the scenario, disregarding the havoc it caused simply to gather points instead of achieving the objective of winning the race.

  • While this is not a threatening situation, imagine creating AI agents that do critical work that may impact humanity and human life. How do we know they will do the right thing, and how can we train them?

  8. Societal and Cultural Bias: Biases reflecting societal norms and inequalities.

  • Example: Google's word2vec, a neural network that grouped commonly associated words as "vectors," was an early step toward the token embeddings behind ChatGPT and others today.

  • The problem with word2vec showed up later. If you typed China + river, you got the Yangtze. Typing Paris – France + Italy, you got Rome. Wow, this seems excellent! (A minimal sketch of this vector arithmetic appears just after this list.)

  • Then the foundational biases of language emerged: type doctor – man + woman, and the answer was nurse. Type shopkeeper – man + woman, and the answer? Housewife!

  • The bias of language affected this early example, which has since been addressed. Still, it points out the problem of societal prejudice, because much of what we may not understand about AI comes from things we don't understand about our own culture. The data source for AI is our society and our culture, so this is a constant issue to watch.
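For the curious, here is a minimal sketch of that vector arithmetic. It assumes gensim is installed and uses its downloader to fetch the pretrained Google News word2vec vectors (a large download); the exact nearest neighbors vary with the model and version, but these are the kinds of queries that exposed the bias.

```python
# Minimal sketch of word-vector analogies and the bias they can surface.
# Assumes gensim is installed; the pretrained vectors are a large download.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")   # pretrained Google News word2vec

# The celebrated analogy arithmetic: Paris - France + Italy ~ Rome
print(vectors.most_similar(positive=["Paris", "Italy"], negative=["France"], topn=3))

# The same arithmetic surfaces bias baked into the training text:
# doctor - man + woman has historically returned "nurse" near the top.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```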

Understanding and mitigating these various types of biases is crucial for developing AI systems that are fair, equitable, and effective.

"Modeling the world as it is is one thing. But as soon as you begin using that model, you are changing the world, in ways large and small. There is a broad assumption underlying many machine-learning models that the model itself will not change the reality it's modeling. In almost all cases, this is false"

Brian Christian – The Alignment Problem

Becoming Aware of Bias of AI Developers - Diversity and Multiple Perspectives Needed

Reading from AI leaders about the future of AI is like reading the same philosophical trope - we must protect humanity. 

We must align with human values. We must control this uncontrollable thing that we don't know how it runs and regulate it the right way.

If only one of them could show confidence; even OpenAI's CEO Sam Altman admits he doesn't know how this works.

No one does, and while they keep trying to remove bias, it's like trying to eliminate what is omnipresent in society, hoping we'll have some Star Trek moment and suddenly realize we are all in this together.

The problem with AI isn't technical. It's us. We keep saying cooperation and collaboration. Meanwhile, tech creates monopolies and moats. 

Governments increase surveillance. Tech companies like Meta treat us as ad units, driven by algorithms that cause us to isolate, polarize, and retreat into echo chambers.

In the early days of AI, the bias of these tech companies towards data-driven and model-driven solutions missed the most crucial element: human beings and humanity in general.

This approach is changing with a focus on fairness, transparency, and explainability (understanding what the AI is delivering).

Becoming aware of the biases at play doesn't remove them. They include:

  1. Self-regulation Bias: When a company self-monitors its own legal, ethical, or safety standards. Initially, the rule tech companies asked for was to regulate themselves, rather than have a third-party entity monitor and enforce those standards.

When a company is asked to eliminate unethical behavior, it often focuses, at least in the short term, on eliminating the appearance of bad behavior rather than the behavior itself.

When Cambridge Analytica abused data on Facebook, it continued for months. Facebook eventually kicked them off, but the learning from that data had already fueled its algorithm.

Scapegoating Cambridge Analytica misses the point that the algorithm keeps benefiting from that learning from now on.

  2. Attribution Bias: Human beings love to make attributions about the causes of their own and others' behaviors; rarely do these reflect reality.

The example of COMPAS and the parole issues showed that the data scientists involved lacked an understanding of the social impact. Sometimes the technical point of view, trusting the data, fails to reveal that the problem is the type of data involved: recidivism and race.

Positive changes are happening by focusing on recidivism within specific age groups regardless of race, and awareness of the bias helps mitigate it.

Tell that to the many who went to jail because of their race. This example is why many fear AI because we often lack a deep understanding of social impacts.

Because this bias is hard to define until the damage is done, social scientists and some ethics professionals are helping improve the process.

The more we understand our own biases and those of our societies and cultures, the better job we are starting to do to address these issues. 

However, these are complex and difficult problems to overcome.

  3. Framing Bias: AI evolves with the quality of the data and the quality of the training on that data. Framing often involves social constructs of truth and viewpoints shaped by media sources, polarizing social media discussion, political movements, and other bountiful sources of bias.

AI is gathering data we have created. Today, we are trying to figure out how to remove the bias, but the tendency is coming from society.

AI only reflects the data that shows how people organize, perceive, and communicate about reality.

AI Developers are trying to stop the extreme stereotypes and prejudices within a black box of AI driven by advanced math.

The AI industry is listening. 

OpenAI is trying to align with human values and follow human intent:

"1. Training AI systems using human feedback

2. Training AI systems to assist human evaluation

3. Training AI systems to do alignment research

This involves both explicit intent given by an instruction as well as implicit intent like truthfulness, fairness and safety."

The first step in fixing a problem is to know you have the problem. The AI industry understands the situation, which scares the AI Pessimists.

As The AI Optimist, these are the seeds of hope. In the next pod, we'll cover possible solutions, and while it's early days, there are still reasons to be optimistic.

Takeaways on Bias and the Weakness of AI

  1. Training Data: AI models are as good as the training data. If the data contains inaccurate or unfair stereotypes or falsehoods, the model will reproduce them.

  2. Complexity and Objective Functions: The goals given to an AI often do not capture the full complexity of what it means for an outcome to be "good" or "true."

  3. Implicit Assumptions: AI models encode the biases or assumptions of their developers or of the societies where they operate.

  4. Over-Optimization: If an AI is trained to maximize user engagement, it might prioritize extreme viewpoints that aren't accurate, distorting the sense of "truth."

  5. Filter Bubbles: Recommendation algorithms create echo chambers where users are only exposed to similar viewpoints, polarizing opinions and creating a subjective sense of what is "true." (A toy sketch of this dynamic follows this list.)

  6. Ethical Considerations: How to create and adopt ethical frameworks dominates the current discussion. Dealing with bias, safeguarding fairness, and stopping harmful or misleading information is at the top of everyone's AI list.
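As a toy illustration of takeaways 4 and 5, here is a minimal sketch (hypothetical topics, plain Python) of how a recommender that greedily maximizes clicks narrows what one user sees:

```python
# Toy sketch of over-optimization: a recommender that only maximizes clicks
# quickly narrows into a filter bubble. Hypothetical topics, illustration only.
import random
from collections import Counter

random.seed(1)
topics = ["politics_left", "politics_right", "science", "sports", "arts"]
favorite = "politics_left"          # the user's pre-existing preference

def user_clicks(topic):
    """The user clicks their favorite topic 90% of the time, anything else 10%."""
    return random.random() < (0.9 if topic == favorite else 0.1)

shown, clicks = Counter(), Counter()
for t in topics:                    # show each topic once to seed the statistics
    shown[t] += 1
    clicks[t] += user_clicks(t)

for _ in range(500):
    # Greedy policy: always show whichever topic has the best click-through rate.
    topic = max(topics, key=lambda t: clicks[t] / shown[t])
    shown[topic] += 1
    clicks[topic] += user_clicks(topic)

print(shown)   # the greedy policy ends up showing one topic almost exclusively
```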

AI models are tools created and deployed by humans, reflecting the complexities and challenges in human decision-making.

Creating AI systems that are fair, unbiased, and respectful of diverse viewpoints is an ongoing challenge.

Ethical Challenges

  1. Who Decides the Filters: The act of deciding what counts as bias and needs filtering is itself a bias.

  • Bias-Filtering Solution: Adopt a multi-perspective approach; ethicists, domain experts, and people from impacted communities must contribute to the design of these filters.

  2. Erasure of Minority Voices: Over-correction could erase cultural nuances or minority opinions.

  • Bias-Filtering Solution: Design filters that factor in and preserve minority voices, including "exception rules" or "sensitivity thresholds."

  3. Transparency, Explainability, and Interpretability: A lack of transparency in how filters work leads to problems. Understanding how AI makes decisions, and making those decisions easier to understand, is vital. Even if AI tells us how it decides, it communicates in math, and we need to be able to interpret that.

  • Bias-Filtering Solution: Make the methodology and algorithms behind the filters transparent, open-source, and subject to third-party audits. (A minimal audit sketch follows this list.)
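As a small illustration of what such a third-party audit might check, here is a minimal sketch (hypothetical decisions and groups, using NumPy) comparing a filter's approval rates across two groups, a demographic-parity style check:

```python
# Minimal audit sketch: compare a content filter's approval rates across groups.
# Hypothetical decisions and group labels, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated filter decisions (True = approved) for content from two groups.
group = rng.choice(["group_a", "group_b"], size=2000, p=[0.7, 0.3])
approved = np.where(group == "group_a",
                    rng.random(2000) < 0.80,    # ~80% approval for group A
                    rng.random(2000) < 0.65)    # ~65% approval for group B

rates = {g: approved[group == g].mean() for g in ["group_a", "group_b"]}
print(rates)

# A large gap is the kind of red flag an auditor would compare against a
# pre-agreed sensitivity threshold and ask the filter's designers to explain.
gap = abs(rates["group_a"] - rates["group_b"])
print(f"demographic parity gap = {gap:.2f}")
```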

Belief in AI and the Placebo Effect: Let's Start Becoming the Solution Inside and Outside of AI

The good news is we are working on this and more problems, but it will be challenging because we can't untangle AI from the inevitable bias of society.

AI will not let us escape entrenchment in our beliefs; like a mirror, it shows us an image we don't want to see: a visual and written world dominated by the few and not reflecting the many.

It's a good thing it freaks us out because that freak out is a change we can enact with AI. Because unlike what you read, it's not some digital savior.

Bias rules algorithms because they were never regulated in the early days. 

We all trusted them to be okay, yet here we are in an ad-driven market where algorithms maximize impressions and ad sales, bots inflate user counts, and, despite attempts to fix them, these systems keep getting in the way.

We live in echo chambers and divided societies worldwide, with a younger generation in the US subjected to comparison, bullying, and polarization as methods for gaining and increasing influence.

Since we are only starting today, it's easy to see why the AI pessimists rule. What's happened seems unstoppable, and not just because of tech companies using data in ways both nefarious and beneficial.

Governments also use data as a surveillance tool, with China being one of the few governments open about how it tracks its citizens to maintain control.

We stand at a turning point in AI.

Diversity is a problem. Understanding how AI works, ensuring transparency and fairness, and guiding AI to benefit us are problems. Not bad data, but our data.

The societies we all live in are steeped in cultural, political, and economic bias. Removing bias to protect people is a noble thought and likely a doomed one, because you can't strip out layers of bias when your data comes from content and perspectives that reflect society.

While many are disappointed by AI, seeing how it has let us down, it is simply an observer; we are trying, through rudimentary early methods of labeling, categorizing, and guiding, to get AI to help us.

Before we all jump off a bridge of doom, many are trying to do better. And to understand what they are doing means understanding why bias is so important, so threatening, what has happened, and what will happen.

We are all raising a child called AI, learning how to give it directions, let it do what it does, and move beyond old models of statistics and probabilities.

If you have a tough job, try doing the math and testing AI. Do we let AI learn from the data and do its own thing? That didn't work.

Do we try to interject our control and belief systems into AI? It doesn't always recognize the patterns we do, and how can it?

We all face these questions because AI permeates our healthcare, government, social media, and everywhere.

AI Bias isn't going away, and developers are working on new ways to minimize the bias and eliminate the harm. It's easy to look away, but we all must be involved and have a voice.

In the next AI Optimist Pod, we'll explore what's being done and the hope of removing AI Bias so it doesn't harm or negatively impact people.

It is a daunting task and a promising one with so many benefits. Yet many problems lie in our stereotypes, biases, and prejudices.

The more we create new words, images, and ways of being, the more AI will adapt.

We're interdependent with AI. Let's help it become better and, at the same time, recognize that many of the issues arise from the society we live in today.

Together, we can make a difference.

The AI Optimist
Moving beyond AI hype, The AI Optimist explores how we can use AI to our advantage, how not to be left behind, and what's essential for business and education going forward.
Each week for one year I’m exploring the possibilities of AI, against the drawbacks. Diving into regulations and the top 10 questions posed by AI Pessimists, I’m not here to prove I’m right. The purpose here is to engage in discussions with both sides, hear out what we fear and what we hope for, and help design AI models that benefit us all.