
💭4 AI Bias Busters: Promising Approaches to Eliminate Unfairness

Ever asked ChatGPT a simple question and gotten an answer that makes you think, “Wait a minute, where did that come from?” The answer is simple: from bias masquerading as truth. EP #9

Listen on APPLE

💡Bias is a trickster of truth and the source of so many challenges in AI.

Falling in love with ChatGPT, I search for sources about “flow cytometry” to share and impress some bioscientists I work with who haven’t yet joined my AI rapture.

As the answer comes in before I call, I check the sources. Every one fake. Not one true. The scientists were not impressed. Snap me out of my AI rapture!

Humans supply the data that schools AI systems, and humans are flawed. That’s why focusing on AI fairness and ethics isn’t about philosophy. It’s about the practical impact of AI on society, business, and you.

Today, the AI community embraces the challenge of finding creative and effective ways to develop unbiased AI, even though biases, known and unknown, shape the data those systems are built on.

Let’s dive into the most promising practices making AI better, fairer, and more in tune with the diverse world it serves.

Begin with Diverse Data Collection

4 Solutions to Removing AI Bias

Key Elements to Remove Bias and Create Better AI

The Basic Need – A Business Example of Giving AI Diverse and Representative Training Data

The first way to tackle bias is like cultivating a well-balanced garden: diversity is vital. Why grow only tomatoes when you can plant a variety of lettuces, herbs, and fruits that appeal to more people?

Imagine training an AI to understand business customer sentiment using only data from a single demographic. You’d be missing out on the wide variety of customer experiences.

Instead of training your AI on a narrow dataset representing just a sliver of your customer base, aim for a broader, inclusive range.

A diverse dataset translates to an AI understanding and responding to a broader range of human experiences. That’s good for customers and great for business.

Getting this balance right isn’t easy. It requires attention to data training and tweaking so you’re not unwittingly introducing bias into your model.

Bias in business can lead to inaccuracies and legal problems – remember the lawyers who trusted ChatGPT with their submitted brief, only to find the sources it cited didn’t exist?

They were sanctioned by the court because they didn’t check the facts. That’s what bias masquerading as truth can do to your results.

AI leaders are exploring solutions that focus on fairness and ethics and ensure we understand how and why AI comes up with its answers.

Begin with Diverse Data Collection

  • How: Make sure the data you’re using represents different kinds of people and viewpoints.

  • Example: For a facial recognition system, include faces from various ethnic backgrounds, ages, and genders.

Carefully select the data used to train the AI model so it represents a wide range of scenarios, views, and people.

Be sure to include data from underrepresented groups and adjust for skews toward dominant groups.

Make the training data as inclusive as possible so the AI model is unbiased. Garbage in, garbage out is what causes AI bias and poor results. (A quick code sketch of one way to rebalance skewed data follows the lists below.)

Advantages:

  1. Wider Audience: A model trained on diverse data applies to more people.

  2. Fairness: Make sure that all groups are represented.

  3. Ethics: Diversity tends to lead to more ethical outcomes.

Disadvantages:

  1. Data scarcity: For some groups, diverse and large-scale data might not be available.

  2. Imbalance: It’s not easy to achieve balanced data, especially with embedded societal bias.

  3. Subtlety of bias: Bias can be tricky, embedded in the data and often in the people evaluating it.
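
Here’s the rebalancing sketch promised above: a minimal Python example, assuming a pandas DataFrame with a hypothetical `group` column marking the demographic. It oversamples underrepresented groups until each appears as often as the largest.

```python
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Oversample smaller groups so every group appears equally often."""
    counts = df[group_col].value_counts()
    target = counts.max()  # bring every group up to the largest group's size
    balanced = [
        members.sample(n=target, replace=len(members) < target, random_state=seed)
        for _, members in df.groupby(group_col)
    ]
    # Shuffle so the model doesn't see the data in group order
    return pd.concat(balanced).sample(frac=1, random_state=seed)

# Hypothetical customer-sentiment data skewed toward demographic "A"
df = pd.DataFrame({
    "review": ["great", "bad", "fine", "slow", "love it", "meh"],
    "group":  ["A", "A", "A", "A", "B", "C"],
})
print(rebalance_by_group(df, "group")["group"].value_counts())
```

Oversampling is a blunt instrument – duplicated minority rows can cause overfitting – but it illustrates the core idea of correcting skew before training ever begins.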

The Simple Method to Remove Bias Doesn’t Work

Some AI models have tried ignoring variables like race, gender, or age during training.

It’s like saying, “Let’s not see color; let’s treat everyone equally.” While this sounds fair, it doesn’t work, because bias isn’t confined to a single category. It leaks through proxies in language and actions.

This approach ironically perpetuates the biases we are trying to eliminate.

You risk ignoring systemic issues that do affect people differently. That’s not just bad ethics—it’s also bad business.

Taking the time to develop a balanced and diverse dataset is smart. It’s about accepting that our world has different and equally important perspectives. The challenge is not just technical; it’s a business strategy and an ethical commitment.

By treating it as such, you create an AI tool more in tune with the complex world it serves.

4 Solutions to Removing AI Bias

1. Fairness-aware Frameworks

Building fairness checks and balances into AI model training is becoming standard practice at OpenAI, Meta, and other companies. The goal is a set of rules that gives AI an almost human sense of right and wrong, of what’s fair and what’s not, with a sense of compassion:

“they will have emotions, they will have empathy,

they will be all the things we require entities in the world to be if we want them to behave properly.”

Yann LeCun, Meta VP & Chief AI Scientist, from the Munk Debate on Artificial Intelligence

Setting specific fairness objectives optimizes for both predictive accuracy and fairness. The result is a model designed to ease bias in its predictions.

Fairness models are complex, expensive to train, and require customization to meet fairness goals.
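
As a rough illustration of what “optimizing for both accuracy and fairness” can mean in practice, here’s a toy PyTorch loss that adds a demographic-parity penalty to ordinary cross-entropy. The data, the `lam` weight, and the 0/1 `groups` attribute are all hypothetical; real fairness-aware frameworks are far more elaborate.

```python
import torch

def fairness_aware_loss(logits, labels, groups, lam=1.0):
    """Cross-entropy plus a demographic-parity penalty: shrink the gap
    between the two groups' average predicted positive rates."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    gap = torch.abs(probs[groups == 1].mean() - probs[groups == 0].mean())
    return bce + lam * gap  # lam trades accuracy against fairness

# Toy training run on random data, for illustration only
torch.manual_seed(0)
x = torch.randn(256, 8)
y = (x[:, 0] > 0).float()
g = torch.randint(0, 2, (256,))          # hypothetical protected attribute
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = fairness_aware_loss(model(x).squeeze(-1), y, g, lam=2.0)
    loss.backward()
    opt.step()
```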

  • Preprocessing Data for Fairness

Preprocessing involves curating the data before feeding it into the algorithm. By identifying and addressing biases in the data, the resulting balanced dataset helps generate fair outcomes.

However, this method requires a deep understanding of the potential biases and often assumes that unbiased data is available, which is not always the case.
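
One well-known preprocessing recipe is reweighing (after Kamiran & Calders): give each row a weight so that group membership and label look statistically independent. A minimal pandas sketch, with hypothetical column names:

```python
import pandas as pd

def reweighing(df, group_col, label_col):
    """weight = P(group) * P(label) / P(group, label).
    Rare (group, label) combinations get weights above 1, overrepresented
    ones below 1, so the training signal is balanced."""
    n = len(df)
    p_group = df[group_col].value_counts() / n
    p_label = df[label_col].value_counts() / n
    p_joint = df.groupby([group_col, label_col]).size() / n
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
        / p_joint[(r[group_col], r[label_col])],
        axis=1,
    )

df = pd.DataFrame({"group": ["A", "A", "A", "B"], "hired": [1, 1, 0, 0]})
df["weight"] = reweighing(df, "group", "hired")  # pass as sample_weight when training
print(df)
```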

  • Ensemble Learning: Combining Multiple Outputs

Ensemble methods involve training multiple models and combining their outputs for a more balanced and accurate prediction.

  • Pros

    • Balancing out individual model biases.

    • Often improves overall model performance.

  • Cons

    • Computationally expensive.

    • Adds complexity in model management and interpretation.

Examples & Possible Uses

In a language translation service, ensemble learning can aggregate the outputs of models focused on different dialects or styles, offering a nuanced translation and reducing bias toward any particular form of the language.
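
A minimal scikit-learn sketch of the idea, assuming three hypothetical data slices (say, one per dialect or region): train one model per slice, then average their predicted probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_predict(models, X):
    """Average each model's class probabilities, then pick the top class."""
    probs = np.mean([m.predict_proba(X) for m in models], axis=0)
    return probs.argmax(axis=1)

# Toy setup: random data stands in for three real data slices
rng = np.random.default_rng(0)
slices = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
models = [LogisticRegression().fit(X, y) for X, y in slices]
print(ensemble_predict(models, rng.normal(size=(5, 5))))
```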

  • Adversarial Training for Fairness

Adversarial training leverages oppositional networks to detect and counteract biases. 

One network tries to make fair predictions, while an adversary tries to find biases. The primary model continuously adjusts to counter the adversary. 

Once again, fairness comes at a cost: the process is computationally intensive and complex.
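
Here’s a stripped-down PyTorch sketch of that tug-of-war: an adversary learns to recover a hypothetical protected attribute from the predictor’s output, and the predictor is penalized whenever the adversary succeeds. Real adversarial-debiasing systems are far more elaborate; this only shows the mechanics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
predictor = nn.Linear(8, 1)   # main task model
adversary = nn.Linear(1, 1)   # tries to recover the protected attribute
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(256, 8)
y = (x[:, 0] > 0).float().unsqueeze(1)     # task label
g = torch.randint(0, 2, (256, 1)).float()  # hypothetical protected attribute

for step in range(200):
    # 1) Adversary learns to predict the group from the model's output
    opt_a.zero_grad()
    bce(adversary(predictor(x).detach()), g).backward()
    opt_a.step()

    # 2) Predictor learns the task while trying to fool the adversary
    opt_p.zero_grad()
    out = predictor(x)
    loss = bce(out, y) - 0.5 * bce(adversary(out), g)  # minus sign = confuse adversary
    loss.backward()
    opt_p.step()
```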

  • Transfer Learning for Fairness

Transfer learning starts from models pre-trained on a large, diverse dataset. These models are then fine-tuned for specific applications.

Existing biases in the pre-trained models might transfer to the new application, and this approach is limited to domains where fair pre-trained models are available.
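
A short PyTorch sketch of the pattern (the `backbone` here is a toy stand-in for a real pre-trained model): freezing the backbone keeps its pre-trained knowledge, and any bias baked into it, fixed while only the new head learns the task.

```python
import torch
import torch.nn as nn

# Toy stand-in for a model pre-trained on a large, diverse dataset
backbone = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
head = nn.Linear(16, 2)  # new task-specific layer to fine-tune

for p in backbone.parameters():
    p.requires_grad = False  # frozen: pre-trained knowledge (and biases) persist

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head trains
```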

2. Reviewing AI - Audits, Analysis, & Explainability

Explainability: Unlocking the “Black Box”

Explainability seeks to make the decision-making process transparent. It identifies which features the model gives weight to, helping debug and modify the model.

  • Pros

    • It provides a lens to inspect and correct model biases.

    • Useful for compliance with fairness criteria and regulations.

  • Cons

    • Achieving explainability in complex models is challenging.

    • It may compromise model performance for the sake of transparency.

Content Recommendation Example

In content recommendation algorithms, explainability helps verify that the model isn’t preferring content based on controversial or sensitive features like political affiliation or gender.
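
One simple, model-agnostic way to see which features a model leans on is permutation importance: shuffle one feature at a time and measure how much accuracy drops. A large drop on a sensitive column is a red flag. A minimal numpy sketch, assuming any scikit-learn-style model with a `predict` method:

```python
import numpy as np

def permutation_importance(model, X, y, seed=0):
    """Score each feature by the accuracy lost when it's shuffled."""
    rng = np.random.default_rng(seed)
    base = (model.predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, j])  # break this feature's link to the label
        drops.append(base - (model.predict(X_perm) == y).mean())
    return np.array(drops)  # big drop = the model depends on that feature
```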

  • Post-hoc Analysis and Adjustments

These techniques analyze a model’s decisions to identify and correct bias. They adjust the model’s outcomes rather than the model or training data. 

The benefit is quick adaptation of already-deployed models. But this approach patches the problem after the fact and might introduce new errors.
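
A tiny sketch of one such post-hoc adjustment: instead of one global cutoff, pick a per-group score threshold so every group ends up with the same positive rate. The model itself is untouched; only its outputs are re-thresholded. The names and target rate here are illustrative.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate=0.5):
    """Per-group cutoffs giving each group the same share of positive decisions."""
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

scores = np.array([0.2, 0.9, 0.4, 0.6, 0.8, 0.3])
groups = np.array(["A", "A", "A", "B", "B", "B"])
cuts = group_thresholds(scores, groups)
decisions = scores >= np.array([cuts[g] for g in groups])
print(cuts, decisions)
```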

  • Audit Algorithms

Audit algorithms inspect model outcomes for signs of bias. Audits run in real time, flagging biased predictions for manual review or automated adjustment. 

Excellent for monitoring, audits don’t remove the underlying bias and often require manual adjustments, which take more time.
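
To give a minimal flavor of such an audit: compute each group’s positive-decision rate and flag any group falling below four-fifths of the best-off group’s rate (the classic “four-fifths rule” used in US hiring audits). Names and the threshold are illustrative.

```python
import numpy as np

def audit_disparate_impact(preds, groups, ratio=0.8):
    """Flag groups whose positive rate is under `ratio` times the best group's."""
    rates = {g: preds[groups == g].mean() for g in np.unique(groups)}
    best = max(rates.values())
    return {g: (r, best > 0 and r / best < ratio) for g, r in rates.items()}

preds = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(audit_disparate_impact(preds, groups))  # B: 0.25 vs A: 0.75 -> flagged
```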

3. Interpretability: Navigating the Complex Landscape of Bias

AI models make decisions based on patterns and relationships in massive amounts of data. 

We’re often unsure how the outcomes happen, making AI decisions harder to interpret and assess for fairness.

Researchers are actively exploring attention mapping and rule extraction methods to show how these models arrive at their outputs.

Rule extraction is a technique for understanding a model’s underlying decision-making process by approximating its behavior with human-readable rules.

The idea is to take a trained, complex model that performs well on a particular task and approximate its behavior with a simpler, rule-based model.

Here are some standard techniques for rule extraction:

  1. Decision Trees and Random Forests: In decision tree-based models like Random Forests, the rules are directly extracted from the paths in the trees.

  2. Sensitivity Analysis: Study how small changes in input features affect the output, then write rules capturing the model’s decision boundaries.

  3. Neural Network Rule Extraction: Specialized techniques for extracting rules from neural networks, like breaking down the network into a more interpretable structure and translating that structure into rules.

Benefits of Rule Extraction:

  1. Interpretability: The extracted rules make understanding why a model makes a specific prediction easier.

  2. Debugging: Understanding the rules helps identify biases or errors.

  3. Compliance: In some industries, interpretability is a legal requirement.

Rule extraction is not always perfect and may result in a loss of accuracy since the simpler model approximates the more complex one.
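
Here’s what a surrogate-model version of rule extraction can look like with scikit-learn: a shallow decision tree imitates the complex model’s predictions, and the fidelity score tells you how faithful the approximation is.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(complex_model, X, feature_names, max_depth=3):
    """Fit a shallow tree to mimic `complex_model`, then read it as rules."""
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, complex_model.predict(X))  # imitate the model, not the data
    fidelity = (surrogate.predict(X) == complex_model.predict(X)).mean()
    return export_text(surrogate, feature_names=feature_names), fidelity
```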

Attention mapping is a way to visualize and interpret a model’s decisions.

The attention mechanism within these models weighs the importance of different parts of the input when generating output. It’s useful in language translation, question answering, summarization, and other tasks.

Example: Translating the sentence “The cat sat on the mat,” an attention map might show that the word “sat” is strongly associated with “cat” and “mat.” The model pays attention to these relationships to understand the context.
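
To show what an attention map actually is, here’s a toy numpy version of scaled dot-product attention for three tokens. Each row of the output says how much one token “looks at” the others; the vectors are random and purely illustrative.

```python
import numpy as np

def attention_map(Q, K):
    """Softmax of Q·Kᵀ/√d: one row of attention weights per query token."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
tokens = ["cat", "sat", "mat"]
Q, K = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
for tok, row in zip(tokens, attention_map(Q, K)):
    print(tok, np.round(row, 2))  # each row sums to 1
```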

Attention maps are helpful for several purposes:

  1. Debugging: Understanding where the model is paying attention helps diagnose issues in the model.

  2. Interpretability: Attention maps provide insight into the decision-making process.

  3. Feature Importance: Attention maps help understand what features (words, tokens, etc.) are essential for a particular task.

Attention is just one part of a model’s architecture, and focusing solely on it might not capture all the nuances of the model’s behavior.

4. Ethics and Bias-Free AI:

The Quest for Ethical Algos and Fair Outcomes

Can I line up the jobs to be done, the safe way to do them with automated governance and compliance with the laws that are emerging, to get the job done?

Ramsay Brown, CEO of Mission Control

We want AI to understand and respect all kinds of people. By doing this, individuals will trust AI more, whether buying something online or getting medical advice.

  • Think of an ethical guide as a checklist for building AI that does good, not harm. It helps everyone on a team know what to focus on, from the people who collect data to those who write the code.

When we build AI the right way, it’s not just about avoiding mistakes. We’re creating technology that makes life better.

Ethical AI fits anywhere—it works well in different countries and respects people’s diverse views and needs.

Ethical frameworks in AI aim to create algorithms that act fairly and impartially and reflect diverse viewpoints.

The framework outlines the principles guiding data collection and model training. AI then becomes an efficient tool serving the values and norms of society.

Infusing ethics into AI is a hands-on job and involves multiple steps:

Ethical Guidelines

  • How: Create a set of rules or principles that your team agrees to follow.

  • Example: Before starting a project, agree that the team will not use AI for harmful purposes like discrimination or surveillance.

Bias Audits

  • How: Check AI models to see if they’re making biased decisions.

  • Example: For AI sorting job applications, make sure it’s not favoring certain groups.

Transparency

  • How: Make it clear how the AI is making decisions.

  • Example: If AI is recommending loan approvals, show how the AI came to its conclusion.

Ethical Review

  • How: Have a team of diverse experts review and approve AI projects.

  • Example: Before launching a health diagnostic AI, have it reviewed by a panel including medical experts, ethicists, and patients.

Privacy Protections

  • How: Build features that protect user data and privacy.

  • Example: When creating a voice-activated assistant, allow users to delete their conversation history.

By actively integrating ethical considerations into each step of AI development, we’re more likely to create intelligent and fair systems.

Key Elements to Remove Bias and Create Better AI

At this point, cynics will say this sounds good, but can we do it?

Remember, we’re the teachers here; AI is learning from us. The responsibility to ensure it’s fair and unbiased is up to us. Technology serves humanity if we set the rules and follow ethical guidelines.

And the best part? We’ve got the tools to do it. We’ve already discussed some promising steps for scrubbing out those biases.

These aren’t pie-in-the-sky ideas; they’re practical, actionable ways to guide our tech to serve all of us, not just some of us.

We can shape the AI world into a more ethical and inclusive place.

Because the fear of AI, at its heart, is the fear of what people might do with this power.

Humbled by that threat, we can imagine a world where it never materializes, because we have rules that check the problems and deliver accurate, helpful solutions.

AI bias is a symptom of our own problem. By working to eliminate it, we get to know ourselves a little better and become aware that AI bias is a reflection of our society.

One we can change with a bit of help from AI and international cooperation. It is possible.

The basics are simple:

  1. Accept responsibility as a society. The bias AI shows us is a mirror. Change the reflection.

  2. Diverse Training Data: Make sure training data represents different groups.

  3. Audits: Third-party audits help identify biases in AI.

  4. Fairness-aware Algorithms: Develop algorithms sensitive to fairness rules.

  5. Transparency and Interpretability: Making algorithms more understandable helps identify how biases form.

  6. Ethical Guidelines: Establish a framework for ethical AI leading to unbiased systems.

  7. Human-in-the-Loop: Involving humans in AI decisions adds a layer of valuable perspectives.

  8. Legal Frameworks: Hold organizations liable for biased outcomes.

  9. Diversity in AI Development: Diverse development teams help reduce biases.

  10. Global Standards: As AI becomes global, international fairness standards arise. Follow them.

Dive Deeper

What can we do to ensure the decisions made by machines do not discriminate, are transparent, and preserve privacy?
