Unmasking Racism in AI: From Gemini’s Overcorrection to AAVE Bias and Ethical Considerations

By: Toni Oppenheim

In the ever-evolving landscape of artificial intelligence (AI), recent incidents involving Google’s Gemini tool have brought to light the challenges surrounding balanced representation and historical accuracy.1 As we navigate this terrain, it’s imperative to recognize the broader implications and the need for nuanced solutions.

Gemini recently faced criticism for overcorrection. In an attempt to promote inclusivity, Google’s efforts went too far, generating images that deviated from the historical record.2 For example, when prompted to depict Nazi soldiers and the founding fathers, the tool produced images of people of color rather than historically accurate figures.3 Many observers saw this as an attempted correction that instead became an overcorrection of the long-standing racial bias embedded in AI. The incident underscores the delicate balance between representation and fidelity to historical records. While diversity is essential, it must not come at the expense of erasing or distorting historical truths.4

Google’s decision to suspend Gemini’s image-generation feature temporarily reflects a commitment to recalibrating AI tools to better align with historical accuracy and user expectations. It’s a step toward acknowledging the need for responsible AI development while navigating the complexities of representing diverse identities and histories.

Unfortunately, the Gemini situation is not the first time the tech industry has grappled with bias and representation in AI. Past incidents, such as mislabeled photos and stereotypical image generation, highlight the ongoing challenge of developing AI that reflects the complexities of human diversity without perpetuating stereotypes or erasing cultural nuances.

Valentin Hofmann recently conducted a study revealing discrimination against speakers of African American Vernacular English (AAVE) in AI tools, shedding light on concerning patterns of hiring discrimination.5 As the study elucidates, AI models tend to label AAVE speakers as “stupid” or “lazy” during job screenings, often recommending them for lower-paid positions.6 This discriminatory bias not only perpetuates systemic inequalities but also has real-world consequences for job candidates, who may face barriers to employment based on their dialect.7 For instance, individuals who code-switch between AAVE and standard English in their online presence could be unfairly penalized by AI models, potentially harming their job prospects.8 Moreover, the study found that AI models were more likely to recommend harsher penalties for hypothetical criminal defendants who used AAVE in court statements, reflecting the pervasive nature of racial bias in AI technologies.9 These findings underscore the urgency for developers to address and mitigate such biases to ensure fair and equitable hiring practices.10

The controversies surrounding Gemini and AAVE bias underscore the broader ethical considerations in AI development. Balancing representation with historical accuracy requires collaboration with historians, ethicists, and diverse communities to ensure that AI tools respect the complexities of human identity and history.11

Moving forward, tech companies must prioritize ongoing dialogue, collaboration, and transparency in AI development. By engaging with diverse perspectives and considering the societal implications of their work, they can lead the way in creating AI that enriches our understanding of the world while respecting its intricacies.12


1.  Kat Tenbarge, Google making changes after Gemini AI portrayed people of color inaccurately, NBC News, Feb. 22, 2024, https://www.nbcnews.com/tech/tech-news/google-making-changes-gemini-ai-portrayed-people-color-inaccurately-rcna140007

2.  Jeff Raikes, AI Can Be Racist: Let’s Make Sure It Works For Everyone, Forbes, April 21, 2023, https://www.forbes.com/sites/jeffraikes/2023/04/21/ai-can-be-racist-lets-make-sure-it-works-for-everyone/?sh=30bbfceb2e40

3.  Adi Robertson, Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis, The Verge, Feb. 21, 2024, https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical

4.  Id.

5.  Divyanshi Sharma, AI tools are becoming more racist as the technology improves, new study suggests, March 18, 2024.

6.  Id.

7.  Id. 

8.  Id.

9.  Id.

10.  Id.

11.  A statement on artificial intelligence, https://historycommunication.com/wp-content/uploads/2023/06/HCI_A.I.-Statement.pdf

12.  Id.
