Bias in the Machine: AI, Smashing Stereotypes and What Schools Can Do

Words by the GEC Circle.

Luke Ramsden, GEC Champion at St Benedict’s College and GEC Circle Independent School Advisor; Dr Holly Powell-Jones, Founder of Online Media Law; Ryan Tannerbaum, GEC Head of AI and Data Technology; and Nic Ponsford, CEO and Founder of the Global Equality Collective (GEC)

One of the most frequently claimed benefits of artificial intelligence (AI) is its supposed objectivity. Machines, it is argued, lack the prejudices that mar human decision-making. However, the reality of large language models (LLMs) tells a different story, one in which the biases entrenched in society are not only replicated but amplified. As schools prepare students for a world shaped by AI, they face a profound challenge: how to equip young minds to navigate the ethical pitfalls posed by these systems.

The Scale and Nature of the Problem

At the heart of this issue lies the data upon which AI models are trained. Large language models such as ChatGPT and image generators like Midjourney or DALL-E derive their understanding from vast amounts of online content. These datasets include everything from news articles and academic papers to social media posts and advertising. While this breadth ensures a high level of sophistication in generating human-like responses, it also means that AI systems inherit the biases embedded within these sources.

Much like television and film before it, an AI system trained to generate images of professions can reproduce stereotypes about those roles. Studies have shown that prompts like "engineer" or "scientist" overwhelmingly result in male-dominated depictions, while terms such as "nurse" often yield female images. The Berkeley Haas Center for Equity, Gender and Leadership analysed 133 AI systems across different industries and found that about 44% of them showed gender bias, and 25% exhibited both gender and racial bias. These shocking discriminatory patterns are not incidental; they mirror the bias in our day-to-day media. Efforts to generate global-majority and culturally specific content often flatten complex identities into caricatures. A search for "Nigerian person" in a generative image tool might produce figures adorned with generic head ties and vibrant fabrics, but fail to reflect the country’s staggering diversity, which includes over 500 languages and 300 ethnic groups. Studies also highlight that disabled people may be presented in a toxic or negative way, more likely to be represented as 'lonely', 'sad', or 'passive' than able-bodied subjects.

The implications are far-reaching. By reinforcing narrow narratives, AI tools risk further entrenching stereotypes that educators and social reformers have long fought to dismantle. Worse still, they can perpetuate harm. For instance, racial biases in facial recognition software have already led to wrongful arrests, while sexist algorithms have excluded qualified women from job opportunities. These are not abstract concerns but real consequences affecting individuals and communities.

As AI becomes ubiquitous in fields ranging from healthcare to education, the urgency of addressing these biases grows. The problem is compounded by the scale and accessibility of these tools. A single biased model, deployed across millions of devices, can shape perceptions on a global scale, influencing everything from hiring decisions to social interactions.

A counterpoint to this is that AI exposes our latent biases, bringing them to the surface in ways we may not always anticipate. In many ways, we have become desensitised to the reinforcement of stereotypes in social media, entertainment, and the press. However, when AI-generated images reflect these biases, they appear more jarring, forcing us to confront the limitations of our media landscape. This dissonance reveals how pervasive biases shape what AI perceives as ‘normal’ or ‘standard’ representations, highlighting the deep influence of our cultural outputs on machine learning models.

What Schools Must Do

Faced with this challenge, schools occupy a unique position. They are not merely consumers of AI, but incubators of its future developers, users, and regulators. This dual role places a responsibility on educational institutions to teach students how to engage critically and ethically with AI.

The first step is awareness. Many students — and even teachers — assume that AI outputs are neutral or objective. Lessons should begin by debunking this myth, highlighting examples of bias in generative AI systems. Case studies such as AI recruiting tools discriminating against women or the hypersexualisation of female avatars in AI-generated art provide stark illustrations of the problem.

Next comes the task of equipping students with tools to identify and address bias. Just as we might teach students about content creation for film or television, this involves fostering a robust understanding of how AI systems are trained and the role of datasets in shaping their outputs. Exercises could include comparing AI-generated results with real-world statistics or cultural representations, encouraging students to spot discrepancies and question their origins.
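
One concrete version of this exercise: students generate a batch of images per prompt, hand-label the depictions, and then compare the AI's split with published workforce statistics. A minimal sketch of the tallying step is below; all of the counts and workforce shares are illustrative placeholders (students would gather and look up the real figures themselves), and no particular image tool is assumed.

```python
# A minimal sketch of a classroom bias-audit exercise.
# Students hand-label a batch of AI-generated images per prompt,
# then compare the AI's gender split with real workforce statistics.
# All figures below are illustrative placeholders, not real data.

# Hand-labelled counts from 20 hypothetical generations per prompt.
ai_depictions = {
    "engineer": {"male": 18, "female": 2},
    "nurse": {"male": 3, "female": 17},
}

# Placeholder workforce shares; students would look up current
# national statistics as part of the exercise.
workforce_female_share = {"engineer": 0.16, "nurse": 0.89}

for role, counts in ai_depictions.items():
    total = counts["male"] + counts["female"]
    ai_share = counts["female"] / total
    real_share = workforce_female_share[role]
    print(f"{role}: AI depicts {ai_share:.0%} women, "
          f"workforce is {real_share:.0%} ({ai_share - real_share:+.0%} gap)")
```

Even a simple tally like this makes the discrepancy visible and quantifiable, and gives students a concrete artefact to question and discuss.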

Ethical frameworks must also form part of the curriculum. Borrowing from the principles of fairness and transparency, students can be taught to evaluate AI tools critically: to craft responsible prompts and to interrogate the results. For example, does the system provide a clear rationale for its outputs? How does it handle errors, and who is affected by them? By asking these questions and challenging ‘the bots’, students develop the skills needed to distance themselves from AI output, as well as to push for better practices in AI development and deployment.

Students should be encouraged to consider who or what benefits from these tools, locally and globally, and who might be marginalised by their use. This means understanding not only the industrial and financial implications of AI, but also questions of political power and sustainability. For example, experts estimate that a single generative AI query has four to five times the carbon footprint of a search engine query. Here is a lesson not just in stereotypes, but in news values, the industry and privilege.
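
To make that figure concrete, a rough back-of-the-envelope calculation can be run with a class. The per-query number for a search engine below is an illustrative assumption (published estimates vary widely and date quickly); only the four-to-five-times multiplier comes from the estimate above, and the school figures are invented for the example.

```python
# Rough classroom estimate of generative AI query emissions.
# The search-query figure and the school numbers are illustrative
# assumptions; only the 4-5x multiplier comes from the article.

search_query_g_co2 = 0.3      # assumed grams of CO2 per search query
genai_multiplier = 4.5        # midpoint of "four to five times"
genai_query_g_co2 = search_query_g_co2 * genai_multiplier

# A hypothetical school: 1,000 students, 10 GenAI queries each per day.
students, queries_per_day, school_days = 1_000, 10, 190
annual_kg = genai_query_g_co2 * students * queries_per_day * school_days / 1_000

print(f"Per generative AI query: ~{genai_query_g_co2:.2f} g CO2")
print(f"School-wide per year: ~{annual_kg:,.0f} kg CO2")
```

Under these assumptions the whole-school total comes to roughly 2,500 kg of CO2 a year, the kind of figure students can then compare with familiar benchmarks such as car journeys or flights.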

Importantly, teaching about AI bias should not occur in isolation. Schools should try to integrate these lessons into broader discussions about equity, diversity, and inclusion, as well as global citizenship and social justice. By connecting the biases in AI to systemic issues in society, educators can help students understand the stakes of ethical AI use. This approach not only enriches their understanding but also empowers them to see themselves as agents of change.

Schools should also model ethical AI use in their operations. This could mean auditing the tools they adopt for classroom use or collaborating with developers to improve fairness in educational AI systems. By taking these steps, schools signal to students that ethical considerations are not merely academic but integral to real-world decision-making.

Towards an Ethical AI Future

As AI continues to shape society, the stakes of addressing its biases become ever higher. The danger is not simply that stereotypes will persist but that they will be ingrained into the digital fabric of our lives, becoming harder to challenge or undo. Schools have a vital role to play in countering this trend.

By fostering critical awareness, teaching ethical frameworks, and modelling best practices, educators can ensure that students are not only consumers of AI but also shapers of its future. In doing so, they help to create a world where technology reflects our highest aspirations rather than our prejudices and stereotypes.

Further reading

Dan Fitzpatrick, Infinite Education (2024) and The AI Classroom

Edufuturists - newsletter, podcasts and events. Nic was awarded ‘Diversity and Inclusion Champion of the Year’. Here is her recent podcast where she spoke about deficit data models, AI and #SmashingStereotypes with the gang!  

Smashing Stereotypes - the site to help your students get #SmashingStereotypes: https://www.smashingstereotypes.co.uk/ Sadly, too many role models for smashing stereotypes just do not relate to kids, but you know what does? Other kids! Our year-round open competition for children and young people of all ages invites your students to create content, which we will then publish on our site. There are great examples, too, showing how other schools and students have done this for themselves!

GEC FREE RESOURCES - https://www.thegec.education/cultural-resources From our series of blogs on AI and social media with the Circle, to our ‘GEC Know How’ directory of EDI books and materials, we have all you need to get #SmashingStereotypes (well, we did come up with the hashtag!). Jump in and share across your school or trust today!