
When Jess Smith, a former Australian Paralympic swimmer, uploaded a full-length photo of herself to an AI image generator this summer, she inadvertently embarked on a social experiment. Her goal was simple: to enhance her headshot. She specified that she was missing her left arm from below the elbow, but the AI's response was unexpected. Despite her precise prompts, the generated images consistently depicted a woman with two arms, or one with a prosthetic device.
Smith questioned the AI’s inability to create an accurate representation. The AI explained that it lacked sufficient data to generate the requested image.
“That was an important realization for me that of course AI is a reflection of the world we live in today and the level of inequality and discrimination that exists,” she said.
Recently, Smith attempted the process again and was surprised to find that the AI could now produce an accurate image of a woman with one arm, just like her.
“Oh my goodness, it worked, it’s amazing it’s finally been updated,” she told the BBC. “This is a great step forward.”
AI and the Importance of Representation
While it might seem trivial to some, this advancement holds significant meaning for millions of people with disabilities. Representation in technology is crucial, as it ensures that people are seen not as an afterthought, but as integral parts of the world being constructed.
“AI is evolving, and when it evolves with inclusion at its core, we all benefit. This is more than progress in tech; it’s progress in humanity,” Smith emphasized.
A spokesperson for OpenAI, the company behind ChatGPT, confirmed that they had recently made “meaningful improvements” to their image generation model. They acknowledged ongoing challenges, particularly around fair representation, and stated that they are actively working to improve this by refining post-training methods and incorporating more diverse examples to reduce bias over time.
Challenges and Criticisms
Despite these improvements, challenges remain. Naomi Bowman, who has sight in only one eye, encountered similar issues. When she asked ChatGPT to blur the background of a picture, it altered her face, evening out her eyes.
“Even when I specifically explained that I had an eye condition and to leave my face alone, it couldn’t compute,” she said.
Initially finding it amusing, Bowman now views it as a reflection of inherent AI bias. She advocates for AI models to be “trained and tested in rigorous ways to reduce AI bias and to ensure the data sets are broad enough so that everyone is represented and treated fairly.”
AI's rapid growth has also raised environmental concerns. Professor Gina Neff of Queen Mary University of London noted that ChatGPT is “burning through energy,” with data centers consuming more electricity annually than 117 countries.
Broader Implications of AI Bias
Experts assert that bias in artificial intelligence often mirrors societal blind spots, extending beyond disabilities. Abran Maldonado, CEO of Create Labs, a US-based company developing culturally aware AI systems, emphasizes that diversity in AI begins with the individuals involved in data training and labeling.
“It’s about who’s in the room when the data is being built,” he explains. “You need cultural representation at the creation stage.”
Historical examples illustrate these issues. A 2019 US government study found that facial recognition algorithms were significantly less accurate at identifying African-American and Asian faces than Caucasian faces. This highlights the necessity of consulting people with lived experience to ensure comprehensive representation in AI.
The move towards inclusive AI is not just a technological advancement but a societal one. As AI systems continue to evolve, the inclusion of diverse data and perspectives will be crucial in shaping a more equitable digital future.