Artificial intelligence is rapidly transforming our world, driving innovation and efficiency across industries. However, as we embrace this powerful technology, we face a critical issue: the lack of representation in AI systems. This exclusion poses practical and ethical risks that demand our immediate and deliberate attention.

Digital Divide

Today, approximately 2.6 billion people lack internet access due to high costs, inadequate infrastructure, and a lack of digital skills. Moreover, around 3.1 billion people face regular electricity shortages, further isolating them from the digital world. This exclusion creates or worsens social and economic disparities, restricting access to education, healthcare, and economic opportunity. We might complain about the time we spend on the phone or with email, yet in 2024, 5% of the global population still lives in areas without any mobile network coverage. This is not their choice, and it has severe consequences when we turn to AI.

The status quo produces an AI that is lopsided and biased, which directly and indirectly influences the information we consume, distorting our knowledge and perception of reality. (We will look deeper into this topic in a future post.) Digital exclusion already deprives millions of people of access to telemedicine and the promise of 24/7 quality health care and disease prevention. Beyond that, the limited data pool used to train algorithms produces outputs that reflect only the people the data comes from. Without careful oversight, acting on these outputs can have deadly consequences, because biased training data leads to algorithmic discrimination.

Algorithmic Discrimination

AI systems trained on datasets that reflect historical biases or lack diversity tend to perpetuate and exacerbate existing inequalities. For instance, facial recognition technologies have shown higher error rates for individuals with darker skin tones due to underrepresentation in training data. Similarly, biased hiring algorithms can disadvantage certain groups, perpetuating discrimination. A 2023 study published in Nature found significant performance disparities in large language models across different languages and dialects, with potential disadvantages for speakers of less-represented languages.
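To make these disparities concrete, here is a minimal Python sketch of the kind of subgroup error audit such findings rest on: compute the misclassification rate per demographic group and compare the gap. The data, group labels, and function are illustrative assumptions, not the methodology of any study cited above.

```python
# Hypothetical sketch: measuring per-group error-rate disparity for a
# classifier. All data and group labels below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Toy predictions for two hypothetical skin-tone groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]
groups = ["lighter", "lighter", "darker", "lighter",
          "darker", "darker", "darker", "lighter"]

rates = error_rates_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                       # {'lighter': 0.0, 'darker': 0.75}
print(f"disparity gap: {gap:.2f}")  # disparity gap: 0.75
```

An audit like this is only a starting point; which disparity metric matters (overall error, false negatives, false positives) depends on the application.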

In healthcare, algorithmic bias may lead to inaccurate diagnoses and harmful treatment recommendations. For example, an AI trained primarily on data from one ethnic group might fail to recognize symptoms or risk factors that present differently in other populations. As the World Health Organization has stressed, AI systems trained on non-representative data can produce incorrect diagnoses, endangering lives in underrepresented communities.
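A toy simulation can illustrate why this happens. In the sketch below, a condition shifts a biomarker upward in one entirely invented population and downward in another; a simple threshold model fit only on the first group does reasonably well there and collapses on the second. All distributions and numbers are assumptions made purely for illustration.

```python
# Hypothetical simulation: a model fit on one population can fail on
# another when the same condition presents differently.
import numpy as np

rng = np.random.default_rng(0)

def make_population(n, healthy_mean, sick_mean):
    """Simulate biomarker readings; sick patients have a shifted mean."""
    y = rng.integers(0, 2, n)  # 0 = healthy, 1 = sick
    x = np.where(y == 1,
                 rng.normal(sick_mean, 1.0, n),
                 rng.normal(healthy_mean, 1.0, n))
    return x, y

# Group A: the disease raises the biomarker. Group B: it lowers it.
x_a, y_a = make_population(2000, healthy_mean=0.0, sick_mean=2.0)
x_b, y_b = make_population(2000, healthy_mean=0.0, sick_mean=-2.0)

# "Train" a one-threshold classifier on group A only.
threshold = (x_a[y_a == 0].mean() + x_a[y_a == 1].mean()) / 2

def predict(x):
    return (x > threshold).astype(int)

print("accuracy on group A:", (predict(x_a) == y_a).mean())  # ~0.84
print("accuracy on group B:", (predict(x_b) == y_b).mean())  # ~0.42, worse than chance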

In the long term, the influence of AI on decision-making in healthcare, finance, and education further entrenches systemic discrimination. The 2023 Global Risks Report highlighted how biased AI systems could exacerbate existing inequalities and create new forms of discrimination. In healthcare, this could mean entire communities being systematically underserved or misdiagnosed because AI systems do not account for their unique health profiles.

Ethical Implications

Beyond these practical risks, exclusion from AI has profound ethical implications. Equal participation in technological advancement is a fundamental human right, and when AI systems exclude large portions of humanity, they violate this principle.

While biased algorithms are alarming in every domain of our lives and work, the prospect of biased algorithms in health is particularly worrisome. As AI systems begin to shape medical research priorities and influence global health policies, the exclusion of diverse voices could narrow the focus of medical advancement. This not only diminishes our collective ability to address global health challenges but also risks creating a future where the health needs of billions are systematically marginalized.

Cultivating Inclusive AI

Addressing this challenge before it becomes a catalyst for future pandemics requires a concerted effort from AI researchers, tech companies, policymakers, nonprofits, and global communities. Here are six key takeaways to guide our path toward more inclusive AI, with a particular focus on healthcare:

  1. Actively diversify AI training datasets: Tech companies and medical researchers can deliberately prioritize collecting and incorporating (health) data from underrepresented regions and communities (a minimal reweighting sketch follows this list).
  2. Grow diverse AI talent: Nonprofits can launch initiatives to nurture AI talent in underrepresented communities, which is crucial for bringing diverse perspectives into AI development, especially in medical AI.
  3. Engage global stakeholders: Policymakers can facilitate dialogues between AI developers, healthcare providers, and communities worldwide to ensure AI systems address diverse health needs.
  4. Normalize ethical audits: Tech and healthcare industries can commit to regular audits of AI systems for bias and exclusion as a standard practice.
  5. Create inclusive AI governance: Companies and countries can systematically prioritize representation and inclusion as core principles in global AI governance frameworks, with special attention to healthcare applications.
  6. Yield to local expertise: AI developers can collaborate with local (medical) experts before deploying AI systems in diverse cultural contexts.
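As a rough illustration of the first takeaway, the sketch below shows one common mitigation technique, inverse-frequency reweighting, which gives samples from underrepresented groups proportionally more influence during training. The group names, counts, and function name are hypothetical; reweighting complements, rather than replaces, actually collecting new data from underrepresented communities.

```python
# Hypothetical sketch for takeaway 1: inverse-frequency sample weights
# so underrepresented groups carry proportionally more influence
# during training. Group names and counts are invented for illustration.
from collections import Counter

def inverse_frequency_weights(groups):
    """Map each sample to a weight inversely proportional to the
    frequency of its group; every group gets equal total weight."""
    counts = Counter(groups)
    n_groups = len(counts)
    total = len(groups)
    return [total / (n_groups * counts[g]) for g in groups]

# Toy dataset: region A dominates the training pool.
groups = ["region_A"] * 8 + ["region_B"] * 2
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # 0.625 for region_A, 2.5 for region_B
```

Weights like these could then be passed to any training routine that accepts per-sample weights, such as a weighted loss function or a weighted sampler.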

The acronym formed by these takeaways – AGENCY – serves as a reminder that our goal should be to empower all of humanity in the age of AI, not just a privileged few. We are currently adrift in an age of digital poverty. We have the choice to turn the tide towards digital and analogue abundance.
