Whose Voice Do We Really Hear When AI Speaks?


In a world where AI systems are increasingly entrenched in both public services and private sectors, the cultural and linguistic biases embedded within these systems are becoming more apparent and pressing. Giada Pistilli's insightful discourse at the NORA Annual Conference shed light on the ethical implications of Large Language Models (LLMs), emphasizing how these models are shaped by the languages and cultural values they are trained on, a question examined by Hugging Face's CIVICS project. By grounding datasets in real-world, culturally rich contexts without relying on synthetic translations, CIVICS has shown that models may respond differently depending on the language of the prompt, revealing underlying inconsistencies and biases. The challenge lies not just in optimizing accuracy but in ensuring cultural alignment, representational fairness, and value-sensitive behavior. This requires authentic partnerships with local communities and a commitment to diversity in model training, urging us to rethink how we develop AI systems so they truly respect and reflect global pluralism rather than reinforce existing inequalities.