Resources

Explore a wide range of valuable resources on GCED to deepen your understanding and enhance your research, advocacy, teaching, and learning.

© APCEIU

2 Results found

Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models
Year of publication: 2024
Author: Daniel Van Niekerk | Maria Pérez Ortiz | John Shawe-Taylor | Davor Orlic | Ivana Drobnjak | Jackie Kay | Noah Siegel | Katherine Evans | Nyalleng Moorosi | Tina Eliassi-Rad | Leonie Maria Tanczer | Wayne Holmes | Marc Peter Deisenroth | Isabel Straw | Maria Fasli | Rachel Adams | Nuria Oliver | Dunja Mladenić | Urvashi Aneja | Madeleine Janicky
Corporate author: UNESCO | International Research Centre on Artificial Intelligence (IRCAI)

This study explores biases in three significant large language models (LLMs): OpenAI’s GPT-2 and ChatGPT, along with Meta’s Llama 2, highlighting their role both in advanced decision-making systems and as user-facing conversational agents. Across multiple studies, the brief shows how biases emerge in LLM-generated text through gendered word associations, positive or negative regard for gendered subjects, and differences in the diversity of text generated across gender and culture. The research uncovers persistent social biases in these state-of-the-art language models despite ongoing efforts to mitigate such issues. The findings underscore the critical need for continuous research and policy intervention to address biases that are exacerbated as these technologies are integrated across diverse societal and cultural landscapes. The emphasis on GPT-2 and Llama 2 as open-source foundational models is particularly noteworthy: their widespread adoption underlines the urgent need for scalable, objective methods to assess and correct biases, ensuring fairness in AI systems globally.

“I Don’t Have a Gender, Consciousness, or Emotions. I’m Just a Machine Learning Model”
Year of publication: 2023
Corporate author: UNESCO | International Research Centre on Artificial Intelligence (IRCAI)

An introduction to the forthcoming report on gender bias in Artificial Intelligence, due out on March 8, 2024. As we stand on the precipice of a technological revolution driven by Artificial Intelligence (AI), it is imperative to ensure that this future is shaped equitably, representing all genders. With this essay we are excited to announce our forthcoming in-depth report on Gender and Artificial Intelligence, a partnership between IRCAI and UNESCO, set for release on March 8, 2024. As we prepare for this milestone, we invite experts, scholars, and all interested stakeholders to join us in our research.