
Human Rights Assessment of Generative AI

Digital technologies have touched every facet of society, from personal communication to public discourse, and are entering a new phase with the advent of generative artificial intelligence. Through large language models (LLMs), generative AI can produce text, images, and other media at unprecedented speed and scale. Despite this growing influence, however, there is an absence of public-facing research and analysis of the actual and potential impacts of these models on human rights. HRC is currently conducting a human rights assessment of LLMs, in two parts. The first is a human rights impact assessment of the use of LLMs by educators, legal professionals, and journalists, which includes interviews with experts and practitioners around the world, a review of relevant literature, and an assessment of the human rights risks and opportunities of using LLMs in these three fields. The second is a model evaluation of ChatGPT, Gemini, Claude, and LLaMA designed to surface the risks investigative journalists face when using LLMs in their work. We will produce recommendations for companies developing LLMs on how to better minimize human rights risks and maximize human rights opportunities.


Above Left: (L-R) Vyoma Raman, Betsy Popken, Vanja Skoric, and Marlena Wisnia at a discussion between the Human Rights Center and the European Center for Not-for-Profit Law on human rights stakeholder engagement in AI and large language models, held November 9, 2023, at Berkeley Law.