Panels of peers are needed to gauge AI’s trustworthiness — experts are not enough
In an article published in *Nature*, the authors argue that assessing the trustworthiness of artificial intelligence (AI) systems requires more than expert opinion: it calls for panels made up of diverse peer stakeholders. They note that AI technologies have advanced faster than the frameworks needed to evaluate their reliability and ethical implications. That gap has fuelled growing concern about risks such as bias, misinformation, and lack of transparency. Traditional evaluation methods, which rely largely on expert assessment, may not capture the multifaceted nature of AI systems or their impact on society.
The article stresses the importance of bringing a variety of perspectives into the evaluation process. Panels that include ethicists, sociologists, technologists, and community representatives can better assess the societal implications of AI technologies; a diverse panel, for example, is more likely to identify algorithmic biases that disproportionately affect marginalized communities. The authors point to existing initiatives in which such collaborative evaluations have produced more robust and equitable AI systems, underscoring the need for a collective approach to governance in this rapidly evolving field.
The call for peer panels also reflects a broader trend towards democratizing technology assessment, in which the input of non-experts is valued alongside that of specialists. This shift acknowledges that AI does not operate in a vacuum: its consequences ripple through daily life, from healthcare to criminal justice. By fostering inclusive discussion of AI's development and deployment, the article argues, society can better navigate the ethical dilemmas these technologies pose and ensure that they serve the public good. As the conversation around AI trustworthiness evolves, integrating diverse perspectives will be crucial to shaping a future in which technology is both innovative and accountable.
Nature, Published online: 18 November 2025; doi:10.1038/d41586-025-03783-1