📝 New Research Highlights Ideological Diversity among Large Language Models 📝
We have conducted a novel study exploring the diversity of ideological perspectives represented by large language models (LLMs), like the popular ChatGPT. These models, which power AI-driven tools such as chatbots and writing assistants, are increasingly influential in how people access and interpret information.
The study involved evaluating the responses of 17 LLMs in both English and Chinese, focusing on how they describe various prominent and controversial political figures. By analysing the moral and ethical assessments embedded in these descriptions, we discovered significant differences depending on the language used and the region where the models were developed.
“This research highlights that LLMs reflect diverse ideological viewpoints, shaped by the contexts in which they are built and used,” says Dr. Maarten Buyl, first author of the study. “Rather than viewing models as entirely neutral or fixed in their perspectives, it’s important to recognize the variability that arises from different design choices, training data, and use cases.”
One key finding is that LLMs from Western countries often emphasize values such as freedom, equality, and human rights, while those developed in non-Western regions may prioritize other values, such as economic stability or centralized governance. The language in which the models are prompted also influences their responses: LLMs are more favourable towards political figures who support Chinese values and policies when prompted in Chinese.
“Our research calls for reflection on which LLMs we use, how, and for what,” says Prof Tijl De Bie, who led the study. “Some of our findings are actually not that surprising for people who understand how LLMs are built. But given the growing impact of LLMs, it is important to be aware of this ideological diversity. Hopefully our study can contribute to this awareness.”
The research has important implications for the development and regulation of LLMs. As these models become more integrated into sensitive areas such as politics, law, and journalism, understanding the diversity of perspectives they represent will be key to ensuring their responsible use.
The full study, which has not yet been peer reviewed, is available for public access as a pre-print on arxiv.org: https://lnkd.in/e8gDk2vm.
For media inquiries, contact Maarten Buyl (maarten.buyl@ugent.be) or Tijl De Bie (tijl.debie@ugent.be).
Contributors: Maarten Buyl, Alexander Rogiers, Sander Noels, Iris Dominguez-Catena¹, Edith Heiter, Raphaël Romero, Iman Johary, Alexandru Cristian Mara, Jefrey Lijffijt, Tijl De Bie.
¹Public University of Navarre, Spain – All other authors from Ghent University – IDLab (UGent - UAntwerpen - imec) – AIDA-UGent.