About Us
Who we are and what we work on
Critical Social AI (CritSocAI) is a group of socially-conscious researchers working in diverse fields, from the social sciences and humanities to computer science and engineering. Our aim is to constructively challenge, and animate, the role AI plays in research with and for humans.
As AI-enabled tools rapidly become integral to daily personal and professional routines, it is imperative that researchers consider their profound effects on our work practices and outcomes. These tools, which vary widely in functionality, accuracy, and user-friendliness, are predominantly designed by and for those in highly technical fields. Consequently, scholars in the social sciences and humanities are often excluded from critical conversations about how, and why, AI tools should be designed and used. The rise of generative AI is already fundamentally altering both the processes and the products of socially-focused scholarship, affecting everything from research methods and final outputs to how we ensure that research is socially beneficial. This presents unique challenges that responsible researchers must navigate.
Critical Social AI is positioned neither as techno-optimist nor techno-pessimist. Instead, we adopt a critically engaged stance. We are interested in expanding our working knowledge of the potential pitfalls and benefits of AI. We believe that these tools should complement rather than replace human research labour, and we seek to promote a cognizant, self-reflective use of AI rather than the passive consumption of its outputs.
The goals of CritSocAI are threefold:
- Advocating for the maintenance of high-quality research standards as AI technologies continue to develop, and reconsidering how AI can support "slow" and "careful" scholarship.
- Publishing works in academic outlets on the use of AI in spaces including (but not limited to) higher education, the research process (including responsible use guidelines), and society more broadly.
- Illuminating how AI development and deployment unevenly affect different communities and epistemic traditions, and fostering practices that reveal how technologies can (re)produce various forms of inequality, both old and new.