The future of AI is social-centered


Founded by Professor Skyler Wang at McGill University, the Social-Centered AI (SCAI; pronounced ‘sky’) Lab is a research collective that supports interdisciplinary projects at the intersection of AI & Society.


Some of our best work happens around food.

Principles of SCAI

Social-centered AI calls for a shift away from human-centered individualism toward an approach that evaluates how AI systems reshape social groups, institutions, and the norms that bind them.

From user-machine interactions → context-based understanding

Recognizing that AI must be evaluated not just at the interface level, but within the social contexts, group dynamics, and institutional forces that shape how people actually use—and are affected by—these systems.

From solely taking care of individual users → taking care of society

Going beyond exclusively protecting individual users from poor performance, hallucination, bias, and toxicity to anticipating second-order impacts across education, knowledge work, creative arts, and other societal domains over time.

From net-benefit thinking → equity-driven analysis

Transcending simplistic “good or bad for society” framings to ask who benefits, who is burdened, and how AI impacts are distributed unevenly across communities.

Dive deeper into the full argument in this Big Data & Society article.

Research

Our lab focuses on the societal impact of AI systems, with particular attention to how they reshape relationships, knowledge, and expertise. We study emerging forms of human–AI intimacy, including the rise of AI companions and therapeutic bots and their implications for social and emotional well-being. In parallel, we analyze the production of expert AI data and the evolving role of domain expertise in training and evaluating models. Bringing these strands together, we are broadly interested in the epistemic cultures of machine learning—how knowledge, expertise, and evaluative judgments are produced and contested within contemporary AI systems.

Our work is supported by the Social Sciences and Humanities Research Council (SSHRC) of Canada, the Diverse Intelligence Institute, the Computational and Data Systems Institute, Meta, and Handshake AI.

Join Us

If you share our interest in the sociology of technology and AI, or in human-computer interaction, please reach out to ask about openings for undergraduate or graduate research assistant and affiliate positions. Collaboration requests are always welcome!