Abstract
This study examines knowledge workers’ perspectives on the privacy implications and surveillant functionalities of group-based communicative AI in the context of work. The objective is to connect the ongoing examination of communicative AI in the workplace to research on privacy and employee surveillance, and to use the framework of privacy as contextual integrity as a theoretical lens through which user perspectives on communicative AI tools are analyzed. Building on 33 qualitative interviews with Finnish knowledge workers, who were presented with scenarios of communicative AI functioning as a ‘team member’ in a work-related chat group, we find that knowledge workers recognize several privacy-related risks in using group-based communicative AI at work and commonly draw on surveillant reasoning and logics to make sense of the suggested technology and its privacy implications. We identify three distinct approaches that knowledge workers took toward privacy in this setting: the detached, the compartmentalized, and the affective. Our findings suggest a wider need to consider communicative AI’s agency and its role as an actor contributing to growing privacy concerns. We highlight how, owing to its human-like nature, data-reliant functionalities, and potential capabilities when operating in a group setting, the technology elicits negative affective responses and novel concerns related to privacy and employee surveillance. Specifically, privacy concerns relating to communicative AI extend beyond mere data-protection issues to include how the technology is used by management, how the technology itself interacts within the organization, and how it will develop in the future.
| Original language | English |
|---|---|
| Journal | Convergence |
| Publication status | E-pub ahead of print - 23 Dec 2025 |
| Publication type | A1 Journal article-refereed |
Publication forum classification
- Publication forum level 2