DAWARKA, Viraj, Doargajudhur, Melina and Dutot, Vincent (2025) AI in Project Teams: How Trust Calibration Reconfigures Team’s Collaboration and Performance. International Journal of Managing Projects in Business. ISSN 1753-8378 (In Press)
IJMPB-07-2025-0285.R2.pdf - Author's Accepted Version
Restricted to Repository staff only until 25 May 2026.
Available under License Type Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).
Abstract
As artificial intelligence (AI) becomes increasingly embedded in project-based work, trust calibration (ensuring that trust in AI systems is neither excessive nor insufficient) emerges as a key factor for effective collaboration. This study explores how project professionals calibrate trust in AI and how this process influences team collaboration and performance in technology-mediated project environments.
Guided by socio-technical systems theory (STS) complemented by Adaptive Structuration Theory (AST), the study draws on 40 semi-structured interviews with project professionals across diverse UK industries. Thematic analysis is used to explore participants’ lived experiences of trust calibration, collaboration mechanisms, and perceived team performance in AI-supported settings.
The results indicate that trust in AI is situational, socially distributed, and shaped through ongoing boundary work between human and machine inputs. Enablers such as transparency, role clarity, user experience, cultural norms, and system feedback shape calibration processes. These processes, in turn, influence collaboration (e.g., delegation of oversight, erosion of informal communication) and performance (e.g., metric-driven evaluation, strategic augmentation of human expertise).
This study contributes to project management and AI adoption research by conceptualising trust calibration as a socio-technical process embedded in team routines, rather than as an individual attitude. It offers an initial conceptual model and a revised conceptual model that link enablers, practices, and outcomes of trust calibration, demonstrating how trust mediates the relationship between AI integration, collaboration, and performance. Beyond applying existing frameworks, this research extends STS and AST by developing new theoretical insights into trust calibration as a mechanism linking AI design, collaboration dynamics, and project performance. The findings provide practical guidance for designing trust-aware, human-centred AI practices in project environments.
| Field | Value |
|---|---|
| Item Type | Article |
| Faculty | School of Digital, Technologies and Arts > Computer Science, AI and Robotics |
| Depositing User | Viraj DAWARKA |
| Date Deposited | 07 Jan 2026 10:33 |
| Last Modified | 07 Jan 2026 10:33 |
| URI | https://eprints.staffs.ac.uk/id/eprint/9468 |