Abstract
The application of Large Language Models (LLMs) in academic research faces unique challenges of privacy and workflow integration. This paper introduces TAUCHI-GPT, a novel, open-source AI assistant whose evolution informs our analysis. We detail its two versions: a cloud-based V1 using GPT-4 and reflection cycles, and a local, privacy-preserving V2 with RAG architecture. Based on empirical findings from two user studies, we present a critical Human-System Integration (HSI) analysis of the security vulnerabilities and alignment challenges inherent in local LLM deployments. We examine how recent development trends—such as model distillation and reward-model learning—and the complexities of internal model mechanisms exacerbate risks like prompt injection, RAG data failures, and unfaithful explanations that impact user trust. Drawing from HCI principles and mechanistic interpretability insights, we propose and discuss a multi-layered mitigation strategy. This work contributes significantly to HSI and AI by presenting an evaluated system, a rigorous analysis of local deployment risks from a sociotechnical perspective, and actionable, stakeholder-specific guidelines for the secure and responsible utilization of LLMs in academia.
| Original language | English |
|---|---|
| Journal | Human-intelligent systems integration |
| DOI - permanent links | |
| Status | E-pub ahead of print - 8 Dec 2025 |
| OKM publication type | A1 Original article in a scientific journal |
Publication forum level
- Jufo level 1
Fingerprint
Dive into the research topics of 'Securing local LLMs for academic research: a human-system integration analysis and evolution of TAUCHI-GPT'. Together they form a unique fingerprint.