Securing local LLMs for academic research: a human-system integration analysis and evolution of TAUCHI-GPT

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

The application of Large Language Models (LLMs) in academic research faces distinctive challenges of privacy and workflow integration. This paper introduces TAUCHI-GPT, a novel, open-source AI assistant whose evolution informs our analysis. We detail its two versions: a cloud-based V1 using GPT-4 and reflection cycles, and a local, privacy-preserving V2 built on a Retrieval-Augmented Generation (RAG) architecture. Based on empirical findings from two user studies, we present a critical Human-System Integration (HSI) analysis of the security vulnerabilities and alignment challenges inherent in local LLM deployments. We examine how recent development trends, such as model distillation and reward-model learning, together with the complexities of internal model mechanisms, exacerbate risks including prompt injection, RAG data failures, and unfaithful explanations that undermine user trust. Drawing on HCI principles and mechanistic interpretability insights, we propose and discuss a multi-layered mitigation strategy. This work contributes to HSI and AI research by presenting an evaluated system, a rigorous analysis of local deployment risks from a sociotechnical perspective, and actionable, stakeholder-specific guidelines for the secure and responsible use of LLMs in academia.
Original language: English
Journal: Human-intelligent systems integration
DOIs
Publication status: E-pub ahead of print - 8 Dec 2025
Publication type: A1 Journal article, refereed

Publication forum classification

  • Publication forum level 1
