Vision-Based Interfaces for Character-Based Text Entry: Comparison of Errors and Error Correction Properties of Eye Typing and Head Typing

Yulia Gizatdinova, Oleg Špakov, Outi Tuisku, Matthew Turk, Veikko Surakka

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)

Abstract

We examined two vision-based interfaces (VBIs) for performance and user experience during character-based text entry using an on-screen virtual keyboard. The head-based VBI uses head motion to steer the computer pointer and mouth-opening gestures to select keyboard keys. The gaze-based VBI utilizes gaze for pointing at keys and an adjustable dwell time for key selection. The results showed that after three sessions (45 min of typing in total), able-bodied novice participants (N = 34) typed significantly slower yet produced significantly more accurate text with the head-based VBI than with the gaze-based VBI. The analysis of errors and corrective actions relative to the spatial layout of the keyboard revealed a difference in the error correction behavior of the participants when typing with the two interfaces. We estimated the error correction cost for both interfaces and suggested implications for the future use and improvement of VBIs for hands-free text entry.
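To make the dwell-based selection mechanism named in the abstract concrete, the sketch below shows one common way such a scheme can be implemented: a key is selected when the gaze rests on it continuously for at least a dwell threshold. The sample format, the 0.6 s threshold, and the key names are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of dwell-based key selection for a gaze-driven virtual
# keyboard. All parameters and data structures here are assumptions made
# for illustration; they do not come from the study.
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional, Tuple


@dataclass
class GazeSample:
    timestamp_s: float        # sample time in seconds
    key: Optional[str]        # keyboard key currently under the gaze, or None


def dwell_select(samples: Iterable[GazeSample],
                 dwell_s: float = 0.6) -> Iterator[Tuple[float, str]]:
    """Yield (timestamp, key) whenever gaze rests on one key for >= dwell_s."""
    current_key: Optional[str] = None
    dwell_start = 0.0
    selected = False
    for s in samples:
        if s.key != current_key:
            # Gaze moved to a new key (or off the keyboard): restart the timer.
            current_key, dwell_start, selected = s.key, s.timestamp_s, False
        elif (current_key is not None and not selected
              and s.timestamp_s - dwell_start >= dwell_s):
            # Fire exactly one selection per continuous fixation on a key.
            selected = True
            yield s.timestamp_s, current_key


if __name__ == "__main__":
    stream = [GazeSample(t / 10, "H") for t in range(8)] + \
             [GazeSample(0.8 + t / 10, "I") for t in range(8)]
    for ts, key in dwell_select(stream):
        print(f"selected '{key}' at {ts:.1f} s")
```

An adjustable dwell, as described in the abstract, would simply expose `dwell_s` as a user-tunable setting; shorter dwells speed up entry at the risk of more unintended selections.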

Original language: English
Article number: 8855764
Journal: Advances in Human-Computer Interaction
Volume: 2023
DOIs
Publication status: Published - 2023
Publication type: A1 Journal article-refereed

Publication forum classification

  • Publication forum level 1

ASJC Scopus subject areas

  • Human-Computer Interaction
