A Fitts' Law Evaluation of Two Types of Face-Tracking Inputs on Electronic Devices
Research comparing face-tracking and eye-tracking input on mobile devices. This study evaluates the usability and efficiency of position-based and orientation-based head-tracking input methods for hands-free device control.
Overview
This study evaluates the usability and efficiency of two head-tracking input methods—position-based and orientation-based—for controlling electronic devices in a hands-free context. With the growing demand for intuitive, non-contact interactions, such as navigating smart devices while cooking, the research investigates how face-tracking can enable pointer control without external devices.

Inspired by Fitts' Law, the experiment focused on one-dimensional target-acquisition tasks, comparing performance metrics such as throughput, movement time, and error rate.
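For context, Fitts' Law predicts movement time from an index of difficulty (ID) that grows with target distance and shrinks with target width. The sketch below uses the common Shannon formulation; the distances, widths, and regression coefficients are illustrative placeholders, not values from the study.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance: float, width: float,
                            a: float, b: float) -> float:
    """Fitts' Law: MT = a + b * ID, with a (s) and b (s/bit) fitted per condition."""
    return a + b * index_of_difficulty(distance, width)

# Illustrative example: a 480 px movement to a 60 px wide target
print(index_of_difficulty(480, 60))  # ~3.17 bits
```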
Research Methodology
Participants used a laptop with a built-in webcam to perform pointing tasks under two conditions (a mapping sketch follows the list):
- Head Position Tracking - Movement based on head position in space
- Head Orientation Tracking - Movement based on head rotation/angle
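
As a reference for the two conditions above, the sketch below shows one plausible way each head signal could drive a horizontal pointer in a one-dimensional task. The mirroring, gain, and yaw range are assumptions for illustration, not the study's actual implementation.

```python
def position_to_x(face_center_x: float, frame_width: int, screen_width: int) -> float:
    """Position condition: the pointer follows where the face sits in the camera frame."""
    # Mirror the camera image so that moving right moves the pointer right.
    normalized = 1.0 - (face_center_x / frame_width)
    return normalized * screen_width

def orientation_to_x(yaw_deg: float, screen_width: int, max_yaw_deg: float = 30.0) -> float:
    """Orientation condition: the pointer follows estimated head yaw (rotation)."""
    normalized = (yaw_deg + max_yaw_deg) / (2.0 * max_yaw_deg)
    normalized = min(max(normalized, 0.0), 1.0)  # clamp to the screen edges
    return normalized * screen_width
```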

Experimental Design
The study employed a controlled experimental setup to measure the following metrics (a computation sketch follows the list):
- Throughput - Overall efficiency of the input method
- Movement Time - Time taken to reach targets
- Error Rate - Frequency of missed or incorrect selections
- User Comfort - Subjective feedback on ease of use
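
For the throughput and error-rate metrics listed above, a common approach is an ISO 9241-9 style calculation over per-trial logs. The sketch below assumes a hypothetical trial record of (movement time, selection coordinate, target centre, target width) and uses the effective-width convention; it is an illustration, not the study's exact analysis script.

```python
import math
import statistics

def summarize_condition(trials, distance):
    """trials: list of (movement_time_s, selection_x, target_center_x, target_width)."""
    times = [mt for mt, *_ in trials]
    # Effective width from the spread of selection endpoints (ISO 9241-9 convention).
    deviations = [sx - tx for _, sx, tx, _ in trials]
    effective_width = 4.133 * statistics.stdev(deviations)
    effective_id = math.log2(distance / effective_width + 1)
    mean_mt = statistics.mean(times)
    errors = sum(1 for _, sx, tx, w in trials if abs(sx - tx) > w / 2)
    return {
        "throughput_bps": effective_id / mean_mt,
        "mean_movement_time_s": mean_mt,
        "error_rate_pct": 100.0 * errors / len(trials),
    }
```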

Key Findings
Position-Based Input Superiority
Results indicate that position-based input yielded:
- ✅ Higher throughput
- ✅ Lower movement time
- ✅ Better accuracy and efficiency
- ✅ More intuitive user experience

Challenges Identified
However, several challenges impacted user performance:
- Tracking Reliability - Variations in tracking accuracy
- Indoor Lighting Variability - Environmental factors affecting detection
- User Fatigue - Extended use causing discomfort
- Calibration Requirements - Need for individual user calibration

Technical Implementation
Technologies Used
- Computer Vision: OpenCV for face detection and tracking (see the detection sketch after this list)
- Machine Learning: Face landmark detection algorithms
- Web Technologies: JavaScript for real-time processing
- Data Analysis: Python for statistical analysis of results
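
Because OpenCV is named above for face detection, the minimal sketch below shows how webcam frames can be scanned with OpenCV's bundled Haar cascade. The study's actual detector, landmark model, and JavaScript pipeline may differ; this is only a simplified, assumed stand-in.

```python
import cv2

# Frontal-face Haar cascade shipped with OpenCV; the study's detector may differ.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # built-in webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # The face centre (x + w/2, y + h/2) is what a position-based mapping would use.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```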

Tracking Algorithms
The study implemented custom algorithms for:
- Real-time face detection
- Position and orientation calculation
- Smoothing and filtering of tracking data (see the sketch after this list)
- Adaptive calibration based on user behavior
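The summary does not specify which smoothing filter was used; the sketch below shows a simple exponential moving average as one common way to stabilise noisy tracking coordinates before they drive the pointer. The alpha value is an illustrative assumption.

```python
class ExponentialSmoother:
    """Exponential moving average for noisy tracking coordinates.
    Alpha near 1.0 reacts quickly but keeps jitter; lower values are smoother but lag."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self._value = None

    def update(self, measurement: float) -> float:
        if self._value is None:
            self._value = measurement
        else:
            self._value = self.alpha * measurement + (1 - self.alpha) * self._value
        return self._value
```
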
Applications and Impact
Practical Applications
This research contributes to the design of accessible and practical hands-free interfaces for:
- Smart Home Devices - Control without touching surfaces
- Cooking Assistance - Navigate recipes while hands are busy
- Accessibility Tools - Support for users with limited mobility
- Healthcare Settings - Hygienic, contactless interactions
- Industrial Applications - Hands-free operation in manufacturing

Design Implications
The findings provide valuable insights for:
- Interface designers creating hands-free systems
- Developers implementing face-tracking features
- Researchers studying alternative input methods
- Product managers planning accessible technology
Future Research Directions
Potential Improvements
- Enhanced Tracking Algorithms - More robust detection in various lighting conditions
- Hybrid Approaches - Combining position and orientation for optimal performance
- Adaptive Systems - Learning user preferences and adjusting accordingly
- Multi-Modal Input - Integrating voice and gesture controls

Extended Studies
Future research could explore:
- Long-term usability and user adaptation
- Performance in real-world scenarios
- Integration with other assistive technologies
- Cross-cultural differences in interaction preferences
Conclusion
This study demonstrates that position-based head tracking offers superior performance compared to orientation-based methods for hands-free device control. While challenges remain in tracking reliability and environmental sensitivity, the findings pave the way for enhanced usability in diverse contexts.

The research contributes valuable insights to the field of Human-Computer Interaction, particularly in designing accessible, intuitive, and practical hands-free interfaces for smart devices.
Research Team
- Temirlan Dzhoroev - Software Engineer & Researcher
- Joe (B.H.) Kim - Software Engineer & Researcher
Publication
This research was presented at HCI Korea 2022:
Dzhoroev, T., Kim, B.H., & Lee, H.S. (2022). Comparison of Face Tracking and Eye Tracking for Scrolling a Web Browser on Mobile Devices. HCI Korea, pp. 227–231.