How far are we from quantifying visual attention in mobile HCI?
With an ever-increasing number of mobile devices competing for our attention,
quantifying when, how often, or for how long users visually attend to their
devices has emerged as a core challenge in mobile human-computer interaction.
Encouraged by recent advances in automatic eye contact detection using machine
learning and device-integrated cameras, we provide a fundamental investigation
into the feasibility of quantifying visual attention during everyday mobile
interactions. We identify core challenges and sources of errors associated with
sensing attention on mobile devices in the wild, including the impact of face
and eye visibility, the importance of robust head pose estimation, and the need
for accurate gaze estimation. Based on this analysis, we propose future
research directions and discuss how eye contact detection represents the
foundation for exciting new applications towards next-generation pervasive
attentive user interfaces.
Comment: 7 pages, 4 figures
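As a concrete illustration of the head-pose component discussed above, the
Python sketch below classifies whether a user's head is oriented towards the
device from six 2D facial landmarks. It is a minimal approximation, not the
paper's method: the function name is_attending, the generic 3D model points,
the approximate camera intrinsics, and the yaw/pitch thresholds are all
assumptions, and a real system would add appearance-based gaze estimation on
top of this.

import numpy as np
import cv2

# Generic 3D face model points (nose tip, chin, outer eye corners, mouth
# corners) in millimetres; a rough common approximation, not a calibrated model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),            # nose tip
    (0.0, -330.0, -65.0),       # chin
    (-225.0, 170.0, -135.0),    # left eye outer corner
    (225.0, 170.0, -135.0),     # right eye outer corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0, -150.0, -125.0),    # right mouth corner
], dtype=np.float64)

def is_attending(image_points, frame_size, yaw_thresh=20.0, pitch_thresh=20.0):
    """Classify 'head oriented towards the device' from 2D landmarks.

    image_points: (6, 2) array matching MODEL_POINTS, taken from any external
                  facial-landmark detector.
    frame_size:   (width, height) of the camera frame in pixels.
    """
    w, h = frame_size
    # Approximate intrinsics: focal length ~ frame width, principal point at centre.
    camera_matrix = np.array([[w, 0.0, w / 2.0],
                              [0.0, w, h / 2.0],
                              [0.0, 0.0, 1.0]])
    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS,
                                   np.asarray(image_points, dtype=np.float64),
                                   camera_matrix, None)
    if not ok:
        return False  # pose estimation failed, e.g. the face was occluded
    rot, _ = cv2.Rodrigues(rvec)
    # Yaw/pitch in degrees from the rotation matrix (XYZ Euler convention).
    yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[2, 1], rot[2, 2])))
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    return abs(yaw) < yaw_thresh and abs(pitch) < pitch_thresh

The naive thresholding at the end is exactly where errors from limited face
and eye visibility and inaccurate gaze estimation concentrate, which is the
error analysis the paper undertakes.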
I Cannot See Students Focusing on My Presentation; Are They Following Me? Continuous Monitoring of Student Engagement through "Stungage"
Monitoring students' engagement and understanding their learning pace in a
virtual classroom becomes challenging in the absence of direct eye contact
between the students and the instructor. Continuous monitoring of eye gaze and
gaze gestures may produce inaccurate outcomes when students are allowed to
engage in productive multitasking, such as taking notes or browsing relevant content.
This paper proposes Stungage, a software wrapper over existing online meeting
platforms that monitors students' engagement in real time by combining the
facial video feeds from the students and the instructor with a local on-device
analysis of the presentation content. The crux of Stungage is to identify a few
opportunistic moments when the students should visually focus on the
presentation content if they can follow the lecture. We investigate these
instances and analyze the students' visual, contextual, and cognitive presence
to assess their engagement during virtual classes, without directly sharing
the participants' video captures or screens over the web.
Our system achieves an overall F2-score of 0.88 for detecting student
engagement. In addition, a usability study with 92 respondents yielded an
average SUS score of 74.18.
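Both reported metrics are standard and easy to reproduce. The sketch below
gives the F-beta formula (beta = 2 weights recall twice as heavily as
precision, fitting an engagement detector where misses are costlier than
false alarms) and the standard System Usability Scale scoring rule; the
precision/recall pair in the example is made up to land near the reported
0.88, not the paper's data.

def fbeta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta = 2 weights recall twice as heavily as precision."""
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

def sus_score(responses: list[int]) -> float:
    """Standard System Usability Scale score for one respondent.

    Ten items rated 1-5: odd-numbered (positively worded) items contribute
    (rating - 1), even-numbered items contribute (5 - rating); the sum is
    multiplied by 2.5 to map onto a 0-100 scale.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i = 0 is item 1
                for i, r in enumerate(responses))
    return 2.5 * total

# Illustrative only: precision 0.90 and recall 0.875 give F2 of about 0.88.
print(round(fbeta(0.90, 0.875), 2))   # 0.88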