Testing Two Tools for Multimodal Navigation
The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold the potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.
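The point-to-query mechanism described above is easy to picture in code. The sketch below is a minimal illustration, not the paper's implementation: pointing_query, poi_db, and the wedge parameters are all hypothetical, and the flat-Earth geometry merely shows how a GPS fix plus a compass heading can become a query over nearby points of interest.

    import math

    EARTH_RADIUS_M = 6371000.0  # mean Earth radius

    def pointing_query(lat, lon, heading_deg, poi_db,
                       max_dist_m=500.0, half_angle_deg=15.0):
        """Return points of interest inside the wedge the user points at.

        poi_db is a hypothetical list of (name, lat, lon) tuples; max_dist_m
        and half_angle_deg are illustrative defaults for the search wedge.
        Uses a flat-Earth approximation, adequate at city scales.
        """
        hits = []
        for name, plat, plon in poi_db:
            # Local north/east offsets from the user to the POI, in metres.
            north = math.radians(plat - lat) * EARTH_RADIUS_M
            east = (math.radians(plon - lon)
                    * math.cos(math.radians(lat)) * EARTH_RADIUS_M)
            dist = math.hypot(east, north)
            bearing = math.degrees(math.atan2(east, north)) % 360.0
            # Smallest angular difference between heading and POI bearing.
            off = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
            if dist <= max_dist_m and off <= half_angle_deg:
                hits.append((name, round(dist)))
        return sorted(hits, key=lambda hit: hit[1])

    # Example: pointing roughly north from somewhere in central Stockholm.
    pois = [("Cafe A", 59.3340, 18.0650), ("Museum B", 59.3290, 18.0690)]
    print(pointing_query(59.3300, 18.0650, heading_deg=0.0, poi_db=pois))

In a real application the same wedge test would typically be pushed into a geospatial index rather than a Python loop, but the GPS-plus-compass query construction is the same.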
Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants
The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).
Do That, There: An Interaction Technique for Addressing In-Air Gesture Systems
When users want to interact with an in-air gesture system, they must first address it. This involves finding where to gesture so that their actions can be sensed, and how to direct their input towards that system so that they do not also affect others or cause unwanted effects. This is an important problem [6] which lacks a practical solution. We present an interaction technique which uses multimodal feedback to help users address in-air gesture systems. The feedback tells them how (“do that”) and where (“there”) to gesture, using light, audio, and tactile displays. By doing that there, users can direct their input to the system they wish to interact with, in a place where their gestures can be sensed. We discuss the design of our technique and three experiments investigating its use, finding that users can “do that” well (93.2%–99.9%) while accurately (51 mm–80 mm) and quickly (3.7 s) finding “there”.
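As a rough sketch of how such feedback could be driven (an illustration under assumed parameters, not the system evaluated in the paper), the snippet below takes a hand position from a hypothetical tracker and scales light, audio, and tactile cue intensities by the hand's distance from the sensing zone; once the hand is “there”, it triggers the “do that” gesture demonstration.

    import math

    ZONE_CENTRE = (0.0, 0.0, 0.5)  # sensing-zone centre in the sensor frame (m)
    ZONE_RADIUS = 0.10             # hand counts as "there" within this radius
    FEEDBACK_RANGE = 0.50          # cues fade in over the last 0.5 m of approach

    def there_feedback(hand_pos):
        """Return (in_zone, intensity): intensity rises from 0 to 1 as the
        hand approaches the sensing zone, so each cue can be scaled from it."""
        d = math.dist(hand_pos, ZONE_CENTRE)
        if d <= ZONE_RADIUS:
            return True, 1.0
        return False, max(0.0, 1.0 - (d - ZONE_RADIUS) / FEEDBACK_RANGE)

    def render_cues(hand_pos):
        """Drive all three modalities from one shared intensity value.
        The prints stand in for real light, audio, and tactile actuators."""
        in_zone, k = there_feedback(hand_pos)
        print(f"light brightness: {k:.2f}")  # "there": brighter as hand nears
        print(f"audio volume:     {k:.2f}")  # non-speech audio ramps up in step
        print(f"tactor amplitude: {k:.2f}")  # wearable tactile display mirrors both
        if in_zone:
            print('cue "do that": demonstrate the expected gesture')

    render_cues((0.05, 0.0, 0.45))  # a hand position just inside the zone

Sharing one intensity value across all three displays keeps the modalities redundant, so a user who misses the light cue can still home in on the zone by sound or touch alone.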
The "Seen but Unnoticed" Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another (UIST 2021 Vision)
This UIST Vision argues that "touch" input and interaction remains in its infancy when viewed in the context of the seen but unnoticed vocabulary of natural human behaviors, activity, and environments that surround direct interaction with displays. Unlike status-quo touch interaction -- a shadowplay of fingers on a single screen -- I argue that our perspective of direct interaction should encompass the full rich context of individual use (whether via touch, sensors, or in combination with other modalities), as well as collaborative activity where people are engaged in local (co-located), remote (tele-present), and hybrid work. We can further view touch through the lens of the "Society of Devices," where each person's activities span many complementary, oft-distinct devices that offer the right task affordance (input modality, screen size, aspect ratio, or simply a distinct surface with dedicated purpose) at the right place and time. While many hints of this vision already exist (see references), I speculate that a comprehensive program of research to systematically inventory, sense, and design interactions around such human behaviors and activities -- one that fully embraces touch as a multi-modal, multi-sensor, multi-user, and multi-device construct -- could revolutionize both individual and collaborative interaction with technology.
Comment: 5 pages. Non-archival UIST Vision paper accepted and presented at the 34th Annual ACM Symposium on User Interface Software and Technology (UIST 2021) by Ken Hinckley. This is the definitive "published" version, as the Association for Computing Machinery (ACM) does not archive UIST Vision papers.