1,831 research outputs found
Investigating Performance and Usage of Input Methods for Soft Keyboard Hotkeys
Touch-based devices, despite their mainstream availability, do not support a unified and efficient command selection mechanism available on every platform and application. We advocate that hotkeys, conventionally used as a shortcut mechanism on desktop computers, could be generalized as a command selection mechanism for touch-based devices, even for keyboard-less applications. In this paper, we investigate the performance and usage of soft keyboard shortcuts, or hotkeys (abbreviated SoftCuts), through two studies comparing different input methods across sitting, standing, and walking conditions. Our results suggest that SoftCuts are not only appreciated by participants but also support rapid command selection with different devices and hand configurations. We also did not find evidence that walking deters their performance when using the Once input method.
Comment: 17+2 pages, published at MobileHCI 2020
Augmented Touch Interactions with Finger Contact Shape and Orientation
Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
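The orientation degree of freedom described above is commonly recovered from the shape of the contact region itself. As a minimal illustrative sketch (not the paper's implementation), the major-axis angle of a binary contact mask can be estimated from its second-order image moments; the function name and the toy mask below are assumptions:

```python
# Illustrative sketch: estimating finger-contact orientation from a binary
# contact mask via second-order image moments. Not the paper's code.
import numpy as np

def contact_orientation(mask: np.ndarray) -> float:
    """Return the major-axis angle (radians) of a binary contact mask."""
    ys, xs = np.nonzero(mask)               # pixel coordinates of the contact
    x, y = xs - xs.mean(), ys - ys.mean()   # center the region
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    # Principal-axis orientation of the equivalent ellipse
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 1:7] = True                       # an elongated, near-horizontal touch
print(np.degrees(contact_orientation(mask)))  # ~0 degrees
```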
Improving Multi-Touch Interactions Using Hands as Landmarks
Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that have been used for expert shortcuts in desktop systems (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark that could be used in touch interfaces, however, is people’s hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms – one version that allows bimanual operation for digital tables and another that uses single-handed serial operation for handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods, including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods) and that they support the development of spatial memory. Overall, this thesis demonstrates that people’s intimate knowledge of their hands can be the basis for fast interaction techniques that improve the performance and usability of multi-touch systems.
LipLearner: Customizable Silent Speech Interactions on Mobile Devices
A silent speech interface is a promising technology that enables private communication in natural language. However, previous approaches only support a small and inflexible vocabulary, which limits expressiveness. We leverage contrastive learning to learn efficient lipreading representations, enabling few-shot command customization with minimal user effort. Our model exhibits high robustness to different lighting, posture, and gesture conditions on an in-the-wild dataset. For 25-command classification, an F1-score of 0.8947 is achievable using only one shot, and performance can be further boosted by adaptively learning from more data. This generalizability allowed us to develop a mobile silent speech interface empowered with on-device fine-tuning and visual keyword spotting. A user study demonstrated that with LipLearner, users could define their own commands with high reliability, guaranteed by an online incremental learning scheme. Subjective feedback indicated that our system provides essential functionalities for customizable silent speech interactions with high usability and learnability.
Comment: Conditionally accepted to the ACM CHI Conference on Human Factors in Computing Systems 2023 (CHI '23)
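As a rough sketch of how few-shot command customization can work on top of contrastive lipreading embeddings (the general recipe the abstract describes, not the paper's actual code): enroll each command by averaging the embeddings of its example clips, then classify new clips by cosine similarity to the nearest prototype. The `encode` stub and all names below are assumptions:

```python
# Hypothetical sketch of few-shot command matching over a pretrained
# lip-reading encoder; `encode` and all names are illustrative.
import numpy as np

def encode(clip: np.ndarray) -> np.ndarray:
    """Stand-in for a contrastively trained lip-reading encoder."""
    rng = np.random.default_rng(int(abs(clip.sum())) % (2**32))
    return rng.standard_normal(128)

def enroll(shots: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Few-shot enrollment: average and L2-normalize each command's embeddings."""
    protos = {}
    for command, clips in shots.items():
        e = np.mean([encode(c) for c in clips], axis=0)
        protos[command] = e / np.linalg.norm(e)
    return protos

def classify(clip: np.ndarray, protos: dict[str, np.ndarray]) -> str:
    """Nearest-prototype classification by cosine similarity."""
    q = encode(clip)
    q = q / np.linalg.norm(q)
    return max(protos, key=lambda cmd: float(q @ protos[cmd]))
```

An online incremental scheme like the one the paper mentions could then append each confirmed clip's embedding to the matched command's prototype average.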
EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking
Currently, smart glasses support touch sensing only on a front-mounted touchpad; touches on the top, front, and bottom sides of a glasses-mounted touchpad have not yet been explored. We built a custom touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces. To do so, we used capacitive touch sensing (MPR121 chips) with an electrode size of ~4.5 mm square, which is typical of modern touchscreens. Our hardware system consists of a total of 48 separate touch sensors. We investigated tapping interaction with it in both sitting and walking conditions, using single-finger sequential tapping and pair-finger simultaneous tapping. We divided each side into three equal target areas, yielding a total of 36 combinations. Our quantitative results showed that pair-finger simultaneous taps were faster and less error-prone than single-finger sequential taps in the walking condition, whereas single-finger sequential taps were slower but less error-prone than pair-finger simultaneous taps in the sitting condition. Single-finger sequential taps were also slower but much less error-prone when sitting compared to walking. Interestingly, pair-finger simultaneous taps performed similarly in terms of both error rate and completion time across the sitting and walking conditions. The mental, physical, performance, and effort components of workload were not affected by tapping type or body pose. For temporal demand, mean temporal (time-pressure) workload was higher for single-finger sequential tapping than for pair-finger simultaneous tapping, but body pose did not affect temporal workload for either tapping type. For frustration, participants experienced a higher mean frustration workload with single-finger sequential tapping than with pair-finger simultaneous tapping, and walking produced a higher mean frustration workload than sitting. The subjective measure of overall workload during the performance study showed no significant difference for either independent variable: body pose (sitting and walking) and tapping type (single-finger sequential and pair-finger simultaneous).
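The abstract names MPR121 capacitive-touch chips; each MPR121 exposes 12 electrodes, so 48 sensors imply four chips sharing one I2C bus. A minimal host-side polling loop using the Adafruit CircuitPython driver might look like the following (a generic sketch, not the authors' firmware):

```python
# Hypothetical sketch: polling four MPR121 capacitive-touch chips (12
# electrodes each = 48 electrodes) with the Adafruit CircuitPython driver.
import time
import board
import busio
import adafruit_mpr121

i2c = busio.I2C(board.SCL, board.SDA)
# The MPR121 supports four I2C addresses, one chip per address
chips = [adafruit_mpr121.MPR121(i2c, address=a) for a in (0x5A, 0x5B, 0x5C, 0x5D)]

while True:
    touched = [
        chip_idx * 12 + pad
        for chip_idx, chip in enumerate(chips)
        for pad in range(12)
        if chip[pad].value                 # True while the electrode is touched
    ]
    if touched:
        print("touched electrodes:", touched)
    time.sleep(0.02)                       # ~50 Hz polling
```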
Press-n-Paste: Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality
Copy-and-paste operations are among the most popular features on computing devices such as desktop computers, smartphones, and tablets. However, copy-and-paste operations are not sufficiently addressed on Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, namely Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch-sensitive and pressure-sensitive surface, both multi-step solutions capture the target text through indirect manipulation and subsequently enable the copy-and-paste operations. Based on these system solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with 93.21% and 98.15% accuracy rates respectively, which is as good as commercial solutions using direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctive feature of a miniaturized interaction area within 12.65 mm × 14.48 mm. PnP not only proves the feasibility of copy-and-paste operations with the flexibility of various granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets as well as input design on smart wearables.
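A plausible reading of Granularity Scrolling is that pressure on the button selects how coarse a text unit the caret captures. The sketch below illustrates that idea only; the thresholds, regexes, and function names are assumptions, not the paper's actual mapping:

```python
# Hypothetical sketch: mapping normalized button pressure to a text-selection
# granularity, then snapping the caret to the enclosing unit.
import re

GRANULARITIES = ("character", "word", "sentence", "paragraph")

def granularity_for_pressure(pressure: float) -> str:
    """Map a normalized pressure in [0, 1] to a granularity level."""
    thresholds = (0.25, 0.5, 0.75)         # illustrative cut points
    level = sum(pressure >= t for t in thresholds)
    return GRANULARITIES[level]

def select_unit(text: str, caret: int, granularity: str) -> tuple[int, int]:
    """Return (start, end) of the unit of the given granularity at the caret."""
    if granularity == "character":
        return caret, min(caret + 1, len(text))
    pattern = {"word": r"\S+",
               "sentence": r"[^.!?]+[.!?]?",
               "paragraph": r"[^\n]+"}[granularity]
    for m in re.finditer(pattern, text):
        if m.start() <= caret < m.end():
            return m.start(), m.end()
    return caret, caret                    # caret on whitespace: empty selection

text = "Copy this sentence. Then paste it."
print(select_unit(text, 5, granularity_for_pressure(0.6)))  # sentence bounds
```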
1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture
Existing text selection techniques on touchscreens focus on improving control for moving the caret. Coarse-grained text selection at the word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements caret-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection from defining a range by locating the first and last words towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android.
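The ChunkTouch behavior can be pictured as mapping vertical slide distance to a number of precomputed semantic chunks. The following sketch stubs out the NLP chunking and uses illustrative names throughout:

```python
# Hypothetical sketch of slide-driven chunk expansion; the chunking itself
# (NLP) is assumed to have been done upstream, and all names are illustrative.
def expand_selection(chunks: list[str], anchor: int, slide_steps: int) -> str:
    """Select 1 + slide_steps chunks starting at the anchor chunk.

    `chunks` is the text pre-segmented into semantic units, `anchor` is the
    chunk under the initial touch; larger slide distances expand the
    selection, and negative values clamp back to a single chunk.
    """
    n = max(0, slide_steps)
    end = min(len(chunks), anchor + 1 + n)
    return " ".join(chunks[anchor:end])

chunks = ["The quick brown fox", "jumps over", "the lazy dog", "near the river."]
print(expand_selection(chunks, anchor=1, slide_steps=0))  # "jumps over"
print(expand_selection(chunks, anchor=1, slide_steps=2))  # grows by two chunks
```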