170 research outputs found
An innovative search interface for gesture dictionary
We live in a multicultural world. We need to learn how to communicate with each other, sometimes even without words, using only gestures. To help people communicate better in this multicultural age, the German company Fragenstellerin developed a gesture dictionary application for the iOS platform. To reach a larger population of users, I designed an innovative search interface for the gesture dictionary on the Android platform. I applied a user-centered design method to the common industrial task of porting an application from one platform to another. I analyzed the user interface of the iOS Gestunary solution, collected users' reflections, and researched similar products and gesture coding schemes. I performed three development and testing iterations, including co-design, user-based tests, and SUS tests. I also conducted gesture illustration research, which showed a clear preference for color photos over drawings and other illustration options. An additional study demonstrated that it is feasible to implement automatic gesture recognition for the Gestunary application. As the main result, I developed an innovative search interface for the Gestunary application on the Android platform.
Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning
Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational autoencoder (VAE).
Comment: 17 pages, 8 figures. Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECML PKDD 2019).
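A minimal sketch of the state-representation step described above, projecting fixed-length 2D finger trajectories into a VAE latent space. This is an illustrative PyTorch-style reduction; the network sizes, latent dimension, and training details are my assumptions, not taken from the paper.

```python
# Illustrative sketch: a VAE that embeds fixed-length 2D finger trajectories
# into a latent space from which a "human" agent could draw gesture-like
# actions. All dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryVAE(nn.Module):
    def __init__(self, traj_len=64, latent_dim=8):
        super().__init__()
        in_dim = traj_len * 2                              # (x, y) per sample point
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)               # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)           # log-variance of q(z|x)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, traj):                               # traj: (batch, traj_len * 2)
        h = self.encoder(traj)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(recon, traj, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, traj, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Example: embed a batch of 16 trajectories of 64 (x, y) samples each.
recon, mu, logvar = TrajectoryVAE()(torch.randn(16, 64 * 2))
```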
Do (and say) as I say: Linguistic adaptation in human-computer dialogs
© Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human-computer dialogs, based on empirical data collected in a simulated human-computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human-computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human-computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.
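As a rough illustration (my own sketch, not the article's methodology), lexical alignment of the kind described above can be tracked as the proportion of each turn's vocabulary that echoes the dialog partner's recent turns:

```python
# Rough illustration of tracking lexical alignment turn by turn: the score for
# each turn is the share of its words that also occur in the partner's recent
# turns. Whitespace tokenization and the window size are simplifying assumptions.
from collections import deque

def tokens(utterance):
    return set(utterance.lower().split())

def alignment_trace(dialog, window=3):
    """dialog: list of (speaker, utterance) pairs, speaker in {'user', 'system'}."""
    recent = {"user": deque(maxlen=window), "system": deque(maxlen=window)}
    trace = []
    for speaker, utterance in dialog:
        partner = "system" if speaker == "user" else "user"
        current = tokens(utterance)
        partner_vocab = set().union(*recent[partner]) if recent[partner] else set()
        overlap = len(current & partner_vocab) / len(current) if current else 0.0
        trace.append(overlap)               # share of words echoed from the partner
        recent[speaker].append(current)
    return trace

# Example: proportion of echoed vocabulary per turn.
print(alignment_trace([("user", "go to the red box"),
                       ("system", "moving to the red box"),
                       ("user", "now pick up the red box")]))
```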
Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices
A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities on mobile devices as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts.
We explore three particular areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction and motion gestures.
For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override of the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown by decreased task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks.
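A toy sketch of the semi-automatic zooming idea: the zoom level follows the scrolling speed automatically, while a manual adjustment (e.g. a pinch) overrides it. The constants and the linear mapping are illustrative assumptions, not the dissertation's implementation.

```python
# Toy sketch of semi-automatic zooming: the zoom level is derived from the
# current scrolling speed, and a manual offset (e.g. from a pinch gesture)
# adjusts it on top. Constants and the linear mapping are illustrative only.

def semi_automatic_zoom(scroll_speed, manual_offset=0.0, k=0.02,
                        min_zoom=1.0, max_zoom=18.0):
    """Return a map zoom level: faster scrolling zooms out, manual input adjusts."""
    automatic = max_zoom - k * abs(scroll_speed)   # zoom out as speed grows
    zoom = automatic + manual_offset               # manual override on top
    return max(min_zoom, min(max_zoom, zoom))

# Slow scrolling, no manual input: the view stays zoomed in.
print(semi_automatic_zoom(scroll_speed=50))
# Fast scrolling, but the user pinches to keep more detail visible.
print(semi_automatic_zoom(scroll_speed=600, manual_offset=3.0))
```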
We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and to the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated in future mobile devices to expand their input capabilities.
In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, achieve good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication.
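A simplified sketch of the template-matching idea behind this family of recognizers: resample the motion trace, normalize it, and pick the nearest stored template. It deliberately omits the rotation-invariance search of the published $3 Gesture Recognizer and Protractor 3D, so it illustrates the general approach rather than either algorithm exactly.

```python
# Simplified $1/$3-style template matcher for 3D motion traces (illustrative).
# Steps: resample to a fixed number of points, translate to the centroid,
# scale to a unit cube, then nearest-neighbor comparison against templates.
# The published $3 Gesture Recognizer additionally searches over rotations.
import math

def resample(points, n=32):
    """Resample a 3D point path to n points spaced evenly along its arc length."""
    if len(points) < 2:
        return [points[0]] * n
    cum = [0.0]                                    # cumulative arc length
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1] or 1.0
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(points) - 2 and cum[j + 1] < target:
            j += 1
        seg = (cum[j + 1] - cum[j]) or 1.0
        t = (target - cum[j]) / seg                # interpolate within segment j
        a, b = points[j], points[j + 1]
        out.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return out

def normalize(points):
    """Translate the centroid to the origin and scale into a unit cube."""
    cx, cy, cz = (sum(c) / len(points) for c in zip(*points))
    pts = [(x - cx, y - cy, z - cz) for x, y, z in points]
    scale = max(max(abs(v) for v in p) for p in pts) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in pts]

def recognize(trace, templates):
    """templates: dict of name -> raw 3D point list. Returns the best match."""
    probe = normalize(resample(trace))
    def distance(template):
        cand = normalize(resample(template))
        return sum(math.dist(a, b) for a, b in zip(probe, cand))
    return min(templates, key=lambda name: distance(templates[name]))
```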
With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed in order to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
Kolab: Improvising Nomadic Tangible User Interfaces in the Workplace for Co-Located Collaboration
Tangible User Interfaces (TUIs) [Ishii 1997] offer an interface style that couples "digital information to everyday physical objects and environments" [Ishii 1997, page 2]. However, this physicality may also be a limitation, as the tendency to use iconic representations for tangibles can result in inflexible 'concrete and specialised objects' [Shaer 2009, page 107].
The current research investigates whether, by reducing the dependence on specific tangible sets through the use of improvised tangibles, we may begin to address the issue of tangible flexibility within TUIs. Improvised tangibles may be characterised as potentially arbitrary and abstract, in that they may bear little or no resemblance to the underlying digital value. Core literature in the field (e.g. [Fitzmaurice 1996], [Ishii 2008], [Hornecker 2006], [Holmquist 1999]) suggests that a system based on improvised tangibles would suffer from impaired usability, and so the research focuses on the impact on usability due to a lack of close representational significance [Ullmer 2000] during co-located collaboration.
Using a prototyping methodology, a functional, shareable TUI system was developed based on computer vision techniques using the Microsoft Kinect [Microsoft 2011]. This prototype system ('Kolab') was used to explore an interaction design that supports the dynamic binding of improvised tangibles to digital values. A simple co-located collaborative task was developed using 'Kolab', and a user study was conducted to investigate the usability of the system in a collaborative context.
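A toy sketch (entirely illustrative, not Kolab's Kinect-based implementation) of the dynamic-binding idea: a tracked artefact, identified only by an arbitrary tracker ID, is bound at runtime to whatever digital value the user chooses.

```python
# Illustrative sketch of dynamic binding for improvised tangibles: a tracked
# physical artefact (identified by an arbitrary ID, e.g. from a vision
# pipeline) is bound at runtime to a digital value chosen by the user,
# rather than carrying a fixed iconic meaning.
class TangibleBindings:
    def __init__(self):
        self._bindings = {}                      # tracker id -> digital value

    def bind(self, artefact_id, value):
        """Associate an improvised artefact with a digital value."""
        self._bindings[artefact_id] = value

    def unbind(self, artefact_id):
        self._bindings.pop(artefact_id, None)

    def on_artefact_moved(self, artefact_id, position):
        """Called by the tracker; returns the bound value to apply at `position`."""
        return self._bindings.get(artefact_id), position

# Usage: a coffee mug (tracker id 7) is improvised as a "filter by year" control.
bindings = TangibleBindings()
bindings.bind(7, {"role": "filter", "field": "year"})
print(bindings.on_artefact_moved(7, (0.4, 0.7)))
```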
Within the limitations of the simple task, the results of the study show that a) users appeared comfortable with improvising artefacts, b) the high rate of task completion strongly suggests that a lack of close representational significance does not impair system usability, and c) despite some temporary issues with users interfering with each other's actions, an overall indication of equitable participation suggests that collaboration was not impaired by the 'Kolab' prototype.
Enabling mobile microinteractions
While much attention has been paid to the usability of desktop computers, mobile computers are quickly becoming the dominant platform. Because mobile computers may be used in nearly any situation--including while the user is actually in motion, or performing other tasks--interfaces designed for stationary use may be inappropriate, and alternative interfaces should be considered.
In this dissertation I consider the idea of microinteractions--interactions with a device that take less than four seconds to initiate and complete. Microinteractions are desirable because they may minimize interruption; that is, they allow for a tiny burst of interaction with a device so that the user can quickly return to the task at hand.
My research concentrates on methods for applying microinteractions through wrist-based interaction. I consider two modalities for this interaction: touchscreens and motion-based gestures. In the case of touchscreens, I consider the interface implications of making touchscreen watches usable with the finger, instead of the usual stylus, and investigate users' performance with a round touchscreen. For gesture-based interaction, I present a tool, MAGIC, for designing gesture-based interactive systems, and detail the evaluation of the tool.
Ph.D. Committee Chair: Starner, Thad; Committee Member: Abowd, Gregory; Committee Member: Isbell, Charles; Committee Member: Landay, James; Committee Member: McIntyre, Blai
Light on horizontal interactive surfaces: Input space for tabletop computing
In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.
Creating mobile gesture-based interaction design patterns for older adults: a study of tap and swipe gestures with Portuguese seniors
Master's thesis in Multimedia. Faculdade de Engenharia, Universidade do Porto. 201
On intelligible multimodal visual analysis
Analyzing data is becoming an important skill in an increasingly digital world. Yet many users face knowledge barriers that prevent them from independently conducting their data analysis. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts, but also novice users, to effortlessly interact with this kind of technology. However, current approaches do not take user differences into account. In fact, whether visual analysis is intelligible ultimately depends on the user.
In order to close this research gap, this dissertation explores how multimodal visual analysis can be personalized. To do so, it takes a holistic view. First, an intelligible task space of visual analysis tasks is defined by considering personalization potentials. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis, as well as of visualizations used in scientific publications, further reveal patterns and structures. These behavior-indicated findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed based on the previous findings. It enriches the visual analysis with a persistent dialogue and transparency of the underlying computations; the conducted user studies show not only advantages but also highlight the relevance of considering the user's characteristics. Finally, both communication channels, visualizations and dialogue, are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user. Especially when the user's knowledge is exceeded, personalization helps to improve the user experience.
Overall, this dissertation confirms not only the importance of considering the user's characteristics in multimodal visual analysis, but also provides insights on how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus only on the user's needs. By understanding preferences for the output modalities, the system can better adapt to the user. Combining both directions improves the user experience and contributes towards an intelligible multimodal visual analysis.
- …