Stabilising touch interactions in cockpits, aerospace, and vibrating environments
© Springer International Publishing AG, part of Springer Nature 2018. Incorporating touch-screen interaction into cockpit flight systems is gaining traction, given its potential advantages for both design and pilot usability. However, perturbations to user input are prevalent in such environments due to vibration, turbulence and high accelerations. This poses particular challenges for interacting with cockpit displays, for example accidental activation during turbulence, or high levels of distraction from the primary task of aircraft control while accomplishing selection tasks. Predictive displays, on the other hand, have emerged as a solution that minimises the effort, as well as the cognitive, visual and physical workload, associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs 3D gesture tracking, and potentially eye-gaze and other sensory data, to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select, early in the movement towards the display. A key aspect is the use of principled Bayesian modelling to incorporate and treat the present perturbation; it is thus a software-based solution that has shown promising results in automotive applications. This paper explores the potential of applying this technology in aerospace and vibrating environments in general, and presents design recommendations for such an approach to enhance interaction accuracy as well as safety.
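The core idea here, inferring the intended target from noisy pointing observations via Bayesian updating, can be sketched in a few lines. The 1-D target positions, the Gaussian noise model and the `sigma` value below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def update_posterior(prior, targets, observation, sigma=1.0):
    """One Bayesian update: score each candidate target by the Gaussian
    likelihood of the noisy fingertip observation, then renormalize."""
    likelihoods = [math.exp(-((observation - t) ** 2) / (2 * sigma ** 2))
                   for t in targets]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

targets = [0.0, 5.0, 10.0]           # screen positions of three buttons
posterior = [1 / 3, 1 / 3, 1 / 3]    # uniform prior over targets
for obs in [4.2, 4.8, 5.1]:          # perturbed samples drifting toward 5.0
    posterior = update_posterior(posterior, targets, obs)
predicted = targets[posterior.index(max(posterior))]
print(predicted)  # 5.0
```

Even with turbulence-like jitter on the samples, the posterior concentrates on the correct item well before the finger reaches the display, which is what enables early selection.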
Challenges of Multi-Factor Authentication for Securing Advanced IoT (A-IoT) Applications
The unprecedented proliferation of smart devices, together with novel
communication, computing, and control technologies, has paved the way for the
Advanced Internet of Things (A-IoT). This development involves new categories
of capable devices, such as high-end wearables, smart vehicles, and consumer
drones aiming to enable efficient and collaborative utilization within the
Smart City paradigm. While massive deployments of these objects may enrich
people's lives, unauthorized access to the said equipment is potentially
dangerous. Hence, highly-secure human authentication mechanisms have to be
designed. At the same time, human beings desire comfortable interaction with
their owned devices on a daily basis, thus demanding the authentication
procedures to be seamless and user-friendly, mindful of the contemporary urban
dynamics. In response to these unique challenges, this work advocates for the
adoption of multi-factor authentication for A-IoT, such that multiple
heterogeneous methods - both well-established and emerging - are combined
intelligently to grant or deny access reliably. We thus discuss the pros and
cons of various solutions as well as introduce tools to combine the
authentication factors, with an emphasis on challenging Smart City
environments. We finally outline the open questions to shape future research
efforts in this emerging field.
Comment: 7 pages, 4 figures, 2 tables. The work has been accepted for publication in IEEE Network, 2019. Copyright may be transferred without notice, after which this version may no longer be accessible.
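As a rough illustration of the "tools to combine the authentication factors" the article discusses, the toy score fusion below merges heterogeneous per-factor confidences into one accept/reject decision. The factor scores, weights and threshold are invented for the example and do not come from the article:

```python
# Hypothetical weighted fusion of multi-factor authentication scores.
# Each score is a confidence in [0, 1] from one factor (e.g. face,
# gait, token proximity); weights reflect assumed factor reliability.
def fuse(scores, weights, threshold=0.7):
    """Return (access_granted, combined_confidence)."""
    total = sum(w * s for w, s in zip(weights, scores))
    combined = total / sum(weights)   # normalized weighted mean
    return combined >= threshold, combined

granted, confidence = fuse(scores=[0.9, 0.6, 0.95],
                           weights=[2.0, 1.0, 1.5])
print(granted)  # True
```

A real system would fuse factors with a learned model rather than fixed weights, but the principle, no single factor alone grants access, is the same.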
Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling
In everyday life, people use their mobile phones on the go, at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern and input technique on commonly used performance parameters such as error rate, accuracy and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input, and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated better overall performance than thumb-pointing techniques. The influence of gait phase on tap-event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Models identified using a specific input technique also did not perform well when tested in other conditions, demonstrating that an offset model's validity is limited to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique at 75% of preferred walking speed, the speed to which users spontaneously slow down when using a mobile device and which presents a trade-off between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
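A minimal sketch of the kind of touch-offset model this abstract describes: a 1-D linear offset fitted by ordinary least squares to calibration taps and then applied to correct new touches. The data and the linear form are assumptions for illustration; the paper's machine-learned model is more elaborate:

```python
# Hypothetical 1-D offset model: intended = touch + offset(touch),
# with offset(touch) = a * touch + b fitted by ordinary least squares.
def fit_offset_model(touches, intended):
    n = len(touches)
    offsets = [i - t for t, i in zip(touches, intended)]
    mx = sum(touches) / n
    my = sum(offsets) / n
    sxx = sum((t - mx) ** 2 for t in touches)
    sxy = sum((t - mx) * (o - my) for t, o in zip(touches, offsets))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def correct(touch, model):
    a, b = model
    return touch + a * touch + b

# Calibration taps recorded (hypothetically) at 75% preferred walking speed:
touches = [10.0, 20.0, 30.0, 40.0]
intended = [12.0, 22.0, 32.0, 42.0]   # consistent +2 px offset while walking
model = fit_offset_model(touches, intended)
print(correct(25.0, model))  # 27.0
```

The speed-specific finding above corresponds to fitting a separate `model` per walking-speed condition rather than reusing the static-condition fit.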
Feel the Noise: Mid-Air Ultrasound Haptics as a Novel Human-Vehicle Interaction Paradigm
Focussed ultrasound can be used to create the sensation of touch in mid-air. Combined with gestures, this can provide haptic feedback to guide users, thereby overcoming the lack of agency associated with pure gestural interfaces and reducing the need for vision; it is therefore particularly apropos of the driving domain. In a counter-balanced 2×2 driving simulator study, a traditional in-vehicle touchscreen was compared with a virtual mid-air gestural interface, both with and without ultrasound haptics. Forty-eight experienced drivers (28 male, 20 female) undertook representative in-vehicle tasks (discrete target selections and continuous slider-bar manipulations) whilst driving. Results show that haptifying gestures with ultrasound was particularly effective in reducing visual demand (number of long glances and mean off-road glance time) and increasing performance (shortest interaction times, highest number of correct responses and fewest "overshoots") associated with continuous tasks. In contrast, for discrete target selections, the touchscreen enabled the highest accuracy and quickest responses, particularly when combined with haptic feedback to guide interactions, although this also increased visual demand. Subjectively, the gesture interfaces invited higher ratings of arousal compared to the more familiar touch-surface technology, and participants indicated the lowest levels of workload (highest performance, lowest frustration) with the gesture-haptics interface. In addition, gestures were preferred by participants for continuous tasks. The study shows practical utility and clear potential for the use of haptified gestures in the automotive domain.
Designing touch screen user interfaces for future flight deck operations
Many interactional issues with Flight Management Systems (FMS) in modern flight decks have been reported. Avionics designers are seeking ways to reduce pilots' cognitive load, with the aim of reducing the potential for human error. Academic research has shown that touch-screen interfaces reduce cognitive effort and provide an intuitive way of interacting. This paper presents a new way of interacting to manipulate the radio frequencies of avionics systems. A usability experiment simulating departures from and approaches to airports was used to evaluate the interface and compare it with the current system (FMS). In addition, interviews with pilots were conducted to gather their personal impressions and to reveal problem areas of the interface. Analyses of task completion time and error rates showed that the touch interface is significantly faster and less prone to user input errors than the conventional input method (via a physical or virtual keypad). Potential problem areas were identified and an improved interface is suggested.
The cockpit for the 21st century
Interactive surfaces are a growing trend in many domains. As one possible manifestation of Mark Weiser's vision of ubiquitous and disappearing computers embedded in everyday objects, we see touch-sensitive screens in many kinds of devices, such as smartphones, tablet computers and interactive tabletops. More advanced concepts of these have been an active research topic for many years. This has also influenced automotive cockpit development: concept cars and recent market releases show integrated touchscreens, growing in size. To meet the increasing information and interaction needs, interactive surfaces offer context-dependent functionality in combination with a direct input paradigm.
However, interfaces in the car need to be operable while driving. Distraction, especially visual distraction from the driving task, can lead to critical situations if the sum of attentional demand emerging from both primary and secondary task overextends the available resources. So far, a touchscreen requires a lot of visual attention since its flat surface does not provide any haptic feedback. There have been approaches to make direct touch interaction accessible while driving for simple tasks. Outside the automotive domain, for example in office environments, concepts for sophisticated handling of large displays have already been introduced. Moreover, technological advances lead to new characteristics for interactive surfaces by enabling arbitrary surface shapes.
In cars, two main characteristics for upcoming interactive surfaces are largeness and shape. On the one hand, spatial extension is not only increasing through larger displays, but also by taking objects in the surrounding into account for interaction. On the other hand, the flatness inherent in current screens can be overcome by upcoming technologies, and interactive surfaces can therefore provide haptically distinguishable surfaces. This thesis describes the systematic exploration of large and shaped interactive surfaces and analyzes their potential for interaction while driving. Therefore, different prototypes for each characteristic have been developed and evaluated in test settings suitable for their maturity level. Those prototypes were used to obtain subjective user feedback and objective data, to investigate effects on driving and glance behavior as well as usability and user experience.
As a contribution, this thesis provides an analysis of the development of interactive surfaces in the car. Two characteristics, largeness and shape, are identified that can improve interaction compared to conventional touchscreens. The presented studies show that large interactive surfaces can provide new and improved ways of interaction in both driver-only and driver-passenger situations. Furthermore, the studies indicate a positive effect on visual distraction when additional static haptic feedback is provided by shaped interactive surfaces. Overall, various, non-exclusively applicable interaction concepts prove the potential of interactive surfaces for use in automotive cockpits, which is expected to be beneficial also in further environments where visual attention needs to be focused on additional tasks.
A Single-Handed Partial Zooming Technique for Touch-Screen Mobile Devices
Despite its ubiquitous use, the pinch-zooming technique is not effective for one-handed interaction. We propose ContextZoom, a novel technique for single-handed zooming on touch-screen mobile devices. It allows users to specify any place on the device screen as the zooming center, ensuring that the intended zooming target remains visible on the screen after zooming. ContextZoom supports zooming in/out on a portion of the viewport and provides a quick switch between the partial and whole viewports. We conducted an empirical evaluation of ContextZoom through a controlled lab experiment comparing ContextZoom with Google Maps' single-handed zooming technique. Results show that ContextZoom outperforms the latter in task completion time and the number of discrete actions taken. Participants also reported higher levels of perceived effectiveness and overall satisfaction with ContextZoom than with Google Maps' single-handed zooming technique, as well as a similar level of perceived ease of use.
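The geometric core of zooming about a user-chosen center, the property that keeps the intended target on screen, takes only a few lines. This is an illustrative sketch of scaling about an arbitrary anchor point, not the published ContextZoom implementation:

```python
def zoom_about(point, center, scale):
    """Map a viewport coordinate under a zoom of factor `scale`
    anchored at `center`: the anchor itself stays fixed on screen,
    and every other point moves radially away from (or toward) it."""
    x, y = point
    cx, cy = center
    return (cx + (x - cx) * scale, cy + (y - cy) * scale)

# Zooming in 2x about (100, 100): the anchor is invariant,
# nearby content spreads out around it.
print(zoom_about((120, 80), (100, 100), 2.0))   # (140.0, 60.0)
print(zoom_about((100, 100), (100, 100), 2.0))  # (100.0, 100.0)
```

Because the anchor maps to itself, placing it on the intended target guarantees the target is still visible after zooming, which is the behavior the abstract highlights.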