
    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to provide a more natural interaction style. We investigated the utility of multi-touch interaction in the safety critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal evaluation of prototypes - the second on a motion platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial use problems and handedness asymmetries on some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    ARoMA-V2: Assistive Robotic Manipulation Assistance with Computer Vision and Voice Recognition

    We have designed and developed a handy alternative control method, called ARoMA-V2 (Assistive Robotic Manipulation Assistance with computer Vision and Voice recognition), for controlling assistive robotic manipulators based on computer vision and user voice recognition. Potential advantages of ARoMA-V2 over the traditional alternatives include: providing completely hands-free operation; helping a user to maintain a better working posture; allowing the user to work in postures that otherwise would not be effective for operating an assistive robotic manipulator (e.g., reclined in a chair or bed); supporting task-specific commands; providing the user with different levels of intelligent autonomous manipulation assistance; giving the user the feeling that he or she is still in control at any moment; and being compatible with different types of new and existing assistive robotic manipulators.
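
    The abstract does not say how recognized voice commands are turned into manipulator actions or how the different autonomy levels are exposed. The sketch below only illustrates that general idea under assumptions of its own; the command vocabulary, the Manipulator interface, and the autonomy levels are hypothetical and are not the ARoMA-V2 implementation.

```python
# Minimal sketch (not the ARoMA-V2 implementation): dispatching recognized
# voice commands to manipulator primitives at different autonomy levels.
# The command vocabulary and the Manipulator interface are hypothetical.

class Manipulator:
    def move_to(self, target: str) -> None:
        print(f"moving gripper toward '{target}'")

    def grasp(self) -> None:
        print("closing gripper")


def dispatch(command: str, arm: Manipulator, autonomy: str = "assisted") -> None:
    """Map a recognized voice command onto manipulator primitives.

    autonomy='manual' executes one primitive per command; autonomy='assisted'
    chains primitives into a task-level behaviour (reach, then grasp).
    """
    tokens = command.lower().split()
    if tokens[:2] == ["pick", "up"] and len(tokens) > 2:
        arm.move_to(" ".join(tokens[2:]))
        if autonomy == "assisted":
            arm.grasp()
    elif tokens == ["stop"]:
        print("halting all motion")
    else:
        print(f"unrecognised command: {command!r}")


dispatch("pick up the cup", Manipulator())
```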

    Designing inclusive products for everyday environments: the effects of everyday cold temperatures on older adults' dexterity

    This paper focuses on the effect an everyday cold temperature (5°C) can have on older adults' (65+ years) dexterous capabilities and the implications for design. Fine finger capability, power grip and pinch grip were measured using objective performance measures. Ability to perform tasks using a mobile phone, stylus, touch screen and garden secateurs was also measured. All measures were performed in a climatic cold chamber regulated at 5°C and in a thermo-neutral environment regulated between 19°C and 24°C. Participants were exposed to the cold for a maximum of 40 minutes. Results from the study showed that older adults' fine finger dexterity, ability to pick up and place objects, and ability to use a mobile phone were significantly (p<0.05) affected by an everyday cold temperature of 5°C when compared to performance in the thermo-neutral environment. However, power and pinch grip strength and ability to use the gardening secateurs were not significantly affected by the cold. Based on these findings, the following guidance is offered to designers developing products that are likely to be used outside in an everyday cold environment: 1) Minimise the number of product interactions that require precise fine finger movements; 2) Try to avoid small controls that have to be pressed in a sequence; 3) Maximise the number of product interactions that can be operated through either a gripping action (power or pinch grip) or gross hand and arm movements.

    Interaction techniques for older adults using touchscreen devices: a literature review

    Several studies have investigated different interaction techniques and input devices for older adults using touchscreens. This literature review analyses the populations involved, the kinds of tasks that were executed, the apparatus, the input techniques, the feedback provided, the data collected, the authors' findings, and their recommendations. In conclusion, this review shows that age-related changes, previous experience with technologies, characteristics of handheld devices and use situations need to be studied.

    Comparison of Navigation Techniques for Large Digital Images

    Medical images are examined on computer screens in a variety of contexts. Frequently, these images are larger than computer screens, and computer applications support different paradigms for user navigation of large images. The paper reports on a systematic investigation of which interaction techniques are the most effective for navigating images larger than the screen size for the purpose of detecting small image features. An experiment compares five different types of geometrically zoomable interaction techniques, each at two speeds (fast and slow update rates), for the task of finding a known feature in the image. There were statistically significant performance differences between several groupings of the techniques. The fast versions of the ArrowKey, Pointer, and ScrollBar techniques performed the best. In general, techniques that enable both intuitive and systematic searching performed the best at the fast speed, while techniques that minimize the number of interactions with the image were more effective at the slow speed. Additionally, based on a post-experiment questionnaire and qualitative comparison, users expressed a clear preference for the Pointer technique, which allowed them to more freely and naturally interact with the image.
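
    As a rough illustration of the kind of geometrically zoomable navigation compared above, the sketch below models a viewport over an image larger than the screen, with arrow-key-style panning and zooming about the viewport centre. It is a minimal sketch under assumptions of its own, not the techniques evaluated in the paper; all names and dimensions are illustrative.

```python
# Minimal sketch (illustrative only): a geometrically zoomable viewport over
# an image larger than the screen, with arrow-key-style panning and zooming
# about the viewport centre.

class Viewport:
    def __init__(self, img_w: int, img_h: int, view_w: int, view_h: int):
        self.img_w, self.img_h = img_w, img_h
        self.view_w, self.view_h = view_w, view_h
        self.x, self.y = 0.0, 0.0      # top-left corner in image coordinates
        self.zoom = 1.0                # screen pixels per image pixel

    def _clamp(self) -> None:
        """Keep the visible window inside the image."""
        max_x = max(0.0, self.img_w - self.view_w / self.zoom)
        max_y = max(0.0, self.img_h - self.view_h / self.zoom)
        self.x = min(max(self.x, 0.0), max_x)
        self.y = min(max(self.y, 0.0), max_y)

    def pan(self, dx: float, dy: float) -> None:
        """ArrowKey-style panning: shift the visible window in image pixels."""
        self.x += dx
        self.y += dy
        self._clamp()

    def zoom_by(self, factor: float) -> None:
        """Geometric zoom about the viewport centre."""
        cx = self.x + self.view_w / (2 * self.zoom)
        cy = self.y + self.view_h / (2 * self.zoom)
        self.zoom *= factor
        self.x = cx - self.view_w / (2 * self.zoom)
        self.y = cy - self.view_h / (2 * self.zoom)
        self._clamp()


vp = Viewport(img_w=8000, img_h=6000, view_w=1280, view_h=1024)
vp.zoom_by(0.5)        # zoom out to see more context
vp.pan(dx=500, dy=0)   # step right, as an arrow-key press might
print(vp.x, vp.y, vp.zoom)
```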

    Interacting "Through the Display"

    The increasing availability of displays at lower cost has led to their proliferation in our everyday lives. Additionally, mobile devices are ready to hand and have been proposed as interaction devices for external screens. However, only their input mechanism was taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality); second, screens in the environment may be re-arranged (flexibility); and third, displays may be out of the user's reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use on various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interacting at variable distances. In this work we propose a new interaction model, called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. For each of these prototypes we analyzed its effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that allows the detection of screens purely based on their visual content. Users aim their personal device's camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user's point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses, and thus large fields of view, of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction).
And above all, users can interact with external displays, regardless of their actual size, at variable distances without any loss of accuracy.
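
    The abstract above does not give implementation details. As a hedged sketch of the geometry such through-the-display techniques can rely on, the code below maps a touch on the phone's viewfinder to coordinates on the remote display via a homography estimated from the display's four detected corners. The corner coordinates, display resolution, and function names are assumptions for illustration only, not the thesis code.

```python
# Minimal sketch of one plausible viewfinder-to-display mapping (an
# assumption, not the thesis implementation): the four corners of the remote
# display, as detected in the phone's camera image, define a homography that
# maps a touch in the viewfinder to pixel coordinates on that display.
import numpy as np

def homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 homography mapping src points to dst points (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def map_touch(h: np.ndarray, touch_xy: tuple) -> tuple:
    """Project a viewfinder touch into display coordinates."""
    p = h @ np.array([touch_xy[0], touch_xy[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Display corners as seen in the camera image (hypothetical detection result)
corners_in_camera = np.array([[210, 120], [590, 140], [580, 410], [220, 430]])
# The same corners in the display's own pixel space (an assumed 1920x1080 screen)
corners_on_display = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H = homography(corners_in_camera, corners_on_display)
print(map_touch(H, (400, 270)))   # a touch near the middle of the detected quad
```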

    Visual based finger interactions for mobile phones

    Vision-based technology such as motion detection has long been limited to the domain of powerful, processor-intensive systems such as desktop PCs and specialist hardware solutions. With the advent of much faster mobile phone processors and memory, a plethora of feature-rich software and hardware is being deployed onto the mobile platform, most notably onto high-powered devices called smartphones. Interaction interfaces such as touchscreens allow for improved usability but obscure the phone's screen. Since the majority of smartphones are equipped with cameras, it has become feasible to combine their powerful processors, large memory capacity and the camera to support new ways of interacting with the phone which do not obscure the screen. However, it is not clear whether these processor-intensive visual interactions can in fact be run at an acceptable speed on current mobile handsets, or whether they will offer the user a better experience than the number pad and direction keys present on the majority of mobile phones. A vision-based finger interaction technique is proposed which uses the back-of-device camera to track the user's finger. This allows the user to interact with the mobile phone through mouse-based movements, gestures and steering-based interactions. A simple colour thresholding algorithm was implemented in Java, Python and C++. Various benchmarks and tests conducted on a Nokia N95 smartphone revealed that, on current hardware and with current programming environments, only native C++ yields results plausible for real-time interaction (a key requirement for vision-based interactions). It is also shown that different lighting levels and background environments affect the accuracy of the system, with the contrast between background and finger playing a large role. Finally, a user study was conducted to compare overall user satisfaction between keypad interaction and the finger interaction techniques, concluding that the new finger interaction technique is well suited to steering-based interactions and, in time, mouse-style movements. Simple navigation is better suited to the directional keypad.
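
    The abstract names a simple colour thresholding algorithm but not its parameters. The following is a minimal sketch of such a tracker under assumed skin-tone thresholds; the threshold values, blob-size cut-off, and synthetic test frame are illustrative, not the thesis implementation.

```python
# Minimal colour-thresholding tracker sketch (illustrative assumptions, not
# the thesis code): pixels inside a hard-coded skin-tone range are segmented
# and their centroid is reported as the fingertip position.
from typing import Optional
import numpy as np

def track_finger(frame_rgb: np.ndarray) -> Optional[tuple]:
    """Return (x, y) of the centroid of skin-coloured pixels, or None."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    # Crude skin mask: red-dominant pixels within an assumed intensity band.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    ys, xs = np.nonzero(mask)
    if xs.size < 50:                      # too few pixels: no finger in view
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 240x320 frame with a skin-coloured patch standing in for a finger
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 200:220] = (200, 140, 110)
print(track_finger(frame))                # approximately (209.5, 119.5)
```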

    HCI models, theories, and frameworks: Toward a multidisciplinary science

    Motivation. The movement of body and limbs is inescapable in human-computer interaction (HCI). Whether browsing the web or intensively entering and editing text in a document, our arms, wrists, and fingers are at work on the keyboard, mouse, and desktop. Our head, neck, and eyes move about attending to feedback marking our progress. This chapter is motivated by the need to match the movement limits, capabilities, and potential of humans with input devices and interaction techniques on computing systems. Our focus is on models of human movement relevant to human-computer interaction. Some of the models discussed emerged from basic research in experimental psychology, whereas others emerged from, and were motivated by, the specific need in HCI to model the interaction between users and physical devices, such as mice and keyboards. As much as we focus on specific models of human movement and user interaction with devices, this chapter is also about models in general. We will say a lot about the nature of models, what they are, and why they are important tools for the research and development of human-computer interfaces. Overview: Models and Modeling. By its very nature, a model is a simplification of reality. However, a model is useful only if it helps in designing, evaluating, or otherwise providing a basis for understanding the behaviour of a complex artifact such as a computer system. It is convenient to think of models as lying in a continuum, with analogy and metaphor at one end and mathematical equations at the other. Most models lie somewhere in between. Toward the metaphoric end are descriptive models; toward the mathematical end are predictive models. These two categories are our particular focus in this chapter, and we shall visit a few examples of each. Two models will be presented in detail and in case studies: Fitts' model of the information processing capability of the human motor system and Guiard's model of bimanual control. Fitts' model is a mathematical expression emerging from the rigors of probability theory. It is a predictive model at the mathematical end of the continuum, to be sure, yet when applied as a model of human movement it has characteristics of a metaphor. Guiard's model emerged from a detailed analysis of how humans use their hands in everyday tasks, such as writing, drawing, playing a sport, or manipulating objects. It is a descriptive model, lacking in mathematical rigor but rich in expressive power.
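
    Fitts' model, one of the two case studies mentioned, is commonly written in HCI in its Shannon formulation, MT = a + b log2(D/W + 1), where D is the movement amplitude, W the target width, and a and b empirically fitted constants. The sketch below computes the index of difficulty and a predicted movement time; the coefficient values are illustrative assumptions, not values from the chapter.

```python
# Fitts' law in the Shannon formulation commonly used in HCI:
#   MT = a + b * log2(D / W + 1)
# where D is target distance, W is target width, and a, b are empirically
# fitted intercept and slope.  The coefficients below are illustrative only.
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """ID in bits for a movement of amplitude `distance` to a target of size `width`."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance: float, width: float,
                            a: float = 0.05, b: float = 0.12) -> float:
    """Predicted movement time in seconds (a in s, b in s/bit)."""
    return a + b * index_of_difficulty(distance, width)

for d, w in [(128, 32), (256, 16), (512, 8)]:
    print(f"D={d:>3} W={w:>2}  ID={index_of_difficulty(d, w):.2f} bits  "
          f"MT~{predicted_movement_time(d, w):.3f} s")
```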

    Survey and Systematization of Secure Device Pairing

    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communications among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparison and assessment of SDP schemes are troublesome, because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases. A conceptual model that facilitates meaningful comparison among SDP schemes is missing. We provide such a model. In this article, we survey and analyze a wide range of SDP schemes that are described in the literature, including a number that have been adopted as standards. A system model and consistent terminology for SDP schemes are built on the foundation of this survey, which are then used to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. The existing SDP schemes are analyzed using this model, revealing common systemic security weaknesses among the surveyed SDP schemes that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting. Comment: 34 pages, 5 figures, 3 tables; accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99).

    DEVELOPMENT AND ASSESSMENT OF ADVANCED ASSISTIVE ROBOTIC MANIPULATORS USER INTERFACES

    Assistive Robotic Manipulators (ARMs) have shown improvement in self-care and increased independence among people with severe upper extremity disabilities. Mounted on the side of an electric powered wheelchair, an ARM may provide manipulation assistance, such as picking up objects, eating, drinking, dressing, reaching out, or opening doors. However, existing assessment tools are inconsistent between studies, time consuming, and unclear in clinical effectiveness. Therefore, in this research, we have developed an ADL task board evaluation tool that provides standardized, efficient, and reliable assessment of ARM performance. Among powered wheelchair users and able-bodied controls using two commercial ARM user interfaces, a joystick and a keypad, we found statistical differences between the performance of the two user interfaces, but no statistical difference in cognitive load. The ADL task board demonstrated highly correlated performance with an existing functional assessment tool, the Wolf Motor Function Test. Through this study, we also identified barriers and limits in current commercial user interfaces and developed smartphone and assistive sliding-autonomy user interfaces that yield improved performance. Testing results from our smartphone manual interface revealed statistically faster performance. The assistive sliding-autonomy interface helped seamlessly correct the errors seen with autonomous functions. The ADL task performance evaluation tool may help clinicians and researchers better assess ARM user interfaces and evaluate the efficacy of customized user interfaces to improve performance. The smartphone manual interface demonstrated improved performance, and the sliding-autonomy framework showed enhanced task success without recalculating path planning and recognition.
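
    The abstract does not describe how the sliding-autonomy interface arbitrates between the user and the autonomous controller. The sketch below illustrates one common reading of the idea, blending manual and autonomous velocity commands with a weight that slides toward the user whenever they intervene; the blending rule, step size, and command format are assumptions, not the study's implementation.

```python
# Minimal sliding-autonomy sketch (an illustration under assumptions, not the
# study's implementation): the commanded end-effector velocity is a weighted
# blend of the user's manual input and the autonomous controller's suggestion,
# and the weight shifts toward the user whenever they actively intervene.

def blend_command(user_cmd, auto_cmd, autonomy: float):
    """Blend per-axis velocity commands; autonomy=0 is fully manual, 1 fully autonomous."""
    return [(1.0 - autonomy) * u + autonomy * a for u, a in zip(user_cmd, auto_cmd)]

def update_autonomy(autonomy: float, user_active: bool, step: float = 0.1) -> float:
    """Slide toward manual control when the user intervenes, back toward autonomy otherwise."""
    autonomy += -step if user_active else step
    return min(max(autonomy, 0.0), 1.0)

autonomy = 0.8                      # mostly autonomous to begin with
user_cmd = [0.05, 0.0, -0.02]       # m/s: the user nudges the arm to correct an error
auto_cmd = [0.0, 0.03, 0.0]         # m/s: the planner's current suggestion
autonomy = update_autonomy(autonomy, user_active=True)
print(autonomy, blend_command(user_cmd, auto_cmd, autonomy))
```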