103 research outputs found

    Predicting mid-air gestural interaction with public displays based on audience behaviour

    © 2020 Elsevier Ltd. All rights reserved. This manuscript is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Licence (http://creativecommons.org/licenses/by-nc-nd/4.0/). Knowledge about the expected interaction duration and the expected distance from which users will interact with public displays can be useful in many ways. For example, knowing upfront that a certain setup will lead to shorter interactions can nudge space owners to alter the setup. If a system can predict that incoming users will interact at a long distance for a short amount of time, it can accordingly show shorter versions of content (e.g., videos/advertisements) and employ at-a-distance interaction modalities (e.g., mid-air gestures). In this work, we propose a method for building models that predict users’ interaction duration and distance in public display environments, focusing on mid-air gestural interactive displays. First, we report findings from a field study showing that multiple variables, such as audience size and behaviour, significantly influence interaction duration and distance. We then train predictor models using contextual data based on the same variables. By applying our method to a mid-air gestural interactive public display deployment, we build a model that predicts interaction duration with an average error of about 8 s and interaction distance with an average error of about 35 cm. We discuss how researchers and practitioners can use our work to build their own predictor models, and how they can use them to optimise their deployments. Peer reviewed.
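    The predictor-building step lends itself to a short illustration. The sketch below is not the authors' implementation: it assumes scikit-learn and invents the feature names (audience size, passers-by count, group arrival) and toy values purely to show how contextual variables could feed regression models for interaction duration and distance.

```python
# Minimal sketch (not the authors' implementation): regressing interaction
# duration and distance on contextual audience features. Feature names and
# values are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical contextual features logged per session:
# [audience size, passers-by count, group arrival (0/1)]
X = np.array([
    [3, 12, 1],
    [1,  4, 0],
    [5, 20, 1],
    [2,  7, 0],
])
duration_s = np.array([22.0, 9.5, 31.0, 14.0])  # seconds (illustrative values)
distance_cm = np.array([180, 250, 150, 210])    # centimetres (illustrative values)

# One regressor per target; cross-validated mean absolute error on real logs
# would play the role of the ~8 s / ~35 cm average errors reported above.
duration_model = RandomForestRegressor(random_state=0).fit(X, duration_s)
distance_model = RandomForestRegressor(random_state=0).fit(X, distance_cm)

incoming = [[4, 15, 1]]  # a new audience context
print(duration_model.predict(incoming), distance_model.predict(incoming))
```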

    Understanding 3D mid-air hand gestures with interactive surfaces and displays: a systematic literature review

    3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures exist for interacting with digital surfaces and displays. There is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand which gestures exist, we conducted the first comprehensive systematic literature review on mid-air hand gestures, following existing research methods. The review identified 65 papers in which mid-air hand gestures supported tasks for selection, navigation, and manipulation. We also classified the gestures according to a gesture classification scheme and identified how these gestures have been empirically evaluated. The results of the review provide a richer understanding of which mid-air hand gestures have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.

    Viewing versus Experiencing in Adopting Somatosensory Technology for Smart Applications

    Emerging somatosensory technology offers unprecedented opportunities for researchers and industrial practitioners to design touchless smart home systems. However, existing touchless smart home systems often fail to attract a satisfying level of acceptance among homeowners. The experience users have with a touchless system is key to making somatosensory technology a pervasive home computing application, yet little research has been conducted to assess the influence of direct and indirect experience on users’ behavioral intention to use somatosensory technology. To address this research gap, this paper sets up an experimental design to investigate the influence of direct and indirect experience on user technology acceptance. Using an in-house developed touchless system, two experimental studies (i.e., video observation versus product trial) were conducted with sixty-two participants to investigate whether user experience has an impact on the adoption decision. Our findings indicate that direct experience affects a user’s acceptance of somatosensory technology. We found a significant difference in the relationship between perceived complexity and usage intention: perceived complexity was a significant predictor of an individual’s behavioral intention to use the touchless system after video observation, while its relationship to usage intention was insignificant after the user had direct experience with the touchless system. Our study reveals an important implication for somatosensory technology marketers: product trial (direct experience) engenders more reliable inferences than exposure to a video demonstration (indirect experience). Based on this, companies should devise marketing programmes involving direct experience (e.g., product trials and showroom visits) to promote new somatosensory-enabled smart home systems. The results of the study also demonstrate that user experience in the research design may influence the results of Technology Acceptance Model (TAM) studies. Available at: https://aisel.aisnet.org/pajais/vol6/iss3/2
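    As a rough illustration of the reported moderation effect, the sketch below (not the study's analysis code) assumes statsmodels and hypothetical Likert-scale survey columns; it fits the same simple regression of usage intention on perceived complexity separately for the video and trial conditions.

```python
# Minimal sketch (not the study's analysis): per-condition regression of
# behavioral intention on perceived complexity. Column names and values
# are hypothetical 1-7 Likert responses.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "condition":  ["video"] * 4 + ["trial"] * 4,
    "complexity": [5, 6, 3, 4, 5, 6, 3, 4],
    "intention":  [2, 1, 5, 4, 5, 4, 6, 5],
})

# A significant complexity coefficient in the video group but not in the
# trial group would mirror the pattern reported in the abstract.
for cond, group in df.groupby("condition"):
    model = smf.ols("intention ~ complexity", data=group).fit()
    print(cond, model.params["complexity"], model.pvalues["complexity"])
```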

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions. Open Access.
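    To make the interaction principle concrete, the sketch below (not the thesis implementation) shows a dwell-time rule that maps a gaze fixation inside an on-screen instrument region to a delivery request for a robotic scrub nurse; the region names, threshold, and coordinate convention are hypothetical.

```python
# Minimal sketch (not the thesis implementation) of gaze-contingent instrument
# selection: a fixation held inside an instrument region for long enough is
# interpreted as a delivery request. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

REGIONS = [Region("scalpel", 0.0, 0.0, 0.3, 0.3),
           Region("forceps", 0.7, 0.0, 1.0, 0.3)]
DWELL_THRESHOLD_S = 1.0  # fixation duration required to confirm a selection

def select_instrument(gaze_x: float, gaze_y: float, dwell_s: float):
    """Return the instrument to deliver, or None if no selection is confirmed."""
    if dwell_s < DWELL_THRESHOLD_S:
        return None
    for region in REGIONS:
        if region.contains(gaze_x, gaze_y):
            return region.name
    return None

print(select_instrument(0.1, 0.2, dwell_s=1.4))  # -> "scalpel"
```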

    A Model-Based Approach for Gesture Interfaces

    The description of a gesture requires temporal analysis of the values generated by input sensors, which does not fit well with the observer pattern traditionally used by frameworks to handle user input. The current solution is to embed particular gesture-based interactions into frameworks that notify only when a gesture has been detected completely. This approach suffers from a lack of flexibility unless the programmer performs explicit temporal analysis of raw sensor data. This thesis proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each one notifies the change of a feature value. A complex gesture is defined by composing other sub-gestures using a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or of any sub-component, addressing the problem of granularity in event notification. The meta-model can be instantiated for different gesture recognition supports, and its definition has been validated through a proof-of-concept library. Sample applications have been developed for supporting multi-touch gestures on iOS and full-body gestures with Microsoft Kinect. In addition to solving the event granularity problem, this thesis discusses how to separate the definition of a gesture from the user interface behaviour using the proposed compositional approach. The gesture description meta-model has been integrated into MARIA, a model-based user interface description language, extending it with the description of full-body gesture interfaces.
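    A small sketch can make the compositional idea concrete. The code below is not the thesis' Petri Net meta-model; it assumes a much-simplified event model with illustrative names, and only shows how a complex gesture could be composed from basic traits with a sequence operator while allowing behaviour to be attached to sub-gestures as well as to the whole gesture.

```python
# Minimal sketch (not the thesis' meta-model): basic traits composed into a
# complex gesture, with callbacks attachable at any level of the composition.
class Trait:
    """Building block that fires when a single feature change is observed."""
    def __init__(self, feature):
        self.feature = feature
        self.callbacks = []

    def on_recognised(self, cb):
        self.callbacks.append(cb)
        return self

    def feed(self, event):
        matched = event == self.feature
        if matched:
            for cb in self.callbacks:
                cb(self.feature)
        return matched

class Sequence:
    """Composition operator: sub-gestures must be recognised in order."""
    def __init__(self, *parts):
        self.parts, self.index, self.callbacks = list(parts), 0, []

    def on_recognised(self, cb):
        self.callbacks.append(cb)
        return self

    def feed(self, event):
        if self.parts[self.index].feed(event):
            self.index += 1
            if self.index == len(self.parts):  # whole gesture recognised
                for cb in self.callbacks:
                    cb("sequence")
                self.index = 0
                return True
        return False

# A "swipe" defined as touch-down, then move-right, then touch-up; behaviour is
# bound both to a sub-gesture (print) and to the composite gesture.
swipe = Sequence(Trait("down"), Trait("move_right").on_recognised(print), Trait("up"))
swipe.on_recognised(lambda g: print("swipe recognised"))
for e in ["down", "move_right", "up"]:
    swipe.feed(e)
```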

    Abstraction, Visualization, and Evolution of Process Models

    The increasing adoption of process orientation in companies and organizations has resulted in large process model collections. Each process model in such a collection may comprise dozens or hundreds of elements and captures various perspectives of a business process, i.e., the organizational, functional, control, resource, or data perspective. Domain experts with only limited process modeling knowledge, however, can hardly comprehend such large and complex process models. They therefore demand a customized (i.e., personalized) view on business processes that enables them to optimize and evolve process models effectively. This thesis contributes the proView framework for systematically creating and updating process views (i.e., abstractions) on process models and business processes respectively. More precisely, process views abstract large process models by hiding or combining process information. As a result, they provide an abstracted, but personalized, representation of process information to domain experts. In particular, updates of a process view are supported and then propagated to the related process model as well as to associated process views. Thereby, up-to-dateness and consistency of all process views defined on a process model can always be ensured. Finally, proView preserves the behaviour and correctness of a process model. Process abstractions realized by views alone are still not sufficient to assist domain experts in comprehending and evolving process models. Thus, additional process visualizations are introduced that provide text-based, form-based, and hierarchical representations of process models. These process visualizations allow for view-based process abstractions and updates as well. Finally, process interaction concepts are introduced that enable domain experts to create and evolve process models on touch-enabled devices. This facilitates the documentation of process models in workshops or while interviewing process participants at their workplace. Altogether, proView enables domain experts to interact with large and complex process models as well as to evolve them over time, based on process model abstractions, additional process visualizations, and process interaction concepts. The framework is implemented in a proof-of-concept prototype and validated through experiments and case studies.
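    The view mechanism can be illustrated with a deliberately simplified sketch. The code below is not the proView implementation: the process model is reduced to an ordered activity list and the operation names are hypothetical; it only shows how a view can hide activities and propagate an update back to the underlying model.

```python
# Minimal sketch (not proView) of view-based process abstraction: a view hides
# selected activities, and an update applied through the view is propagated to
# the underlying process model so all views stay consistent.
class ProcessModel:
    def __init__(self, activities):
        self.activities = list(activities)  # simplified: an ordered activity list

    def insert_after(self, anchor, activity):
        self.activities.insert(self.activities.index(anchor) + 1, activity)

class ProcessView:
    def __init__(self, model, hidden):
        self.model, self.hidden = model, set(hidden)

    def activities(self):
        # Abstraction: the domain expert only sees the non-hidden activities.
        return [a for a in self.model.activities if a not in self.hidden]

    def insert_after(self, anchor, activity):
        # View update: propagate the change to the underlying process model.
        self.model.insert_after(anchor, activity)

model = ProcessModel(["receive order", "check credit", "ship goods", "send invoice"])
view = ProcessView(model, hidden={"check credit"})
print(view.activities())                         # abstracted, personalized view
view.insert_after("receive order", "confirm order")
print(model.activities)                          # update propagated to the model
```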

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical and industrial settings, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Visualization and Interaction Technologies in Serious and Exergames for Cognitive Assessment and Training: A Survey on Available Solutions and Their Validation

    Exergames and serious games, based on standard personal computers, mobile devices and gaming consoles or on novel immersive Virtual and Augmented Reality techniques, have become popular in the last few years and are now applied in various research fields, including the cognitive assessment and training of heterogeneous target populations. Moreover, the adoption of Web-based solutions together with the integration of Artificial Intelligence and Machine Learning algorithms could bring countless advantages for both patients and clinical personnel, such as allowing the early detection of some pathological conditions, improving the efficacy of and adherence to rehabilitation processes through the personalisation of training sessions, and optimizing the allocation of resources by the healthcare system. The current work proposes a systematic survey of existing solutions in the field of cognitive assessment and training. We evaluate the visualization and interaction technologies commonly adopted and the measures taken to fulfil the needs of the pathological target populations. Moreover, we analyze how implemented solutions are validated, i.e., the chosen experimental designs, data collection and analysis. Finally, we consider the availability of the applications and raw data to the large community of researchers and medical professionals, and the actual application of the proposed solutions in standard clinical practice. Despite the potential of these technologies, research is still at an early stage. Despite the recent release of accessible immersive virtual reality headsets and the increasing interest in vision-based techniques for tracking body and hand movements, many studies still rely on non-immersive virtual reality (67.2%), mainly mobile devices and personal computers, and standard gaming tools for interaction (41.5%). Finally, we highlight that although interest in this field within the research community is growing, the sharing of datasets (10.6%) and implemented applications (3.8%) should be promoted, and the number of healthcare structures that have successfully introduced the new technological approaches in the treatment of their patients remains limited (10.2%).