9 research outputs found

    Investigating Clutching Interactions for Touchless Medical Imaging Systems

    Touchless input could transform clinical activity by allowing health professionals direct control over medical imaging systems in a sterile manner. Currently, users face two issues: they cannot directly manipulate imaging in aseptic environments, and they must touch shared surfaces in other hospital areas. Unintended input is a key challenge for touchless interaction and could be especially disruptive in medical contexts. We evaluated four clutching techniques with 34 health professionals, measuring interaction performance and interviewing them to obtain insight into their views on clutching and on touchless control of medical imaging. As well as exploring the performance of the different clutching techniques, our analysis revealed an appetite for reliable touchless interfaces and a strong desire to reduce shared-surface contact, and suggested potential improvements such as combining authentication with touchless control. Our findings can inform the development of novel touchless medical systems and identify challenges for future research.
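    The abstract does not name the four clutching techniques evaluated, but the core idea of a clutch, an explicit mechanism for engaging and disengaging touchless control so that incidental hand movement is not interpreted as input, can be sketched. Below is a minimal, hypothetical Python sketch of one common variant, a dwell-based clutch; the class name, thresholds, and tracking interface are illustrative assumptions, not the paper's method.

```python
from enum import Enum, auto

class ClutchState(Enum):
    DISENGAGED = auto()  # hand movement is ignored by the imaging system
    ENGAGED = auto()     # hand movement manipulates the image

class DwellClutch:
    """Illustrative dwell-based clutch: holding the hand still for a set
    duration toggles touchless control on or off, so incidental movement
    between deliberate interactions is not interpreted as input."""

    def __init__(self, dwell_seconds=1.0, still_threshold=0.02):
        self.dwell_seconds = dwell_seconds      # hold-still time needed to toggle (s)
        self.still_threshold = still_threshold  # max movement counted as "still" (m)
        self.state = ClutchState.DISENGAGED
        self._still_since = None
        self._armed = True   # require movement before the dwell can fire again
        self._last_pos = None

    def update(self, hand_pos, timestamp):
        """Feed one tracked hand position (x, y, z in metres); returns True while engaged."""
        if self._last_pos is not None:
            moved = sum((a - b) ** 2 for a, b in zip(hand_pos, self._last_pos)) ** 0.5
            if moved >= self.still_threshold:
                # Hand is moving: reset the dwell timer and re-arm the clutch.
                self._still_since = None
                self._armed = True
            elif self._armed:
                if self._still_since is None:
                    self._still_since = timestamp
                elif timestamp - self._still_since >= self.dwell_seconds:
                    # Dwell completed: toggle engagement and disarm until the hand moves.
                    self.state = (ClutchState.ENGAGED if self.state is ClutchState.DISENGAGED
                                  else ClutchState.DISENGAGED)
                    self._still_since = None
                    self._armed = False
        self._last_pos = hand_pos
        return self.state is ClutchState.ENGAGED
```

    Other clutch designs gate input on a posture, a voice command, or a second hand instead of dwell; the techniques the study actually compared may differ from this sketch.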

    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming smarter: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, no interaction paradigm for smart environments supports these kinds of interactions. In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction solves many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; and the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance among average users.
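    To make the selection step concrete: room-based interaction, as described above, maps each digital artifact to a real-world proxy object and selects whichever proxy the user points at. A minimal, hypothetical Python sketch of that mapping follows; the proxy registry, coordinates, and angular tolerance are illustrative assumptions, not the dissertation's implementation.

```python
import math

# Hypothetical proxy-object registry: name -> 3D position in room coordinates (metres).
PROXIES = {
    "ceiling_lamp": (2.0, 1.5, 2.4),
    "tv": (0.0, 3.0, 1.2),
    "speaker": (-1.5, 2.0, 1.0),
}

def select_proxy(origin, direction, proxies=PROXIES, max_angle_deg=15.0):
    """Return the proxy object whose direction from the pointing origin is
    closest (by angle) to the pointing ray, or None if nothing lies within
    the angular tolerance."""
    def norm(v):
        mag = math.sqrt(sum(c * c for c in v))
        return tuple(c / mag for c in v)

    d = norm(direction)
    best_name, best_angle = None, max_angle_deg
    for name, pos in proxies.items():
        # Unit vector from the pointing origin (e.g. the shoulder) to the object.
        to_obj = norm(tuple(p - o for p, o in zip(pos, origin)))
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d, to_obj))))
        angle = math.degrees(math.acos(cos_a))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name

# Example: shoulder at (0, 0, 1.4), arm pointing roughly at the TV -> "tv".
print(select_proxy(origin=(0.0, 0.0, 1.4), direction=(0.0, 1.0, -0.05)))
```

    Choosing the proxy by angular distance to the pointing ray, rather than by exact ray intersection, tolerates the imprecision of mid-air full-arm pointing; the tolerance value here is an assumption for illustration.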