44 research outputs found

    From head to toe: body movement for human-computer interaction

    Our bodies are the medium through which we experience the world around us, so human-computer interaction can benefit greatly from the richness of body movements and postures as an input modality. In recent years, the widespread availability of inertial measurement units and depth sensors has led to the development of a plethora of applications for the body in human-computer interaction. However, the main focus of these works has been on using the upper body for explicit input. This thesis investigates the research space of full-body human-computer interaction through three propositions.

    The first proposition is that there is more to be inferred from users' natural movements and postures, such as the quality of activities and psychological states. We develop this proposition in two domains. First, we explore how to support users in performing weight-lifting activities. We propose a system that classifies different ways of performing the same activity; an object-oriented model-based framework for formally specifying activities; and a system that automatically extracts an activity model by demonstration. Second, we explore how to automatically capture nonverbal cues for affective computing. We develop a system that annotates motion and gaze data according to the Body Action and Posture coding system. We show that quality analysis can add another layer of information to activity recognition, and that systems that support the communication of quality information should build on how we implicitly communicate movement through nonverbal behaviour. Further, we argue that, by working at a higher level of abstraction, affect recognition systems can more directly translate findings from other areas into their algorithms, and can also contribute new knowledge back to those fields.

    The second proposition is that the lower limbs can provide an effective means of interacting with computers beyond assistive technology. To address the problem of the dispersed literature on the topic, we conducted a comprehensive survey on the lower body in HCI, through the lenses of users, systems, and interactions. To address the lack of a fundamental understanding of foot-based interactions, we conducted a series of studies that quantitatively characterises several aspects of foot-based interaction, including Fitts's Law performance models, the effects of movement direction, foot dominance, and visual feedback, and the overhead incurred by using the feet together with the hands. To enable these studies, we developed a foot tracker based on a Kinect mounted under the desk. We show that the lower body can be used as a valuable complementary modality for computer input.

    Our third proposition is that by treating body movements as multiple modalities, rather than a single one, we can enable novel user experiences. We develop this proposition in the domain of 3D user interfaces (3DUI), as it requires input with multiple degrees of freedom and offers a rich set of complex tasks. We propose an approach for tracking the whole body up close by splitting the sensing of different body parts across multiple sensors. Our setup allows tracking of gaze, head movements, mid-air gestures, multi-touch gestures, and foot movements. We investigate specific applications of multimodal combinations in the domain of 3DUI: how gaze and mid-air gestures can be combined to improve selection and manipulation tasks; how the feet can support the canonical 3DUI tasks; and how a multimodal sensing platform can inspire new 3D game mechanics. We show that the combination of multiple modalities can lead to enhanced task performance; that offloading certain tasks to alternative modalities not only frees the hands but also allows simultaneous control of multiple degrees of freedom; and that by sensing different modalities separately, we achieve more detailed and precise full-body tracking.
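    A note on the Fitts's Law performance models mentioned above: Fitts's Law predicts movement time as MT = a + b * log2(D/W + 1), where D is the target distance and W the target width. The sketch below shows how such a model is typically fitted; the trial numbers are illustrative placeholders, not the study's data, and the fitting code is not the thesis's own.

        import numpy as np

        def index_of_difficulty(distance, width):
            # Shannon formulation of Fitts's index of difficulty, in bits.
            return np.log2(distance / width + 1.0)

        # Hypothetical foot-pointing trials: target distance and width in mm,
        # observed movement time in ms (illustrative values only).
        D  = np.array([100.0, 200.0, 400.0, 100.0, 200.0, 400.0])
        W  = np.array([ 20.0,  20.0,  20.0,  40.0,  40.0,  40.0])
        MT = np.array([420.0, 560.0, 700.0, 350.0, 470.0, 610.0])

        ID = index_of_difficulty(D, W)

        # Ordinary least squares fit of MT = a + b * ID.
        b, a = np.polyfit(ID, MT, 1)
        print(f"a = {a:.1f} ms, b = {b:.1f} ms/bit, "
              f"throughput = {1000.0 / b:.2f} bit/s")

    The slope b quantifies how quickly movement time grows with task difficulty, so its inverse (in bits per second) is a common throughput measure for comparing input modalities such as foot versus hand pointing.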

    Augmented Reality and Intraoperative C-arm Cone-beam Computed Tomography for Image-guided Robotic Surgery

    Minimally invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect the perioperative setup, which can be highly deformed, or intraoperative deformation. Additionally, in current clinical practice, establishing the correspondence of preoperative plans to the surgical scene is a mental exercise; thus, the accuracy of this practice is highly dependent on the surgeon's experience and therefore subject to inconsistencies. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only uses augmented reality to fuse virtual medical information, but also provides tool localization and other dynamically updated intraoperative information, in order to present enhanced depth feedback and information to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality. The system's performance is evaluated using phantom and preclinical in vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance, enabling additional extensions in robotic surgery.
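    The registration step described above is not detailed in this abstract. As a rough, hypothetical sketch of the general approach, the following uses SimpleITK to compute a rigid, mutual-information-based alignment of a preoperative CT to an intraoperative CBCT, the kind of initialization that typically precedes a deformable stage; file names are placeholders, and the dissertation's actual pipeline may differ substantially.

        import SimpleITK as sitk

        # Placeholder file names; any preoperative CT and intraoperative
        # CBCT volumes readable by SimpleITK would work here.
        fixed  = sitk.ReadImage("intraop_cbct.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)

        # Rough initial alignment of the two volume centers.
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)

        reg = sitk.ImageRegistrationMethod()
        # Mattes mutual information copes with CT-to-CBCT intensity mismatch.
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.1)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                          numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()
        reg.SetInitialTransform(initial, inPlace=False)

        rigid = reg.Execute(fixed, moving)

        # Resample the preoperative volume into the CBCT frame; a deformable
        # (e.g., B-spline) stage would refine this result before the
        # registered plan is overlaid onto the endoscopic view.
        aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)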

    Virtual and Mixed Reality Support for Activities of Daily Living

    Rehabilitation and training are extremely important processes that help people who have suffered some form of trauma to regain their ability to live independently and successfully complete activities of daily living. Virtual reality (VR) and mixed reality (MR) have been used in rehabilitation and training, with examples in a range of areas such as physical and cognitive rehabilitation, and medical training. However, previous research has mainly used non-immersive VR, such as video games played on a computer monitor or television. Immersive VR head-mounted displays (HMDs) were first developed in 1965, but the devices were typically large, bulky, and expensive. In 2016, the release of low-cost VR HMDs allowed for wider adoption of VR technology. This thesis investigates the impact of these devices in supporting activities of daily living through three novel applications: training powered-wheelchair driving skills in VR; training those same skills in MR; and using VR to support the cognitive rehabilitation of stroke patients. Results from the acceptability study for VR in cognitive rehabilitation showed that patients would be likely to accept VR as a method of rehabilitation, although factors such as visual issues need to be taken into consideration. The validation study for the Wheelchair-VR project showed promising results in terms of user improvement after the VR training session, but the majority of the users experienced symptoms of cybersickness. Wheelchair-MR did not show statistically significant improvements, but did show a mean improvement compared to the control group, and the effects of cybersickness were greatly reduced compared to VR. We conclude that VR and MR can be used in conjunction with modern games engines to develop virtual environments that can be adapted to accelerate the rehabilitation and training of patients coping with different aspects of daily life.

    Coping with the inheritance of COVID-19: the role of new interactive technologies to enhance user experience in different contexts of use

    The COVID-19 pandemic has upset people's habits and various sectors of society, including training, entertainment, and retail. These sectors have been forced to adapt to abnormal situations such as social distancing, remote work, and online entertainment. The pandemic has significantly transformed the training field, leading to the closure of many in-person instruction centers and a shift toward online education courses, which can be less effective. In addition, the entertainment industry has been heavily transformed by social distancing, resulting in the cancellation of many live events and the closure of several cinemas. This has increased demand for online entertainment options, such as streaming services and virtual events. Finally, the restrictions imposed by the COVID-19 pandemic substantially impacted physical stores and fairs, suspending exhibitions for more than two years. This has further driven consumers to rely on e-commerce to fulfill their purchasing needs, and companies to increasingly take advantage of new technologies such as augmented reality. In this suddenly disrupted scenario, new technologies have the potential to fill the gap generated by the pandemic, functioning as an interactive bridge to connect people.

    This Ph.D. thesis explored the potential of interactive technologies in mitigating the challenges posed by the COVID-19 pandemic in various contexts of use in the above-mentioned areas. Specifically, three lines of research were investigated by conducting different studies using a mixed-methods approach in the Human-Computer Interaction field. The first research line focused on immersive virtual reality training, with a particular interest in flood emergencies, a growing phenomenon. The goal was to implement engaging and efficient training for citizens who live near rivers through a human-centric design approach. The second line of research explored innovative ways to improve social interaction and collaboration in the entertainment sector, highlighting guidelines for the design of shared streaming experiences. In particular, three different communication modalities were studied during group viewing of an interactive film on a streaming platform. Finally, the third research line focused on the retail sector. On the one hand, the focus was on understanding which aspects of the 3D web and AR technology are helpful for supporting small businesses and trade fairs. On the other hand, the focus was on investigating how to support consumers during an AR shopping experience when interacting with 3D virtual products of different sizes.

    Overall, this project provides suggestions and guidelines for designing systems that can both connect people at a distance and offer new hybrid worlds. In addition, this project expands the state of the art in interactive technologies and offers results that generalize beyond the crisis created by COVID-19. These technologies, now increasingly integrated into everyday life, can be a tool for empowerment and resilience, improving people's lives.

    Robotic Assisted Fracture Surgery
