
    VRContour: Bringing Contour Delineations of Medical Structures Into Virtual Reality

    Contouring is an indispensable step in radiotherapy (RT) treatment planning. However, today's contouring software works only with 2D displays, which is less intuitive and imposes high task loads. Virtual Reality (VR) has shown great potential across many specialties of healthcare and health-sciences education thanks to the intuitive, natural interactions it affords in immersive spaces. VR-based radiation oncology integration has also been advocated as a target healthcare application, allowing providers to interact directly with 3D medical structures. We present VRContour and investigate how to effectively bring contouring for radiation oncology into VR. Through an autobiographical iterative design, we defined three design spaces focused on contouring in VR with the support of a tracked tablet and VR stylus, and investigated dimensionality for information consumption and input (either 2D or 2D + 3D). Through a within-subject study (n = 8), we found that visualizations of 3D medical structures significantly increase precision and reduce mental load, frustration, and overall contouring effort. Participants also agreed on the benefits of using such metaphors for learning purposes.

    Comment: C. Chen, M. Yarmand, V. Singh, M.V. Sherer, J.D. Murphy, Y. Zhang and N. Weibel, "VRContour: Bringing Contour Delineations of Medical Structures Into Virtual Reality", 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2022, pp. 1-10, doi: 10.1109/ISMAR55827.2022.0002

    Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers.

    Virtual Reality (VR) has the potential to transform knowledge work. One advantage of VR knowledge work is that it allows extending 2D displays into the third dimension, enabling new operations such as selecting overlapping objects or displaying additional layers of information. On the other hand, mobile knowledge workers often work on established mobile devices, such as tablets, which limit interaction to a small input space. This challenge of a constrained input space is intensified when VR knowledge work takes place in cramped environments, such as airplanes and touchdown spaces. In this paper, we investigate the feasibility of joint interaction between an immersive VR head-mounted display and a tablet in the context of knowledge work. Specifically, we 1) design, implement, and study how to interact with information that reaches beyond a single physical touchscreen in VR; 2) design and evaluate a set of interaction concepts; and 3) build example applications and gather user feedback on those applications.

    Comment: 10 pages, 8 figures, ISMAR 202

    ALT-C 2010 - Conference Introduction and Abstracts


    Art and Medicine: A Collaborative Project Between Virginia Commonwealth University in Qatar and Weill Cornell Medicine in Qatar

    Four faculty researchers, two from Virginia Commonwealth University in Qatar and two from Weill Cornell Medicine in Qatar, developed a one-semester, workshop-based course in Qatar exploring the connections between art and medicine in a contemporary context. Students (6 from art, 6 from medicine) were enrolled in the course. The course included presentations by clinicians, medical engineers, artists, computing engineers, an art historian, a graphic designer, a painter, and other experts from the fields of art, design, and medicine. To measure the student experience of interdisciplinarity, the faculty researchers employed a mixed-methods approach involving psychometric tests and observational ethnography. Data instruments included pre- and post-course semi-structured audio interviews, pre-test / post-test psychometric instruments (the Budner Scale and the Torrance Tests of Creativity), observational field notes, self-reflective blogging, and videography. This book describes the course and the students' experience of it. It also contains images of the interdisciplinary work they created for a culminating class exhibition. Finally, the book provides insight into how different fields in a Middle Eastern context can share critical/analytical thinking tools to refine their own professional practices.

    HUMAN CENTERED DESIGN APPLIED TO PERCEPTUAL PARADIGMS

    This thesis gives three examples of projects that apply knowledge from areas such as human-centered design, computer science, and psychology to study sensation and perception. All three projects were created to gather information on how humans interact with their surrounding environment and the world. The first area of discovery examined how humans interact within their perceptual and personal space through an interactive table. The second project explored the neural mechanisms behind haptic hallucinations by creating a device that can produce the feeling of bugs crawling on or below the surface of the skin. The final study is an experiment on tactile spatial acuity, using laser-cut stimuli and recordings of exploratory movements.

    Assistive technology design and development for acceptable robotics companions for ageing years

    © 2013 Farshid Amirabdollahian et al., licensee Versita Sp. z o. o. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs license, which means that the text may be used for non-commercial purposes, provided credit is given to the author.

    A new stream of research and development responds to changes in life expectancy across the world. It includes technologies which enhance the well-being of individuals, specifically older people. The ACCOMPANY project focuses on home companion technologies and the issues surrounding technology development for assistive purposes. The project responds to some overlooked aspects of technology design, divided into areas such as empathic and social human-robot interaction, robot learning and memory visualisation, and the monitoring of a person's activities at home. To bring these aspects together, a dedicated task ensures the technological integration of these multiple approaches on an existing robotic platform, Care-O-Bot®3, in the context of a smart-home environment utilising a multitude of sensor arrays. Formative and summative evaluation cycles are then used to assess the emerging prototype, identifying acceptable behaviours and roles for the robot (for example, a role as a butler or a trainer) while also comparing user requirements to achieved progress. In a novel approach, the project considers ethical concerns: by highlighting principles such as autonomy, independence, enablement, safety, and privacy, it provides a discussion medium in which user views on these principles, and the existing tension between some of them (for example, the tension between privacy and autonomy over safety), can be captured and considered in design cycles throughout project development.

    Peer reviewed

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations for use in VEs is usually a tedious process that requires either manual modelling of environments or specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the same accessibility as 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type affects reasoning about events within videos in panoramic context.

    These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, a telecommunication experiment, compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object-placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localisation tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for the spatio-temporal exploration of remote locations. Our approach presents a richer visual representation of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism, and remote assistance.

    An interactive interface for nursing robots.

    Physical human-robot interaction (pHRI) is inevitable for a human user working with assistive robots. There are various aspects to pHRI, such as the choice of interface, the type of control scheme implemented, and the modes of interaction. The research presented in this thesis concentrates on a health-care assistive robot called the Adaptive Robot Nursing Assistant (ARNA). An assistive robot in a health-care environment has to perform routine tasks while remaining aware of the surrounding environment. Teleoperation-based interaction would be tedious for some patients, as it requires a high level of concentration, can cause cognitive fatigue, and imposes a learning curve before the robot can be operated efficiently. This thesis develops a proposed Human-Machine Interface (HMI) framework that integrates a decision-making module, an interaction module, and a tablet-interface module. The framework implements traded control, in which the robot makes the decisions about planning and executing a task while the user only has to specify the task through a tablet interface. According to the preliminary experiments conducted as part of this thesis, the traded-control approach allows a novice user to operate the robot as efficiently as an expert user. Past research has shown that, during a conversation with a speech interface, users feel disengaged if the answers they receive fall outside the context of the conversation. This thesis therefore explores ways of implementing a speech interface able to reply to arbitrary conversational queries from the user. A speech interface was developed by building a semantic space from the Wikipedia database using Latent Semantic Analysis (LSA).

    This gave the speech interface a wide knowledge base and let it maintain a conversation in the context intended by the user. The interface was developed as a web service and was deployed on two different robots to demonstrate its portability and ease of integration with other robots. In the work presented, a tablet application was developed that combines the speech interface with an on-screen button interface to execute tasks through the ARNA robot. This tablet application can access video feeds and sensor data from the robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during patient-sitting sessions. This thesis presents the software and hardware framework that enable a patient-sitter HMI, together with experimental results from a small number of users demonstrating that the concept is sound and scalable.
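    The LSA pipeline behind such a semantic space can be sketched in a few lines. The following is a minimal illustration using scikit-learn rather than the thesis's actual implementation; the toy corpus and query are invented stand-ins for the Wikipedia database it describes.

    ```python
    # LSA sketch: TF-IDF term-document matrix -> truncated SVD -> cosine
    # similarity between a query and documents in the reduced "semantic space".
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical corpus standing in for the Wikipedia dump.
    documents = [
        "The robot fetches water and medicine for the patient.",
        "Nurses monitor patient vital signs during a hospital stay.",
        "The weather today is sunny with a light breeze.",
    ]

    # 1) Build a TF-IDF weighted term-document matrix.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)

    # 2) Truncated SVD projects documents into a low-rank semantic space
    #    (this TF-IDF + SVD combination is the classic LSA construction).
    svd = TruncatedSVD(n_components=2, random_state=0)
    doc_vectors = svd.fit_transform(X)

    # 3) Project a user query into the same space and pick the closest
    #    document as the reply candidate.
    query_vec = svd.transform(vectorizer.transform(["Can the robot bring my medicine?"]))
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    best = int(scores.argmax())
    print(documents[best])
    ```

    A real deployment would index vastly more documents and wrap steps 2-3 behind a web-service endpoint, as the thesis describes, but the retrieval logic is the same.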