
    Exploring individual user differences in the 2D/3D interaction with medical image data

    User-centered design is often performed without regard to individual user differences. In this paper, we report results of an empirical study aimed to evaluate whether computer experience and demographic user characteristics would have an effect on the way people interact with visualized medical data in a 3D virtual environment using 2D and 3D input devices. We analyzed the interaction through performance data, questionnaires and observations. The results suggest that differences in gender, age and game experience have an effect on people's behavior and task performance, as well as on subjective user preferences.

    A new mini-navigation tool allows accurate component placement during anterior total hip arthroplasty.

    Introduction: Computer-assisted navigation systems have been explored in total hip arthroplasty (THA) to improve component positioning. While these systems traditionally rely on anterior pelvic plane registration, variances in soft tissue thickness overlying anatomical landmarks can lead to registration error, and the supine coronal plane has instead been proposed. The purpose of this study was to evaluate the accuracy of a novel navigation tool, using registration of the anterior pelvic plane or supine coronal plane during simulated anterior THA. Methods: Benchtop phantoms and target measurement values commonly seen in surgery were used for analysis. Measurements for acetabular component anteversion and inclination, and changes in leg length and offset, were recorded by the navigation tool and compared with the known target values of the simulation using Pearson's correlation. Results: The device accurately measured cup position and leg length to within 1° and 1 mm of the known target values, respectively. Across all simulations, there was a strong, positive relationship between values obtained by the device and the known target values. Conclusion: The preliminary findings of this study suggest that the novel navigation tool tested is a potentially viable tool to improve the accuracy of component placement during THA using the anterior approach.
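    The agreement statistic named in this abstract (Pearson's correlation between device readings and known simulation targets) can be sketched as follows. This is a minimal illustration, not the study's analysis code, and the angle values below are invented for demonstration only.

    ```python
    # Hedged sketch: quantifying agreement between a navigation tool's readings
    # and known benchtop target values with Pearson's correlation coefficient.
    # The numbers are illustrative, not data from the study.
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical target vs. measured cup inclination angles (degrees):
    targets  = [35.0, 40.0, 45.0, 50.0, 55.0]
    measured = [34.6, 40.3, 45.1, 49.8, 55.4]
    r = pearson_r(targets, measured)
    print(round(r, 3))  # close to 1.0 when readings track the targets
    ```

    A value of r near 1.0 corresponds to the "strong, positive relationship" the abstract reports; in practice one would also check absolute error (here, each reading is within 1° of its target), since correlation alone does not capture bias.
    
    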

    Computer aided inspection procedures to support smart manufacturing of injection moulded components

    This work presents Reverse Engineering and Computer Aided technologies to improve the inspection of injection moulded electro-mechanical parts. Through a strong integration and automation of these methods, tolerance analysis, acquisition tool-path optimization and data management are performed. The core of the procedure concerns the automation of the data measure, originally developed through voxel-based segmentation. This paper discusses the overall framework and its integration made according to Smart Manufacturing requirements. The experimental set-up, now in operative conditions at ABB SACE, is composed of (a) a laser scanner installed on a CMM able to measure components with lengths in the range of 5–250 mm, (b) a tool-path optimization procedure and (c) a data management module, the latter two developed as CAD-based applications.

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.

    Integration of LIDAR and IFSAR for mapping

    LiDAR and IfSAR data are now widely used for a number of applications, particularly those needing a digital elevation model (DEM). The data are often complementary to other data such as aerial imagery and high-resolution satellite data. This paper will review the current data sources and products, and then look at the ways in which the data can be integrated for particular applications. The main platforms for LiDAR are either helicopters or fixed-wing aircraft, often operating at low altitudes; a digital camera is frequently included on the platform, and there is interest in using other sensors such as three-line cameras or hyperspectral scanners. IfSAR is used from satellite platforms or from aircraft; the latter are more compatible with LiDAR for integration. The paper will examine the advantages and disadvantages of LiDAR and IfSAR for DEM generation and discuss the issues which still need to be dealt with. Examples of applications from various sources will be given, particularly those involving the integration of different types of data, and future trends will be examined.

    EXPLORING THE ABILITY TO EMPLOY VIRTUAL 3D ENTITIES OUTDOORS AT RANGES BEYOND 20 METERS

    The Army is procuring the Integrated Visual Augmentation System (IVAS) to enable enhanced night vision, planning, and training capability. One known limitation of IVAS is its limited ability to portray virtual entities at far ranges outdoors, due to light washout and the challenges of accurate positioning and dynamic occlusion. The primary goal of this research was to evaluate fixed three-dimensional (3D) visualizations to support outdoor training for fire teams through squads, requiring target visualizations for 3D non-player characters or vehicles at ranges up to 300 m. Tools employed to achieve outdoor visualizations included GPS locational data for virtual entity placement, and sensors to adjust device light levels. This study was conducted with 20 military test subjects in three scenarios at the Naval Postgraduate School using a HoloLens II. Outdoor location considerations included shadows, background clutter, cars blocking the field of view, and the sun's positioning. Users provided feedback on identifying the type of object and the difficulty in finding the object. The results indicate GPS alone aided identification only for objects up to 100 m. Animation had a statistically insignificant effect on identification of objects. Employment of software to adjust the light levels of the virtual objects aided in identification of objects at 200 m. This research develops a clearer understanding of requirements to enable the employment of mixed reality in outdoor training.
    Lieutenant Colonel, United States Army
    Approved for public release. Distribution is unlimited.

    Ono: an open platform for social robotics

    In recent times, the focal point of research in robotics has shifted from industrial robots toward robots that interact with humans in an intuitive and safe manner. This evolution has resulted in the subfield of social robotics, which pertains to robots that function in a human environment and that can communicate with humans in an intuitive way, e.g. with facial expressions. Social robots have the potential to impact many different aspects of our lives, but one particularly promising application is the use of robots in therapy, such as the treatment of children with autism. Unfortunately, many of the existing social robots are neither suited for practical use in therapy nor for large-scale studies, mainly because they are expensive, one-of-a-kind robots that are hard to modify to suit a specific need. We created Ono, a social robotics platform, to tackle these issues. Ono is composed entirely of off-the-shelf components and cheap materials, and can be built at a local FabLab at a fraction of the cost of other robots. Ono is also entirely open source, and the modular design further encourages modification and reuse of parts of the platform.