2,458 research outputs found

    Medical 3D thermography system

    Get PDF
    Infrared (IR) thermography determines the surface temperature of an object or human body using a thermal IR camera. It is a contactless and completely non-invasive imaging technology. These properties make IR thermography a useful method of analysis, used in various industrial applications to detect, monitor, and predict irregularities in many fields, from engineering to medical and biological observation. This paper presents a conceptual model of Medical 3D Thermography that introduces standardised concepts for 3D thermogram creation, representation, and analysis, useful for a variety of medical applications. The creation of 3D thermograms is made possible by combining 3D scanning methods with thermal imaging. We describe the development of a 3D thermography system integrating passive thermal imaging with 3D geometrical data from an active 3D scanner. We outline the potential benefits of this system in medical applications. In particular, we emphasize its benefits for the preventive detection of breast cancer.
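    The core fusion step described above can be illustrated with a short sketch: each point produced by the 3D scanner is projected into the thermal camera's image plane so that a per-pixel temperature can be attached to it. This is a minimal sketch assuming a prior calibration has produced the thermal camera intrinsics K and the scanner-to-camera extrinsics (R, t); the function and variable names are illustrative, not taken from the paper.

    ```python
    # Minimal sketch: attach thermal readings to 3D scan points via a
    # calibrated pinhole projection. K, R, t are assumed to come from a
    # prior calibration step (illustrative names, not from the paper).
    import numpy as np

    def map_temperatures(points, thermal_image, K, R, t):
        """points: (N, 3) scanner-frame points; thermal_image: (H, W)
        per-pixel temperatures; K: (3, 3) intrinsics; R, t: scanner-to-
        camera rotation and translation. Returns (N,) temperatures,
        NaN where a point falls outside the thermal image."""
        h, w = thermal_image.shape
        cam = points @ R.T + t              # into thermal-camera frame
        pix = cam @ K.T                     # apply intrinsics
        u = pix[:, 0] / pix[:, 2]           # perspective divide
        v = pix[:, 1] / pix[:, 2]
        temps = np.full(len(points), np.nan)
        ok = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        temps[ok] = thermal_image[v[ok].astype(int), u[ok].astype(int)]
        return temps
    ```

    The resulting per-vertex temperatures can then be stored alongside the mesh geometry, which is what makes a standardised 3D thermogram representation possible.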

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Get PDF
    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence.

    We first explore approaches to teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need not wear any device, which minimizes intrusiveness and accommodates the eyes' natural focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance and similar domains, where teleoperation is compromised by the keyhole effect resulting from a limited field of view. The technical contribution of the proposed HRD system is a multi-device calibration involving the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, calibration accuracy must be held to the millimeter level.

    Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica with commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the desynchronization between the local and remote stations caused by latency in data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1; the predictive control algorithm can then be derived by optimizing a cost function.

    We then explore telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image-enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
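    The abstract describes the one-step-ahead predictor only at a high level, so the following is a hedged sketch of one plausible reading: the next operator command is linearly extrapolated from the two most recent samples and blended with the current command through a smoothing coefficient alpha in [0, 1]. The blending rule and all names are assumptions for illustration, not the dissertation's exact formulation.

    ```python
    # Hedged sketch of one-step-ahead latency compensation: blend the
    # current command with a linear extrapolation, weighted by a
    # smoothing coefficient alpha in [0, 1] (illustrative formulation).
    import numpy as np

    def predict_next(prev_cmd, curr_cmd, alpha):
        """alpha = 0 sends the raw current command; alpha = 1 fully
        trusts the linear trend between the last two samples."""
        extrapolated = curr_cmd + (curr_cmd - prev_cmd)  # linear trend
        return (1.0 - alpha) * curr_cmd + alpha * extrapolated

    # Example: predicting a 3-DOF position command one step ahead.
    prev = np.array([0.1, 0.0, 0.0])
    curr = np.array([0.2, 0.1, 0.0])
    print(predict_next(prev, curr, alpha=0.5))
    ```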

    deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch

    Get PDF
    We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole-hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools through capacitive sensing on top of the input surface. Finally, we motivate our device through a number of sample applications.
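    For reference, the standard three-step phase-shifting computation behind three-phase structured light scanning recovers a wrapped phase map from three images of the surface lit by sinusoidal patterns shifted by 120 degrees; deForm's exact pipeline may differ in its details. A minimal numpy sketch:

    ```python
    # Standard three-step phase shifting: I1, I2, I3 are grayscale
    # captures under sinusoidal patterns offset by -120, 0, +120 degrees.
    import numpy as np

    def wrapped_phase(I1, I2, I3):
        """Per-pixel wrapped phase in (-pi, pi]."""
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

    def modulation(I1, I2, I3):
        """Signal strength; low values flag unreliable pixels."""
        return np.sqrt(3.0 * (I1 - I3) ** 2
                       + (2.0 * I2 - I1 - I3) ** 2) / 3.0
    ```

    The wrapped phase still has to be unwrapped and triangulated against the projector geometry to yield the height map of the gel surface.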

    A comparative study of the sense of presence and anxiety in an invisible marker versus a marker Augmented Reality system for the treatment of phobia towards small animals

    Full text link
    Phobia towards small animals has been treated using in vivo exposure and virtual reality. Recently, augmented reality (AR) has also been presented as a suitable tool. The first AR system developed for this purpose used visible markers for tracking; the presence of visible markers warns the user of the appearance of animals. To avoid this warning, this paper presents a second version in which the markers are invisible. First, the technical characteristics of a prototype are described. Second, a comparative study of the sense of presence and anxiety in a non-phobic population using the visible marker-tracking system and the invisible marker-tracking system (IMARS) is presented. Twenty-four participants used the two systems and were asked to rate their anxiety level (from 0 to 10) at 8 different moments. Immediately after their experience, the participants were given the SUS questionnaire to assess their subjective sense of presence. The results indicate that the invisible marker-tracking system induces a similar or higher sense of presence than the visible marker-tracking system, and it also provokes a similar or higher level of anxiety at steps that are important for therapy. Moreover, 83.33% of the participants reported that they did not have the same sensations/surprise using the two systems, and they scored the advantage of using IMARS at 5.19 ± 2.25 (on a scale from 1 to 10). However, if only the group with higher fear levels is considered, 100% of the participants reported that they did not have the same sensations/surprise with the two systems, scoring the advantage of using IMARS at 6.38 ± 1.60 (on a scale from 1 to 10). © 2011 Elsevier Ltd. All rights reserved.
    Juan, M.; Joele, D. (2011). A comparative study of the sense of presence and anxiety in an invisible marker versus a marker Augmented Reality system for the treatment of phobia towards small animals. International Journal of Human-Computer Studies, 69(6), 440-453. doi:10.1016/j.ijhcs.2011.03.002

    Laser Pointer Tracking in Projector-Augmented Architectural Environments

    Get PDF
    We present a system that applies a custom-built pan-tilt-zoom camera to laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters such as geometry and reflectivity. After these steps, it can be used to track a laser spot on a surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used to mask out areas that are critical to laser-pointer tracking, and to guide geometric and radiometric image-correction techniques that enable projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR, and video see-through AR for visualization with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.
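    The detection step at the heart of such a tracker can be sketched as locating the brightest blob in a camera frame, with the sampled reflectivity data supplying a mask for surface regions known to defeat detection. This is an illustrative sketch only (the paper's detector and multi-camera pipeline are more involved); the mask and brightness threshold are assumed inputs.

    ```python
    # Illustrative laser-spot detection: brightest-point search in a
    # blurred grayscale frame, restricted by an optional validity mask.
    import cv2

    def find_laser_spot(frame_bgr, mask=None, min_brightness=220):
        """Return the (x, y) pixel of the laser spot, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress pixel noise
        if mask is not None:                       # drop critical areas
            gray = cv2.bitwise_and(gray, gray, mask=mask)
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)
        return max_loc if max_val >= min_brightness else None
    ```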

    Real Time Structured Light and Applications

    Get PDF

    Visualization and Analysis Tools for Neuronal Tissue

    Get PDF
    The complex nature of neuronal cellular and circuit structure poses challenges for understanding tissue organization. New techniques in electron microscopy allow large datasets to be acquired from serial sections of neuronal tissue. These techniques reveal all cells in an unbiased fashion, so their segmentation produces complex structures that must be inspected and analyzed. Although several software packages provide 3D representations of these structures, they are limited to monoscopic projection and are tailored to the visualization of generic 3D data. Stereoscopic display, on the other hand, has been shown to improve the immersive experience, with significant gains in understanding spatial relationships and identifying important features. To leverage those benefits, we have developed a 3D immersive virtual reality display system that, besides presenting data visually, allows users to augment and interact with them in a form that facilitates human analysis.

    To build a useful system for neuroscientists, we have developed BrainTrek, a suite of software applications for the organization, rendering, visualization, and modification of neuron model scenes. A mid-cost CAVE system provides high-vertex-count rendering of an immersive 3D environment. Standard head and wand tracking allows movement control and modification of the scene via an on-screen 3D menu, while a tablet touch screen provides multiple navigation modes and a 2D menu. Graphics optimization allows theoretically limitless volumes to be presented, and an on-screen mini-map lets users orient themselves quickly. A custom voice note-taking mechanism allows scenes to be described and revisited. Finally, ray-casting support enables numerous analytical features, including 3D distance and volume measurements, computation and presentation of statistics, and point-and-click retrieval and presentation of raw electron microscopy data. The extension of this system to the Unity3D platform provides a low-cost alternative to the CAVE, allowing users to visualize, explore, and annotate 3D cellular data across platforms and modalities, from different operating systems and hardware (e.g., tablets, PCs, or stereoscopic head-mounted displays) to online or offline operation. Such an approach has the potential not only to address the visualization and analysis needs of neuroscientists, but also to become a tool for education and for crowdsourcing the annotation of the sheer volume of upcoming neuronal data.
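    The ray-casting selection behind the point-and-click measurements can be sketched as a slab test of the wand's pick ray against each structure's axis-aligned bounding box, followed by a plain Euclidean distance between two selected points. This is an illustrative sketch under those assumptions, not BrainTrek's actual implementation.

    ```python
    # Illustrative pick-and-measure helpers: slab-test ray/AABB
    # intersection plus 3D distance between two selected points.
    import numpy as np

    def ray_hits_aabb(origin, direction, box_min, box_max):
        """Slab test; assumes no zero components in direction."""
        inv = 1.0 / direction
        t1 = (box_min - origin) * inv
        t2 = (box_max - origin) * inv
        t_near = np.max(np.minimum(t1, t2))
        t_far = np.min(np.maximum(t1, t2))
        return t_far >= max(t_near, 0.0)

    def measure(p, q):
        """Distance between two picked points, in scene units."""
        return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
    ```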