39 research outputs found

    Computer vision methods applied to person tracking and identification

    2013 - 2014. Computer vision methods for tracking and identification of people in constrained and unconstrained environments have been widely explored over the last decades. Despite active research on these topics, they remain open problems for which standards and common guidelines have not yet been defined. The application fields of computer vision-based tracking systems are nearly limitless. Augmented reality is currently a very active research field that depends on vision-based user tracking to work: defined as the fusion of real and virtual worlds, the success of an augmented reality application depends entirely on the efficiency of the tracking method it exploits. This thesis covers the issues related to tracking systems in augmented reality applications, proposing a comprehensive and adaptable framework for marker-based tracking together with a deep formal analysis. The analysis makes it possible to objectively assess and quantify the advantages of applying augmented reality principles in heterogeneous operative contexts. Two case studies are considered: support for maintenance in an industrial environment and for electrocardiography in a typical telemedicine scenario. Advantages and drawbacks are discussed, as well as future directions of the proposed study. The second topic covered in this thesis is vision-based tracking for unconstrained outdoor environments. In the video surveillance domain, a tracker must handle variations in illumination, cope with appearance changes of the tracked objects and, possibly, predict motion to better anticipate future positions. ... [edited by Author]
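The requirement that a surveillance tracker "predict motion to better anticipate future positions" is commonly met with a motion model such as a Kalman filter. As a minimal illustration (not the thesis's actual method), a constant-velocity predictor can be sketched as follows; the function name and two-frame history are assumptions for the example:

```python
import numpy as np

def predict_next(positions, dt=1.0):
    """Constant-velocity prediction: estimate the next 2D position
    from the last two observed positions. This is a minimal stand-in
    for the predict step of a full Kalman filter."""
    p_prev, p_curr = np.asarray(positions[-2]), np.asarray(positions[-1])
    velocity = (p_curr - p_prev) / dt
    return p_curr + velocity * dt

# Object moving +2 px/frame in x and +1 px/frame in y:
track = [(10.0, 5.0), (12.0, 6.0)]
print(predict_next(track))  # -> [14.  7.]
```

A full tracker would additionally maintain an uncertainty estimate and correct the prediction with each new detection, which is what makes it robust to brief occlusions.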

    Computer Vision Measurements for Automated Microrobotic Paper Fiber Studies

    The mechanical characterization of paper fibers and paper fiber bonds determines the key parameters affecting the mechanical properties of paper. Although bulk measurements from test sheets can give average values, they do not yield any real fiber-level data. The current state-of-the-art methods for fiber-level measurements are slow and laborious, requiring delicate manual handling of microscopic samples. Commercial microrobotic actuators allow automated or tele-operated manipulation of microscopic objects such as fibers, but it is challenging to acquire the data needed to guide such demanding manipulation. This thesis presents a solution to the illumination problem and computer vision algorithms for obtaining the required data. The solutions are designed for a microrobotic platform that comprises actuators for manipulating the fibers and one or two microscope cameras for visual feedback. The algorithms have been developed both for wet fibers, which can be treated as 2D objects, and for dry fibers and fiber bonds, which are treated as 3D objects. The major innovations in the algorithms are the rules for the micromanipulation of curly fiber strands and the automated 3D measurement of microscale objects with random geometries. The solutions are validated by imaging and manipulation experiments with wet and dry paper fibers and dry paper fiber bonds. In the imaging experiments, the results are compared with reference data obtained either from an experienced human or from another imaging device. The results show that these solutions provide morphological data about the fibers that is accurate and precise enough to enable automated fiber manipulation. Although this thesis focuses on the manipulation of paper fibers and paper fiber bonds, both the illumination solution and the computer vision algorithms are applicable to other types of fibrous materials.
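One example of the kind of morphological data such vision algorithms extract is fiber curl. A standard metric in fiber analysis is the curl index: the contour length of the fiber divided by its end-to-end distance, minus one (zero for a perfectly straight fiber). A sketch under the assumption that a 2D centerline has already been extracted, e.g. by skeletonizing a binary fiber image (the function name is illustrative, not from the thesis):

```python
import numpy as np

def curl_index(centerline):
    """Curl index of a fiber: contour length over end-to-end
    distance, minus one. `centerline` is an (N, 2) array of points
    sampled along the fiber's medial axis."""
    pts = np.asarray(centerline, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    contour = segment_lengths.sum()
    end_to_end = np.linalg.norm(pts[-1] - pts[0])
    return contour / end_to_end - 1.0

straight = [(0, 0), (5, 0), (10, 0)]
bent = [(0, 0), (5, 5), (10, 0)]   # right-angle kink at the midpoint
print(curl_index(straight))  # -> 0.0
print(curl_index(bent))      # -> ~0.414 (sqrt(2) - 1)
```

Measurements like this, computed automatically from microscope images, are what allow manipulation rules for curly fiber strands to be applied without manual assessment.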

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally-invasive robotic-assisted surgery is a rapidly-growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, the correspondence of preoperative plans to the surgical scene is established as a mental exercise; the accuracy of this practice is therefore highly dependent on the surgeon’s experience and subject to inconsistencies. In order to address these fundamental limitations in minimally-invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) image, acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only fuses virtual medical information via augmented reality, but also provides tool localization and other dynamically updated intraoperative information in order to present enhanced depth feedback to the surgeon. These techniques in guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality.
The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be further generalized to other C-arm-based image guidance for additional extensions in robotic surgery.
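The core geometric operation behind overlaying registered plan data onto the endoscope view is projecting a 3D point into the camera image. A minimal pinhole-camera sketch, with wholly hypothetical intrinsics and pose (the dissertation's actual calibration and registration pipeline is far more involved):

```python
import numpy as np

def project_point(p_world, K, R, t):
    """Project a 3D point (e.g. from a registered CBCT plan) onto the
    endoscope image plane using a pinhole camera model.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # pixel coordinates (u, v)

# Hypothetical camera: 500 px focal length, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_point([0.1, 0.0, 1.0], K, R, t))  # -> [370. 240.]
```

In an actual system, R and t would come from tracking the endoscope pose, and K from camera calibration, so the overlay stays registered as the camera moves.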

    Confidence-Based Hybrid Tracking to Overcome Visual Tracking Failures in Calibration-Less Vision-Guided Micromanipulation

    © 2004-2012 IEEE. This article proposes a confidence-based approach for combining two visual tracking techniques to minimize the influence of unforeseen visual tracking failures and achieve uninterrupted vision-based control. Despite research efforts in vision-guided micromanipulation, existing systems are not designed to overcome visual tracking failures such as inconsistent illumination conditions, regional occlusion, unknown structures, and a nonhomogeneous background scene. There remains a gap in expanding current procedures beyond the laboratory environment toward practical deployment of vision-guided micromanipulation systems. A hybrid tracking method, which combines motion-cue feature detection and score-based template matching, is incorporated into an uncalibrated vision-guided workflow capable of self-initialization and recovery during micromanipulation. A weighted average, based on the respective confidence indices of the motion-cue feature tracker and the template-based tracker, is inferred from the statistical accuracy of the feature locations and the similarity scores of the template matches. Results suggest improved tracking performance using hybrid tracking under these conditions. The mean errors of hybrid tracking are maintained at subpixel level under adverse experimental conditions, while the original template matching approach has mean errors of 1.53, 1.73, and 2.08 pixels. The method is also shown to be robust in a nonhomogeneous scene containing an array of plant cells. By proposing a self-contained fusion method that overcomes unforeseen visual tracking failures using a purely vision-based approach, we demonstrate robustness on our low-cost micromanipulation platform. Note to Practitioners - Cell manipulation is traditionally done in highly specialized facilities and controlled environments.
Existing vision-based methods do not readily meet the unique requirements of cell manipulation, including prospective plant cell-related applications. Robust visual tracking is needed to overcome tracking failures during automated vision-guided micromanipulation. To address the gap in maintaining continuous tracking under unforeseen visual tracking failures, we proposed a purely data-driven hybrid visual tracking approach. Our confidence-based approach combines two tracking techniques to minimize the influence of scene uncertainties, thereby achieving uninterrupted vision-based control. Because of its readily deployable design, the method can be generalized to a wide range of vision-guided micromanipulation applications. It has the potential to significantly expand the capability of cell manipulation technology, even to prospective applications associated with plant cells, which are yet to be explored.
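The fusion step described above, a weighted average of the two trackers' position estimates based on their confidence indices, can be sketched as follows. The exact confidence measures in the article are derived from feature-location statistics and template-match similarity scores; here they are simply given as scalars, and the function name is illustrative:

```python
import numpy as np

def fuse_estimates(pos_feature, conf_feature, pos_template, conf_template):
    """Confidence-weighted average of two tracker outputs.
    When one tracker's confidence collapses (e.g. under occlusion),
    the fused estimate leans on the other tracker."""
    w = conf_feature / (conf_feature + conf_template)
    return w * np.asarray(pos_feature, float) + (1 - w) * np.asarray(pos_template, float)

# Feature tracker partially occluded (confidence 0.1),
# template matcher confident (0.9):
print(fuse_estimates([100, 50], 0.1, [104, 52], 0.9))  # -> [103.6  51.8]
```

The benefit of this scheme is graceful degradation: neither tracker is trusted absolutely, so a single tracker's failure perturbs rather than breaks the control loop.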

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the next decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery: the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery time. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area; many chapters concern advanced research in this growing field. The book provides critical analysis of clinical trials and assessment of the benefits and risks of applying these technologies. It is certainly a small sample of the research activity on medical robotics going on around the globe as you read it, but it covers a good deal of what has been done in the field recently, and as such it is a valuable source for researchers interested in the subjects involved, whether or not they are currently “medical roboticists”.

    Augmentation Of Human Skill In Microsurgery

    Surgeons performing highly skilled microsurgery tasks can benefit from information and manual assistance that overcome technological and physiological limitations, making surgery safer, more efficient, and more successful. Vitreoretinal surgery is particularly difficult due to the inherent micro-scale and fragility of human eye anatomy. Additionally, surgeons are challenged by physiological hand tremor, poor visualization, lack of force sensing, and significant cognitive load while executing high-risk procedures inside the eye, such as epiretinal membrane peeling. This dissertation presents the architecture and design principles for a surgical augmentation environment used to develop innovative functionality addressing the fundamental limitations of vitreoretinal surgery. It is an inherently information-driven modular system incorporating robotics, sensors, and multimedia components. The integrated nature of the system is leveraged to create intuitive and relevant human-machine interfaces and to generate system behaviors that provide active physical assistance and present relevant sensory information to the surgeon. These include basic manipulation assistance, audio-visual and haptic feedback, intraoperative imaging, and force sensing. The resulting functionality and the proposed architecture and design methods generalize to other microsurgical procedures. The system's performance is demonstrated and evaluated using phantoms and in vivo experiments.
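One of the physiological limitations named above, hand tremor, is classically attenuated by low-pass filtering the sensed tool-tip motion, since tremor (roughly 8-12 Hz) is much faster than intentional surgical motion. A deliberately simplified sketch using a first-order exponential moving average (actual tremor-canceling systems use model-based filters; all names here are illustrative):

```python
def suppress_tremor(samples, alpha=0.1):
    """First-order low-pass filter (exponential moving average) over a
    stream of 1D tool-tip positions. Small `alpha` rejects more
    high-frequency tremor at the cost of more lag behind the
    surgeon's intentional motion."""
    filtered, out = samples[0], []
    for x in samples:
        filtered = alpha * x + (1 - alpha) * filtered
        out.append(filtered)
    return out

readings = [0.0, 1.0, 0.0, 1.0, 0.0]  # pure oscillatory tremor component
print(suppress_tremor(readings))       # output stays well below the 0-1 swing
```

The choice of `alpha` embodies the central trade-off in such assistance: too little filtering passes tremor through to the tissue, too much makes the tool feel sluggish.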

    Modular MRI Guided Device Development System: Development, Validation and Applications

    Since the first robotic surgical intervention was performed in 1985 using a PUMA industrial manipulator, development in the field of surgical robotics has been relatively fast paced, despite the tremendous costs involved in developing new robotic interventional devices. This is due to the clear advantages of augmenting a clinician's skill and dexterity with the precision and reliability of computer-controlled motion. A natural extension of robotic surgical intervention is the integration of image-guided interventions, which promise reduced trauma, procedure time, and inaccuracy. Although magnetic resonance imaging (MRI) is one of the most effective imaging modalities for visualizing soft tissue structures within the body, MRI-guided surgical robotics has been frustrated by the high magnetic field in the MRI image space and the extreme sensitivity to electromagnetic interference. The primary contributions of this dissertation relate to enabling the use of direct, live MR imaging to guide and assist interventional procedures, in two focus areas: the creation of an integrated MRI-guided development platform and of a stereotactic neural intervention system. The integrated series of modules of the development platform represents a significant advancement in the practice of creating MRI-guided mechatronic devices, as well as in the understanding of design requirements for actuated devices operating within a diagnostic MRI scanner. This knowledge was gained through a systematic approach to understanding, isolating, characterizing, and circumventing the difficulties associated with developing MRI-guided interventional systems. These contributions have been validated at the level of the individual modules, of the total development system, and of several deployed interventional devices. An overview of this work is presented with a summary of contributions and lessons learned along the way.

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.