1,002 research outputs found

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling

    Adaptive beam control and analysis in fluorescence microscopy

    This thesis details three novel advances in instrumentation that are each related to performance improvement in wide-field visible-spectrum imaging systems. In each case our solution concerns the assessment and improvement of optical imaging quality. The three instruments are as follows: The first is a portable transmission microscope which is able to correct for artificially induced aberrations using adaptive optics (AO). The specimens and the method of introducing aberrations into the optical system can be altered to simulate the performance of AO-correction in both astronomical and biological imaging. We present the design and construction of the system alongside before-and-after AO-correction images for simulated astronomical and biological scenes. The second instrument is a miniature endoscope camera sensor we re-purposed for use as a quantitative beam analysis probe using a custom high dynamic range (HDR) imaging and reconstruction procedure. This allowed us to produce quantitative flux maps of the illumination beam intensity profile within several operational fluorescence microscope systems. The third and final project in this thesis concerned an adaptive modification to the illumination beam used in light sheet microscopy, specifically for a single plane illumination microscope (SPIM), embracing the trade-off between the thickness of the light sheet and its extent across the detection field-of-view. The focal region of the beam was made as small as possible and then matched to the shape of curved features within a biological specimen by using a spatial light modulator (SLM) to alter the light sheet focal length throughout the vertical span of the sheet. We used the HDR beam profiling camera probe mentioned earlier to assess the focal shape and quality of the beam. The resulting illumination beam may in the future be used in a modified SPIM system to produce fluorescence microscope images with enhanced optical sectioning of specific curved features
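    The HDR fusion step described above can be illustrated with a minimal sketch: frames taken at different exposure times are fused into a relative flux map by dividing each well-exposed pixel by its exposure time and averaging. The function name, the thresholds, and the linear-sensor-response assumption are illustrative, not the thesis's actual reconstruction procedure.

```python
import numpy as np

def hdr_flux_map(frames, exposure_times, saturation=0.95):
    """Fuse a stack of differently exposed frames (values in [0, 1])
    into a relative flux map, assuming a linear sensor response."""
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    # Trust only mid-range pixels; ignore saturated or near-black ones.
    weights = np.where((frames > 0.02) & (frames < saturation), 1.0, 0.0)
    # Per-frame radiance estimate: pixel value divided by exposure time.
    radiance = frames / times[:, None, None]
    total_w = weights.sum(axis=0)
    return (weights * radiance).sum(axis=0) / np.maximum(total_w, 1e-12)
```

    In this scheme a short exposure recovers the bright beam core (saturated in the long exposure), while the long exposure recovers the dim wings, yielding one consistent flux map.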

    3D BrachyView System

    Prostate cancer is quickly becoming the most common form of cancer worldwide, and is commonly treated with low dose rate brachytherapy owing to its curative outcomes and highly conformal dose delivery. When radioactive seeds are implanted in the prostate gland during a low dose rate brachytherapy treatment, it is important to have a means of real-time monitoring of the dose and seed placements. The BrachyView system is unique in providing 3D seed reconstruction in an intraoperative setting. In this thesis the BrachyView system is tested for its suitability and accuracy, and is further developed so that its application to real-time intraoperative dosimetry can become a reality. The system was tested with a clinically relevant number of seeds, 98, where previously it had only been tested with a maximum of 30 seeds. Without the baseline subtraction algorithm, the BrachyView system reconstructed 91.8% of the implanted seeds from the 98-seed dataset with an average overall discrepancy of 3.65 mm; with the algorithm applied, the detection efficiency improved to 100% and an overall positional accuracy of 11.5%, corresponding to a reduced overall discrepancy of 3.23 mm, was noted. It was found that for datasets of 30 seeds or fewer the addition of a background subtraction algorithm was not necessary, whereas for datasets containing a clinically relevant number of seeds it was paramount for reducing noise and scatter and for identifying newly implanted seeds that may be masked by seeds previously implanted
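    The role of such a background subtraction step can be sketched as follows: subtract a pre-implant (baseline) image and keep only pixels rising clearly above the residual noise floor, so a newly implanted seed is not masked by the signal already present. The function name and the noise-estimation heuristic are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def new_seed_mask(current, baseline, k_sigma=3.0):
    """Return a boolean mask of pixels significantly brighter than the
    pre-implant baseline image (candidate newly implanted seeds)."""
    residual = current.astype(float) - baseline.astype(float)
    # Negative residuals carry no seed signal, so they sample the noise.
    noise = residual[residual <= 0]
    sigma = noise.std() if noise.size else 0.0
    return residual > k_sigma * sigma
```

    Thresholding the residual rather than the raw image is what lets a new, faint seed stand out against the accumulated signal of seeds implanted earlier.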

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data from analysis of such data or simulations. The advent of new imaging technologies, such as lightsheet microscopy, has resulted in the users being confronted with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and more high-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed or embedded into full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers are becoming invaluable tools. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data, containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies, where scenery can provide tangible benefit in developmental and systems biology: With Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via tracking eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. 
    We further introduce ideas to move towards virtual reality-based laser ablation and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas on how to render the highly efficient Adaptive Particle Representation, and finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
    Contents: Abstract; Foreword and Acknowledgements; Overview and Contributions; Part I - Introduction: 1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction; Part II - VR and AR for Systems Biology: 5 scenery — VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions; Part III - Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview — Integrating scenery into ImageJ2 & Fiji; Part IV - Conclusion: 15 Conclusions and Outlook; Backmatter & Appendices: A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; Selbstständigkeitserklärung
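    The core idea behind gaze-based cell tracking (linking the tracked gaze position at each timepoint to the nearest detected cell centre) can be sketched in a few lines. The names, the nearest-neighbour linking, and the distance threshold are illustrative assumptions, not scenery's actual implementation.

```python
import numpy as np

def gaze_track(gaze_points, detections_per_t, max_dist=5.0):
    """Link the gaze sample at each timepoint to the nearest detected
    cell centre, producing one track; timepoints with no detection
    within max_dist are skipped."""
    track = []
    for t, gaze in enumerate(gaze_points):
        cand = np.asarray(detections_per_t[t], dtype=float)
        if cand.size == 0:
            continue
        d = np.linalg.norm(cand - np.asarray(gaze, dtype=float), axis=1)
        i = int(d.argmin())
        if d[i] <= max_dist:
            track.append((t, tuple(cand[i])))
    return track
```

    Because the user simply follows the cell of interest with their eyes while the volume plays back, one pass over the timepoints yields a complete track, which is where the claimed order-of-magnitude speed-up over manual clicking comes from.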

    Correlation of BOLD signal and kinematics during a reach-to-grasp movement

    The aim of this thesis was to investigate the correlation between the neural activity measured by the BOLD signal via fMRI and the kinematic signal from the moving upper limb. To this end, a motion capture system integrated with a magnetic resonance scanner was used. The group analysis results show a positive correlation between the BOLD signal and the maximum thumb-index aperture

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution and wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, open issues remain in how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems. To achieve the proposed goal, this thesis presents a thorough investigation of existing literature and previous research, carried out systematically to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
    The main outcomes of the two studies demonstrate that test users experience a more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a great improvement in the context of remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed comparing static HDR and eye-adapted HDR observation in VR, to assess whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, proving that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems
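    The eye-adapted HDR idea can be sketched as gaze-driven exposure: estimate the adaptation luminance from a small window around the tracked gaze point, then tone map the HDR frame relative to it, mimicking how the eye re-adapts as it moves between dark and bright regions. The Reinhard-style curve, the window size, and the function name are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def eye_adapted_tonemap(hdr, gaze_xy, radius=2):
    """Gaze-adaptive tone mapping: expose the HDR frame for the mean
    luminance around the gaze point, then apply a Reinhard-style
    compression curve x / (1 + x)."""
    x, y = gaze_xy
    win = hdr[max(0, y - radius):y + radius + 1,
              max(0, x - radius):x + radius + 1]
    key = win.mean() + 1e-12          # adaptation luminance at the gaze
    scaled = hdr / key                # gaze region normalised to ~1.0
    return scaled / (1.0 + scaled)    # global Reinhard operator
```

    Looking at a dark corner brightens that corner into the display's mid-range (letting bright areas clip), while looking at a bright window does the opposite, which is the behaviour being compared against static HDR in the third study.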

    CT-Image Guided Brachytherapy


    Integrated processing of photogrammetric and laser scanning data for frescoes restoration

    The integration of photogrammetry and Terrestrial Laser Scanner (TLS) techniques is often desirable for Cultural Heritage digitization, especially when high metric and radiometric accuracy is required, as for the documentation and restoration of frescoed spaces. Despite the many technological and methodological advances in both techniques, their full integration is still not straightforward. The paper investigates a methodology where TLS and photogrammetric data are processed together through an image matching process between RGB panoramas acquired by the scanner’s integrated camera and frame imagery acquired with photographic equipment. The co-registration is performed without any Ground Control Point (GCP), instead using automatically extracted tie points and the known Exterior Orientation parameters of the panoramas (obtained from the original registration of the TLS data) to set the ground reference. The procedure allowed for effective integrated processing, making it possible to benefit from the respective strengths of TLS and photogrammetry, and proved reliable even with low overlap between photogrammetric images
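    At its core, bringing two surveys into a common reference from matched tie points amounts to estimating a least-squares rigid transform between corresponding 3D points. A minimal sketch using the Kabsch algorithm follows; this generic fit is an illustrative stand-in, since the paper's pipeline sets the ground reference via the panoramas' known Exterior Orientation parameters.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping 3-D tie points
    src -> dst via the Kabsch algorithm: dst_i ~= R @ src_i + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

    With four or more well-distributed, non-coplanar tie points the fit is overdetermined, which is why the approach can tolerate noisy matches and low image overlap.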