52 research outputs found

    Development and implementation of quadratically distorted (QD) grating and grisms system for 4D multi-colour microscopy imaging (MCMI)

    The recent emergence of super-resolution microscopy techniques has surpassed the diffraction limit and improved image resolution. In contrast to these breakthroughs in spatial resolution, high temporal resolution remains a challenge. This dissertation demonstrates a simple, on-axis, 4D (3D + time) multi-colour microscopy imaging (MCMI) technology that delivers simultaneous 3D broadband imaging over cellular volumes and is especially applicable to real-time imaging of fast-moving biospecimens. A quadratically distorted (QD) grating, in the form of an off-axis Fresnel zone plate, images multiple object planes simultaneously on a single image plane. A detailed mathematical model of the 2D QD grating has been established and applied to the design and optimization of QD gratings. A grism, a blazed-grating and prism combination, provides chromatic control in the 4D multi-plane imaging. A pair of grisms with variable separation converts a collimated polychromatic input into a collimated beam with a tuneable chromatic shear. The optical system based on the QD grating and grisms is simply appended to the camera port of a commercial microscope, and several bioimaging tests have been performed, e.g. 4D chromatically corrected imaging of fluorescent microspheres, MCF-7 cells and HeLa cells. Further investigation of bioimaging problems is in progress.
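    The multi-plane imaging principle can be sketched numerically: a QD grating is an ordinary linear grating whose grooves carry an additional quadratic (Fresnel-zone-plate-like) curvature, so diffraction order m acquires a defocus of m times the distortion term and therefore images a different object plane. The following is a minimal illustrative sketch, not the thesis model; all parameter values (period, defocus, aperture) are invented for the example.

```python
import numpy as np

def qd_grating_phase(x, y, period, w20_waves, aperture_radius):
    """Phase profile (radians) of a quadratically distorted grating.

    linear term   : an ordinary grating of the given period
    quadratic term: w20_waves waves of defocus at the aperture edge;
                    diffraction order m then carries a defocus of
                    m * w20_waves, so the -1 / 0 / +1 orders image
                    three object planes side by side on one detector.
    """
    linear = 2 * np.pi * x / period
    quadratic = 2 * np.pi * w20_waves * (x**2 + y**2) / aperture_radius**2
    return linear + quadratic

# Binary-amplitude realization over a small grid (illustrative values:
# 10 um period, 2 waves of defocus, 2.5 mm aperture radius).
r = 2.5e-3
x, y = np.meshgrid(np.linspace(-r, r, 501), np.linspace(-r, r, 501))
phase = qd_grating_phase(x, y, period=10e-6, w20_waves=2.0, aperture_radius=r)
mask = (np.cos(phase) > 0).astype(float)  # open / opaque zones
```

    Because y enters only through x**2 + y**2, the grooves curve symmetrically about the grating axis, which shifts focus between orders rather than deflecting them differently across the field.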

    High-dynamic-range Foveated Near-eye Display System

    Wearable near-eye displays have found widespread applications in education, gaming, entertainment, engineering, military training, and healthcare, to name a few. However, the visual experience provided by current near-eye displays still falls short of what we can perceive in the real world. Three major challenges remain to be overcome: 1) limited dynamic range in display brightness and contrast, 2) inadequate angular resolution, and 3) the vergence-accommodation conflict (VAC). This dissertation is devoted to addressing these three critical issues from the viewpoints of both display panel development and optical system design. A high-dynamic-range (HDR) display requires both high peak brightness and an excellent dark state. In the second and third chapters, two mainstream display technologies, liquid crystal display (LCD) and organic light-emitting diode (OLED), are investigated to extend their dynamic range. On one hand, an LCD can easily boost its peak brightness to over 1000 nits, but it is challenging to lower its dark state to < 0.01 nits. To achieve HDR, we propose a mini-LED local dimming backlight. Based on our simulations and subjective experiments, we establish practical guidelines correlating the device contrast ratio, viewing distance, and required number of local dimming zones. On the other hand, a self-emissive OLED display exhibits a true dark state, but boosting its peak brightness would unavoidably compromise its lifetime. We propose a systematic approach to enhance the OLED's optical efficiency while keeping the angular color shift indistinguishable. These findings will shed new light on future HDR display designs. In Chapter four, to improve angular resolution, we demonstrate a multi-resolution foveated display system with two display panels and an optical combiner. The first panel provides a wide field of view for peripheral vision, while the second offers ultra-high resolution for the central fovea.
    Through an optical minifying system, both 4x and 5x resolution enhancements are demonstrated. In addition, a Pancharatnam-Berry phase deflector is applied to actively shift the high-resolution region, enabling an eye-tracking function. The proposed design effectively reduces the pixelation and screen-door effect in near-eye displays. The VAC in stereoscopic displays is believed to be the main cause of visual discomfort and fatigue when wearing VR headsets. In Chapter five, we propose a novel polarization-multiplexing approach to achieve a multiplane display. A polarization-sensitive Pancharatnam-Berry phase lens and a spatial polarization modulator are employed to create two independent focal planes simultaneously. This method generates two image planes without temporal multiplexing and can therefore effectively halve the required frame rate. In Chapter six, we briefly summarize our major accomplishments.
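    The local-dimming trade-off described above (the dark state is limited by the panel's native contrast within each backlight zone, not globally) can be illustrated with a toy model. This is a hypothetical sketch, not the dissertation's simulation framework; the zone layout and contrast ratio are illustrative.

```python
import numpy as np

def local_dimming(target, zones=(4, 4), lc_contrast=1000.0):
    """Toy mini-LED local-dimming model.

    Each backlight zone is driven to the brightest target pixel it
    covers; the LC layer then attenuates per pixel, but only down to
    1/lc_contrast of the *local* backlight level. The achievable dark
    state therefore scales with each zone's level instead of the
    global peak brightness.
    """
    h, w = target.shape
    zh, zw = h // zones[0], w // zones[1]
    backlight = np.zeros_like(target)
    for i in range(zones[0]):
        for j in range(zones[1]):
            block = target[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            backlight[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw] = block.max()
    # LC transmittance, clipped to the panel's native contrast ratio
    with np.errstate(divide="ignore", invalid="ignore"):
        lc = np.where(backlight > 0, target / backlight, 0.0)
    lc = np.clip(lc, 1.0 / lc_contrast, 1.0)
    return lc * backlight
```

    With a single bright pixel, only its own zone leaks light at the 1/lc_contrast floor; zones whose LEDs are off reach a true zero, which is the qualitative effect the guidelines in the abstract quantify.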

    Super-resolution microscopy live cell imaging and image analysis

    Novel fundamental research has provided new techniques that go beyond the diffraction limit. These recent advances, known as super-resolution microscopy, were recognized with the Nobel Prize, as they promise new discoveries in biology and the life sciences. All these techniques rely on complex signal and image processing. Their applicability in biology, particularly for live cell imaging, remains challenging and needs further investigation. Focusing on image processing and analysis, this thesis is devoted to a significant enhancement of structured illumination microscopy (SIM) and super-resolution optical fluctuation imaging (SOFI) methods towards fast live cell and quantitative imaging. The thesis presents a novel image reconstruction method for both 2D and 3D SIM data that is compatible with weak signals and robust against unwanted image artifacts. This reconstruction is efficient under low-light conditions, reduces phototoxicity and facilitates live cell observations. We demonstrate the performance of the new method by imaging long super-resolution video sequences of live U2-OS cells and by improving cell particle tracking. We develop an adapted 3D deconvolution algorithm for SOFI, which suppresses noise and makes 3D SOFI live cell imaging feasible by reducing the number of required input images. We introduce a novel linearization procedure for SOFI that maximizes the resolution gain, and we show that SOFI and PALM can be applied to the same dataset, revealing more insights about the sample. This combined PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of the sample through the estimation of molecular parameters. For quantifying the outcome of our super-resolution methods, the thesis presents a novel methodology for objective image quality assessment that measures spatial resolution and signal-to-noise ratio in real samples.
    We demonstrate the enhanced SOFI framework by high-throughput 3D imaging of live HeLa cells, acquiring a whole super-resolution 3D image in 0.95 s; by investigating focal adhesions in live MEF cells; by fast optical readout of fluorescently labelled DNA strands; and by unravelling the nanoscale organization of CD4 proteins on the plasma membrane of T cells. Within the thesis, unique open-source software packages, SIMToolbox and the SOFI simulation tool, were developed to facilitate the implementation of super-resolution microscopy methods.
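    The core of second-order SOFI can be stated compactly: each pixel of the super-resolved image is the second-order cumulant of that pixel's intensity trace, i.e. the temporal auto-correlation of its fluctuations, which sharpens the effective PSF by roughly a factor of sqrt(2). A minimal sketch follows; it is not the thesis implementation. At zero time lag the cumulant reduces to the variance, while a nonzero lag suppresses frame-to-frame-uncorrelated noise.

```python
import numpy as np

def sofi2(stack, lag=0):
    """Second-order SOFI image from a (time, y, x) image stack.

    lag = 0 gives the per-pixel temporal variance; lag >= 1 correlates
    each pixel's fluctuation with itself `lag` frames later, so noise
    that is uncorrelated between frames (e.g. shot noise) averages out
    while slower emitter blinking survives.
    """
    fluct = stack - stack.mean(axis=0)          # delta F(t) per pixel
    if lag == 0:
        return (fluct ** 2).mean(axis=0)
    return (fluct[:-lag] * fluct[lag:]).mean(axis=0)
```

    Higher orders and cross-cumulants between neighbouring pixels extend the same idea to larger resolution gains, which is where the linearization procedure mentioned above becomes necessary.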

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data, and the time spent collecting it, are thus reduced because there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments.
    An assessment was performed of tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation increases the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model-tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP.
    The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose for the first frame; initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%, while the number of incorrect matches was reduced by 80%.
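    Geometric hashing, which underpins the registration step above, can be illustrated on 2D point features. The dissertation matches vertical lines against views of a 3D model; this toy version is hypothetical and only shows the voting idea. Offline, every model feature is hashed under coordinates expressed in every ordered basis pair, making lookups invariant to translation, rotation and scale; online, scene features expressed in one scene basis vote for the model basis they correspond to.

```python
import numpy as np
from collections import defaultdict

def _key(point, origin, ux, uy, scale, step):
    """Quantized coordinates of `point` in the (origin, ux, uy) frame,
    normalized by the basis length so the key is similarity-invariant."""
    d = point - origin
    return (round((d @ ux) / (scale * step)), round((d @ uy) / (scale * step)))

def build_table(model_points, step=0.25):
    """Offline stage: hash every model point under every ordered basis."""
    pts = np.asarray(model_points, float)
    table = defaultdict(list)
    for i in range(len(pts)):
        for j in range(len(pts)):
            if i == j:
                continue
            axis = pts[j] - pts[i]
            scale = np.hypot(*axis)
            ux = axis / scale
            uy = np.array([-ux[1], ux[0]])        # perpendicular axis
            for k in range(len(pts)):
                if k not in (i, j):
                    table[_key(pts[k], pts[i], ux, uy, scale, step)].append((i, j))
    return table

def vote(table, scene_points, basis=(0, 1), step=0.25):
    """Online stage: express scene points in one scene basis and tally
    votes for the model bases whose hashed entries they hit."""
    pts = np.asarray(scene_points, float)
    axis = pts[basis[1]] - pts[basis[0]]
    scale = np.hypot(*axis)
    ux = axis / scale
    uy = np.array([-ux[1], ux[0]])
    votes = defaultdict(int)
    for k in range(len(pts)):
        if k not in basis:
            for model_basis in table.get(_key(pts[k], pts[basis[0]], ux, uy, scale, step), []):
                votes[model_basis] += 1
    return dict(votes)
```

    The basis with the most votes proposes a correspondence hypothesis, from which a transform (or, in the dissertation's setting, a camera pose) can be estimated and verified against the remaining features.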

    Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives

    Full text link
    Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis, owing to its non-invasive, radiation-free, real-time imaging. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also improving dexterity and enabling intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also hold the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. For teleoperated RUSS, we summarize the technical developments and clinical evaluations. The survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence present the key techniques enabling intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and actions. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could be considered as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques.
    Comment: Accepted by Medical Image Analysis

    Functional anatomy of a visuomotor transformation in the optic tectum of zebrafish

    Animals detect sensory cues in their environment and process this information in order to carry out adaptive behavioral responses. To generate target-directed movements, the brain transforms structured sensory inputs into coordinated motor commands. Some of these behaviors, such as escaping from a predator or approaching prey, need to be fast and reproducible. The optic tectum of vertebrates (named the "superior colliculus" in mammals) is the main target of visual information and is known to play a pivotal role in these kinds of visuomotor transformations. In my dissertation, I investigated the neuronal circuits that map visual cues to motor commands, with a focus on the axonal projections that connect the tectum to premotor areas of the tegmentum and hindbrain. To address these questions, I developed and combined several techniques to link functional information and anatomy to behavior. The animal I chose for my studies is the zebrafish larva, which is amenable to transgenesis, optical imaging approaches, optogenetics and behavioral recordings in virtual-reality arenas. In a first study, I designed, generated and characterized BAC transgenic lines, which allow gene-specific labelling of neurons and intersectional genetics using Cre-mediated recombination. Importantly, I generated a pan-neuronal line that facilitates brain registrations in order to compare different expression patterns (Förster et al., 2017b). In a second project, I contributed to the development of an approach that combines two-photon holographic optogenetic stimulation with whole-brain calcium imaging, behavior tracking and morphological reconstruction. In this study, I designed the protocol to reveal the anatomical identity of optogenetically targeted individual neurons (dal Maschio et al., 2017).
    In a third project, I took advantage of some of these methods, including whole-brain calcium imaging, optogenetics and brain registrations, to elucidate how the tectum is wired to make behavioral decisions and to steer behavioral directionality. The results culminated in a third manuscript (Helmbrecht et al., submitted), which reported four main findings. First, I optogenetically demonstrated a retinotopic organization of the tectal motor map in zebrafish larvae. Second, I generated a tectal "projectome" with cellular resolution by reconstructing and registering stochastically labeled tectal projection neurons. Third, by employing this anatomical atlas to interpret functional imaging data, I asked whether visual information leaves the tectum via distinct projection neurons. This revealed that two distinct uncrossed tectobulbar pathways (the ipsilateral tectobulbar tract, iTB) are involved in either avoidance (medial iTB, iTB-M) or approach (lateral iTB, iTB-L) behavior. Finally, I showed that the location of a prey-like object, and therefore the direction of orientation swims towards prey, is functionally encoded in iTB-L projection neurons. In summary, I demonstrated in this thesis how refined genetic and optical methods can be used to study neuronal circuits with cellular and subcellular resolution. Importantly, apart from the biological findings on the visuomotor transformation, these newly developed tools can be broadly employed to link brain anatomy to circuit activity and behavior.