
Effect of Lithium Coating on the Impurities and Shielding Effect of Plasma on the Resonant Magnetic Perturbation Field in the STOR-M Tokamak Plasma

Effects of lithium coating of the chamber wall on the impurities in the STOR-M tokamak plasma were studied in this thesis work. Impurities have been identified as one of the major concerns since the beginning of tokamak plasma research, as they enhance radiation losses and prevent the plasma from being heated to the desired high temperature. The radiation losses are primarily due to line radiation from incompletely stripped impurity ions. Impurities are introduced into the plasma from the walls of the tokamak through plasma-wall interactions, and the type of impurities observed in a tokamak is partially determined by the material used for the chamber wall and the gases absorbed in the wall. In the STOR-M tokamak, the inner wall surfaces are bare stainless steel, and the major impurities observed are carbon and oxygen. The emission lines from these impurities lie in the visible range of the electromagnetic spectrum: CIII at 464.74 nm, CVI at 529.05 nm, and OV at 650.02 nm. The intensities of the impurity lines were measured before the chamber was coated with lithium and compared to the intensities after lithiumization of the chamber wall. The intensities were recorded during the stable period of the plasma, before and after the lithium coating, using a spectrometer and an intensified charge-coupled device (ICCD) camera. It was observed that the impurity intensities were reduced during the discharges immediately after the lithium coating. Further experimental analysis revealed that the freshly coated lithium caused the plasma density to decrease, with the density recovering after 300 plasma discharge shots. It was also found that after 600 and 900 plasma discharge shots the lithium coating no longer appeared to play any role in the reduction of the impurity intensities; instead, repetitive plasma discharge cleaning may be responsible for the decrease. In a second experiment, an internal radial magnetic probe array was used to investigate the effects of the plasma and the tokamak chamber wall on a resonant magnetic perturbation (RMP) field applied externally to the plasma. The probe array measured the magnetic field at four radial locations at the plasma edge after the RMP current was applied. The magnetic field measured with plasma present was subtracted from the vacuum field measured when the RMP current was fired without plasma, isolating the plasma response. The time delay imposed on the RMP field by the plasma and the chamber wall was also studied, by calculating the difference between the peak time of the RMP current waveform and the peak times of the magnetic field waveforms in plasma. It was observed that the RMP field in vacuum was 50% larger than the RMP field in plasma, and that the penetration time of the RMP field decreased as it penetrated through the vacuum vessel wall into the plasma; that is, the RMP field travelled faster in plasma than in vacuum.
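To make the timing analysis concrete, here is a minimal sketch of the peak-to-peak delay estimate described above; all signal names, shapes, and values are hypothetical placeholders, not STOR-M data:

```python
import numpy as np

def peak_time(t, signal):
    """Time at which a waveform reaches its maximum."""
    return t[np.argmax(signal)]

# Hypothetical stand-ins for the digitized RMP coil current and an
# internal magnetic probe signal (Gaussian pulses, time in seconds).
t = np.linspace(0.0, 10e-3, 5000)
i_rmp = np.exp(-((t - 4.0e-3) / 1.0e-3) ** 2)
b_probe = 0.5 * np.exp(-((t - 4.3e-3) / 1.0e-3) ** 2)

# Penetration delay = probe-field peak time minus coil-current peak time.
delay = peak_time(t, b_probe) - peak_time(t, i_rmp)
print(f"penetration delay ~ {delay * 1e3:.2f} ms")

# The plasma response is isolated shot-to-shot by subtracting the vacuum
# shot (RMP fired without plasma) from the shot with plasma:
# b_response = b_with_plasma - b_vacuum
```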

    Panoramic Stereovision and Scene Reconstruction

With the advancement of research in robotics and computer vision, a growing number of applications require an understanding of a scene in three dimensions, and a variety of systems have been built for this purpose. This thesis explores a novel 3D imaging technique that uses catadioptric cameras in a stereoscopic arrangement. A secondary subsystem stabilizes the rig in the event that the cameras are misaligned during operation. The main advantage of the system is that it is a cost-effective alternative to present-day state-of-the-art systems that achieve the same goal of 3D imaging. The compromise lies in the quality of the depth estimation, which could be improved with a different imager and calibration. The final result is a panoramic disparity map generated by the system.
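As a rough illustration of the final step only: once the two catadioptric images have been unwrapped and rectified into a panorama pair, a disparity map can be computed with a standard semi-global matcher. The file names and matcher parameters below are assumptions for illustration, not the thesis's actual pipeline:

```python
import cv2
import numpy as np

# Placeholder inputs: assumed to be already unwrapped, rectified panoramas.
left = cv2.imread("left_panorama.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_panorama.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be a multiple of 16
    blockSize=7,
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Save a normalized 8-bit visualization of the panoramic disparity map.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("panoramic_disparity.png", vis)
```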

    Large-Scale Light Field Capture and Reconstruction

This thesis discusses approaches and techniques to convert Sparsely-Sampled Light Fields (SSLFs) into Densely-Sampled Light Fields (DSLFs), which can be used for visualization on 3DTV and Virtual Reality (VR) devices. As an example, a movable 1D large-scale light field acquisition system for capturing SSLFs in real-world environments is evaluated. This system consists of 24 sparsely placed RGB cameras and two Kinect V2 sensors. The real-world SSLF data captured with this setup can be leveraged to reconstruct real-world DSLFs. To this end, three challenging problems must be solved for this system: (i) how to estimate the rigid transformation from the coordinate system of a Kinect V2 to the coordinate system of an RGB camera; (ii) how to register the two Kinect V2 sensors across a large displacement; (iii) how to reconstruct a DSLF from an SSLF with moderate and large disparity ranges. To overcome these three challenges, we propose: (i) a novel self-calibration method, which takes advantage of geometric constraints from the scene and the cameras, for estimating the rigid transformations from the camera coordinate frame of one Kinect V2 to the camera coordinate frames of the 12 nearest RGB cameras; (ii) a novel coarse-to-fine approach for recovering the rigid transformation from the coordinate system of one Kinect to that of the other by means of local color and geometry information; (iii) several novel algorithms, falling into two groups, for reconstructing a DSLF from an input SSLF: novel view synthesis methods, inspired by state-of-the-art video frame interpolation algorithms, and Epipolar-Plane Image (EPI) inpainting methods, inspired by Shearlet Transform (ST)-based DSLF reconstruction approaches.
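Registration problems (i) and (ii) both reduce, once 3D correspondences between the two coordinate systems are available, to a least-squares rigid alignment of point sets. A minimal sketch of the standard SVD-based (Kabsch) solution follows; this is the textbook building block, not necessarily the thesis's exact self-calibration method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. points in a
    Kinect V2 frame and the same points in an RGB-camera frame.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```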

    Computational miniature mesoscope for large-scale 3D fluorescence imaging

Fluorescence imaging is indispensable to biology and neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes and miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field, and inability to resolve 3D distributed emitters such as neurons. In this thesis, I present a Computational Miniature Mesoscope (CM2) that leverages two computational frameworks to overcome these bottlenecks and enable single-shot 3D imaging across a wide imaging field of view (FOV) of 7-8 mm and an extended depth of field (DOF) of 0.8-2.5 mm, with high lateral (7 µm) and axial (25 µm) resolution. The CM2 is a novel fluorescence imaging device that achieves large-scale illumination and single-shot 3D imaging on a compact platform. This expanded imaging capability is enabled by computational imaging that jointly designs the optics and the algorithms. In this thesis, I present two versions of the CM2 platform and two 3D reconstruction algorithms. In addition, pilot in vivo imaging experiments using a wearable CM2 prototype demonstrate the platform's potential applications in large-scale neural imaging. First, I present the CM2 V1 platform and a model-based 3D reconstruction algorithm. The CM2 V1 system has a compact, lightweight design that integrates a microlens array (MLA) for 3D imaging and an LED array for excitation on a single platform. A model-based 3D deconvolution algorithm is developed to perform volumetric reconstructions from single-shot CM2 measurements, achieving 7 µm lateral and 200 µm axial resolution across a wide 8 mm FOV and 2.5 mm DOF in clear volumes. This mesoscale 3D imaging capability is validated on various fluorescent samples, including a resolution target, fibers, and particle phantoms in different geometries. I further quantify the effects of bulk scattering and background fluorescence in phantom experiments. Next, I improve both the hardware and the reconstruction algorithm of the CM2 V1 system. Specifically, the low axial resolution (200 µm), insufficient excitation efficiency (24%), and heavy computational cost of the model-based 3D deconvolution hinder CM2 V1's biomedical applications. I present and demonstrate an upgraded CM2 V2 platform augmented with a deep learning-based 3D reconstruction framework, termed CM2Net, to address these limitations. Specifically, the CM2 V2 design features an array of freeform illuminators and hybrid emission filters to achieve 3 times higher excitation efficiency (80%) and 5 times better suppression of background fluorescence compared to the V1 design. The multi-stage CM2Net combines ideas from view demixing, light-field refocusing, and view synthesis to account for the CM2's multi-view geometry and achieve reliable 3D reconstruction with high axial resolution. Finally, I show that CM2Net, trained purely on simulated data, can generalize to experimental measurements. A key element of CM2Net's generalizability is a 3D Linear Shift-Variant (LSV) model of the CM2 that simulates realistic measurements by accurately incorporating field-varying aberrations. I experimentally validate that the CM2 V2 platform and CM2Net achieve faster, artifact-free 3D reconstructions across a 7 mm wide FOV and 800 µm DOF, with 25 µm axial and 7 µm lateral resolution, in phantom experiments. Compared to the CM2 V1 with model-based deconvolution, CM2Net achieves 10 times better axial resolution at 1400 times faster reconstruction speed, without sacrificing the imaging FOV or lateral resolution. The new CM2 V2 design with the LSV-embedded CM2Net provides an intriguing solution for large-scale fluorescence imagers with a small form factor. Built from off-the-shelf and 3D-printed components, this low-cost and compact computational imaging system could be adopted by various biomedical and neuroscience labs, and the CM2 systems and the developed computational tools can have an impact on a wide range of large-scale 3D fluorescence imaging applications.
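For intuition about what the model-based reconstruction inverts: a single-shot measurement of this kind is commonly modeled as the sum over depth of each volume slice blurred by a depth-dependent PSF. The toy shift-invariant version below is a placeholder for exposition only; the thesis's LSV model additionally lets the PSF vary across the field of view:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def forward_model(volume, psf_stack):
    """Toy shift-INVARIANT single-shot forward model.

    The 2D sensor measurement is the sum over depth of each volume slice
    convolved with its depth-dependent PSF. volume and psf_stack are
    (Z, H, W) arrays, with each PSF centered in its frame.
    """
    meas = np.zeros(volume.shape[1:])
    for slice_, psf in zip(volume, psf_stack):
        # FFT-based circular convolution of one depth slice with its PSF.
        meas += np.real(ifft2(fft2(slice_) * fft2(np.fft.ifftshift(psf))))
    return meas
```

Model-based deconvolution then recovers the volume by inverting this linear operator, typically with an iterative regularized solver, which is why it is computationally heavy compared with a trained feed-forward network such as CM2Net.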

    From Calibration to Large-Scale Structure from Motion with Light Fields

Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the “light field”. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure from motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain metric 3D reconstructions through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
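For readers unfamiliar with the remapping step: once a light field has been decoded and calibrated, extracting the grid of pinhole (sub-aperture) images amounts to slicing the 4D array along its angular axes. A minimal sketch under assumed array conventions (the axis order and shapes are mine, not the thesis's):

```python
import numpy as np

def subaperture_views(lf):
    """Remap a decoded 4D light field L(u, v, s, t) to a grid of
    pinhole-like (sub-aperture) images: each fixed angular index (u, v)
    yields one view. Assumed layout: (U, V, S, T[, 3])."""
    U, V = lf.shape[0], lf.shape[1]
    return [[lf[u, v] for v in range(V)] for u in range(U)]

# Example with a synthetic light field: 5x5 angular views of 64x64 RGB pixels.
views = subaperture_views(np.zeros((5, 5, 64, 64, 3)))
print(len(views), len(views[0]), views[0][0].shape)  # 5 5 (64, 64, 3)
```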

    Concentric mosaic(s), planar motion and 1D cameras

General SFM methods give poor results for images captured under constrained motions such as the planar motion of concentric mosaics (CM). In this paper, we propose new SFM algorithms both for images captured by CM and for composite mosaic images from CM. We first introduce a 1D affine camera model to complete the family of 1D camera models. We then show that a 2D image captured by CM can be decoupled into two 1D images, one 1D projective and one 1D affine, and that a composite mosaic image can be rebinned into a calibrated 1D panoramic projective camera. Finally, we describe subspace reconstruction methods and demonstrate, both in theory and in experiments, the advantage of the decomposition method over general SFM methods, obtained by incorporating the constrained motion into the earliest stage of motion analysis.
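For reference, the two 1D camera models mentioned above take the following standard forms (notation mine, not the paper's):

```latex
% 1D projective camera: a 2x3 matrix maps a planar scene point (X, Y)
% to a point u on a 1D image line; depth division remains:
\lambda \begin{pmatrix} u \\ 1 \end{pmatrix}
  = P_{2\times 3} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}

% 1D affine camera: the last row of P is fixed, so the projection
% becomes linear (no division by depth):
u = aX + bY + c
```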

    A review of protocols for Fiducial Reference Measurements of downwelling irradiance for the validation of satellite remote sensing data over water

This paper reviews the state of the art of protocols for the measurement of downwelling irradiance in the context of Fiducial Reference Measurements (FRM) of water reflectance for satellite validation. The measurement of water reflectance requires the measurement of water-leaving radiance and downwelling irradiance just above water. For the latter, there are four generic families of method, using: (1) an above-water upward-pointing irradiance sensor; (2) an above-water downward-pointing radiance sensor and a reflective plaque; (3) a Sun-pointing radiance sensor (sunphotometer); or (4) an underwater upward-pointing irradiance sensor deployed at different depths. Each method (except the fourth, which is considered obsolete for the measurement of above-water downwelling irradiance) is described generically in the FRM context with reference to the measurement equation, documented implementations, and the intra-method diversity of deployment platform and practice. Ideal measurement conditions are stated, practical recommendations are provided on best practice, and guidelines for estimating the measurement uncertainty are provided for each protocol-related component of the measurement uncertainty budget. The state of the art for the measurement of downwelling irradiance is summarized, future perspectives are outlined, and key debates, such as the use of reflectance plaques with calibrated or uncalibrated radiometers, are presented. This review is based on the practice and studies of the aquatic optics community and the validation of water reflectance, but it is also relevant to land radiation monitoring and the validation of satellite-derived land surface reflectance.
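As a concrete example of a measurement equation in this family: for method (2), under the idealizing assumption of a perfectly Lambertian plaque of known reflectance, the downwelling irradiance follows directly from the measured plaque radiance:

```latex
% Method (2), idealized: a Lambertian plaque of known hemispherical
% reflectance \rho_p converts measured plaque radiance L_p into
% downwelling irradiance (notation mine, not the paper's):
E_d = \frac{\pi \, L_p}{\rho_p}
```

Real plaques are not perfectly Lambertian, so practical protocols add corrections (and uncertainty contributions) beyond this idealized form, which is part of the calibrated-versus-uncalibrated radiometer debate the review discusses.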

    Absolute depth using low-cost light field cameras

Digital cameras are increasingly used for measurement tasks in engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the surrounding environment. For some applications, however, these 2D results are not sufficient, specifically applications that require Z-dimensional (depth) data along with the X and Y dimensions. New camera system designs have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multi-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been made, and many research groups around the world are currently working on camera technology from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or light field camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Light field cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow the encoding of depth information. A small number of companies have explored the potential of light field cameras, but for the most part these efforts have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential of light field cameras for world-scene metrology applications, specifically the recording of absolute coordinate data. Particular attention has been paid to a range of low-cost light field cameras in order to: understand the functional and behavioural characteristics of the optics; identify potential needs for optical and/or algorithm development; define sensitivity, repeatability, and accuracy characteristics and the limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel output of this work is: an analysis of light field camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from data captured with any MLA-based camera; and a camera-independent algorithm that delivers 3D coordinate data in absolute units within a well-defined measurable range from a given camera.
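The bespoke calibration referred to above targets radial and tangential distortion. The standard Brown-Conrady model that such calibrations typically build on is sketched below; the coefficient values are placeholders, and the thesis fits its own bespoke corrections for MLA-based cameras:

```python
def distort(x, y, k1, k2, p1, p2):
    """Forward Brown-Conrady model on normalized image coordinates.

    Maps ideal (undistorted) points to their distorted positions using
    radial (k1, k2) and tangential (p1, p2) coefficients. Removing
    distortion amounts to inverting this mapping, usually iteratively.
    """
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Example with illustrative (made-up) coefficients for a point at (0.2, 0.1).
print(distort(0.2, 0.1, k1=-0.25, k2=0.05, p1=0.001, p2=-0.0005))
```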