
    Laser Based Mid-Infrared Spectroscopic Imaging – Exploring a Novel Method for Application in Cancer Diagnosis

    A number of biomedical studies have shown that mid-infrared spectroscopic images can provide both morphological and biochemical information that can be used for the diagnosis of cancer. Whilst this technique has shown great potential, it has yet to be adopted by the medical profession. By replacing the conventional broadband thermal source employed in modern FTIR spectrometers with high-brightness, broadly tuneable laser-based sources (QCLs and OPGs), we aim to remove one of the main obstacles to the transfer of this technology to the medical arena: poor signal-to-noise ratios at high spatial resolutions and short image acquisition times. In this thesis we take the first steps towards developing the optimum experimental configuration, the data-processing algorithms, and the spectroscopic image contrast and enhancement methods needed to utilise these high-intensity laser-based sources. We show that a QCL system is better suited than an OPG system to providing numerical absorbance values (biochemical information), primarily because of the superior pulse stability of the QCL. We also discuss practical protocols for the application of spectroscopic imaging to cancer diagnosis and present results from our laser-based spectroscopic imaging experiments on oesophageal cancer tissue.
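    The numerical absorbance values mentioned in the abstract are conventionally obtained from the Beer–Lambert relation, A = -log10(I/I0), applied per pixel against a background (source) measurement. A minimal numpy sketch of that conversion (the function name and toy intensities are illustrative, not from the thesis):

    ```python
    import numpy as np

    def absorbance_image(sample, background, eps=1e-12):
        """Convert single-beam intensity images to absorbance: A = -log10(I / I0)."""
        transmission = sample / np.maximum(background, eps)
        return -np.log10(np.clip(transmission, eps, None))

    # Toy example: a sample that transmits 10% of the source intensity,
    # which should give an absorbance of exactly 1.0 everywhere.
    background = np.full((4, 4), 1000.0)   # reference intensity I0
    sample = np.full((4, 4), 100.0)        # transmitted intensity I
    A = absorbance_image(sample, background)
    ```

    With pulsed laser sources, pulse-to-pulse intensity fluctuations propagate directly into this ratio, which is why pulse stability matters for quantitative absorbance.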

    A multi-object spectral imaging instrument

    We have developed a snapshot spectral imaging system which fits onto the side camera port of a commercial inverted microscope. The system provides spectra, in real time, from multiple points randomly selected on the microscope image. Light from the selected points in the sample is directed by a digital micromirror device from the side-port imaging arm to a spectrometer arm based on a dispersing prism and a CCD camera. A multi-line laser source is used to calibrate the pixel positions on the CCD for wavelength. A CMOS camera on the front port of the microscope allows the full image of the sample to be displayed and can also be used for particle tracking, providing spectra of multiple particles moving in the sample. We demonstrate the system by recording the spectra of multiple fluorescent beads in aqueous solution and from multiple points along a microscope sample channel containing a mixture of red and blue dye.
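    The wavelength calibration described above amounts to fitting a smooth pixel-to-wavelength mapping through the known emission lines of the multi-line laser. A sketch of that step with a low-order polynomial fit (the line wavelengths and peak positions below are made-up example values, not the instrument's):

    ```python
    import numpy as np

    # Known emission lines of a hypothetical multi-line calibration laser (nm)
    # and the CCD pixel columns at which their peaks were located.
    known_wavelengths = np.array([488.0, 532.0, 561.0, 640.0])
    peak_pixels = np.array([112.0, 305.0, 434.0, 788.0])

    # Fit a low-order polynomial mapping pixel position -> wavelength.
    coeffs = np.polyfit(peak_pixels, known_wavelengths, deg=2)
    pixel_to_wavelength = np.poly1d(coeffs)

    # The calibration should reproduce the reference lines closely.
    residuals = pixel_to_wavelength(peak_pixels) - known_wavelengths
    ```

    Once fitted, `pixel_to_wavelength` converts any CCD column index to a wavelength, so every recorded spectrum is expressed on a physical wavelength axis.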

    Detection of leaf structures in close-range hyperspectral images using morphological fusion

    Close-range hyperspectral images are a promising source of information in plant biology, in particular for in vivo study of physiological changes. In this study, we investigate how data fusion can improve the detection of leaf elements by combining pixel reflectance and morphological information. The detection of image regions associated with the leaf structures is the first step toward quantitative analysis of the physical effects that genetic manipulation, disease infection, and environmental conditions have on plants. We tested our fusion approach on Musa acuminata (banana) leaf images and compared its discriminant capability to similar techniques used in remote sensing. Experimental results demonstrate the effectiveness of our fusion approach, with significant improvements over several conventional methods.
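    One common way to combine reflectance with morphological information, as this abstract describes, is to append a morphological profile (greyscale openings and closings at increasing scales) to the spectral bands before classification. A sketch of that fusion, assuming scipy is available (the structuring-element sizes and the choice of the mean band are illustrative):

    ```python
    import numpy as np
    from scipy import ndimage

    def morphological_profile(band, sizes=(3, 5, 7)):
        """Stack greyscale openings/closings of one band at increasing scales."""
        feats = []
        for s in sizes:
            feats.append(ndimage.grey_opening(band, size=s))
            feats.append(ndimage.grey_closing(band, size=s))
        return np.stack(feats, axis=-1)

    def fuse(cube):
        """Append a morphological profile of the mean band to the spectral cube."""
        mean_band = cube.mean(axis=-1)
        return np.concatenate([cube, morphological_profile(mean_band)], axis=-1)

    cube = np.random.rand(32, 32, 10)  # toy hyperspectral cube: H x W x bands
    fused = fuse(cube)                 # 10 spectral + 6 morphological features
    ```

    A per-pixel classifier trained on `fused` then sees both reflectance and local shape information, which is what lets it separate structures (veins, lesions, background) that are spectrally similar.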

    Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
    (Presented at SPIE Security + Defence 2016; Proc. SPIE 9997, Target and Background Signatures I.)
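    The fusion the abstract describes, combining RGB frames with the 25-channel VIS-NIR sensor, typically amounts to stacking the two co-registered frames along the channel axis to form a 28-channel network input. A minimal numpy sketch of that preprocessing step (the normalisation scheme is an illustrative assumption, not necessarily the paper's):

    ```python
    import numpy as np

    def fuse_rgb_multispectral(rgb, ms):
        """Stack an RGB frame and a co-registered multispectral frame along
        the channel axis to form the network input tensor (H x W x 28)."""
        assert rgb.shape[:2] == ms.shape[:2], "frames must be co-registered"

        def standardise(x):
            # per-channel zero mean / unit variance, so both sensors
            # contribute on an equal footing regardless of dynamic range
            x = x.astype(np.float64)
            return (x - x.mean(axis=(0, 1))) / (x.std(axis=(0, 1)) + 1e-8)

        return np.concatenate([standardise(rgb), standardise(ms)], axis=-1)

    rgb = np.random.rand(64, 64, 3)    # toy RGB frame
    ms = np.random.rand(64, 64, 25)    # toy 25-channel VIS-NIR snapshot frame
    x = fuse_rgb_multispectral(rgb, ms)
    ```

    The first convolutional layer of the ConvNet then simply takes 28 input channels instead of 3; the rest of the architecture is unchanged.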

    Uncertainty-Aware Organ Classification for Surgical Data Science Applications in Laparoscopy

    Objective: Surgical data science is evolving into a research field that aims to observe everything occurring within and around the treatment process to provide situation-aware, data-driven assistance. In the context of endoscopic video analysis, the accurate classification of organs in the field of view of the camera presents a technical challenge. Herein, we propose a new approach to anatomical structure classification and image tagging that features an intrinsic measure of confidence to estimate its own performance with high reliability and which can be applied to both RGB and multispectral imaging (MI) data. Methods: Organ recognition is performed using a superpixel classification strategy based on textural and reflectance information. Classification confidence is estimated by analyzing the dispersion of class probabilities. Assessment of the proposed technology is performed through a comprehensive in vivo study with seven pigs. Results: When applied to image tagging, mean accuracy in our experiments increased from 65% (RGB) and 80% (MI) to 90% (RGB) and 96% (MI) with the confidence measure. Conclusion: Results showed that the confidence measure had a significant influence on the classification accuracy, and MI data are better suited for anatomical structure labeling than RGB data. Significance: This work significantly enhances the state of the art in automatic labeling of endoscopic videos by introducing the use of the confidence metric, and by being the first study to use MI data for in vivo laparoscopic tissue classification. The data of our experiments will be released as the first in vivo MI dataset upon publication of this paper.
    (7 pages, 6 images, 2 tables.)
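    A confidence measure derived from "the dispersion of class probabilities", as described in the Methods, can be sketched with normalised entropy: a one-hot probability vector gives confidence 1, a uniform one gives 0, and low-confidence predictions can be rejected. This is one standard way to realise such a measure; the exact formulation in the paper may differ:

    ```python
    import numpy as np

    def classify_with_confidence(probs, threshold=0.5):
        """Assign the most probable class per superpixel and derive a confidence
        score from the dispersion of the class probabilities via normalised
        entropy (0 = uniform/uncertain, 1 = one-hot/certain)."""
        probs = np.asarray(probs, dtype=np.float64)
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
        confidence = 1.0 - entropy / np.log(probs.shape[-1])
        labels = probs.argmax(axis=-1)
        return labels, confidence, confidence >= threshold

    # Two superpixels: one confidently class 0, one ambiguous.
    probs = np.array([[0.90, 0.05, 0.05],
                      [0.40, 0.35, 0.25]])
    labels, conf, accept = classify_with_confidence(probs)
    ```

    Discarding superpixels with `accept == False` before image tagging is what drives the accuracy gains reported in the Results: only the predictions the classifier is sure about contribute to the tag.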

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient-descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
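    The optical image formation that such end-to-end pipelines differentiate through can be sketched with scalar Fourier optics: the pupil carries the DOE phase plus a defocus term, and the incoherent PSF is the squared magnitude of its Fourier transform. The sketch below uses a cubic phase mask, the classic wavefront-coding element; the grid size, mask strength, and defocus values are illustrative assumptions, not the paper's learned DOE:

    ```python
    import numpy as np

    def psf_from_phase(doe_phase, defocus_strength, n=64):
        """Incoherent PSF of a circular pupil carrying a DOE phase plus a
        quadratic defocus term (Fraunhofer/scalar Fourier-optics model)."""
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r2 = x**2 + y**2
        aperture = (r2 <= 1.0).astype(np.float64)       # circular pupil
        defocus = defocus_strength * r2                 # defocus phase
        pupil = aperture * np.exp(1j * (doe_phase + defocus))
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        psf = np.abs(field)**2
        return psf / psf.sum()                          # normalise to unit energy

    # Cubic phase mask: PSFs stay similar across a range of defocus values,
    # which is what makes a single learned deblurring network sufficient.
    y, x = np.mgrid[-1:1:64 * 1j, -1:1:64 * 1j]
    cubic = 20.0 * (x**3 + y**3)
    psf_near = psf_from_phase(cubic, defocus_strength=-5.0)
    psf_far = psf_from_phase(cubic, defocus_strength=+5.0)
    ```

    In the end-to-end setting, this forward model is written in a differentiable framework so that gradients flow from the deblurring loss back into the DOE phase parameters.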