
    Algorithms for enhanced artifact reduction and material recognition in computed tomography

    Computed tomography (CT) imaging provides a non-destructive means to examine the interior of an object, which makes it a valuable tool in medical and security applications. The variety of materials seen in security applications is greater than in medical applications, and factors such as clutter, the presence of dense objects, and closely placed items in a bag or parcel add to the difficulty of material recognition. Metal and dense objects create image artifacts that degrade image quality and reduce recognition accuracy. Conventional CT machines scan the object using single-source or dual-source spectra and reconstruct the effective linear attenuation coefficient of each voxel, which may not provide sufficient information to identify the occupying materials. In this dissertation, we provide algorithmic solutions to enhance CT material recognition, with a set of algorithms that accommodate different classes of CT machines. First, we provide a metal artifact reduction algorithm for conventional CT machines that perform measurements using a single X-ray source spectrum. Compared to previous methods, our algorithm is robust to severe metal artifacts and accurately reconstructs regions in close proximity to metal. Second, we propose a novel joint segmentation and classification algorithm for dual-energy CT machines that extends prior work to capture spatial correlation in material X-ray attenuation properties; its classification performance surpasses that of the prior work. Third, we propose a new framework for reconstruction and classification using a recently developed class of CT machines known as spectral CT. Spectral CT scans the object using multiple energy windows, capturing data across more energy dimensions per detector. Our reconstruction algorithm extracts essential features from the measured data using spectral decomposition. We explore the effect of different transforms for the measurement decomposition and develop a new basis transform that captures sufficient information from the data and provides high classification accuracy. Furthermore, we extend the framework to the task of explosive detection and show that it achieves high detection accuracy while remaining robust to noise and variations. Lastly, we propose a combined algorithm for spectral CT that jointly reconstructs images and labels each region in the image, together with a tractable optimization method for the resulting discrete tomography problem. Our method outperforms the prior work in terms of both reconstruction quality and classification accuracy.
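    As one concrete illustration of the kind of sinogram-domain processing the abstract alludes to, the sketch below implements a generic linear-interpolation metal artifact reduction baseline. It is not the dissertation's algorithm; it assumes a square 2-D image, scikit-image's `radon`/`iradon`, and a simple intensity threshold to locate metal.

```python
# Hedged sketch: sinogram-domain metal artifact reduction by linear
# interpolation (a generic baseline, not the dissertation's method).
import numpy as np
from skimage.transform import radon, iradon

def reduce_metal_artifacts(image, metal_threshold=2.0, angles=None):
    """Interpolate across the metal trace in the sinogram, then reconstruct."""
    if angles is None:
        angles = np.linspace(0.0, 180.0, image.shape[0], endpoint=False)
    sino = radon(image, theta=angles)
    # Forward-project a binary metal mask to find the corrupted sinogram bins.
    metal_mask = image >= metal_threshold
    metal_trace = radon(metal_mask.astype(float), theta=angles) > 1e-6
    corrected = sino.copy()
    det_idx = np.arange(sino.shape[0])
    for j in range(sino.shape[1]):          # one projection angle at a time
        bad = metal_trace[:, j]
        if bad.any() and not bad.all():
            corrected[bad, j] = np.interp(det_idx[bad], det_idx[~bad], sino[~bad, j])
    recon = iradon(corrected, theta=angles, filter_name="ramp",
                   output_size=image.shape[0])
    recon[metal_mask] = image[metal_mask]   # re-insert the original metal region
    return recon
```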

    Automated Analysis of Biomedical Data from Low to High Resolution

    Recent developments in experimental techniques and instrumentation allow life scientists to acquire enormous volumes of data at unprecedented resolution. While these new data bring much deeper insight into cellular processes, they render manual analysis infeasible and call for the development of new, automated analysis procedures. This thesis describes how methods of pattern recognition can be used to automate three popular data analysis protocols. Chapter 1 proposes a method to automatically locate bimodal isotope distribution patterns in Hydrogen-Deuterium Exchange Mass Spectrometry experiments. The method is based on L1-regularized linear regression and allows for easy quantitative analysis of co-populations with different exchange behavior. The sensitivity of the method is tested on a set of manually identified peptides, while its applicability to exploratory data analysis is validated by targeted follow-up peptide identification. Chapter 2 develops a technique to automate peptide quantification for mass spectrometry experiments based on 16O/18O labeling of peptides. Two different spectrum segmentation algorithms are proposed: one based on image processing and applicable to low-resolution data, and one exploiting the sparsity of high-resolution data. The quantification accuracy is validated on calibration datasets produced by mixing a set of proteins in pre-defined ratios. Chapter 3 provides a method for automated detection and segmentation of synapses in electron microscopy images of neural tissue. For images acquired by scanning electron microscopy with nearly isotropic resolution, the algorithm is based on geometric features computed in 3D pixel neighborhoods. For transmission electron microscopy images with poor z-resolution, the algorithm uses additional regularization by performing several rounds of pixel classification, with features computed on the probability maps of the previous classification round. The validation is performed by comparing the set of synapses detected by the algorithm against a gold-standard detection by human experts. For data with nearly isotropic resolution, the algorithm's performance is comparable to that of the human experts.
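    The Chapter 1 method is described only at the level of L1-regularized linear regression. Purely as an illustration under assumed data layouts (the function name, thresholds, and candidate-envelope matrix are hypothetical, not the thesis code), the sketch below shows how a non-negative lasso fit over candidate isotope envelopes could flag a bimodal exchange pattern.

```python
# Hedged sketch: non-negative L1-regularized regression over candidate
# isotope envelopes; two well-separated active candidates indicate a
# bimodal (co-populated) exchange distribution.
import numpy as np
from sklearn.linear_model import Lasso

def detect_bimodal(observed, candidates, alpha=0.01, min_gap=3):
    """observed: (n_mz,) measured isotope envelope;
    candidates: (n_mz, n_shifts) theoretical envelopes at increasing exchange levels."""
    model = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    model.fit(candidates, observed)
    weights = model.coef_
    if weights.max() <= 0:
        return False, weights
    active = np.flatnonzero(weights > 0.001 * weights.max())
    # Two active candidates far apart in exchange level suggest co-populations.
    return active.size >= 2 and (active.max() - active.min()) >= min_gap, weights
```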

    Identifying Humans by the Shape of Their Heartbeats and Materials by Their X-Ray Scattering Profiles

    Security needs at access control points present themselves in the form of human identification and/or material identification. The field of biometrics deals with the problem of identifying individuals based on signals measured from them. One approach to material identification involves matching x-ray scattering profiles against a database of known materials. Classical biometric traits such as fingerprints, facial images, speech, iris and retinal scans are plagued by potential circumvention: they could be copied and later used by an impostor. To address this problem, other bodily traits such as the electrical signals acquired from the brain (electroencephalogram) or the heart (electrocardiogram) and the mechanical signals acquired from the heart (heart sound, laser Doppler vibrometry measurements of the carotid pulse) have been investigated. These signals depend on the physiology of the body and require the individual to be alive and present during acquisition, potentially overcoming circumvention. We investigate the use of the electrocardiogram (ECG) and carotid laser Doppler vibrometry (LDV) signals, both individually and in unison, for biometric identity recognition. A parametric modeling approach to system design is employed, where the system parameters are estimated from training data and the estimated model is then validated using testing data. A typical identity recognition system can operate in either the authentication (verification) or identification mode. The performance of the biometric identity recognition systems is evaluated using receiver operating characteristic (ROC) or detection error tradeoff (DET) curves in the authentication mode, and cumulative match characteristic (CMC) curves in the identification mode. The performance of the ECG- and LDV-based identity recognition systems is comparable, but worse than that of classical biometric systems. Authentication performance below 1% equal error rate (EER) can be attained when the training and testing data are obtained from a single measurement session. When the training and testing data are obtained from different measurement sessions, allowing for potential short-term or long-term changes in physiology, the authentication EER degrades to about 6 to 7%. Leveraging both the electrical (ECG) and mechanical (LDV) aspects of the heart, we obtain a performance gain of over 50% relative to each individual ECG-based or LDV-based identity recognition system, bringing us closer to the performance of classical biometrics, with the added advantage of anti-circumvention. We also consider the problem of designing combined x-ray attenuation and scatter systems and the algorithms to reconstruct images from these systems. As is typical within a computational imaging framework, we tackle the problem through joint system and algorithm design. Accurate modeling of the attenuation of incident and scattered photons within a scatter imaging setup ultimately leads to more accurate estimates of the scatter densities of an illuminated object, and these scattering densities can then be used for material classification. In x-ray scatter imaging, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume. A mask placed between the object and the detector array provides information about scatter angles. An efficient computational implementation of the forward and backward models facilitates iterative algorithms based upon a Poisson log-likelihood.
    The design of the scatter imaging system influences the algorithmic choices we make; in turn, the need for efficient algorithms guides the system design. We begin by analyzing an x-ray scatter system fitted with a fanbeam source distribution and flat-panel energy-integrating detectors, and develop efficient algorithms for reconstructing object scatter densities from scatter measurements made on this system. Building on the fanbeam-source, energy-integrating flat-panel detection model, we develop a pencil-beam model and an energy-sensitive detection model. The scatter forward models and reconstruction algorithms are validated on simulated, Monte Carlo, and real data. We describe a prototype x-ray attenuation scanner, co-registered with the scatter system, which was built to provide complementary attenuation information to the scatter reconstruction, and present results of applying alternating minimization reconstruction algorithms to measurements from the scanner.
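    The abstract mentions iterative algorithms built on a Poisson log-likelihood. As a hedged, generic illustration (a standard multiplicative MLEM-style update, not the alternating-minimization algorithms developed in the dissertation), the sketch below assumes a linear forward model `A` and a photon-count vector `y`, both placeholders.

```python
# Hedged sketch: multiplicative (MLEM-style) update for a Poisson
# log-likelihood with a linear forward model y ~ Poisson(A @ x).
import numpy as np

def poisson_mlem(A, y, n_iter=50, eps=1e-12):
    """A: (n_meas, n_vox) system matrix; y: (n_meas,) photon counts."""
    x = np.ones(A.shape[1])              # flat initial scatter-density estimate
    sensitivity = A.T @ np.ones(A.shape[0]) + eps
    for _ in range(n_iter):
        expected = A @ x + eps           # predicted mean counts
        x *= (A.T @ (y / expected)) / sensitivity
    return x
```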

    Biosensors: 10th Anniversary Feature Papers

    Biosensors are analytical devices that combine a biological component with a physicochemical detector to detect a chemical substance, or analyte. Detection and quantification are based on the measurement of biological interactions. The biological element of a biosensor may consist of tissues, microorganisms, organelles, cell receptors, enzymes, antibodies or nucleic acids. These devices have a wide range of applications across a vast array of research fields, including environmental monitoring, food analysis, drug detection, health and clinical assessment, and even security and safety. The current Special Issue, “Biosensors: 10th Anniversary Feature Papers”, addresses existing knowledge gaps and aids the advancement of biosensing applications in the form of six peer-reviewed research and review papers detailing the most recent and innovative developments in biosensors.

    Ultrasound Tomography for Control of Batch Crystallization


    Compton Imaging Algorithms for Position-Sensitive Gamma-Ray Detectors in the Presence of Motion.

    Position-sensitive gamma-ray spectrometers, like pixelated CdZnTe detectors, record the energies and 3-D positions of gamma-ray interactions. This information allows one to perform Compton-image reconstructions which can determine the source direction in the 4-pi space surrounding the detector. This work describes several image-reconstruction algorithms for use in scenarios where motion is involved. In the absence of motion, standard simple back projection and maximum-likelihood image-reconstruction techniques are appropriate. When the source is close to the detector or the detector is moving relative to the source, 3-D image reconstruction is possible. This reconstruction allows one to localize the source position in 3-D rather than just the source direction. The 3-D reconstruction method using a moving detector is demonstrated by mapping radiation sources in a room and provides a convenient way to view the results overlaid on an optical image. In contrast, when the source is moving relative to the detector, images produced by standard image reconstructions are blurred by the source motion. If the source motion is known, the reconstruction can account for the motion by adjusting the reference frame of the reconstruction to keep the source in the center of the field of view. This type of compensation works well when a single source is present, but it cannot simultaneously reconstruct multiple sources. Thus, this work introduces a new binning structure that allows multiple moving or stationary sources to be reconstructed with minimal blur related to source motion. This method is demonstrated using experimental data of multiple moving and stationary sources. Finally, when the source motion is unknown or there are too many sources to track, the time domain must be included to reconstruct radiation movies instead of radiation images. Simply adding time binning works well when the source is strong and there are many counts reconstructed in each time bin. However, when there are only a few counts per time bin, the maximum-likelihood reconstruction method is modified to enforce smoothness in the time domain, which produces much improved results over the standard maximum-likelihood reconstruction. This method is also validated with experimental data.
    PhD dissertation, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/98034/1/jasonjaw_1.pd
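    As a minimal illustration of the simple back-projection step mentioned above (a generic sketch, not the dissertation's implementation), the code below back-projects Compton cones, computed from deposited energies via the Compton scattering formula, onto a grid of candidate source directions; the event format, energy units (keV), and angular tolerance are assumptions.

```python
# Hedged sketch: simple Compton-cone back-projection onto a grid of
# candidate source directions in 4-pi.
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def backproject(events, directions, tolerance=np.radians(3.0)):
    """events: iterable of (r1, r2, e1, e2) with 3-D positions and deposited
    energies of the first two interactions; directions: (n, 3) unit vectors."""
    image = np.zeros(len(directions))
    for r1, r2, e1, e2 in events:
        e_total = e1 + e2
        cos_theta = 1.0 - M_E_C2 * (1.0 / e2 - 1.0 / e_total)
        if not -1.0 <= cos_theta <= 1.0:
            continue                      # kinematically inconsistent event
        axis = np.asarray(r1, float) - np.asarray(r2, float)
        axis /= np.linalg.norm(axis)      # the source lies on a cone of
        theta = np.arccos(cos_theta)      # half-angle theta about this axis
        angles = np.arccos(np.clip(directions @ axis, -1.0, 1.0))
        image += np.abs(angles - theta) < tolerance
    return image
```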

    Chemometric tools for automated method-development and data interpretation in liquid chromatography

    The thesis explores the challenges and advancements in the field of liquid chromatography (LC), particularly focusing on complex sample analysis using high-resolution mass spectrometry (MS) and two-dimensional (2D) LC techniques. The research addresses the need for efficient optimization and data-handling strategies in modern LC practice. The thesis is divided into several chapters, each addressing specific aspects of LC and polymer analysis. Chapter 2 provides an overview of the need for chemometric tools in LC practice, discussing methods for processing and analyzing data from 1D and 2D-LC systems and how chemometrics can be utilized for method development and optimization. Chapter 3 introduces a novel approach for interpreting the molecular-weight distribution and intrinsic viscosity of polymers, allowing quantitative analysis of polymer properties without prior knowledge of their interactions. This method correlates the curvature parameter of the Mark-Houwink plot with the polymer's structural and chemical properties. Chapters 4 and 5 focus on the analysis of cellulose ethers (CEs), essential in various industrial applications. A new method is presented for mapping the substitution degree and composition of CE samples, providing detailed compositional distributions. Another method involves a comprehensive 2D LC-MS/MS approach for analyzing hydroxypropyl methyl cellulose (HPMC) monomers, revealing subtle differences in composition between industrial HPMC samples. Chapter 6 introduces AutoLC, an algorithm for automated and interpretive development of 1D-LC separations. It uses retention modeling and Bayesian optimization to achieve optimal separation within a few iterations, significantly improving the efficiency of gradient LC separations. Chapter 7 focuses on the development of an open-source algorithm for automated method development in 2D-LC-MS systems. This algorithm improves separation performance by refining gradient profiles and accurately predicting peak widths, enhancing the reliability of complex gradient LC separations. Chapter 8 addresses the challenge of gradient deformation in LC instruments. An algorithm based on the stable function corrects instrument-specific gradient deformations, enabling accurate determination of analyte retention parameters and improving data comparability between different sources. Chapter 9 introduces a novel approach using capacitively-coupled contactless-conductivity detection (C4D) to measure gradient profiles without adding tracer components. This method enhances the inter-system transferability of retention models for polymers, overcoming the limitations of UV-absorbance-detectable tracer components. Chapter 10 discusses practical choices and challenges faced in the thesis chapters, highlighting the need for well-defined standard samples in industrial polymer analysis and emphasizing the importance of generalized problem-solving approaches. The thesis identifies future research directions, emphasizing the importance of computationally assisted methods for polymer analysis, the utilization of online reaction-modulation techniques, and the exploration of continuous distributions obtained through size-exclusion chromatography (SEC) in conjunction with triple detection. Chemometric tools are recognized as essential for gaining deeper insights into polymer chemistry and improving data interpretation in the field of LC.
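    As an illustration of the retention-modeling step that interpretive method development builds on (a generic linear solvent-strength fit, not the AutoLC implementation; the data values and names are invented), the sketch below fits ln k = ln k_w − S·φ to isocratic measurements and predicts retention at a new modifier fraction.

```python
# Hedged sketch: fit the linear solvent-strength (LSS) retention model
# ln k = ln k_w - S * phi and predict retention at an unseen composition.
import numpy as np
from scipy.optimize import curve_fit

def lss_model(phi, ln_kw, S):
    """Linear solvent-strength model for the retention factor k."""
    return ln_kw - S * phi

# Example: retention factors measured at three organic-modifier fractions.
phi_measured = np.array([0.30, 0.45, 0.60])
k_measured = np.array([12.0, 3.5, 1.1])

params, _ = curve_fit(lss_model, phi_measured, np.log(k_measured))
ln_kw, S = params
print(f"ln k_w = {ln_kw:.2f}, S = {S:.2f}")
print("predicted k at phi = 0.5:", np.exp(lss_model(0.5, ln_kw, S)))
```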

    Providing Information by Resource-Constrained Data Analysis

    The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (Data Mining, Knowledge Discovery in Databases, Machine Learning, Statistics) and embedded systems, and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms that respect the resource constraints of the different scenarios. This Technical Report presents the work of the members of the integrated graduate school.

    Object-oriented analysis of remote sensing images for land cover mapping: Conceptual foundations and a segmentation method to derive a baseline partition for classification

    The approach commonly used to analyze satellite images for mapping purposes yields unsatisfactory results, mainly because it relies solely on the spectral patterns of individual pixels and almost completely ignores the spatial structure of the image. Moreover, equating land-cover classes with homogeneous material types implies that any arbitrarily delimited part of a map tile remains a referent of the concept defined by its label. This possibility is inconsistent with the hierarchical landscape model increasingly accepted in Landscape Ecology, which assumes that homogeneity depends on the scale of observation and is in any case more semantic than biophysical, and that landscapes are therefore intrinsically heterogeneous and composed of units (patches) that function simultaneously as wholes distinct from their surroundings and as parts of a larger whole. A new (object-oriented) approach is therefore needed that is compatible with this model and in which the basic units of analysis are delimited according to the spatial variation of the phenomenon under study. This thesis aims to contribute to this paradigm shift in remote sensing, with three specific objectives: 1) to highlight the shortcomings of the approach traditionally employed in satellite image classification; 2) to lay the conceptual foundations of an alternative approach based on elementary zones classifiable as objects; and 3) to develop and implement a demonstration version of an automatic method that converts a multispectral image into a vector layer formed by those zones. The proposed strategy is to produce, based on the spatial structure of the images, a partition in which each region can be considered relatively homogeneous and distinct from its neighbors and, in addition, exceeds (though not by much) the size of the minimum mapping unit. Each region is assumed to correspond to a stand that, after classification, will be aggregated with neighboring stands into a larger region that as a whole can be seen as an instance of a certain type of object, which is later represented on the map as tiles of a particular class.
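    As a rough illustration of the baseline-partition idea (using SLIC superpixels as a stand-in for the thesis' own segmentation method; the minimum-mapping-unit size, band layout, and parameters are assumptions), the sketch below partitions a multispectral image into spectrally homogeneous regions and extracts per-region mean spectra as features for the later classification.

```python
# Hedged sketch: over-segment a multispectral image into regions whose
# average size stays above a minimum mapping unit (MMU), then compute the
# mean spectrum of each region for subsequent classification.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def baseline_partition(image, mmu_pixels=400):
    """image: (rows, cols, bands) float array; returns a label image whose
    regions average at least mmu_pixels in size."""
    n_segments = max(1, image.shape[0] * image.shape[1] // mmu_pixels)
    return slic(image, n_segments=n_segments, compactness=0.1,
                convert2lab=False, channel_axis=-1, start_label=1)

def region_spectra(image, labels):
    """Mean spectrum per region, one row per labeled segment."""
    spectra = []
    for region in regionprops(labels):
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        spectra.append(image[rr, cc].mean(axis=0))
    return np.array(spectra)
```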