
    Investigation of Different Sparsity Transforms for the PICCS Algorithm in Small-Animal Respiratory Gated CT

    Data Availability Statement: All relevant data are available from the Zenodo database under the DOI: http://dx.doi.org/10.5281/zenodo.15685.

    Respiratory gating helps to overcome the problem of breathing motion in cardiothoracic small-animal imaging by acquiring multiple images for each projection angle and then assigning projections to different phases. When this approach is used with a dose similar to that of a static acquisition, only a low number of noisy projections is available for the reconstruction of each respiratory phase, leading to streak artifacts in the reconstructed images. This problem can be alleviated with a prior image constrained compressed sensing (PICCS) algorithm, which enables accurate reconstruction of highly undersampled data when a prior image is available. We compared variants of the PICCS algorithm with different transforms in the prior penalty function: gradient, unitary, and wavelet transforms. In all cases the problem was solved using the Split Bregman approach, which is efficient for convex constrained optimization. The algorithms were evaluated using simulations generated from data previously acquired on a micro-CT scanner following a high-dose protocol (four times the dose of a standard static protocol). The resulting data were used to simulate scenarios with different dose levels and numbers of projections. All compressed sensing methods performed very similarly in terms of noise, spatiotemporal resolution, and streak reduction, and all greatly improved on filtered back-projection. Nevertheless, the wavelet domain was found to be less prone to patchy, cartoon-like artifacts than the commonly used gradient domain.

    This work was partially funded by the RICRETIC network (RD12/0042/0057) from the Ministerio de Economía y Competitividad (www.mineco.gob.es/) and projects TEC2010-21619-C04-01 and PI11/00616 from the Ministerio de Ciencia e Innovación (www.micinn.es/). The research leading to these results was supported by funding from the Innovative Medicines Initiative (www.imi.europa.eu) Joint Undertaking under grant agreement no. 115337, the resources of which comprise financial contributions from the European Union's Seventh Framework Programme (FP7/2007-2013) and EFPIA companies ("in kind contribution"). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
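
    As a minimal sketch of what the compared variants look like, the PICCS prior penalty and the soft-thresholding (shrinkage) step at the heart of Split Bregman can be written as below. A simple finite-difference gradient and a one-level Haar transform stand in for the transforms studied; the function names and the weighting parameter alpha are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def grad2d(x):
        """Finite-difference gradient, the usual total-variation sparsifier."""
        dy = np.diff(x, axis=0, append=x[-1:, :])
        dx = np.diff(x, axis=1, append=x[:, -1:])
        return np.concatenate([dy.ravel(), dx.ravel()])

    def haar2d(x):
        """One-level orthonormal 2-D Haar transform (even-sized input assumed)."""
        s2 = np.sqrt(2.0)
        lo, hi = (x[0::2] + x[1::2]) / s2, (x[0::2] - x[1::2]) / s2
        bands = []
        for b in (lo, hi):
            bands += [(b[:, 0::2] + b[:, 1::2]) / s2,
                      (b[:, 0::2] - b[:, 1::2]) / s2]
        return np.concatenate([b.ravel() for b in bands])

    def piccs_objective(x, x_prior, psi, alpha=0.5):
        """alpha * ||Psi(x - x_prior)||_1 + (1 - alpha) * ||Psi(x)||_1."""
        return (alpha * np.abs(psi(x - x_prior)).sum()
                + (1 - alpha) * np.abs(psi(x)).sum())

    def shrink(v, t):
        """Soft thresholding: the closed-form L1 step in each Split Bregman iteration."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Compare the three prior penalties on a toy image and a noisy prior.
    rng = np.random.default_rng(0)
    x_prior = rng.random((64, 64))
    x = x_prior + 0.05 * rng.standard_normal((64, 64))
    for name, psi in [("gradient", grad2d), ("unitary", np.ravel), ("wavelet", haar2d)]:
        print(f"{name:8s} objective: {piccs_objective(x, x_prior, psi):.2f}")
    ```

    In a full Split Bregman solver, the shrinkage step alternates with a quadratic data-fidelity update against the measured projections; only the sparsity side is sketched here.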

    Network Flow Algorithms for Discrete Tomography

    Tomography is a powerful technique for obtaining images of the interior of an object in a nondestructive way. First, a series of projection images (e.g., X-ray images) is acquired, and a reconstruction of the interior is subsequently computed from the available projection data. The algorithms used to compute such reconstructions are known as tomographic reconstruction algorithms. Discrete tomography is concerned with the tomographic reconstruction of images that are known to contain only a few different gray levels. By using this knowledge in the reconstruction algorithm, it is often possible to reduce the number of projections required to compute an accurate reconstruction, compared to algorithms that do not use prior knowledge. This thesis deals with new reconstruction algorithms for discrete tomography. In particular, the first five chapters are about reconstruction algorithms based on network flow methods. These algorithms make use of an elegant correspondence between certain types of tomography problems and network flow problems from the field of Operations Research. Chapter 6 deals with a problem that occurs in the application of discrete tomography to the reconstruction of nanocrystals from projections obtained by electron microscopy.

    The research for this thesis was financially supported by the Netherlands Organisation for Scientific Research (NWO), project 613.000.112.
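
    The correspondence the thesis builds on can be made concrete in its simplest instance: reconstructing a binary image from its row and column sums is a transportation problem solvable by min-cost max-flow, with a prior image steering the choice among the many consistent solutions. The sketch below, using networkx, is an illustrative toy under these assumptions, not the thesis's algorithms.

    ```python
    import networkx as nx
    import numpy as np

    def reconstruct_binary(row_sums, col_sums, prior):
        """Binary image from row/column sums via min-cost max-flow: one unit
        of flow from row node i to column node j sets pixel (i, j) = 1."""
        n, m = len(row_sums), len(col_sums)
        G = nx.DiGraph()
        for i, r in enumerate(row_sums):
            G.add_edge("s", ("r", i), capacity=int(r), weight=0)
        for j, c in enumerate(col_sums):
            G.add_edge(("c", j), "t", capacity=int(c), weight=0)
        for i in range(n):
            for j in range(m):
                # Edges are cheaper where the prior expects a 1, so among all
                # images consistent with the sums, one close to the prior wins.
                G.add_edge(("r", i), ("c", j), capacity=1,
                           weight=int(round(10 * (1 - prior[i, j]))))
        flow = nx.max_flow_min_cost(G, "s", "t")
        return np.array([[flow[("r", i)].get(("c", j), 0) for j in range(m)]
                         for i in range(n)])

    prior = np.array([[1, 0, 1],
                      [0, 1, 0]])
    print(reconstruct_binary([2, 1], [1, 1, 1], prior))  # recovers the prior
    ```

    With more projection directions the problem is no longer a pure transportation problem; the thesis's contribution lies in how network flow ideas are extended to those harder cases.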

    Computer vision and optimization methods applied to the measurements of in-plane deformations

    Peer reviewed.

    Information recovery in the biological sciences : protein structure determination by constraint satisfaction, simulation and automated image processing

    Regardless of the field of study or particular problem, any experimental science always poses the same question: "What object or phenomenon generated the data that we see, given what is known?" In the field of 2D electron crystallography, data are collected from a series of two-dimensional images, formed either as a result of diffraction-mode imaging or TEM-mode real imaging. The resulting dataset is acquired strictly in the Fourier domain, as either coupled amplitudes and phases (in TEM mode) or amplitudes alone (in diffraction mode). In either case, data are received from the microscope as a series of CCD images or scanned negatives, which generally require a significant amount of pre-processing in order to be useful. Traditionally, processing the large volume of data collected from the microscope was the time-limiting factor in protein structure determination by electron microscopy. Data must initially be collected either on film negatives, which in turn must be developed and scanned, or from CCDs of sizes typically no larger than 2096x2096 (though larger models are in operation). In either case, the data are finally ready for processing as 8-bit, 16-bit or (in principle) 32-bit grey-scale images.

    Regardless of the data source, the foundation of all crystallographic methods is the presence of a regular Fourier lattice. Two-dimensional cryo-electron microscopy of proteins introduces special challenges, as multiple crystals may be present in the same image, producing in some cases several independent lattices. Additionally, scanned negatives typically have a rectangular region marking the film number and other details of image acquisition that must be removed prior to processing. If the edges of the images are not down-tapered, vertical and horizontal "streaks" will be present in the Fourier transform of the image, arising from the high-resolution discontinuities between the opposite edges of the image. These streaks can overlap with lattice points which fall close to the vertical and horizontal axes and disrupt both the information they contain and the ability to detect them. Lastly, SpotScanning (Downing, 1991) is a commonly used process whereby circular discs are individually scanned in an image. The large-scale regularity of the scanning pattern produces a low-frequency lattice which can interfere and overlap with any protein crystal lattices.

    We introduce a series of methods packaged into 2dx (Gipson, et al., 2007) which simultaneously address these problems, automatically detecting accurate crystal lattice parameters for a majority of images. Further, a template is described for the automation of all subsequent image processing steps on the road to a fully processed dataset. The broader picture of image processing is one of reproducibility. The lattice parameters, for instance, are only one of hundreds of parameters which must be determined or provided, and subsequently stored and accessed in a regular way during image processing. Numerous steps, from correct CTF and tilt-geometry determination to the final stages of symmetrization and optimal image recovery, must be performed sequentially and repeatedly for hundreds of images. The goal in such a project is then to automatically process as significant a portion of the data as possible and to reduce unnecessary, repetitive data entry by the user. Here also, 2dx (Gipson, et al., 2007), the image processing package designed to automatically process individual 2D TEM images, is introduced. This package focuses on reliability, ease of use and automation to produce the finished results necessary for full three-dimensional reconstruction of the protein in question.

    Once individual 2D images have been processed, they contribute to a larger, project-wide three-dimensional dataset. Several challenges exist in processing this dataset, besides simply the organization of results and project-wide parameters. In particular, though tilt-geometry, relative amplitude scaling and absolute orientation are in principle known (or obtainable from an individual image), errors, uncertainties and heterogeneous data types produce a 3D dataset with many parameters to be optimized. 2dx_merge (Gipson, et al., 2007) is the follow-up to the first release of 2dx, which had originally processed only individual images. Based on the guiding principles of the earlier release, 2dx_merge focuses on ease of use and automation. The result is a fully qualified 3D structure determination package capable of turning hundreds of electron micrograph images, nearly completely automatically, into a full 3D structure.

    Most of the processing performed in the 2dx package is based on the excellent suite of programs termed collectively the MRC package (Crowther, et al., 1996). Extensions to this suite and alternative algorithms continue to play an essential role in image processing as computers become faster and as advancements are made in the mathematics of signal processing. In this capacity, an alternative procedure to generate a 3D structure from processed 2D images is presented. This algorithm, entitled "Projective Constraint Optimization" (PCO), leverages prior known information, such as symmetry and the fact that the protein is bound in a membrane, to extend the normal boundaries of resolution. In particular, traditional methods (Agard, 1983) make no attempt to account for the "missing cone", a vast, unsampled region in 3D Fourier space arising from specimen tilt limitations in the microscope. Provided sufficient data, PCO simultaneously refines the dataset, accounting for error, as well as attempting to fill this missing cone.

    Though PCO provides a near-optimal 3D reconstruction based on the data, depending on initial data quality and the amount of prior knowledge there may be a host of solutions, and more importantly pseudo-solutions, which are more or less consistent with the provided dataset. Trying to find a global best fit for known information and data can be a daunting challenge mathematically; to this end, the use of meta-heuristics is addressed. Specifically, in the case of many pseudo-solutions, so long as a suitably defined error metric can be found, quasi-evolutionary swarm algorithms can be used that search solution space, sharing data as they go. Given sufficient computational power, such algorithms can dramatically reduce the search time for global optima for a given dataset.

    Once the structure of a protein has been determined, many questions often remain about its function. Questions about the dynamics of a protein, for instance, are not often readily interpretable from structure alone. To this end, an investigation into computationally optimized structural dynamics is described. Here, in order to find the most likely path a protein might take through "conformation space" between two conformations, a graphics processing unit (GPU) optimized program and set of libraries was written to speed up this calculation 30x. The tools and methods developed here serve as a conceptual template for how GPU coding was applied to other aspects of the work presented here, as well as to GPU programming generally.

    The final portion of the thesis takes an apparent step in reverse, presenting a dramatic, yet highly predictive, simplification of a complex biological process. Kinetic Monte Carlo simulations idealize thousands of proteins as agents interacting by a set of simple rules (i.e., react/dissociate), offering highly accurate insights into the large-scale cooperative behavior of proteins. This work demonstrates that, for many applications, structure, dynamics or even general knowledge of a protein may not be necessary for a meaningful biological story to emerge. Additionally, even in cases where structure and function are known, such simulations can help to answer the biological question in its entirety, from structure, to dynamics, to ultimate function.
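
    The down-tapering step mentioned above addresses the fact that the discrete Fourier transform treats an image as periodic, so mismatched opposite edges act as sharp discontinuities that smear into axis-aligned streaks. A minimal numpy sketch of such a taper follows; the raised-cosine ramp and its width are illustrative choices, not the 2dx implementation.

    ```python
    import numpy as np

    def edge_taper(img, width=32):
        """Blend the image borders toward the image mean with a raised-cosine
        ramp, suppressing the artificial edge discontinuities the FFT sees."""
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(width) / width))  # 0 -> ~1
        n, m = img.shape
        wy = np.ones(n); wy[:width] = ramp; wy[-width:] = ramp[::-1]
        wx = np.ones(m); wx[:width] = ramp; wx[-width:] = ramp[::-1]
        w = np.outer(wy, wx)
        mean = img.mean()
        return (img - mean) * w + mean  # taper toward the mean, not toward zero

    img = np.random.rand(256, 256)
    spec_raw = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    spec_tapered = np.abs(np.fft.fftshift(np.fft.fft2(edge_taper(img))))
    # The tapered spectrum lacks the bright horizontal/vertical streaks through
    # the origin that the periodic-extension discontinuities otherwise produce.
    ```

    Tapering toward the mean rather than toward zero avoids introducing a new discontinuity between the faded border and the image interior.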

    Robust Motion and Distortion Correction of Diffusion-Weighted MR Images

    Effective image-based correction of motion and other acquisition artifacts has become an essential step in diffusion-weighted Magnetic Resonance Imaging (MRI) analysis as microstructural tissue analysis advances towards higher-order models. These models come with increasing demands on the number of acquired images and on the diffusion strength (b-value), yielding lower signal-to-noise ratios (SNR) and a higher susceptibility to artifacts. These conditions, however, render the current image-based correction schemes, which act retrospectively on the acquired images through pairwise registration, less and less effective. Following the hypothesis that a more thorough exploitation of the different intensity relationships between the volumes would reduce registration outliers, a novel correction scheme based on memetic search is proposed. This scheme allows all single-image metrics to be incorporated into a multi-objective optimization approach. To allow a quantitative evaluation of registration precision, realistic synthetic data were constructed by extending a diffusion MRI simulation framework with motion- and eddy-current-caused artifacts. The increased robustness and efficacy of the multi-objective registration method is demonstrated on synthetic as well as in-vivo datasets at different levels of motion and other acquisition artifacts. In contrast to the state-of-the-art methods, the average target registration error (TRE) remained below the single-voxel size even at high b-values (3000 s·mm⁻²) and low signal-to-noise ratios in the moderately artifacted datasets. In the more severely artifacted data, the multi-objective method was able to eliminate most of the registration outliers of the state-of-the-art methods, yielding an average TRE below twice the voxel size. In the in-vivo data, the increased precision manifested itself in the scalar measures as well as in the fiber orientations derived from the higher-order Neurite Orientation Dispersion and Density Imaging (NODDI) model. For the neuronal fiber tracts reconstructed on the data after correction, the proposed method most closely resembled the ground truth. The proposed multi-objective method not only has an impact on the evaluation of higher-order diffusion models, fiber tractography, and connectomics, but could also find application in challenging image registration problems in general.
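
    A building block any such multi-objective scheme needs is the selection of non-dominated (Pareto-optimal) candidates under several image metrics at once. The sketch below shows only that selection step; the metric names and candidate scores are hypothetical, and the memetic search itself is not reproduced.

    ```python
    import numpy as np

    def pareto_front(scores):
        """Indices of non-dominated candidates.
        scores: (n_candidates, n_metrics) array; lower is better for every metric."""
        n = scores.shape[0]
        keep = []
        for i in range(n):
            dominated = any(
                np.all(scores[j] <= scores[i]) and np.any(scores[j] < scores[i])
                for j in range(n) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical registration candidates scored by two image metrics
    # (e.g. 1 - normalized mutual information, and mean squared error).
    scores = np.array([[0.20, 0.31],
                       [0.25, 0.12],
                       [0.22, 0.40],   # dominated by the first candidate
                       [0.18, 0.35]])
    print(pareto_front(scores))  # -> [0, 1, 3]
    ```

    Keeping the whole front, rather than collapsing the metrics into one weighted score, is what lets conflicting intensity relationships veto registration outliers.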

    Automated retinal layer segmentation and pre-apoptotic monitoring for three-dimensional optical coherence tomography

    The aim of this PhD thesis was to develop a segmentation algorithm adapted and optimized for retinal OCT data that provides objective 3D layer thickness measurements, which might be used to improve the diagnosis and monitoring of retinal pathologies. Additionally, a 3D stack registration method was produced by modifying an existing algorithm. A related project was to develop pre-apoptotic retinal monitoring based on changes in texture parameters of the OCT scans, in order to enable treatment before the changes become irreversible; apoptosis refers to the programmed cell death that can occur in retinal tissue and lead to blindness. These issues can be critical for the examination of tissues within the central nervous system. A novel statistical model for segmentation has been created and successfully applied to a large data set. The results obtained open up a broad range of future research possibilities into advanced pathologies. A separate model has been created for segmentation of the choroid, which is located deep in the retina, as its appearance is very different from that of the top retinal layers. Choroid thickness and structure are an important index of various pathologies (diabetes, etc.). As part of the pre-apoptotic monitoring project, it was shown that an increase in the proportion of apoptotic cells in vitro can be accurately quantified. Moreover, the data obtained indicate a similar increase in neuronal scatter in retinal explants following axotomy (removal of retinas from the eye), suggesting that UHR-OCT can be a novel non-invasive technique for the in vivo assessment of neuronal health. Additionally, an independent project within the computer science department, in collaboration with the school of psychology, has been successfully carried out, improving the analysis of facial dynamics and behaviour transfer between individuals. Also, important improvements have been made to a general signal processing algorithm, dynamic time warping (DTW), allowing potential application in a broad range of signal processing fields.
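
    For reference, the classic dynamic time warping recurrence that the mentioned improvements build on can be sketched in a few lines; this is the textbook algorithm, not the thesis's improved variant.

    ```python
    import numpy as np

    def dtw(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping distance between two
        1-D sequences, with unit step pattern and absolute-difference cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: same shape, warped timing
    ```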

    Forensic identification by craniofacial superimposition using soft computing


    Music in Evolution and Evolution in Music

    Music in Evolution and Evolution in Music by Steven Jan is a comprehensive account of the relationships between evolutionary theory and music. Examining the ‘evolutionary algorithm’ that drives biological and musical-cultural evolution, the book provides a distinctive commentary on how musicality and music can shed light on our understanding of Darwin’s famous theory, and vice versa. Comprising seven chapters, with several musical examples, figures and definitions of terms, this original and accessible book is a valuable resource for anyone interested in the relationships between music and evolutionary thought. Jan guides the reader through key evolutionary ideas and the development of human musicality, before exploring cultural evolution, evolutionary ideas in musical scholarship, animal vocalisations, music generated through technology, and the nature of consciousness as an evolutionary phenomenon. A unique examination of how evolutionary thought intersects with music, Music in Evolution and Evolution in Music is essential to our understanding of how and why music arose in our species and why it is such a significant presence in our lives.