
    Sparse approximations of protein structure from noisy random projections

    Single-particle electron microscopy is a modern technique that biophysicists employ to learn the structure of proteins. It yields data that consist of noisy random projections of the protein structure in random directions, with the added complication that the projection angles cannot be observed. In order to reconstruct a three-dimensional model, the projection directions need to be estimated by use of an ad-hoc starting estimate of the unknown particle. In this paper we propose a methodology that does not rely on knowledge of the projection angles, to construct an objective data-dependent low-resolution approximation of the unknown structure that can serve as such a starting estimate. The approach assumes that the protein admits a suitable sparse representation, and employs discrete L^1-regularization (LASSO) as well as notions from shape theory to tackle the peculiar challenges involved in the associated inverse problem. We illustrate the approach by application to the reconstruction of an E. coli protein component called the Klenow fragment. Comment: Published at http://dx.doi.org/10.1214/11-AOAS479 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
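    As a rough illustration of only the sparse-recovery ingredient (the paper's method additionally handles the unknown projection angles and shape alignment, which this does not), here is a minimal compressed-sensing-style sketch using scikit-learn's Lasso; all sizes, names and parameters are illustrative assumptions, not the authors' pipeline.

        # Minimal sketch: recover a sparse coefficient vector from noisy
        # random projections via L1-regularization (LASSO). Illustrative only.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_coeffs, n_proj = 200, 80              # sparse representation size, number of projections
        x_true = np.zeros(n_coeffs)
        x_true[rng.choice(n_coeffs, size=10, replace=False)] = rng.normal(size=10)

        A = rng.normal(size=(n_proj, n_coeffs)) / np.sqrt(n_proj)  # random projection operator
        y = A @ x_true + 0.01 * rng.normal(size=n_proj)            # noisy projection data

        lasso = Lasso(alpha=0.005)              # alpha tunes the L1 penalty strength
        lasso.fit(A, y)
        x_hat = lasso.coef_
        print("nonzero coefficients recovered:", np.sum(np.abs(x_hat) > 1e-3))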

    Precision in 3-D Points Reconstructed From Stereo

    We characterize the precision of a 3-D reconstruction from stereo: we derive confidence intervals for the components (X,Y,Z) of the reconstructed 3-D points. The precision assessment can be used in data rejection, data reduction, and data fusion of the 3-D points. Based on the confidence intervals, a bad or failing stereo camera pair can also be detected and discarded from a polynocular stereo system. Experimentally, we have evaluated the performance of the confidence intervals for Z in terms of empirical capture frequencies versus theoretical probability of capture for a test scene with ground truth. We have tested the interval estimation procedure on more complex scenes (for example, human faces), but since we do not have ground truth models, we have evaluated the performance in such cases only qualitatively. We are currently developing ground truth models for more complex (such as general indoor) scenes, and will quantitatively evaluate the performance of the confidence intervals for the depth of the reconstructed points in the automatic rejection of 3-D points that have a high degree of uncertainty.
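    As a generic illustration of such interval estimates (not necessarily the paper's exact derivation), a first-order delta-method confidence interval for the depth Z = f*B/d reconstructed from disparity d, focal length f and baseline B can be computed as below; all numeric values are made up for the example.

        # Sketch: first-order (delta-method) confidence interval for depth Z
        # reconstructed from stereo disparity, with Z = f*B/d.
        import math

        def depth_confidence_interval(d, sigma_d, f, B, z_crit=1.96):
            """95% CI for Z = f*B/d, propagating disparity noise sigma_d."""
            Z = f * B / d
            sigma_Z = (f * B / d**2) * sigma_d   # |dZ/dd| * sigma_d
            return Z - z_crit * sigma_Z, Z + z_crit * sigma_Z

        # Example: f = 800 px, B = 0.12 m, disparity 20 px with 0.25 px noise
        lo, hi = depth_confidence_interval(d=20.0, sigma_d=0.25, f=800.0, B=0.12)
        print(f"Z lies in [{lo:.3f}, {hi:.3f}] m")   # about [4.682, 4.918] m

    Points whose interval is too wide can then be rejected automatically, which is the data-rejection use mentioned above.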

    Latest developments in the improvement and quantification of high resolution X-ray tomography data

    X-ray Computed Tomography (CT) is a powerful tool to visualize the internal structure of objects. Although X-ray CT is often used for medical purposes, it has many applications in the academic and industrial world. X-ray CT is a non-destructive tool which provides the possibility to obtain a three-dimensional (3D) representation of the investigated object. The currently available high-resolution systems can achieve resolutions of less than one micrometer, which makes it a valuable technique for various scientific and industrial applications.

    At the Centre for X-ray Tomography of Ghent University (UGCT), research is performed on the improvement and application of high-resolution X-ray CT (µCT). Important aspects of this research are the development of state-of-the-art high-resolution CT scanners and the development of software for controlling the scanners, reconstruction software and analysis software. UGCT works closely together with researchers from various research fields, each of whom has specific requirements. To obtain the best possible results in any particular case, the scanners are developed in a modular way, which allows for optimizations, modifications and improvements during use. Another way of improving the image quality lies in optimization of the reconstruction software, which is why the software package Octopus was developed in-house. Once a scanned volume is reconstructed, an important challenge lies in the interpretation of the obtained data. For this interpretation visualization alone is often insufficient and quantitative information is needed. As researchers from different fields have different needs with respect to quantification of their data, UGCT developed the 3D analysis software package Morpho+ for analysing all kinds of samples.

    The research presented in this work focuses on improving the accuracy and extending the amount of quantitative information which can be extracted from µCT data. Even if a perfect analysis algorithm existed, it would be impossible to accurately quantify data of which the image quality is insufficient. As image quality can be significantly improved with the aid of adequate reconstruction techniques, the research presented in this work focuses on analysis as well as reconstruction software. As the datasets obtained with µCT at UGCT are of substantial size, the possibility to process large datasets in a limited amount of time is crucial in the development of new algorithms. The contributions of the author can be subdivided into three major aspects of the processing of CT data: the modification of iterative reconstruction algorithms, the extension and optimization of 3D analysis algorithms, and the development of a new algorithm for discrete tomography. These topics are discussed in more detail below.

    A main aspect in the improvement of image quality is the reduction of artefacts which often occur in µCT, such as noise, cone-beam and beam-hardening artefacts. Cone-beam artefacts are a result of the cone-beam geometry which is often used in laboratory-based µCT, and beam hardening is a consequence of the polychromaticity of the beam. Although analytical reconstruction algorithms based on filtered back projection are still most commonly used for the reconstruction of µCT datasets, there is another approach which is becoming a valuable alternative: iterative reconstruction algorithms. Iterative algorithms are inherently better at coping with the previously mentioned artefacts.
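    To make the iterative approach concrete, here is a minimal sketch of a SIRT-style update (simultaneous iterative reconstruction technique), a common member of this algorithm class. The projection matrix A (nonnegative ray-intersection weights), measured projections p and all parameters are assumptions for illustration; this is not one of the modified algorithms developed at UGCT.

        # Minimal SIRT-style iterative reconstruction sketch: x is the
        # flattened image, A the system (projection) matrix with nonnegative
        # entries, p the measured projection data. Illustrative only.
        import numpy as np

        def sirt(A, p, n_iter=100):
            x = np.zeros(A.shape[1])
            row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
            col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
            for _ in range(n_iter):
                residual = (p - A @ x) / row_sums      # normalised projection error
                x += (A.T @ residual) / col_sums       # back-project and update
                np.clip(x, 0, None, out=x)             # enforce non-negativity
            return x

    The same update loop is also the natural place to insert artefact corrections, which is the kind of modification investigated in this work.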
    Additionally, iterative algorithms can improve image quality when the number of available projections or the angular range is limited. In chapter 3 the possibility to modify these algorithms to further improve image quality is investigated. It is illustrated that streak artefacts, which can occur when metals are present in a sample, can be significantly reduced by modifying the reconstruction algorithm. Additionally, it is demonstrated that incorporating an initial solution (if available) allows the required number of projections to be reduced for a second, slightly modified sample. To reduce beam-hardening artefacts, the physics of the process is modelled and incorporated in the iterative reconstruction algorithm, which results in an easy-to-use and efficient algorithm for the reduction of beam-hardening artefacts that requires no prior knowledge about the sample.

    In chapter 4 the 3D analysis process is described. In the scope of this work, algorithms of the 3D analysis software package Morpho+ were optimized and new methods were added to the program, focusing on quantifying the connectivity and shape of the phases and elements in the sample, as well as on obtaining an accurate segmentation, which is an essential step in the analysis process. Evidently, the different phases in the sample need to be separated from one another. However, often a second segmentation step is needed in order to separate the different elements present in a volume, such as pores in a pore network, or to separate elements which are physically separated but appear to be connected on the reconstructed images due to the limited resolution and/or limited contrast of the scan. The latter effect often occurs in the process of identifying different grains in a geological sample. Algorithms which are available for this second segmentation step often result in over-segmentation, i.e. elements are not only separated from one another but separations inside a single element occur as well. To overcome this effect an algorithm is presented to semi-automatically rejoin the separated parts of a single element. Additionally, Morpho+ was extended with tools to extract information about the connectivity of a sample, which is difficult to quantify but important for samples from various research fields. The connectivity can be described with the aid of the Euler number and the tortuosity. Moreover, the number of neighbouring objects of each object can be determined and the connections between objects can be quantified. It is now also possible to extract a skeleton, which describes the basic structure of the volume. A calculation of several shape parameters was added to the program as well, resulting in the possibility to visualize the different objects on a disc-rod diagram. The many possibilities to characterize reconstructed samples with the aid of Morpho+ are illustrated on several applications (a minimal code sketch of these connectivity measures appears below).

    As mentioned in the previous section, an important aspect for correctly quantifying µCT data is the correct segmentation of the different phases present in the sample. Often a sample consists of only one or a limited number of materials (plus the surrounding air). In this case this prior knowledge about the sample can be incorporated in the reconstruction algorithm. These kinds of algorithms are referred to as discrete reconstruction algorithms, which are used when only a limited number of projections is available. Chapter 5 deals with discrete reconstruction algorithms.
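    Returning briefly to the connectivity measures mentioned above, the promised sketch follows: it counts objects and computes an Euler number on a toy segmented volume, assuming a recent scikit-image (this illustrates the measures generically and does not reproduce the Morpho+ implementation).

        # Minimal connectivity quantification on a segmented (binary) 3D
        # volume using scikit-image. Illustrative only, not Morpho+.
        import numpy as np
        from skimage import measure

        volume = np.zeros((50, 50, 50), dtype=bool)
        volume[10:20, 10:20, 10:20] = True          # one solid object
        volume[30:40, 30:40, 30:40] = True          # a second, disconnected object

        labels = measure.label(volume, connectivity=3)   # 26-connectivity in 3D
        print("number of objects:", labels.max())        # -> 2
        print("Euler number:", measure.euler_number(volume, connectivity=3))  # -> 2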
    One such discrete reconstruction algorithm is the Discrete Algebraic Reconstruction Technique (DART), which combines iterative and discrete reconstruction and has shown excellent results. DART requires knowledge of the attenuation coefficient(s) and segmentation threshold(s) of the material(s). For µCT applications (resulting in large datasets) reconstruction times can increase significantly when DART is used instead of standard iterative reconstruction, as DART requires more iterations. This complicates the practical applicability of DART for routine applications at UGCT. Therefore a modified algorithm (based on the DART algorithm) for the reconstruction of samples consisting of only one material and the surrounding air was developed in the scope of this work, referred to as the Experimental Discrete Algebraic Reconstruction Technique (EDART). The goal of this algorithm is to obtain better reconstruction results than standard iterative reconstruction algorithms without significantly increasing the reconstruction time. Moreover, a fast and intuitive technique to estimate the attenuation coefficient and threshold was developed as part of the EDART algorithm. In chapter 5 it is illustrated that EDART provides improved image quality for both phantom and real data, in comparison with standard iterative reconstruction algorithms, when only a limited number of projections is available. The algorithms presented in this work can be applied sequentially but can also be combined with one another. It is, for example, illustrated in chapter 5 that the beam-hardening correction method can also be incorporated in the EDART algorithm. The combination of the introduced methods allows for an improvement in the process of extracting accurate quantitative information from µCT data.
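    For orientation, here is a heavily simplified sketch of the DART idea: alternate a continuous iterative update with segmentation to a known grey value mu, keeping only pixels near the threshold free. The value-based proxy for DART's boundary-pixel set and all parameters are simplifications, and this is emphatically not the EDART algorithm itself.

        # Heavily simplified DART-style sketch for a one-material sample:
        # segment to {0, mu}, fix confidently segmented pixels, keep updating
        # the rest. Real DART frees pixels bordering a different segment;
        # here a value-based proxy is used instead. Illustrative only.
        import numpy as np

        def dart_like(A, p, mu, threshold, n_outer=10, n_inner=20):
            x = np.zeros(A.shape[1])
            row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
            col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
            for _ in range(n_outer):
                for _ in range(n_inner):                       # SIRT-style inner loop
                    x += A.T @ ((p - A @ x) / row_sums) / col_sums
                seg = np.where(x > threshold, mu, 0.0)         # segment: material or air
                free = np.abs(x - threshold) < 0.1 * mu        # proxy for boundary pixels
                x = np.where(free, x, seg)                     # fix the rest to seg values
            return np.where(x > threshold, mu, 0.0)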

    Schwinger's Picture of Quantum Mechanics IV: Composition and independence

    The groupoid description of Schwinger's picture of quantum mechanics is continued by discussing the closely related notions of composition of systems, subsystems, and their independence. Physical subsystems have a neat algebraic description as subgroupoids of the Schwinger groupoid of the system. The groupoid picture offers two natural notions of composition of systems, direct and free products of groupoids, which will be analyzed in depth together with their universal character. Finally, the notion of independence of subsystems will be reviewed, finding that the usual notion of independence, as well as the notion of free independence, finds a natural realm in the groupoid formalism. The ideas described in this paper will be illustrated by using the EPRB experiment. It will be observed that, in addition to the notion of non-separability provided by the entangled state of the system, there is an intrinsic `non-separability' associated with the impossibility of identifying the entangled particles as subsystems of the total system. Comment: 32 pages. Comments are welcome.

    X-ray computed tomography

    X-ray computed tomography (CT) can reveal the internal details of objects in three dimensions non-destructively. In this Primer, we outline the basic principles of CT and describe the ways in which a CT scan can be acquired using X-ray tubes and synchrotron sources, including the different possible contrast modes that can be exploited. We explain the process of computationally reconstructing three-dimensional (3D) images from 2D radiographs and how to segment the 3D images for subsequent visualization and quantification. Whereas CT is widely used in medical and heavy industrial contexts at relatively low resolutions, here we focus on the application of higher resolution X-ray CT across science and engineering. We consider the application of X-ray CT to study subjects across the materials, metrology and manufacturing, engineering, food, biological, geological and palaeontological sciences. We examine how CT can be used to follow the structural evolution of materials in three dimensions in real time or in a time-lapse manner, for example to follow materials manufacturing or the in-service behaviour and degradation of manufactured components. Finally, we consider the potential for radiation damage and common sources of imaging artefacts, discuss reproducibility issues and consider future advances and opportunities.
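    As a minimal, self-contained illustration of the reconstruction step described above, the textbook filtered back projection workflow can be simulated with scikit-image's radon/iradon functions (a generic example, not the Primer's own code): forward-project a test phantom into a sinogram, then reconstruct the slice.

        # Minimal filtered back projection example: simulate parallel-beam
        # projections of a test phantom and reconstruct the 2D slice.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()                       # 400x400 test slice
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(image, theta=theta)                # forward projection (radiographs)
        recon = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered back projection
        print("RMS reconstruction error:", np.sqrt(np.mean((recon - image) ** 2)))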

    Quantum metrology using tailored non-classical states

    Squeezed states of light play a significant role in various technologies ranging from high-precision metrology, such as gravitational wave detection, to quantum information. These quantum states are prepared to carry particular characteristics depending on their application. For instance, some applications require squeezing in one optical mode, others only in the combination of two distinct optical modes. Furthermore, squeezing can be constant across all frequencies or frequency-dependent. In this thesis, novel quantum optical methods employing different, tailored non-classical light sources are developed and described. The individual squeezed states are controlled and characterised, each tailored for a particular application. In high-precision spectroscopy, the measurement sensitivity is often limited by technical noise at low frequencies. The first publication shows that small low-frequency phase signals are resolvable without increasing the laser power. We use a phase-modulated field, shifting the signal to high frequencies where technical noise is circumvented. In addition, the field is squeezed by 6 dB at high frequencies to reduce shot noise arising from quantum fluctuations. Our approach resolves sub-shot-noise signals at 100 Hz and 20 kHz on a reduced noise floor. In opto-mechanical sensors such as gravitational wave detectors, the fundamental measurement limitation arises from the combination of shot noise and quantum back-action noise induced by radiation pressure fluctuations. A conventional fixed-quadrature squeezed state generated by a resonant optical parametric oscillator (OPO) can only suppress one of these two contributions at a time. To cancel both quantum noise contributions, a suitably frequency-dependent squeezed state is required. Our second publication shows that a detuned OPO generates frequency-dependent squeezing. It can be used as an approximate effective negative-mass oscillator in an all-optical coherent quantum noise cancellation scheme and is suitable to coherently cancel quantum noise. Our generated state, reconstructed by quantum tomography and rotating over megahertz frequencies, exhibits a rotation angle of 39° and a maximal squeezing degree of 5.5 dB. Two-mode squeezed quantum states are resources required in modern applications such as quantum information processing. In the third publication, we address the challenge of determining the ten independent entries of a two-mode squeezed state's covariance matrix to fully characterise the quantum state. We demonstrate a full reconstruction of a 7 dB two-mode squeezed state using only a single polarisation-sensitive homodyne detector, which avoids additional optics and potential loss channels. The findings of this thesis are relevant for experiments in high-precision quantum metrology, e.g. in spectroscopy or gravitational wave detectors operating at the standard quantum limit. The insights gained on generating and handling non-classical states enable advances in quantum information technology.
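    As a small worked illustration of the quantities mentioned above: the covariance matrix of a two-mode Gaussian state is a 4x4 symmetric matrix and hence has 4*5/2 = 10 independent entries, and a 7 dB squeezing level corresponds to r = 7*ln(10)/20, roughly 0.81. The sketch below builds the idealised two-mode squeezed vacuum matrix (vacuum variance normalised to 1); it is a textbook model, not the reconstructed experimental state.

        # Covariance matrix of an ideal two-mode squeezed vacuum in
        # (x1, p1, x2, p2) ordering, vacuum variance normalised to 1.
        import numpy as np

        db = 7.0                                  # quoted squeezing level
        r = db * np.log(10) / 20                  # exp(-2r) = 10**(-db/10)
        c, s = np.cosh(2 * r), np.sinh(2 * r)

        cov = np.array([[ c,  0,  s,  0],
                        [ 0,  c,  0, -s],
                        [ s,  0,  c,  0],
                        [ 0, -s,  0,  c]])
        # The joint quadratures (x1 - x2) and (p1 + p2) are squeezed below vacuum:
        print("var(x1 - x2)/2 =", (cov[0, 0] + cov[2, 2] - 2 * cov[0, 2]) / 2)  # = c - s < 1
        print("var(p1 + p2)/2 =", (cov[1, 1] + cov[3, 3] + 2 * cov[1, 3]) / 2)  # = c - s < 1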

    Wave tomography


    Topics in exact precision mathematical programming

    The focus of this dissertation is the advancement of theory and computation related to exact precision mathematical programming. Optimization software based on floating-point arithmetic can return suboptimal or incorrect results because of round-off errors or the use of numerical tolerances. Exact or correct results are necessary for some applications. Implementing software entirely in rational arithmetic can be prohibitively slow. A viable alternative is the use of hybrid methods that use fast numerical computation to obtain approximate results that are then verified or corrected with safe or exact computation. We study fast methods for sparse exact rational linear algebra, which arises as a bottleneck when solving linear programming problems exactly. Output-sensitive methods for exact linear algebra are studied. Finally, a new method for computing valid linear programming bounds is introduced and proven effective as a subroutine for solving mixed-integer linear programming problems exactly. Extensive computational results are presented for each topic. Ph.D. Committee Chair: Dr. William J. Cook; Committee Members: Dr. George Nemhauser, Dr. Robin Thomas, Dr. Santanu Dey, Dr. Shabbir Ahmed, Dr. Zonghao G
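    To illustrate the hybrid numeric-exact pattern described in the abstract above, here is a minimal sketch (not the dissertation's actual routines): solve a small rational linear system with a fast floating-point solve, then verify and, if needed, correct the result exactly using Python's fractions module.

        # Hybrid pattern: fast approximate float solve, then exact rational
        # verification and iterative refinement. Illustrative sketch only.
        from fractions import Fraction
        import numpy as np

        A = [[Fraction(3), Fraction(1)], [Fraction(1), Fraction(2)]]
        b = [Fraction(9), Fraction(8)]

        A_f = np.array([[float(v) for v in row] for row in A])
        x = [Fraction(v).limit_denominator(10**12)                  # snap floats to
             for v in np.linalg.solve(A_f, [float(v) for v in b])]  # nearby rationals

        for _ in range(20):                                    # exact refinement loop
            r = [bi - sum(aij * xj for aij, xj in zip(row, x))  # exact rational residual
                 for row, bi in zip(A, b)]
            if all(ri == 0 for ri in r):
                break                                          # x is exactly correct
            d = np.linalg.solve(A_f, [float(ri) for ri in r])  # cheap float correction
            x = [xi + Fraction(di).limit_denominator(10**12) for xi, di in zip(x, d)]

        print(x)   # exact rational solution: [Fraction(2, 1), Fraction(3, 1)]

    The exact residual check is the "safe" verification step: the floating-point solver does the heavy lifting, while rational arithmetic only certifies or nudges the answer.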