
    A diszkrét tomográfia új irányzatai és alkalmazása a neutron radiográfiában = New directions in discrete tomography and its application in neutron radiography

    In the project entitled "New Directions in Discrete Tomography and Its Applications in Neutron Radiography" we carried out successful research mainly on the following topics in discrete tomography (DT): reconstruction from fan-beam projections; extension of uniqueness and reconstruction results of DT based on geometrical priors; introduction of new geometrical properties to facilitate reconstruction; existence, uniqueness, and reconstruction in the case of absorbed projections; 2D and 3D reconstruction algorithms for applications in neutron tomography; testing of binary reconstruction algorithms, including benchmark sets and evaluations; and extraction of geometrical and other structural features of the image to be reconstructed directly from its projections. We implemented some of these reconstruction methods in the DIRECT discrete tomography framework (also developed at our department), making it possible to test the methods and compare the efficiency of the different approaches. We published more than 40 papers in international conference proceedings and journals, and two project members obtained PhD degrees during the project period, largely based on their contributions to this work. We also identified several research directions where further work can yield significant theoretical results as well as new and more effective discrete imaging methods for practical applications.

    Reconstruction of Binary Image Using Techniques of Discrete Tomography

    Discrete tomography deals with the reconstruction of images, in particular binary images, from their projections. A number of binary image reconstruction methods have been considered in the literature, using different projection models or additional constraints. Here we consider the reconstruction of a binary image from prescribed numerical information on its rows, the image being treated as a binary matrix of 0's and 1's. This information, referred to as the row projection, consists of the number of 1's and the number of occurrences of the subword 01 in each row of the image to be constructed. The proposed algorithm constructs one among the possibly many binary images consistent with this information, and it constructs the image uniquely for a special class of binary images whose rows have a specific form.
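To make this kind of row projection concrete, here is a small illustrative construction (a hedged sketch, not the paper's actual algorithm): given a row length, a prescribed number of 1's, and a prescribed number of occurrences of the subword 01, it builds one consistent row or reports infeasibility.

```python
def build_row(n, ones, zo):
    """Return one binary row (as a string) of length n with exactly
    `ones` 1's and exactly `zo` occurrences of the subword 01, or None
    if no such row exists. Illustrative construction only; the paper's
    algorithm and its uniqueness class are more refined."""
    if ones == 0:
        return "0" * n if zo == 0 else None
    # Every occurrence of 01 starts a block of 1's preceded by a 0, so
    # we need zo <= ones, plus room for the separating 0's: ones + zo <= n.
    if zo > ones or ones + zo > n:
        return None
    if zo == 0:
        # With no 01 allowed, no 1 may follow a 0: the 1's form a prefix.
        return "1" * ones + "0" * (n - ones)
    # One big first block after a single 0, then zo - 1 singleton blocks,
    # each preceded by a 0; pad with trailing 0's (which add no 01).
    row = "0" + "1" * (ones - (zo - 1)) + "01" * (zo - 1)
    return row + "0" * (n - len(row))
```

For example, `build_row(8, 3, 2)` returns `"01101000"`: three 1's, and 01 occurs exactly twice. The `zo == 0` branch reflects the observation that when no 01 may occur, the 1's must form a prefix of the row.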

    A framework for generating some discrete sets with disjoint components by using uniform distributions

    Discrete tomography deals with the reconstruction of discrete sets from few projections. Assuming that the set to be reconstructed belongs to a certain class of discrete sets with some geometrical properties is a commonly used technique to reduce the number of possibly many different solutions of the same reconstruction problem. The average performance of reconstruction algorithms is often tested on such classes by choosing elements of a given class from uniform random distributions. This paper presents a general framework for generating discrete sets with disjoint connected components using uniform distributions. In particular, the uniform random generation of hv-convex discrete sets and Q-convex discrete sets according to the size of the minimal bounding rectangle is discussed.
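The rejection-sampling idea behind such generators can be illustrated on a much simpler class than the paper considers. The sketch below (a toy stand-in, not the paper's framework) draws k uniform axis-aligned rectangles in an m x n grid and accepts only configurations whose components are pairwise disjoint; since the proposal is uniform, the accepted configuration is uniform over all disjoint ones.

```python
import random

def random_disjoint_rectangles(m, n, k, rng=random):
    """Uniformly sample k axis-aligned rectangles with pairwise disjoint
    cell sets inside an m x n grid by rejection sampling. A toy stand-in
    for the paper's framework: it generates rectangles, not hv-convex or
    Q-convex components."""
    def draw():
        r1, r2 = sorted(rng.choices(range(m), k=2))
        c1, c2 = sorted(rng.choices(range(n), k=2))
        return (r1, r2, c1, c2)

    def disjoint(a, b):
        # Disjoint iff separated along rows or along columns.
        return a[1] < b[0] or b[1] < a[0] or a[3] < b[2] or b[3] < a[2]

    while True:  # accepted draws are uniform over all disjoint k-tuples
        rects = [draw() for _ in range(k)]
        if all(disjoint(rects[i], rects[j])
               for i in range(k) for j in range(i + 1, k)):
            return rects
```

Note that rejection sampling is exact but can be slow when k is large relative to the grid; part of the paper's contribution is generating structured classes according to the minimal bounding rectangle without such waste.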

    Variational Methods for Discrete Tomography

    Image reconstruction from tomographically sampled data has developed into a stand-alone research area with applications in many practical domains, such as medical imaging, seismology, astronomy, flow analysis, industrial inspection, and more. Existing (continuous) algorithms often fail to model the analysed object adequately. In this thesis, we study discrete tomographic approaches that enable the addition of constraints in order to better fit the description of the analysed object and improve the end result. A particular focus is set on assumptions regarding the signals' sampling methodology, at which point we look towards the recently introduced Compressive Sensing (CS) approach, which has been shown to return remarkable results depending on how sparse a given signal is. However, research done in the CS field does not accurately relate to real-world applications, as the objects usually surrounding us are piecewise constant (not sparse on their own) and the properties of the sensing matrices from the viewpoint of CS do not reflect real acquisition processes. Motivated by these shortcomings, we study signals that are sparse in a given representation, e.g. under the forward-difference operator (total variation), and develop reconstruction diagrams (phase transitions) with the help of linear programming, convex analysis, and duality that enable the user to pinpoint the type of objects (with regard to their sparsity) which can be reconstructed, given an ensemble of acquisition directions. Moreover, a closer look is given to handling large data volumes by adding different perturbations (entropic, quadratic) to the already constrained linear program. In empirical assessments, perturbation has led to an increased reconstruction rate. The topic of this thesis is motivated by industrial applications where the acquisition process is restricted to a maximum of nine cameras, thus yielding a severely undersampled inverse problem.
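The gradient-domain sparsity underlying this analysis can be made concrete with a minimal 1-D sketch (illustrative only; the thesis works with 2-D total variation and linear-programming reconstructions):

```python
def forward_diff_sparsity(x):
    """Number of nonzeros in the forward differences of a 1-D signal:
    the sparsity measure under which piecewise-constant signals become
    compressible, as exploited by TV-based compressive sensing."""
    return sum(1 for a, b in zip(x, x[1:]) if b - a != 0)

# A piecewise-constant signal: every raw sample is nonzero, yet the
# signal is only 2-sparse in the gradient domain (two jumps).
signal = [2] * 40 + [5] * 30 + [1] * 30
```

Here all 100 samples are nonzero while the forward difference has only 2 nonzeros, which is why such objects are good candidates for TV-regularized recovery from few projection directions.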

    MRI Excitation Pulse Design and Image Reconstruction for Accelerated Neuroimaging

    Excitation pulse design and image reconstruction are two important topics in MR research for enabling faster imaging. On the pulse design side, selective excitations that confine signals to be within a small region-of-interest (ROI) instead of the full imaging field-of-view (FOV) can be used to reduce sampling density in the k-space, which is a direct outcome of the change in the underlying Nyquist sampling rate. On the reconstruction side, besides improving imaging algorithms’ ability to restore images from less data, another objective is to reduce the reconstruction time, particularly for dynamic imaging applications. This dissertation focuses on these two perspectives: Chapter II is devoted to the excitation pulse design. Specifically, we exploit auto-differentiation frameworks that automatically apply the chain rule on complicated computations. We derived and developed a computationally efficient Bloch-simulator and its explicit Bloch simulation Jacobian operations using such frameworks. This simulator can yield numerical derivatives with respect to pulse RF and gradient waveforms given arbitrary sub-differentiable excitation objective functions. The method does not rely on the small-tip approximation, and is accurate as long as the Bloch simulation can correctly model the spin movements due to the excitation pulses. In particular, we successfully applied this pulse design approach for jointly designing RF and gradient waveforms for 3D spatially tailored large-tip excitation objectives. The auto-differentiable pulse design method can yield superior 3D spatially tailored excitation profiles that are useful for inner volume (IV) imaging, where one attempts to image a volumetric ROI at high spatiotemporal resolution without aliasing from signals outside the IV (i.e., outer volume). 
In Chapter III, we propose and develop a novel steady-state IV imaging strategy which suppresses aliasing by saturating the outer volume (OV) magnetizations via a 3D tailored OV excitation pulse that is followed by a signal crusher gradient. This saturation based strategy can substantially suppress the unwanted aliasing for common steady-state imaging sequences. By eliminating the outer volume signals, one can configure acquisitions for a reduced FOV to shorten the scanning time and increase spatiotemporal resolution for applications such as dynamic imaging. In dynamic imaging (e.g., fMRI), where a time series is to be reconstructed, non-iterative reconstruction algorithms may offer savings in overall reconstruction time. Chapter IV focuses on non-iterative image reconstruction, specifically, extending the GRAPPA algorithm to general non-Cartesian acquisitions. We analyzed the formalism of conventional GRAPPA reconstruction coefficients, generalized it to non-Cartesian scenarios by using properties of the Fourier transform, and obtained an efficient non-Cartesian GRAPPA algorithm. The algorithm attains reconstruction quality that can rival classical iterative imaging methods such as conjugate gradient SENSE and SPIRiT. In summary, this dissertation has proposed and developed multiple methods for accelerating MR imaging, from pulse design to reconstruction. While devoted to neuroimaging, the proposed methods are general and should also be useful for other applications.
PhD, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/168085/1/tianrluo_1.pd
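The idea of differentiating an excitation simulation with respect to the RF waveform can be sketched in a deliberately tiny setting: a single on-resonance spin whose flip angle accumulates over RF samples, with a central-difference derivative standing in for the automatic differentiation the dissertation uses. All names below (`excite`, `d_excite`, the pulse parameters) are hypothetical, and the model omits relaxation, off-resonance, and gradients.

```python
import math

GAMMA = 2 * math.pi * 42.58e6  # proton gyromagnetic ratio, rad/s/T

def excite(b1_amps, dt=4e-6):
    """Toy on-resonance, single-spin excitation: each RF sample rotates
    the magnetization about x by GAMMA * B1 * dt, starting from
    equilibrium (0, 0, 1). Returns the transverse magnitude |Mxy|.
    A pedagogical stand-in for a full differentiable Bloch simulator."""
    flip = sum(GAMMA * b1 * dt for b1 in b1_amps)  # accumulated flip angle, rad
    return math.sin(flip)

def d_excite(b1_amps, i, dt=4e-6, eps=1e-9):
    """Central-difference derivative of |Mxy| w.r.t. the i-th RF sample,
    standing in for auto-differentiation through the simulator."""
    hi, lo = list(b1_amps), list(b1_amps)
    hi[i] += eps
    lo[i] -= eps
    return (excite(hi, dt) - excite(lo, dt)) / (2 * eps)

# A 1 ms, 1 uT hard pulse: 250 samples at dt = 4 us.
pulse = [1e-6] * 250
```

In this toy model the derivative has the closed form GAMMA * dt * cos(flip), so the numerical gradient can be checked analytically; a gradient-based pulse designer would use such derivatives to update every RF (and gradient) sample toward an excitation objective.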

    Advances in Spectral Learning with Applications to Text Analysis and Brain Imaging

    Spectral learning algorithms are becoming increasingly popular in data-rich domains, driven in part by recent advances in large-scale randomized SVD and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but also provably correct. Following this line of research, we make two contributions. First, we propose a set of spectral algorithms for text analysis and natural language processing. In particular, we propose fast and scalable spectral algorithms for learning word embeddings: low-dimensional real vectors (called Eigenwords) that capture the "meaning" of words from their context. Second, we show how similar spectral methods can be applied to analyzing brain images. State-of-the-art approaches to learning word embeddings are slow to train or lack theoretical grounding; we propose three spectral algorithms that overcome these limitations. All three algorithms harness the multi-view nature of text data, i.e., the left and right context of each word, and share three characteristics: (1) they are fast to train and scalable; (2) they have strong theoretical properties; (3) they can induce context-specific embeddings, i.e., different embeddings for "river bank" and "Bank of America". They also have lower sample complexity and hence higher statistical power for rare words. We provide theory which establishes relationships between these algorithms and optimality criteria for the estimates they provide. We also perform thorough qualitative and quantitative evaluation of Eigenwords and demonstrate their superior performance over state-of-the-art approaches. Next, we turn to the task of using spectral learning methods for brain imaging data.
Methods like Sparse Principal Component Analysis (SPCA), Non-negative Matrix Factorization (NMF), and Independent Component Analysis (ICA) have been used to obtain state-of-the-art accuracies in a variety of problems in machine learning. However, their usage in brain imaging, though increasing, is limited by the fact that they are applied as out-of-the-box techniques and are seldom tailored to the domain-specific constraints and knowledge of medical imaging, which leads to difficulties in interpreting the results. To address these shortcomings, we propose Eigenanatomy (EANAT), a general framework for sparse matrix factorization. Its goal is to statistically learn the boundaries of and connections between brain regions by weighing both the data and prior neuroanatomical knowledge. Although EANAT incorporates some neuroanatomical prior knowledge in the form of connectedness and smoothness constraints, it can still be difficult for clinicians to interpret the results in specific domains where network-specific hypotheses exist. We therefore extend EANAT and present a novel framework for prior-constrained sparse decomposition of matrices derived from brain imaging data, called Prior-Based Eigenanatomy (p-Eigen). We formulate our solution in terms of a prior-constrained l1-penalized (sparse) principal component analysis. Experimental evaluation confirms that p-Eigen extracts biologically relevant, patient-specific functional parcels and that it significantly aids classification of Mild Cognitive Impairment when compared to state-of-the-art competing approaches.
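The computational primitive shared by randomized SVD and the spectral estimators above is the extraction of leading singular directions. A minimal pure-Python sketch (not the thesis code) via power iteration on A^T A:

```python
import math
import random

def top_singular_vector(A, iters=200, seed=0):
    """Power iteration on A^T A: returns the leading right singular
    vector of a dense matrix given as a list of rows. Minimal sketch of
    the spectral machinery; real implementations use randomized SVD
    with multiple components and re-orthogonalization."""
    n = len(A[0])
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(n)]  # random start vector
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(n)) for row in A]        # A v
        w = [sum(A[i][j] * u[i] for i in range(len(A))) for j in range(n)]  # A^T (A v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # normalize to keep iterates bounded
    return v
```

Applied to a word-context co-occurrence matrix, the leading singular directions are exactly the kind of low-dimensional coordinates (here one direction; Eigenwords keep several) that serve as embeddings.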

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 page
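The compression the TT format achieves can be illustrated with a tiny entry-evaluation sketch (a hypothetical helper, not code from the monograph): a d-way tensor is stored as d small cores, and any single entry is recovered by multiplying one matrix slice per core.

```python
def tt_entry(cores, idx):
    """Evaluate one entry of a tensor stored in tensor-train (TT) form.
    cores[k] is a nested list of shape (r_{k-1}, n_k, r_k); the entry
    T[i_1, ..., i_d] is the product of the matrix slices G_k[:, i_k, :].
    Minimal illustrative sketch, not library code."""
    vec = [1.0]  # 1 x r_0 row vector, with boundary rank r_0 = 1
    for core, i in zip(cores, idx):
        slab = [row[i] for row in core]  # the r_{k-1} x r_k slice at index i
        vec = [sum(vec[a] * slab[a][b] for a in range(len(vec)))
               for b in range(len(slab[0]))]
    return vec[0]

# A rank-1 2x2x2 tensor T[i,j,k] = x[i]*y[j]*z[k]: all TT ranks are 1,
# so its 8 entries are stored as 3 cores of 2 numbers each.
x, y, z = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
cores = [[[[v] for v in x]], [[[v] for v in y]], [[[v] for v in z]]]
```

In general a TT representation stores O(d n r^2) numbers instead of the n^d entries of the dense tensor, which is the "super-compression" that makes distributed contraction-based computation feasible.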