    3D particle tracking velocimetry using dynamic discrete tomography

    Particle tracking velocimetry in 3D is becoming an increasingly important imaging tool in the study of fluid dynamics, combustion, and plasmas. We introduce a dynamic discrete tomography algorithm for reconstructing particle trajectories from projections. The algorithm is efficient for data from two projection directions and exact in the sense that it finds a solution consistent with the experimental data. Non-uniqueness of solutions can be detected, and individual solutions can be tracked.
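
    To make the two-direction discrete tomography setting concrete, here is a toy sketch (ours, not the authors' algorithm) that reconstructs a static binary particle image on a grid from its row and column projections using the classical Gale-Ryser greedy construction; the dynamic tracking of trajectories across frames is not attempted.

        import numpy as np

        def reconstruct_from_two_projections(row_sums, col_sums):
            """Build a binary image whose row and column sums match two given
            1D projections, if such an image exists (Gale-Ryser greedy)."""
            remaining = np.array(col_sums, dtype=int)
            image = np.zeros((len(row_sums), len(remaining)), dtype=int)
            for i, r in enumerate(row_sums):
                # Put this row's r particles into the columns that still need the most.
                cols = np.argsort(-remaining)[:r]
                image[i, cols] = 1
                remaining[cols] -= 1
            if np.any(remaining != 0):
                raise ValueError("projections are inconsistent: no binary image exists")
            return image

        # Three particles on a 3x3 grid seen from two orthogonal directions.
        print(reconstruct_from_two_projections(row_sums=[1, 1, 1], col_sums=[1, 1, 1]))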

    A diszkrét tomográfia új irányzatai és alkalmazása a neutron radiográfiában = New directions in discrete tomography and its application in neutron radiography

    In the project entitled "New Directions in Discrete Tomography and Its Applications in Neutron Radiography" we carried out successful research mainly on the following topics in discrete tomography (DT): reconstruction from fan-beam projections; extension of uniqueness and reconstruction results of DT based on geometrical priors; introduction of new geometrical properties to facilitate reconstruction; existence, uniqueness and reconstruction in the case of absorbed projections; 2D and 3D reconstruction algorithms for applications in neutron tomography; testing of binary reconstruction algorithms, with benchmark sets and evaluations; and extraction of geometrical and other structural features of the image to be reconstructed directly from its projections. As part of the project we implemented some of our reconstruction methods in the DIRECT framework (also developed at our department), making it possible to test the methods and compare the efficiency of the different approaches. We published more than 40 articles in international conference proceedings and journals, and two project members obtained PhD degrees during the project period, largely based on their contributions to this work. We also identified several research directions where further work can yield important theoretical results as well as new and more effective discrete imaging methods for applications.

    A benchmark set for the reconstruction of hv-convex discrete sets

    In this paper we summarize the most important generation methods developed for the subclasses of hv-convex discrete sets. We also present some new generation techniques to complement the former ones, thus making it possible to design a complete benchmark set for testing the performance of reconstruction algorithms on the class of hv-convex discrete sets and its subclasses. Using this benchmark set, the paper also collects several statistics on hv-convex discrete sets, which are of great importance in the analysis of algorithms for reconstructing such discrete sets.
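
    For context, an hv-convex discrete set is one whose points form a single contiguous run in every row and every column. A small hedged helper of the kind one might use when validating images in such a benchmark (names and implementation are ours, not taken from the paper):

        import numpy as np

        def is_hv_convex(grid):
            """A binary image is hv-convex if the 1s in every row and in every
            column form a single contiguous run (no gaps)."""
            grid = np.asarray(grid, dtype=int)

            def lines_convex(a):
                for line in a:
                    ones = np.flatnonzero(line)
                    if ones.size and ones[-1] - ones[0] + 1 != ones.size:
                        return False
                return True

            return lines_convex(grid) and lines_convex(grid.T)

        print(is_hv_convex([[0, 1, 1],
                            [1, 1, 0],
                            [0, 1, 0]]))   # True: every row and column run is contiguous
        print(is_hv_convex([[1, 0, 1],
                            [0, 0, 0],
                            [0, 0, 0]]))   # False: the first row has a gap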

    Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain

    Real-world data typically contain repeated and periodic patterns. This suggests that they can be effectively represented and compressed using only a few coefficients of an appropriate basis (e.g., Fourier, wavelets, etc.). However, distance estimation when the data are represented using different sets of coefficients is still a largely unexplored area. This work studies the optimization problems related to obtaining the tightest lower/upper bound on Euclidean distances when each data object is potentially compressed using a different set of orthonormal coefficients. Our technique leads to tighter distance estimates, which translates into more accurate search, learning and mining operations directly in the compressed domain. We formulate the problem of estimating lower/upper distance bounds as an optimization problem. We establish the properties of optimal solutions, and leverage the theoretical analysis to develop a fast algorithm to obtain an exact solution to the problem. The suggested solution provides the tightest estimation of the $L_2$-norm or the correlation. We show that typical data-analysis operations, such as k-NN search or k-Means clustering, can operate more accurately using the proposed compression and distance reconstruction technique. We compare it with many other prevalent compression and reconstruction techniques, including random projections and PCA-based techniques. We highlight a surprising result, namely that when the data are highly sparse in some basis, our technique may even outperform PCA-based compression. The contributions of this work are generic as our methodology is applicable to any sequential or high-dimensional data as well as to any orthogonal data transformation used for the underlying data compression scheme. Comment: 25 pages, 20 figures, accepted in VLD
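
    As a rough illustration of the setting, the sketch below compresses two sequences with possibly different sets of orthonormal DCT coefficients and derives simple lower/upper bounds on their Euclidean distance from the stored coefficients and total energies. It is a baseline only: the bounds are valid but not the tightest ones obtained by the optimization studied in this work, and the choice of DCT and all names are ours.

        import numpy as np
        from scipy.fft import dct

        def compress(x, k):
            """Keep the k largest-magnitude orthonormal DCT coefficients of x,
            plus the total energy ||x||^2 (Parseval keeps energies comparable)."""
            c = dct(x, norm="ortho")
            idx = np.argsort(-np.abs(c))[:k]
            return {int(i): float(c[i]) for i in idx}, float(np.sum(c ** 2))

        def distance_bounds(cx, ex, cy, ey):
            """Simple lower/upper bounds on the Euclidean distance between two
            sequences stored as (coefficient dict, total energy); valid but not
            the tightest possible bounds."""
            shared = set(cx) & set(cy)
            exact = sum((cx[i] - cy[i]) ** 2 for i in shared)            # contribution known exactly
            a = np.sqrt(max(ex - sum(cx[i] ** 2 for i in shared), 0.0))  # energy of x outside shared indices
            b = np.sqrt(max(ey - sum(cy[i] ** 2 for i in shared), 0.0))  # energy of y outside shared indices
            return np.sqrt(exact + (a - b) ** 2), np.sqrt(exact + (a + b) ** 2)

        rng = np.random.default_rng(0)
        x, y = rng.standard_normal(256), rng.standard_normal(256)
        (cx, ex), (cy, ey) = compress(x, 16), compress(y, 16)
        lo, hi = distance_bounds(cx, ex, cy, ey)
        print(lo, np.linalg.norm(x - y), hi)   # the true distance always lies in [lo, hi]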

    Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

    Suppose we are given a vector $f$ in $\mathbb{R}^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class $\mathcal{F}$ of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in \mathcal{F}$ decay like a power law (or if the coefficient sequence of $f$ in a fixed basis decays like a power law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements. Comment: 39 pages; no figures; to appear. The Bernoulli ensemble proof has been corrected; other expository and bibliographical changes made, incorporating the referee's suggestions.
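
    A minimal sketch of the kind of recovery procedure analyzed here: $\ell_1$ minimization (basis pursuit) from random Gaussian measurements, recast as a linear program. The measurement ensemble, dimensions and solver choice are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(A, b):
            """Recover a sparse x from b = A x by l1 minimization, recast as a
            linear program: x = u - v, u >= 0, v >= 0, minimize sum(u) + sum(v)."""
            n = A.shape[1]
            res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                          bounds=(0, None), method="highs")
            return res.x[:n] - res.x[n:]

        rng = np.random.default_rng(1)
        n, k, m = 128, 5, 40                            # length, sparsity, number of measurements
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement ensemble
        x_hat = basis_pursuit(A, A @ x)
        print(np.max(np.abs(x_hat - x)))                # typically ~0: exact recovery with m << n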

    Phase Retrieval via Matrix Completion

    This paper develops a novel framework for phase retrieval, a problem which arises in X-ray crystallography, diffraction imaging, astronomical imaging and many other applications. Our approach combines multiple structured illuminations together with ideas from convex programming to recover the phase from intensity measurements, typically from the modulus of the diffracted wave. We demonstrate empirically that any complex-valued object can be recovered from the knowledge of the magnitude of just a few diffraction patterns by solving a simple convex optimization problem inspired by the recent literature on matrix completion. More importantly, we also demonstrate that our noise-aware algorithms are stable in the sense that the reconstruction degrades gracefully as the signal-to-noise ratio decreases. Finally, we introduce some theory showing that one can design very simple structured illumination patterns such that three diffraction patterns uniquely determine the phase of the object we wish to recover.
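
    A sketch of the lifting idea behind such a convex program (notation ours, following the matrix-completion literature the abstract refers to; the paper's exact program may differ in details): each intensity measurement is linear in the rank-one matrix $X = xx^*$, so the rank constraint can be relaxed and the phase recovered by trace minimization over positive semidefinite matrices, for example

        $$ b_i = |\langle a_i, x \rangle|^2 = a_i^* x x^* a_i = \mathrm{Tr}(a_i a_i^* X), \qquad X = xx^*, \quad i = 1, \dots, m, $$
        $$ \min_{X} \ \mathrm{Tr}(X) \quad \text{subject to} \quad \mathrm{Tr}(a_i a_i^* X) = b_i \ \ (i = 1, \dots, m), \quad X \succeq 0. $$

    An estimate of $x$ is then read off, up to a global phase that intensity data cannot determine, from the leading eigenvector of the recovered $X$.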

    The Dantzig selector: Statistical estimation when $p$ is much larger than $n$

    In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, the Dantzig selector, which is a solution to the $\ell_1$-regularization problem $$\min_{\tilde{\beta} \in \mathbf{R}^p} \|\tilde{\beta}\|_{\ell_1} \quad \text{subject to} \quad \|X^* r\|_{\ell_\infty} \le (1 + t^{-1}) \sqrt{2 \log p} \cdot \sigma,$$ where $r$ is the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $$\|\hat{\beta} - \beta\|_{\ell_2}^2 \le C^2 \cdot 2 \log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$ Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP). Comment: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
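
    Since the abstract notes that the program can be recast as a convenient linear program, here is a minimal sketch of that recasting (the design matrix, noise level and constants are illustrative choices of ours, not values from the paper):

        import numpy as np
        from scipy.optimize import linprog

        def dantzig_selector(X, y, lam):
            """Solve min ||beta||_1 subject to ||X^T (y - X beta)||_inf <= lam,
            recast as a linear program with beta = u - v, u >= 0, v >= 0."""
            p = X.shape[1]
            G, g = X.T @ X, X.T @ y
            # Constraints:  G(u - v) - g <= lam   and   g - G(u - v) <= lam.
            A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
            b_ub = np.concatenate([lam + g, lam - g])
            res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                          bounds=(0, None), method="highs")
            return res.x[:p] - res.x[p:]

        rng = np.random.default_rng(2)
        n, p, s, sigma = 72, 256, 5, 0.5
        X = rng.standard_normal((n, p)) / np.sqrt(n)    # columns roughly unit-normed
        beta = np.zeros(p)
        beta[rng.choice(p, s, replace=False)] = 5.0
        y = X @ beta + sigma * rng.standard_normal(n)
        lam = np.sqrt(2 * np.log(p)) * sigma            # of the form (1 + 1/t) * sqrt(2 log p) * sigma, large t
        beta_hat = dantzig_selector(X, y, lam)
        print(np.flatnonzero(np.abs(beta_hat) > 1.0))   # typically recovers the true support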