3,039 research outputs found
3D particle tracking velocimetry using dynamic discrete tomography
Particle tracking velocimetry in 3D is becoming an increasingly important
imaging tool in the study of fluid dynamics, combustion, and plasmas. We
introduce a dynamic discrete tomography algorithm for reconstructing particle
trajectories from projections. The algorithm is efficient for data from two
projection directions and exact in the sense that it finds a solution
consistent with the experimental data. Non-uniqueness of solutions can be
detected and solutions can be tracked individually.
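At a single time step, the two-direction setting above reduces to the classical discrete-tomography subproblem of reconstructing a binary image from its row and column sums. A minimal sketch of that static subproblem (a Ryser-style greedy, not the paper's dynamic tracking algorithm; all names are illustrative):

```python
def reconstruct_two_projections(row_sums, col_sums):
    """Build a 0/1 image consistent with the given horizontal and
    vertical projections, or return None if none exists (Ryser-style
    greedy: fill each row into the columns that still need the most 1s)."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    remaining = list(col_sums)
    image = [[0] * n for _ in range(m)]
    for i in sorted(range(m), key=lambda r: -row_sums[r]):
        if row_sums[i] > n:
            return None
        # columns with the largest remaining demand
        cols = sorted(range(n), key=lambda c: -remaining[c])[:row_sums[i]]
        if any(remaining[c] == 0 for c in cols):
            return None  # projections are inconsistent
        for c in cols:
            image[i][c] = 1
            remaining[c] -= 1
    return image
```

Non-uniqueness, which the paper detects explicitly, shows up here as "switching components": distinct images that share both projections.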
New directions in discrete tomography and its application in neutron radiography
In the project entitled "New Directions in Discrete Tomography and Its Applications in Neutron Radiography" we carried out successful research mainly on the following topics in discrete tomography (DT): reconstruction from fan-beam projections; extension of uniqueness and reconstruction results of DT based on geometrical priors; introduction of new geometrical properties to facilitate reconstruction; existence, uniqueness, and reconstruction in the case of absorbed projections; 2D and 3D reconstruction algorithms for applications in neutron tomography; testing binary reconstruction algorithms, developing benchmark sets and evaluations; and extracting geometrical and other structural features of the image to be reconstructed directly from its projections. As part of the project we implemented some of our reconstruction methods in the DIRECT discrete tomography framework (also developed at our department), making it possible to test the methods and compare the effectiveness of the different approaches. We published more than 40 articles in international conference proceedings and journals, and two project members obtained PhD degrees on the research topic during the project period. We also identified several research directions where, in our view, further work can yield significant theoretical results as well as new and more effective discrete imaging methods for practical applications.
A benchmark set for the reconstruction of hv-convex discrete sets
In this paper we summarize the most important generation methods developed for the subclasses of hv-convex discrete sets. We also present some new generation techniques to complement the former ones, thus making it possible to design a complete benchmark set for testing the performance of reconstruction algorithms on the class of hv-convex discrete sets and its subclasses. Using this benchmark set, the paper also collects several statistics on hv-convex discrete sets, which are of great importance in the analysis of algorithms for reconstructing such discrete sets.
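For context, a discrete set is hv-convex when the 1s in every row and every column form a single contiguous run. A small sketch of that membership test (illustrative code, not part of the benchmark suite described above):

```python
def is_hv_convex(image):
    """Return True iff the binary image (a list of 0/1 rows) is hv-convex:
    the 1s in every row and every column form one contiguous run."""
    def contiguous(line):
        ones = [k for k, v in enumerate(line) if v]
        return not ones or ones[-1] - ones[0] + 1 == len(ones)
    return (all(contiguous(row) for row in image)
            and all(contiguous(col) for col in zip(*image)))
```

A generator for a benchmark set would pair such a test with uniform sampling from the subclass of interest.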
Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain
Real-world data typically contain repeated and periodic patterns. This
suggests that they can be effectively represented and compressed using only a
few coefficients of an appropriate basis (e.g., Fourier, Wavelets, etc.).
However, distance estimation when the data are represented using different sets
of coefficients is still a largely unexplored area. This work studies the
optimization problems related to obtaining the \emph{tightest} lower/upper
bound on Euclidean distances when each data object is potentially compressed
using a different set of orthonormal coefficients. Our technique leads to
tighter distance estimates, which translates into more accurate search,
learning and mining operations \textit{directly} in the compressed domain.
We formulate the problem of estimating lower/upper distance bounds as an
optimization problem. We establish the properties of optimal solutions, and
leverage the theoretical analysis to develop a fast algorithm to obtain an
\emph{exact} solution to the problem. The suggested solution provides the
tightest estimation of the $\ell_2$-norm or the correlation. We show that typical
data-analysis operations, such as k-NN search or k-Means clustering, can
operate more accurately using the proposed compression and distance
reconstruction technique. We compare it with many other prevalent compression
and reconstruction techniques, including random projections and PCA-based
techniques. We highlight a surprising result, namely that when the data are
highly sparse in some basis, our technique may even outperform PCA-based
compression.
The contributions of this work are generic as our methodology is applicable
to any sequential or high-dimensional data as well as to any orthogonal data
transformation used for the underlying data compression scheme.
Comment: 25 pages, 20 figures, accepted in VLD
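As a toy illustration of the setting (not the paper's optimal bounds, which come from solving the stated optimization problem): if each object stores a subset of its orthonormal coefficients plus the energy it discarded, a simple triangle inequality already yields valid, if loose, lower/upper distance bounds:

```python
import math

def distance_bounds(kept_x, kept_y, tail_energy_x, tail_energy_y):
    """Lower/upper bounds on the Euclidean distance between two signals,
    each stored as {coefficient index: value} in the SAME orthonormal
    basis, plus the total energy of its discarded coefficients.
    These are simple triangle-inequality bounds -- valid but looser than
    the optimization-based bounds the paper develops."""
    # By orthonormality (Parseval), distances in coefficient space equal
    # distances in signal space, so compare the zero-padded kept vectors.
    idx = set(kept_x) | set(kept_y)
    kept_dist = math.sqrt(sum((kept_x.get(k, 0.0) - kept_y.get(k, 0.0)) ** 2
                              for k in idx))
    slack = math.sqrt(tail_energy_x) + math.sqrt(tail_energy_y)
    return max(0.0, kept_dist - slack), kept_dist + slack
```

When both tails are empty the two bounds coincide and the distance is exact; the paper's contribution is to tighten the gap when they are not.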
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
Suppose we are given a vector $f$ in $\mathbb{R}^N$. How many linear measurements do we need to make about $f$ to be able to recover $f$ to within precision $\epsilon$ in the Euclidean ($\ell_2$) metric? Or more exactly, suppose we are interested in a class $\mathcal{F}$ of such objects--discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy $\epsilon$? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal $f \in \mathcal{F}$ decay like a power-law (or if the coefficient sequence of $f$ in a fixed basis decays like a power-law), then it is possible to reconstruct $f$ to within very high accuracy from a small number of random measurements.
Comment: 39 pages; no figures; to appear. Bernoulli ensemble proof has been corrected; other expository and bibliographical changes made, incorporating the referee's suggestions.
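To make the setting concrete, here is a toy demonstration of recovering a sparse vector from a few random Gaussian measurements. The paper analyzes $\ell_1$-minimization-style decoding; the greedy Orthogonal Matching Pursuit below is a simpler, well-known stand-in used purely for illustration (all names illustrative):

```python
import random  # used below to draw a random Gaussian measurement matrix

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the k columns of A most
    correlated with the residual, re-fitting by least squares each time."""
    m, n = len(A), len(A[0])
    support, residual, coef = [], y[:], []
    for _ in range(k):
        corr = [abs(sum(A[i][j] * residual[i] for i in range(m)))
                for j in range(n)]
        j = max((c for c in range(n) if c not in support),
                key=lambda c: corr[c])
        support.append(j)
        coef = _least_squares(A, y, support)
        residual = [y[i] - sum(A[i][s] * c for s, c in zip(support, coef))
                    for i in range(m)]
    x = [0.0] * n
    for s, c in zip(support, coef):
        x[s] = c
    return x

def _least_squares(A, y, S):
    """Solve min ||y - A[:, S] c|| via the normal equations, using
    Gaussian elimination with partial pivoting (fine at toy sizes)."""
    m, k = len(A), len(S)
    G = [[sum(A[i][S[p]] * A[i][S[q]] for i in range(m)) for q in range(k)]
         for p in range(k)]
    b = [sum(A[i][S[p]] * y[i] for i in range(m)) for p in range(k)]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(G[r][p]))
        G[p], G[piv], b[p], b[piv] = G[piv], G[p], b[piv], b[p]
        for r in range(p + 1, k):
            f = G[r][p] / G[p][p]
            for q in range(p, k):
                G[r][q] -= f * G[p][q]
            b[r] -= f * b[p]
    c = [0.0] * k
    for p in range(k - 1, -1, -1):
        c[p] = (b[p] - sum(G[p][q] * c[q] for q in range(p + 1, k))) / G[p][p]
    return c
```

With a Gaussian matrix and a signal sparse enough relative to the number of measurements, such decoders typically recover the signal exactly, which is the phenomenon the paper quantifies for compressible signals.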
Phase Retrieval via Matrix Completion
This paper develops a novel framework for phase retrieval, a problem which
arises in X-ray crystallography, diffraction imaging, astronomical imaging and
many other applications. Our approach combines multiple structured
illuminations together with ideas from convex programming to recover the phase
from intensity measurements, typically from the modulus of the diffracted wave.
We demonstrate empirically that any complex-valued object can be recovered from
the knowledge of the magnitude of just a few diffracted patterns by solving a
simple convex optimization problem inspired by the recent literature on matrix
completion. More importantly, we also demonstrate that our noise-aware
algorithms are stable in the sense that the reconstruction degrades gracefully
as the signal-to-noise ratio decreases. Finally, we introduce some theory
showing that one can design very simple structured illumination patterns such
that three diffracted figures uniquely determine the phase of the object we
wish to recover.
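The convex-programming route rests on a "lifting" trick: the quadratic intensity measurement $|\langle a, x\rangle|^2$ is linear in the rank-one matrix $X = xx^*$, which turns phase retrieval into recovering a low-rank matrix from linear measurements. A numerical check of that identity (illustrative only; it sets up, but does not solve, the matrix-completion-style program):

```python
def inner(a, x):
    """Hermitian inner product <a, x> = sum_j conj(a_j) x_j."""
    return sum(aj.conjugate() * xj for aj, xj in zip(a, x))

def lifted(a, X):
    """trace((a a^*) X): the same intensity measurement, now LINEAR in X."""
    n = len(a)
    return sum(a[j] * a[k].conjugate() * X[k][j]
               for j in range(n) for k in range(n))

def rank_one(x):
    """X = x x^* with entries X[r][c] = x_r * conj(x_c)."""
    return [[xr * xc.conjugate() for xc in x] for xr in x]
```

For any complex vectors `a` and `x`, `lifted(a, rank_one(x))` equals $|\langle a, x\rangle|^2$, so each intensity measurement is one linear functional of the unknown rank-one matrix.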
The Dantzig selector: Statistical estimation when $p$ is much larger than $n$
In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator--we call it the Dantzig selector--which is a solution to the $\ell_1$-regularization problem $$\min_{\tilde{\beta}\in\mathbf{R}^p}\|\tilde{\beta}\|_{\ell_1}\quad \text{subject to}\quad \|X^*r\|_{\ell_{\infty}}\leq(1+t^{-1})\sqrt{2\log p}\cdot\sigma,$$ where $r$ is the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $$\|\hat{\beta}-\beta\|_{\ell_2}^2 \le C^2 \cdot 2\log p \cdot \Bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\Bigr).$$ Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which were above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP).
Comment: This paper discussed in: [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]. Rejoinder in [arXiv:0803.3136]. Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/009053606000001523.
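The LP recast mentioned at the end is the standard one: split $\tilde{\beta} = u - v$ with $u, v \ge 0$, so that the $\ell_1$ objective and the $\ell_\infty$ constraint both become linear. A sketch of the reformulation, writing $\lambda_p := (1+t^{-1})\sqrt{2\log p}\cdot\sigma$:

```latex
\min_{u,\,v \,\in\, \mathbf{R}^p} \; \mathbf{1}^\top (u + v)
\quad \text{subject to} \quad
u \ge 0, \;\; v \ge 0, \;\;
-\lambda_p \mathbf{1} \;\le\; X^*\bigl(y - X(u - v)\bigr) \;\le\; \lambda_p \mathbf{1}.
```

Any optimal pair $(u^\star, v^\star)$ yields the Dantzig selector as $\hat{\beta} = u^\star - v^\star$ (at an optimum $u^\star$ and $v^\star$ have disjoint supports, so $\mathbf{1}^\top(u^\star + v^\star) = \|\hat{\beta}\|_{\ell_1}$).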
- …