Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
In this paper, we study the problem of compressed sensing using binary
measurement matrices and ℓ1-norm minimization (basis pursuit) as the
recovery algorithm. We derive new upper and lower bounds on the number of
measurements needed to achieve robust sparse recovery with binary matrices. We
establish sufficient conditions for a column-regular binary matrix to satisfy
the robust null space property (RNSP) and show that the sparsity bounds for
robust sparse recovery obtained using the RNSP
are better by a constant factor compared to the
sufficient conditions obtained using the restricted isometry property (RIP).
Next we derive universal lower bounds on the number of measurements
that any binary matrix needs to have in order to satisfy the weaker sufficient
condition based on the RNSP and show that bipartite graphs of girth six are
optimal. Then we display two classes of binary matrices, namely parity check
matrices of array codes and Euler squares, which have girth six and are nearly
optimal in the sense of almost satisfying the lower bound. In principle,
randomly generated Gaussian measurement matrices are "order-optimal". So we
compare the phase transition behavior of the basis pursuit formulation using
binary array codes and Gaussian matrices and show that (i) there is essentially
no difference between the phase transition boundaries in the two cases and (ii)
the CPU time of basis pursuit with binary matrices is hundreds of times smaller
than with Gaussian matrices, and the storage requirements are lower. Therefore it
is suggested that binary matrices are a viable alternative to Gaussian matrices
for compressed sensing using basis pursuit. Comment: 28 pages, 3 figures, 5 tables
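Basis pursuit as used above can be cast as a linear program. The following is a minimal sketch, not the paper's code: the matrix sizes, the 0.2 column density, and the use of `scipy.optimize.linprog` are illustrative choices, and the binary matrix here is random rather than one of the structured constructions (array codes, Euler squares) studied in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])             # encodes A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 60, 30, 3                       # illustrative dimensions
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A_bin = (rng.random((m, n)) < 0.2).astype(float)  # sparse binary matrix
A_gau = rng.standard_normal((m, n)) / np.sqrt(m)  # dense Gaussian matrix

x_hat_bin = basis_pursuit(A_bin, A_bin @ x_true)
x_hat_gau = basis_pursuit(A_gau, A_gau @ x_true)
print(np.linalg.norm(x_hat_bin - x_true), np.linalg.norm(x_hat_gau - x_true))
```

With the binary matrix, the equality constraints involve only 0/±1 coefficients, which is where the speed and storage advantages reported in the paper come from.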
Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
Deterministic constructions of measurement matrices in compressed sensing
(CS) are considered in this paper. The constructions are inspired by the recent
discovery of Dimakis, Smarandache and Vontobel which says that parity-check
matrices of good low-density parity-check (LDPC) codes can be used as
provably good measurement matrices for compressed sensing under
ℓ1-minimization. The performance of the proposed binary measurement
matrices is analyzed mainly theoretically, using methods and results from
the study of (finite geometry) LDPC codes. In particular, several
lower bounds on the spark (i.e., the smallest number of columns that are
linearly dependent, which completely characterizes the recovery performance of
ℓ1-minimization) are obtained for general binary matrices and finite geometry
matrices, and they improve the previously known results in most cases.
Simulation results show that the proposed matrices perform comparably to,
sometimes even better than, the corresponding Gaussian random matrices.
Moreover, the proposed matrices are sparse, binary, and most of them have
cyclic or quasi-cyclic structure, which makes hardware realization
convenient. Comment: 12 pages, 11 figures
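The spark defined above can be computed directly for tiny matrices by exhaustive search over column subsets, which makes the definition concrete; this brute-force sketch is mine, not the paper's (the paper derives lower bounds precisely because this search is infeasible at realistic sizes).

```python
from itertools import combinations
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (n + 1 if none)."""
    m, n = A.shape
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            # A subset of columns is dependent iff its rank is below its size.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < size:
                return size
    return n + 1  # all columns independent (possible only when n <= m)

# Two equal columns form a dependent pair, so the spark is 2.
A = np.array([[1., 0., 1.],
              [0., 1., 0.]])
print(spark(A))  # 2
```

A larger spark is better: ℓ1-minimization uniquely recovers every k-sparse vector when spark(A) > 2k, which is why the lower bounds in the paper translate into recovery guarantees.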
A Simple Message-Passing Algorithm for Compressed Sensing
We consider the recovery of a nonnegative vector x from measurements y = Ax,
where A is an m-by-n matrix whose entries are in {0, 1}. We establish that when
A corresponds to the adjacency matrix of a bipartite graph with sufficient
expansion, a simple message-passing algorithm produces an estimate \hat{x} of x
satisfying ||x-\hat{x}||_1 \leq O(n/k) ||x-x(k)||_1, where x(k) is the best
k-sparse approximation of x. The algorithm performs O(n (log(n/k))^2 log(k))
computation in total, and the number of measurements required is m = O(k
log(n/k)). In the special case when x is k-sparse, the algorithm recovers x
exactly in time O(n log(n/k) log(k)). Ultimately, this work is a further step
in the direction of more formally developing the broader role of
message-passing algorithms in solving compressed sensing problems.
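The paper's message-passing algorithm is not reproduced here; as a minimal illustration of why 0/1 measurements of a nonnegative vector are informative, the following sketch (my own, count-min-style) uses the fact that with binary A and x ≥ 0, every measurement touching coordinate i satisfies y_j ≥ x_i, so the minimum over those measurements upper-bounds x_i and is exact whenever some measurement separates i from the rest of the support.

```python
import numpy as np

def min_estimate(A, y):
    """For binary A and nonnegative x with y = Ax, return the entrywise
    upper bound x_hat[i] = min over measurements j containing i of y[j]."""
    n = A.shape[1]
    x_hat = np.empty(n)
    for i in range(n):
        rows = np.flatnonzero(A[:, i])
        x_hat[i] = y[rows].min() if rows.size else 0.0
    return x_hat

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 1]], dtype=float)
x = np.array([0.0, 2.0, 0.0, 0.0])  # 1-sparse, nonnegative
y = A @ x                            # y = [2, 2, 0, 0]
print(min_estimate(A, y))            # recovers [0, 2, 0, 0] exactly here
```

The expansion property required by the paper guarantees that most coordinates see such "clean" measurements, which is what the actual message-passing analysis exploits iteratively.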
Practical High-Throughput, Non-Adaptive and Noise-Robust SARS-CoV-2 Testing
We propose a compressed sensing-based testing approach with a practical
measurement design and a tuning-free and noise-robust algorithm for detecting
infected persons. Compressed sensing results can be used to provably detect a
small number of infected persons among a possibly large number of people. There
are several advantages of this method compared to classical group testing.
Firstly, it is non-adaptive and thus possibly faster to perform than adaptive
methods, which is crucial during phases of exponential pandemic growth. Secondly,
due to nonnegativity of measurements and an appropriate noise model, the
compressed sensing problem can be solved with the non-negative least absolute
deviation regression (NNLAD) algorithm. This convex, tuning-free program
requires the same number of tests as current state-of-the-art group testing
methods. Empirically, it performs significantly better than its theoretical
guarantees suggest, and is thus high-throughput, reducing the number of tests to a
fraction of that required by other methods. Further, numerical evidence suggests that
our method can correct sparsely occurring errors. Comment: 8 pages, 1 figure
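The NNLAD program (minimize ||Ax - y||_1 subject to x ≥ 0) is a linear program. The sketch below is a toy rendering of that idea under assumed parameters: the pool sizes, the random 0/1 pooling design, and the noise level are illustrative and do not reproduce the paper's measurement design.

```python
import numpy as np
from scipy.optimize import linprog

def nnlad(A, y):
    """min_x ||Ax - y||_1 s.t. x >= 0, via auxiliary variables t >= |Ax - y|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])  # minimize sum(t)
    # The pair  A x - t <= y  and  -A x - t <= -y  encodes |Ax - y| <= t.
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(1)
n, m, k = 40, 20, 2                            # 40 people, 20 pooled tests, 2 infected
A = (rng.random((m, n)) < 0.3).astype(float)   # illustrative random pooling design
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0  # illustrative viral loads
y = A @ x_true + 0.01 * rng.random(m)          # nonnegative measurement noise
x_hat = nnlad(A, y)
print(np.abs(x_hat - x_true).max())
```

Note that the program has no tuning parameter: unlike a LASSO-style formulation, no regularization weight must be chosen, which is the "tuning-free" property claimed in the abstract.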
Sparse graph codes for compression, sensing, and secrecy
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from student PDF version of thesis. Includes bibliographical references (p. 201-212).

Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block. Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices, and matrices whose entries must be either 0 or 1, have poor performance with respect to the restricted isometry property for the ℓ2 norm. Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes.
We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound. Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion which is an information-theoretic analog of semantic security, the construction of capacity-achieving codes possessing strong security with nearly linear time encoding and decoding algorithms for any degraded wiretap channel, and the construction of capacity-achieving codes possessing semantic security with linear time encoding and decoding algorithms for erasure wiretap channels.

Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known isoperimetric inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding.

by Venkat Bala Chandar. Ph.D.
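Density evolution, one of the analysis tools mentioned in the thesis abstract, has a particularly simple closed form on the binary erasure channel. The sketch below is a standard textbook illustration (not taken from the thesis): for a (dv, dc)-regular LDPC code on a BEC with erasure probability eps, the fraction of erased messages evolves as x ← eps·(1 − (1 − x)^(dc−1))^(dv−1), and decoding succeeds iff this recursion drives x to zero; for (3, 6) codes the threshold is known to be approximately 0.4294.

```python
def density_evolution(eps, dv=3, dc=6, iters=2000):
    """Iterate the BEC density-evolution recursion for a (dv, dc)-regular
    LDPC ensemble; returns the residual erasure fraction after `iters` steps."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

print(density_evolution(0.40))  # below threshold: erasures die out
print(density_evolution(0.45))  # above threshold: decoding stalls at a fixed point
```

This one-dimensional recursion is exact in the infinite-blocklength limit, which is what makes density evolution so effective for predicting the behavior of message-passing decoders on long random sparse graph codes.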
Investigation of 3D body shapes and robot control algorithms for a virtual fitting room
The electronic version of the dissertation does not contain the publications.

Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the load of the manual labor and physical effort required. Nevertheless, most previously proposed computer vision and graphics methods have failed to accurately and realistically model the human body, especially when it comes to 3D modeling of the whole human body. The failure is largely related to the huge amount of data and computation required, which in turn is caused mainly by the inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists in scanning and performing specific analyses of both the user's body and the prospective garment to be virtually fitted; modeling, extracting measurements, and assigning reference points on them; segmenting the 3D visual data imported from the mannequins; and finally superimposing, adapting, and depicting the resulting garment model on the user's body. The project gathered visual data using a 3D laser scanner and the Kinect optical camera and organized it into a usable database, in order to experimentally implement the algorithms devised. The latter provide a realistic visual representation of the garment on the body, and enhance the size-advisor system in the context of the virtual fitting room under study.
Coding Theory
Coding theory lies naturally at the intersection of a large number of disciplines in pure and applied mathematics: algebra and number theory, probability theory and statistics, communication theory, discrete mathematics and combinatorics, complexity theory, and statistical physics. The workshop on coding theory covered many facets of the recent research advances.