
    Adaptive filters for sparse system identification

    Sparse system identification has attracted much attention in the field of adaptive filtering, and this work studies adaptive filters for sparse system identification. Firstly, a new family of proportionate normalized least mean square (PNLMS) adaptive algorithms that improves the performance of identifying block-sparse systems is proposed. The main proposed algorithm, called block-sparse PNLMS (BS-PNLMS), is based on the optimization of a mixed ℓ2,1 norm of the adaptive filter's coefficients. A block-sparse improved PNLMS (BS-IPNLMS) is also derived for both sparse and dispersive impulse responses, and the block-sparse proportionate idea is extended to both the proportionate affine projection algorithm (PAPA) and the proportionate affine projection sign algorithm (PAPSA). Secondly, a generalized scheme for a family of proportionate algorithms is presented based on convex optimization, from which a novel low-complexity reweighted PAPA is derived that achieves both better performance and lower complexity than previous algorithms. The sparseness of the channel is taken into account to improve performance for dispersive system identification, and the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Finally, two variable step-size zero-point attracting projection (VSS-ZAP) algorithms for sparse system identification are proposed. The VSS-ZAPs are based on approximations of the difference between the sparseness measure of the current filter coefficients and that of the real channel, achieving lower steady-state misalignment while tracking changes in the sparse system. --Abstract, page iv
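
    To make the block-sparse proportionate idea concrete, below is a minimal sketch (in Python/NumPy) of a BS-PNLMS-style update in which per-block step-size gains are derived from the ℓ2 norm of each coefficient block, i.e., the ℓ2,1 idea above. Parameter names, defaults, and the exact gain rule are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def bs_pnlms_identify(x, d, num_taps, block_len=4, mu=0.5,
                      rho=0.01, delta=1e-2):
    # Minimal block-sparse PNLMS sketch: x and d are equal-length input
    # and desired signals; the proportionate gain of every tap is shared
    # within its block and driven by that block's l2 norm.
    assert num_taps % block_len == 0
    w = np.zeros(num_taps)
    n_blocks = num_taps // block_len
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]                      # tap-input vector
        e = d[n] - w @ u                                 # a priori error
        blk = np.linalg.norm(w.reshape(n_blocks, block_len), axis=1)
        g_blk = np.maximum(blk, rho * max(blk.max(), 1e-12))
        g = np.repeat(g_blk / g_blk.sum(), block_len)    # diagonal of G(n)
        w += mu * e * g * u / (u @ (g * u) + delta)      # PNLMS-type update
    return w
```

    On a simulated echo path whose energy is confined to a few blocks, a sketch like this should converge faster than plain NLMS, which is the behavior the abstract claims for BS-PNLMS.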

    Quantum State Tomography with a Single Observable

    Quantum information has been drawing a wealth of research in recent years, shedding light on questions at the heart of quantum mechanics, as well as advancing fields such as complexity theory, cryptography, key distribution, and chemistry. These fundamental and applied aspects of quantum information rely on a crucial capability: characterizing a quantum state from measurements, through a process called Quantum State Tomography (QST). However, QST requires a large number of measurements, each derived from a different physical observable corresponding to a different experimental setup. Unfortunately, changing the setup introduces unwanted changes to the data, prolongs the measurement, and undermines the standard assumption that the noise is stationary. Here, we propose to overcome these drawbacks by performing QST with a single observable. A single observable can often be realized by a single setup, thus considerably reducing the experimental effort. In general, measurements of a single observable do not hold enough information to recover the quantum state. We overcome this lack of information by relying on concepts inspired by Compressed Sensing (CS), exploiting the fact that the sought state - in many applications of quantum information - is close to a pure state (and thus has low rank). Additionally, we increase the system dimension by adding an ancilla that couples to information evolving in the system, thereby providing more measurements and enabling the recovery of the original quantum state from single-observable measurements. We demonstrate our approach on multi-photon states by recovering structured quantum states from a single observable, in a single experimental setup. We further show how this approach can be used to recover quantum states without number-resolving detectors.
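
    As a rough illustration of the compressed-sensing ingredient, the sketch below fits a low-rank density matrix to linear measurements y_k = Tr(A_k rho) by projected gradient descent, truncating each iterate to a unit-trace PSD matrix of the target rank. The operators in A_ops merely stand in for whatever a single observable (plus ancilla) provides; the optimizer and every name here are assumptions, not the paper's reconstruction procedure.

```python
import numpy as np

def recover_state(A_ops, y, dim, rank=1, n_iter=500, lr=0.1):
    # Fit a density matrix rho to linear measurements y_k = Tr(A_k @ rho)
    # by gradient descent on the squared residual, projecting each step
    # onto unit-trace PSD matrices truncated to `rank` (the pure-state /
    # low-rank prior).  All parameter names are illustrative.
    rho = np.eye(dim, dtype=complex) / dim        # maximally mixed start
    for _ in range(n_iter):
        resid = np.array([np.trace(A @ rho) for A in A_ops]) - y
        grad = sum(r * A.conj().T for r, A in zip(resid, A_ops))
        rho = rho - lr * grad
        rho = (rho + rho.conj().T) / 2            # Hermitize
        vals, vecs = np.linalg.eigh(rho)          # ascending eigenvalues
        vals = np.clip(vals, 0.0, None)           # enforce PSD
        vals[:-rank] = 0.0                        # keep top-`rank` only
        if vals.sum() > 0:
            vals /= vals.sum()                    # enforce unit trace
        rho = (vecs * vals) @ vecs.conj().T
    return rho
```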

    Fourteenth Biennial Status Report: March 2017 - February 2019


    Sub-Nyquist Sampling: Bridging Theory and Practice

    Sampling theory encompasses all aspects of the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed to sophisticated software algorithms, leaving only delicate, finely tuned tasks at the circuit level. In this paper, we review sampling strategies that target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications of the past several years. The prime focus is bridging theory and practice, that is, pinpointing the potential of sub-Nyquist strategies to make their way from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promote the sub-Nyquist premise into practical applications, and encourage further research into this exciting frontier.
    Comment: 48 pages, 18 figures; to appear in IEEE Signal Processing Magazine
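
    The union-of-subspaces hardware schemes surveyed in the paper cannot be reproduced in a few lines, but the toy sketch below conveys the core premise: a signal that is sparse in frequency can be recovered from far fewer samples than the Nyquist rate dictates, here via random time-domain sampling and orthogonal matching pursuit over a DFT dictionary. The tone locations, sample counts, and recovery routine are illustrative assumptions.

```python
import numpy as np

def omp(Phi, y, sparsity):
    # Orthogonal matching pursuit: greedily build the support of a
    # sparse x such that y ~= Phi @ x.
    resid, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
N, tones = 512, [37, 120, 301]                   # 3 active tones out of 512
t = np.arange(N)
signal = sum(np.exp(2j * np.pi * k * t / N) for k in tones)

M = 64                                           # keep only M << N samples
idx = np.sort(rng.choice(N, M, replace=False))
F = np.exp(2j * np.pi * np.outer(t, t) / N) / np.sqrt(N)  # DFT synthesis
x_hat = omp(F[idx, :], signal[idx], sparsity=3)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))      # -> [ 37 120 301]
```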

    Content-based Information Retrieval via Nearest Neighbor Search

    Content-based information retrieval (CBIR) has attracted significant interest in the past few years. Given a search query, the search engine compares the query with all the stored information in the database through nearest neighbor search and returns the most similar items. We contribute the following to CBIR research: first, Distance Metric Learning (DML) is studied to improve the retrieval accuracy of nearest neighbor search; additionally, Hash Function Learning (HFL) is considered to accelerate the retrieval process. On one hand, a new local metric learning framework is proposed: Reduced-Rank Local Metric Learning (R2LML). By considering a conical combination of Mahalanobis metrics, the proposed method is able to better capture information such as the data's similarity and location. A regularizer that suppresses noise and avoids over-fitting is also incorporated into the formulation. Based on different methods of inferring the weights of the local metrics, we consider two frameworks: Transductive Reduced-Rank Local Metric Learning (T-R2LML), which utilizes transductive learning, and Efficient Reduced-Rank Local Metric Learning (E-R2LML), which employs a simpler and faster approximate method. We also study the convergence properties of the proposed block coordinate descent algorithms for both frameworks, and extensive experiments show the superiority of our approaches. On the other hand, *Supervised Hash Learning (*SHL), which can be used in supervised, semi-supervised, and unsupervised learning scenarios, is proposed in the dissertation. By considering several codewords which can be learned from the data, the proposed method naturally gives rise to several Support Vector Machine (SVM) problems. After providing an efficient training algorithm, we also study the theoretical generalization bound of the new hashing framework. In the final experiments, *SHL outperforms many other popular hash function learning methods. Additionally, in order to cope with large data sets, we conducted experiments on big data using the parallel computing software package LIBSKYLARK.
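
    To ground the metric-learning half, here is a minimal sketch of nearest-neighbor retrieval under a reduced-rank Mahalanobis metric d(a, b)^2 = (a - b)^T L^T L (a - b), the kind of metric R2LML learns; in this sketch the low-rank factor L is supplied by the caller rather than learned, and all names are illustrative.

```python
import numpy as np

def mahalanobis_knn(X_train, y_train, X_query, L, k=5):
    # Nearest-neighbor retrieval under d(a, b)^2 = (a-b)^T L^T L (a-b).
    # Projecting with the low-rank factor L (shape r x d, r < d) turns
    # the Mahalanobis distance into plain Euclidean distance in r dims.
    P_train = X_train @ L.T                     # (n_train, r)
    P_query = X_query @ L.T                     # (n_query, r)
    d2 = ((P_query[:, None, :] - P_train[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d2, axis=1)[:, :k]          # k nearest ids per query
    return nn, y_train[nn]                      # neighbor ids and labels
```

    Factoring the metric as L^T L is what makes the reduced-rank formulation cheap at query time: retrieval reduces to ordinary Euclidean search in the r-dimensional projected space, with r much smaller than the ambient dimension.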