15 research outputs found

    Extreme Value Analysis of Empirical Frame Coefficients and Implications for Denoising by Soft-Thresholding

    Full text link
    Denoising by frame thresholding is one of the most basic and efficient methods for recovering a discrete signal or image from data that are corrupted by additive Gaussian white noise. The basic idea is to select a frame of analyzing elements that separates the data into a few large coefficients due to the signal and many small coefficients mainly due to the noise $\epsilon_n$. Removing all data coefficients whose magnitude lies below a certain threshold yields a reconstruction of the original signal. In order to properly balance the amount of noise to be removed and the relevant signal features to be kept, a precise understanding of the statistical properties of thresholding is important. For that purpose we derive the asymptotic distribution of $\max_{\omega \in \Omega_n} |\langle \phi_\omega^n, \epsilon_n \rangle|$ for a wide class of redundant frames $\{\phi_\omega^n : \omega \in \Omega_n\}$. Based on our theoretical results we give a rationale for universal extreme value thresholding techniques yielding asymptotically sharp confidence regions and smoothness estimates corresponding to prescribed significance levels. The results cover many frames used in imaging and signal recovery applications, such as redundant wavelet systems, curvelet frames, or unions of bases. We show that `generically' a standard Gumbel law results, as is known from the case of orthonormal wavelet bases. However, for specific highly redundant frames other limiting laws may occur. We indeed verify that the translation invariant wavelet transform shows a different asymptotic behaviour. Comment: 39 pages, 4 figures. Note that in this version 4 we have slightly changed the title of the paper and rewritten parts of the introduction. Except for corrected typos, the other parts of the paper are the same as in the original version.
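    As a concrete illustration of the thresholding scheme described above, the sketch below applies soft-thresholding with the classical universal threshold $\sigma\sqrt{2\log n}$ in the orthonormal wavelet setting the abstract refers to; the paper itself derives sharper, extreme-value-based thresholds for redundant frames. It assumes NumPy and PyWavelets are available, and the function and signal names are illustrative only.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for this illustration


def soft_threshold_denoise(data, wavelet="db4", level=4, sigma=None):
    """Denoise a 1-D signal by soft-thresholding its orthonormal wavelet
    coefficients with the universal threshold sigma * sqrt(2 * log n)."""
    coeffs = pywt.wavedec(data, wavelet, level=level)
    if sigma is None:
        # Robust noise-level estimate from the finest-scale detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(data)))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: len(data)]


# Example: recover a piecewise-smooth signal from noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(6 * np.pi * t) + (t > 0.5)
noisy = clean + 0.3 * rng.normal(size=t.size)
estimate = soft_threshold_denoise(noisy)
```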

    Frames of multi-windowed exponentials on subsets of $\mathbb{R}^d$

    Full text link
    Given discrete subsets $\Lambda_j \subset \mathbb{R}^d$, $j = 1, \dots, q$, consider the set of windowed exponentials $\bigcup_{j=1}^{q}\{g_j(x)e^{2\pi i \langle\lambda, x\rangle} : \lambda \in \Lambda_j\}$ on $L^2(\Omega)$. We show that a necessary and sufficient condition for the windows $g_j$ to form a frame of windowed exponentials for $L^2(\Omega)$ with some $\Lambda_j$ is that $m \leq \max_{j \in J}|g_j| \leq M$ almost everywhere on $\Omega$ for some subset $J$ of $\{1, \dots, q\}$. If $\Omega$ is unbounded, we show that there is no frame of windowed exponentials if the Lebesgue measure of $\Omega$ is infinite. If $\Omega$ is unbounded but of finite measure, we give a sufficient condition for the existence of Fourier frames on $L^2(\Omega)$. At the same time, we also construct examples of unbounded sets with finite measure that have no tight exponential frame.
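    For reference (a standard definition, not specific to this paper's notation), the frame property asserted above means that there exist constants $0 < A \leq B < \infty$ such that

    \[
    A\|f\|^2 \;\leq\; \sum_{j=1}^{q}\sum_{\lambda \in \Lambda_j}\bigl|\langle f,\, g_j e^{2\pi i \langle\lambda,\cdot\rangle}\rangle\bigr|^2 \;\leq\; B\|f\|^2
    \qquad \text{for all } f \in L^2(\Omega),
    \]

    and the frame is tight when $A = B$.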

    On Polytopes Arising in Cluster Algebras & Finite Frames

    Get PDF
    Polytopes appear in many contexts, two of them being cluster algebras and finite frames. First we study graph-theoretic properties of polytopes arising in the context of cluster algebras of finite type. We introduce the basic terms and constructions for cluster algebras of finite type, then consider their exchange graphs and state a conjecture about the Hamiltonicity of the exchange graphs. We then study polytopes that arise in the construction of finite frames with given lengths of frame vectors and given spectrum of the frame operator. After an introduction to finite frames, we give a non-redundant description of those polytopes for equal-norm tight frames in terms of equations and inequalities. From this, we derive the dimension and number of facets of the polytopes. In this process we combinatorially obtain two isomorphisms between polytopes associated to frames. Afterwards we discuss how these isomorphisms are described by reversing the order of frame vectors and taking Naimark complements.
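    In standard finite-frame notation (generic, not the thesis's own), the data fixed in this construction are the squared norms $\|f_i\|^2 = c_i$ of the frame vectors $f_1, \dots, f_N \in \mathbb{R}^d$ and the eigenvalues $\lambda_1, \dots, \lambda_d$ of the frame operator

    \[
    S \;=\; \sum_{i=1}^{N} f_i f_i^{*};
    \]

    for an equal-norm tight frame ($c_i \equiv c$ and $S = A\,\mathrm{Id}$), taking traces forces $A = Nc/d$.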

    Non-stationary Sibling Wavelet Frames on Bounded Intervals

    Get PDF
    Frame Theory is a modern branch of Harmonic Analysis. It has its roots in Communication Theory and Quantum Mechanics. Frames are overcomplete and stable families of functions which provide non-unique and non-orthogonal series representations for each element of the space. The first milestone was set in 1946 by Gabor with the paper ''Theory of communications''. He formulated a fundamental approach to signal decomposition in terms of elementary signals generated by translations and modulations of a Gaussian. Frames for Hilbert spaces were formally defined for the first time in 1952 by Duffin and Schaeffer in their fundamental paper ''A class of nonharmonic Fourier series''. The breakthrough of frames came in 1986 with Daubechies, Grossmann and Meyer's paper ''Painless nonorthogonal expansions''. Since then many scientists have investigated frames from different points of view. In this thesis we study non-stationary sibling frames in general, and the possibility to construct such function families in spline spaces in particular. Our work follows a theoretical, constructive track. Nonetheless, as demonstrated by several papers by Daubechies and other authors, frames are very useful in various areas of Applied Mathematics, including Signal and Image Processing, Data Compression and Signal Detection. The overcompleteness of the system incorporates redundant information in the frame coefficients. In certain applications one can take advantage of these correlations. The content of this thesis can be split naturally into three parts: Chapters 1-3 introduce basic definitions, necessary notations and classical results from General Frame Theory, from B-Spline Theory and on non-stationary tight wavelet spline frames. Chapters 4-5 describe the theory we developed for sibling frames on an abstract level. The last chapter presents our explicit construction of a certain class of non-stationary sibling spline frames with vanishing moments in $L_2[a,b]$, which exemplifies and thus proves the applicability of our theoretical results from Chapters 4-5. As a principle of writing, we have done our best to make this thesis self-contained. Classical handbooks, recent monographs, fundamental research papers and survey articles from Wavelet, Frame and Spline Theory are cited for further, more detailed, reading. In Chapter 3 we summarize the considerations on normalized tight spline frames of Chui, He and Stoeckler and some of the results from their article ''Nonstationary tight wavelet frames. I: Bounded intervals'' (see Appl. Comp. Harm. Anal. 17 (2004), 141-197). Our work detailed in Chapters 4-6 is meant to extend and supplement their theory for bounded intervals. Chapter 4 deals with our extension of the general construction principle of non-stationary wavelet frames from the tight case to the non-tight (= sibling) case, on which our present work focuses. We apply this principle in Chapter 6 in order to give a general construction scheme for certain non-stationary sibling spline frames of order m with L vanishing moments ($m \in \mathbb{N}$, $m \geq 2$, $1 \leq L \leq m$), as well as some concrete illustrative examples. Chapter 4 presents in Subsection 4.3.3 the motivation for our detailed investigations in Chapter 5. We need sufficient conditions, as simple as possible, on the function families defined by coefficient matrices, in order to be able to verify easily whether concrete spline families are Bessel families (and thus sibling frames) or not. In Chapter 5 we develop general strategies for proving the boundedness of certain linear operators. These enable us to check in Chapter 6 the Bessel condition for concrete spline systems which are our candidates for sibling spline frames. Some results concerning multivariate Bessel families are also included.
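    For reference, the Bessel condition checked in Chapters 5-6 is the standard one (stated here in generic notation): a family $\{\psi_\lambda\}_{\lambda \in \Lambda}$ in a Hilbert space $H$ is a Bessel family with bound $B$ if

    \[
    \sum_{\lambda \in \Lambda} |\langle f, \psi_\lambda \rangle|^2 \;\leq\; B\,\|f\|^2
    \qquad \text{for all } f \in H;
    \]

    a pair of such Bessel families $\{\psi_\lambda\}$, $\{\tilde{\psi}_\lambda\}$ then forms a pair of dual (sibling) frames when, in addition, $\langle f, g\rangle = \sum_{\lambda}\langle f, \tilde{\psi}_\lambda\rangle\langle \psi_\lambda, g\rangle$ holds for all $f, g \in H$.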

    Global Optimality via Tight Convex Relaxations for Pose Estimation in Geometric 3D Computer Vision

    Get PDF
    In this thesis, we address a set of fundamental problems whose core difficulty boils down to optimizing over 3D poses. This includes many geometric 3D registration problems, covering well-known problems with a long research history such as the Perspective-n-Point (PnP) problem and its generalizations, extrinsic sensor calibration, or even the gold standard for Structure from Motion (SfM) pipelines: the relative pose problem from corresponding features. Likewise, this is also the case for a close relative of SLAM, Pose Graph Optimization (also commonly known as Motion Averaging in SfM). The crux of this thesis' contribution revolves around the successful characterization and development of empirically tight (convex) semidefinite relaxations for many of the aforementioned core problems of 3D Computer Vision. Building upon these empirically tight relaxations, we are able to find and certify the globally optimal solution to these problems with algorithms whose performance ranges, as of today, from efficient, scalable approaches comparable to fast second-order local search techniques to polynomial time (worst case). To conclude, our research reveals that an important subset of core problems that has historically been regarded as hard, and thus dealt with mostly in empirical ways, is in fact tractable with optimality guarantees. Artificial Intelligence (AI) drives a lot of the services and products we use every day. But for AI to bring its full potential into daily tasks, with technologies such as autonomous driving, augmented reality or mobile robots, AI needs to be not only intelligent but also perceptive. In particular, the ability to see and to construct an accurate model of the environment is an essential capability for building intelligent perceptive systems. The ideas developed in Computer Vision over the last decades in areas such as Multiple View Geometry or Optimization, put together into 3D reconstruction algorithms, seem mature enough to nurture a range of emerging applications that already employ 3D Computer Vision in the background today. However, while there is a positive trend in the use of 3D reconstruction tools in real applications, there are also some fundamental limitations regarding reliability and performance guarantees that may hinder a wider adoption, e.g. in more critical applications involving people's safety such as autonomous navigation. State-of-the-art 3D reconstruction algorithms typically formulate the reconstruction problem as a Maximum Likelihood Estimation (MLE) instance, which entails solving a high-dimensional, non-convex, non-linear optimization problem. In practice, this is done via fast local optimization methods, which have enabled fast and scalable reconstruction pipelines, yet lack guarantees on most of the building blocks, leaving us with fundamentally brittle pipelines.
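    As a generic illustration of the kind of semidefinite relaxation referred to above (a standard Shor-type lifting, not the specific formulations developed in this thesis), a pose problem with a cost that is quadratic in a rotation $R \in \mathrm{SO}(3)$ can be homogenized with $x = (\operatorname{vec}(R)^{\top}, 1)^{\top}$ and lifted to $X = xx^{\top}$:

    \[
    \min_{R \in \mathrm{SO}(3)} x^{\top} Q\, x
    \;\;\longrightarrow\;\;
    \min_{X \succeq 0,\; \mathcal{A}(X) = b} \operatorname{tr}(QX),
    \]

    where the rank-one constraint on $X$ is dropped and the affine constraints $\mathcal{A}(X) = b$ encode $R^{\top}R = \mathrm{Id}$, the handedness conditions $r_i \times r_j = r_k$ (both quadratic in $R$, hence linear in $X$) and the homogenization entry of $X$ being $1$. The relaxation is tight exactly when the optimal $X^{\star}$ has rank one, in which case the globally optimal pose can be recovered and certified.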

    Noise Covariance Properties in Dual-Tree Wavelet Decompositions

    Get PDF
    Dual-tree wavelet decompositions have recently gained much popularity, mainly due to their ability to provide an accurate directional analysis of images combined with a reduced redundancy. When the decomposition of a random process is performed -- which occurs in particular when an additive noise is corrupting the signal to be analyzed -- it is useful to characterize the statistical properties of the dual-tree wavelet coefficients of this process. As dual-tree decompositions constitute overcomplete frame expansions, correlation structures are introduced among the coefficients, even when a white noise is analyzed. In this paper, we show that it is possible to provide an accurate description of the covariance properties of the dual-tree coefficients of a wide-sense stationary process. The expressions of the (cross-)covariance sequences of the coefficients are derived in the one- and two-dimensional cases. Asymptotic results are also provided, making it possible to predict the behaviour of the second-order moments for large lag values or at coarse resolution. In addition, the cross-correlations between the primal and dual wavelets, which play a primary role in our theoretical analysis, are calculated for a number of classical wavelet families. Finally, simulation results are provided to validate these findings.
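    A simplified, single-filter analogue of the effect described above can be checked numerically: filtering white noise with an undecimated analysis filter yields coefficients whose covariance at lag $\tau$ equals $\sigma^2$ times the filter's deterministic autocorrelation. The sketch below (NumPy only; the filter is an arbitrary example, not one of the dual-tree filters studied in the paper) compares the empirical and theoretical values.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 200_000
noise = rng.normal(0.0, sigma, size=n)   # white Gaussian noise

h = np.array([0.2, 0.5, 0.5, 0.2])       # example analysis filter (arbitrary)
w = np.convolve(noise, h, mode="valid")  # undecimated "frame coefficients"

# Covariance of the filtered noise at lag tau: empirical average versus the
# theoretical value sigma^2 * sum_m h[m] * h[m + tau].
for tau in range(len(h)):
    empirical = np.mean(w[: len(w) - tau] * w[tau:])
    theoretical = sigma**2 * np.sum(h[: len(h) - tau] * h[tau:])
    print(f"lag {tau}: empirical {empirical:.4f}   theoretical {theoretical:.4f}")
```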

    The NA62 VetoCounter sub-detector readout system upgrade and performance evaluation.

    Get PDF
    NA62 is a medium-sized particle physics experiment located in the CERN North Area beam facility. Its main objective is to study the ultra rare kaon decay channel PNN (K+ → π+ ν ν̄). Given the extremely low branching ratio of the signal, muon and photon vetoing is of paramount importance and most of the subdetectors in NA62 are dedicated to this function. In 2020, a new sub-detector named Veto Counter has been installed.
This device is designed to reduce the spurious contributions of secondary interactions happening within the beam line. NA62 currently uses the TEL62 digital board as the backbone of its data acquisition (DAQ) system; this board is a highly improved and upgraded version of the TELL1 board originally developed for LHCb. Following the ever-growing requirements of the NA62 experiment, the limitations of the TEL62-based readout are becoming more and more apparent. These include a lack of radiation hardness and the readout-rate limitations of the aging time-to-digital converter (TDC) modules used. An upgrade of the readout system using a new custom TDC in combination with the modern FELIX card developed for the ATLAS experiment is thus envisioned. The Veto Counter is equipped with a dual TEL62 and FELIX readout and is used as a test bench in the evaluation of the new readout. In this work the performance of the Veto Counter is studied and a comparison between the TEL62 and FELIX readouts is performed. Furthermore, a control system for the newly developed FELIX TDC is designed and tested on the Veto Counter platform. A general introduction to the physics studied at NA62 is given in Chapter 1. In order to fulfill its experimental goals, NA62 requires a specific design; in Chapter 2 we discuss the most relevant design choices regarding the trigger and the experimental structure. An overview of the main sub-detectors is given in Chapter 3, while Chapter 4 describes the current Trigger and Data Acquisition system of NA62. The FELIX readout upgrade effort is detailed in Chapter 5, together with a description of the main limitations of the TEL62 readout. This upgrade is expected to involve all the sub-detectors at NA62 currently using a TEL62 system. The software design of the FELIX control system developed in this work is described in depth in Chapter 6. Chapter 7 describes the Veto Counter sub-detector; being the test bench for the new readout technology, it features a dual TEL62-FELIX readout. Finally, the performance study of the Veto Counter and its readout is presented in Chapter 8. As an aside, Appendix C presents some observations on a possible use of the Veto Counter to aid upstream track extrapolation.

    Quantum Physics, Relativity, and Complex Spacetime: Towards a New Synthesis

    Full text link
    The positivity of the energy in relativistic quantum mechanics implies that wave functions can be continued analytically to the forward tube $T$ in complex spacetime. For Klein-Gordon particles, we interpret $T$ as an extended (8D) classical phase space containing all 6D classical phase spaces as symplectic submanifolds. The evaluation maps $e_z : f \mapsto f(z)$ of wave functions on $T$ are relativistic coherent states reducing to the Gaussian coherent states in the nonrelativistic limit. It is known that no covariant probability interpretation exists for Klein-Gordon particles in real spacetime because the time component of the conserved "probability current" can attain negative values even for positive-energy solutions. We show that this problem is solved very naturally in complex spacetime, where $|f(x-iy)|^2$ is interpreted as a probability density on all 6D phase spaces in $T$ which, when integrated over the "momentum" variables $y$, gives a conserved spacetime probability current whose time component is a positive regularization of the usual one. Similar results are obtained for Dirac particles, where the evaluation maps $e_z$ are spinor-valued relativistic coherent states. For free quantized Klein-Gordon and Dirac fields, the above formalism extends to n-particle/antiparticle coherent states whose scalar products are Wightman functions. The 2-point function plays the role of a reproducing kernel for the one-particle and antiparticle subspaces. Comment: 252 pages, no figures. Originally published as a book by North-Holland, 1990. Reviewed by Robert Hermann in Bulletin of the AMS Vol. 28 #1, January 1993, pp. 130-132; see http://wavelets.co
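    In generic reproducing-kernel language (standard facts, not notation specific to this work), the roles of the evaluation maps and of the 2-point function mentioned above are the usual ones:

    \[
    f(z) \;=\; \langle e_z, f \rangle,
    \qquad
    K(z, w) \;:=\; \langle e_z, e_w \rangle \;=\; e_w(z),
    \]

    so that evaluating a wave function at a point of the tube amounts to pairing it with the coherent state attached to that point, and $K$ acts as a reproducing kernel on the corresponding one-particle (or antiparticle) subspace.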