
    Metaplectic Gabor Frames and Symplectic Analysis of Time-Frequency Spaces

    We introduce new frames, called \textit{metaplectic Gabor frames}, as natural generalizations of Gabor frames in the framework of metaplectic Wigner distributions. Namely, we develop the theory of metaplectic atoms in a fully general setting and prove an inversion formula for metaplectic Wigner distributions on $\mathbb{R}^d$. Its discretization provides metaplectic Gabor frames. Next, we deepen the understanding of the so-called shift-invertible metaplectic Wigner distributions, showing that they can be represented, up to chirps, as rescaled short-time Fourier transforms. As an application, we derive a new characterization of modulation and Wiener amalgam spaces. Thus, these metaplectic distributions (and the related frames) provide meaningful definitions of local frequencies and can be used to measure the local frequency content of signals effectively.
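    For orientation, the classical object these frames generalize is a Gabor frame: a system of time-frequency shifts $\pi(\lambda)g(t) = e^{2\pi i \omega \cdot t} g(t - x)$, $\lambda = (x, \omega) \in \Lambda$, is a frame for $L^2(\mathbb{R}^d)$ when (a standard definition, added here for context)

        $$A\|f\|_2^2 \leq \sum_{\lambda \in \Lambda} |\langle f, \pi(\lambda) g\rangle|^2 \leq B\|f\|_2^2 \quad \text{for all } f \in L^2(\mathbb{R}^d),$$

    for some constants $0 < A \leq B < \infty$. The metaplectic Gabor frames of the abstract arise by replacing the short-time Fourier transform $V_g f(\lambda) = \langle f, \pi(\lambda)g \rangle$ with a metaplectic Wigner distribution and discretizing the corresponding inversion formula.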

    Information Encoding for Flow Watermarking and Binding Keys to Biometric Data

    Due to the current level of telecommunications development, fifth-generation (5G) communication systems are expected to provide higher data rates, lower latency, and improved scalability. To ensure the security and reliability of data traffic generated by wireless sources, 5G networks must be designed to support security protocols and reliable communication applications. The coding and processing operations applied to information during the transmission of both binary and non-binary data over non-standard communication channels are described. A subclass of linear binary codes is considered, the Varshamov-Tenengolts codes, which are used for channels with insertions and deletions of symbols. The use of these codes is compared with Hidden Markov Model (HMM)-based systems for detecting intrusions in networks using flow watermarking; both provide a high true positive rate. The principles of using Bose-Chaudhuri-Hocquenghem (BCH) codes, non-binary Reed-Solomon codes, and turbo codes, as well as concatenated code structures, to ensure noise immunity when reproducing information in Helper-Data Systems are considered. Examples are given of biometric systems built on these codes, operating on the basis of the Fuzzy Commitment Scheme (FCS) and providing an FRR < 1% for authentication.
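    As a concrete illustration of the Varshamov-Tenengolts codes and the Fuzzy Commitment Scheme mentioned above, the following sketch uses their standard textbook definitions (the function names are ours, and the error-correcting decoder is elided):

        import hashlib

        def vt_syndrome(word):
            """Varshamov-Tenengolts syndrome: sum of i * x_i, mod (n + 1)."""
            n = len(word)
            return sum(i * bit for i, bit in enumerate(word, start=1)) % (n + 1)

        def in_vt_code(word, a=0):
            """A binary word of length n belongs to VT_a(n) iff its syndrome is a."""
            return vt_syndrome(word) == a

        # Example: (0, 1, 0, 1) has syndrome (2 + 4) mod 5 = 1, so it lies in VT_1(4).
        assert in_vt_code((0, 1, 0, 1), a=1)

        def fcs_commit(codeword: bytes, biometric: bytes):
            """Fuzzy Commitment Scheme: store hash(codeword) plus the offset
            codeword XOR biometric; neither value alone reveals the biometric."""
            offset = bytes(c ^ b for c, b in zip(codeword, biometric))
            return hashlib.sha256(codeword).digest(), offset

        def fcs_verify(commitment, offset, fresh_biometric: bytes, decode):
            """Recover a noisy codeword from a fresh reading and check it after
            error correction (decode is, e.g., a BCH decoder; elided here)."""
            noisy = bytes(o ^ b for o, b in zip(offset, fresh_biometric))
            return hashlib.sha256(decode(noisy)).digest() == commitment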

    Marchenko-Lippmann-Schwinger inversion

    Seismic wave reflections recorded at the Earth's surface provide a rich source of information about the structure of the subsurface. These reflections occur due to changes in the material properties of the Earth; in the acoustic approximation, these are the density of the Earth and the velocity of seismic waves travelling through it. There is therefore a physical relationship between the material properties of the Earth and the reflected seismic waves that we observe at the surface. This relationship is non-linear, due to the highly scattering nature of the Earth and to our inability to reproduce these scattered waves accurately with the low-resolution velocity models that are usually available to us. Typically, we linearize the scattering problem by assuming that the waves are singly scattered, which requires multiple reflections to be removed from recorded data at great effort and with varying degrees of success. This assumption is called the Born approximation. The equation that describes the relationship between the Earth's properties and the fully scattered reflection data is called the Lippmann-Schwinger equation, and this equation is linear if the full scattering wavefield inside the Earth is known. The development of Marchenko methods makes it possible to estimate such wavefields using only the surface reflection data and an estimate of the direct wave from the surface to each point in the Earth. Substituting the results from a Marchenko method into the Lippmann-Schwinger equation results in a linear equation that includes all orders of scattering. The aim of this thesis is to determine whether higher orders of scattering improve the linear inverse problem from data to velocities, by comparing linearized inversion under the Born approximation to the inversion of the linear Lippmann-Schwinger equation. This thesis begins by deriving the linear Lippmann-Schwinger and Born inverse problems, and reviewing the theoretical basis for Marchenko methods. By deriving the derivative of the full scattering Green's function with respect to the model parameters of the Earth, the gradient direction is defined for a new type of least-squares full waveform inversion, called Marchenko-Lippmann-Schwinger full waveform inversion, that uses all orders of scattering. By recreating the analytical 1D Born inversion of a boxcar perturbation by Beydoun and Tarantola (1988), it is shown that a high frequency-sampling density is required to estimate the amplitude of the velocity perturbation correctly. More importantly, even when the scattered wavefield is defined to be singly scattering and the velocity model perturbation can be found without matrix inversion, Born inversion cannot reproduce the true velocity structure exactly. When the results of analytical inversion are compared to inversions where the inverse matrices have been explicitly calculated, the analytical inversion is found to be superior. All three matrix inversion methods are found to be extremely ill-posed. With regularisation, it is possible to determine the edges of the perturbation accurately, but not the amplitude. Moving from a boxcar perturbation with a homogeneous starting velocity to a many-layered 1D model, with a smooth representation of this model as the starting point, it is found that the inversion solution is highly dependent on the starting model.
    By optimising an iterative inversion in both the model and data domains, it is found that optimising the velocity model misfit does not guarantee improvement in the resulting data misfit, and vice versa. Comparing unregularised inversion to inversions with Tikhonov damping or smoothing applied to the kernel matrix, it is found that strong Tikhonov damping produces the most accurate velocity models. From the consistent under-performance of Lippmann-Schwinger inversion when using Marchenko-derived Green's functions, compared to inversions carried out with true Green's functions, it is concluded that the fallibility of Marchenko methods leads to inferior inversion results. Born and Lippmann-Schwinger inversion are then tested on a 2D syncline model. Due to computational limitations, using all sources and receivers in the inversion required limiting the number of frequencies to 5. Without regularisation, the model update is uninterpretable due to strong oscillations across the model. With strong Tikhonov damping, the model updates obtained are poorly scaled and have low resolution, and low-amplitude oscillatory noise remains. By replacing the inversion of all sources simultaneously with single-source inversions, it is possible to reinstate all frequencies within our limited computational resources. These single-source model updates can be stacked, similarly to migration images, to improve the overall model update. As predicted by the 1D analytical inversion, restoring the full frequency bandwidth eliminates the oscillatory noise from the inverse solution. With or without regularisation, the Born and Lippmann-Schwinger inversion results are found to be nearly identical. When Marchenko-derived Green's functions are introduced, the inversion results are worse than either the Born inversion or the Lippmann-Schwinger inversion without Marchenko methods. On this basis, it is concluded that the inclusion of higher-order scattering does not improve the outcome of solving the linear inverse scattering problem using currently available methods. Nevertheless, some recent developments in the methods used to solve the Marchenko equation hold promise for improving solutions in the future.
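    For reference, the two linearizations compared in this thesis can be written in standard scattering-theory notation (added here for context), with $G_0$ the background Green's function and $V$ the scattering potential:

        $$G = G_0 + G_0 V G \quad \text{(Lippmann-Schwinger, all orders of scattering)},$$
        $$G \approx G_0 + G_0 V G_0 \quad \text{(Born approximation, single scattering)}.$$

    The first relation is linear in $V$ once the full wavefield, here supplied by Marchenko methods, is treated as known; the second discards all multiple scattering.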

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This ļ¬fth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different ļ¬elds of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modiļ¬ed Proportional Conļ¬‚ict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classiļ¬ers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. Because more applications of DSmT have emerged in the past years since the apparition of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identiļ¬cation of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classiļ¬cation. Finally, the third part presents interesting contributions related to belief functions in general published or presented along the years since 2015. These contributions are related with decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classiļ¬cation, and hybrid techniques mixing deep learning with belief functions as well

    Seamless Multimodal Biometrics for Continuous Personalised Wellbeing Monitoring

    Artificially intelligent perception is increasingly present in the lives of every one of us. Vehicles are no exception, (...) In the near future, pattern recognition will have an even stronger role in vehicles, as self-driving cars will require automated ways to understand what is happening around (and within) them and act accordingly. (...) This doctoral work focused on advancing in-vehicle sensing through research into novel computer vision and pattern recognition methodologies for both biometrics and wellbeing monitoring. The main focus has been on electrocardiogram (ECG) biometrics, a trait well known for its potential for seamless driver monitoring. Major efforts were devoted to achieving improved performance in identification and identity verification in off-the-person scenarios, well known for increased noise and variability. Here, end-to-end deep learning ECG biometric solutions were proposed and important topics were addressed, such as cross-database and long-term performance, waveform relevance through explainability, and interlead conversion. Face biometrics, a natural complement to the ECG in seamless unconstrained scenarios, was also studied in this work. The open challenges of masked face recognition and interpretability in biometrics were tackled in an effort to evolve towards algorithms that are more transparent, trustworthy, and robust to significant occlusions. Within the topic of wellbeing monitoring, improved solutions were proposed for multimodal emotion recognition in groups of people and for activity/violence recognition in in-vehicle scenarios. Lastly, we also proposed a novel way to learn template security within end-to-end models, dismissing additional separate encryption processes, and a self-supervised learning approach tailored to sequential data, in order to ensure data security and optimal performance. (...)
    Comment: Doctoral thesis presented and approved on the 21st of December 2022 at the University of Porto

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.

    DECONET: an Unfolding Network for Analysis-based Compressed Sensing with Generalization Error Bounds

    We present a new deep unfolding network for analysis-sparsity-based Compressed Sensing. The proposed network, coined Decoding Network (DECONET), jointly learns a decoder that reconstructs vectors from their incomplete, noisy measurements and a redundant sparsifying analysis operator, which is shared across the layers of DECONET. Moreover, we formulate the hypothesis class of DECONET and estimate its associated Rademacher complexity. Then, we use this estimate to deliver meaningful upper bounds for the generalization error of DECONET. Finally, the validity of our theoretical results is assessed and comparisons to state-of-the-art unfolding networks are made, on both synthetic and real-world datasets. Experimental results indicate that our proposed network outperforms the baselines, consistently for all datasets, and that its behaviour complies with our theoretical findings.
    Comment: Accepted in IEEE Transactions on Signal Processing
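    As a rough sketch of what one layer of such an unfolding network computes, consider a gradient step on the data fidelity followed by an approximate analysis-domain shrinkage with a shared operator. This is a simplified numpy illustration of analysis-sparsity unfolding, not the authors' exact layer; in DECONET the analysis operator is learned end-to-end, whereas here it is random:

        import numpy as np

        def soft(z, t):
            """Elementwise soft-thresholding, the prox of t * ||.||_1."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def unfolded_layer(x, y, A, Phi, step, theta):
            """One unfolded iteration for min ||y - Ax||^2 + lambda*||Phi x||_1."""
            x = x - step * A.T @ (A @ x - y)         # data-fidelity gradient step
            z = soft(Phi @ x, theta)                 # sparsified analysis coefficients
            return x - step * Phi.T @ (Phi @ x - z)  # pull Phi x toward its sparse proxy

        rng = np.random.default_rng(0)
        n, m, p = 64, 32, 96          # signal, measurement, analysis dimensions
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        Phi = rng.standard_normal((p, n)) / np.sqrt(p)
        y = A @ rng.standard_normal(n)
        x = np.zeros(n)
        for _ in range(10):           # ten unfolded "layers" with tied weights
            x = unfolded_layer(x, y, A, Phi, step=0.1, theta=0.01)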

    Attosecond photoelectron interferometry: from wavepackets to density matrices

    Through the advent of high-order harmonic generation and attosecond light pulses, photoionization dynamics has been studied on the attosecond time-scale, the intrinsic time-scale of such dynamics. When the electron leaves the atomic potential, a phase shift is imprinted on the electron wavefunction. Measuring this phase, together with the amplitude, allows us to determine the dynamics of the photoionization. In this thesis, attosecond ($10^{-18}$ s) and femtosecond ($10^{-15}$ s) photoionization dynamics are studied using the photoelectron interferometry technique Reconstruction of Attosecond Beating By Interference of two-photon Transitions (RABBIT). In RABBIT, the electron wave-packet is interfered with itself, and through this spectral interference the spectral amplitude and phase can be retrieved. Attosecond time-delay measurements are performed in argon and xenon, where different aspects of electron correlation are investigated. In argon, photoionization is studied in the region of the Cooper minimum, where the ionization cross section rapidly decreases. In xenon, photoionization is studied across the 4d giant dipole resonance. Resonant dynamics is studied using energy-resolved RABBIT. The studied resonances are the 1s3p, 1s4p and 1s5p (below threshold) and 2s2p (above threshold) in He, and the $3s^{-1}4p$ (above threshold) in Ar. Most of the measurements in the thesis are angle-integrated. If the photoelectron is prepared in a mixed state, RABBIT cannot characterize the quantum state of the electron, since that state cannot be represented as a wavefunction. Therefore, a quantum state tomography protocol for photoelectrons (KRAKEN) was developed and tested experimentally in non-resonant ionization of helium, neon and argon. In the case of neon and argon, due to spin-orbit splitting, the entanglement between the photoelectron and the ion leads to decoherence, induced by incomplete measurements in which the state of the ion is not measured.
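    For reference, the phase retrieval in RABBIT rests on the standard sideband oscillation (textbook form, added here for context): as the XUV-IR delay $\tau$ is scanned, the sideband signal between harmonics $2q-1$ and $2q+1$ behaves as

        $$S_{2q}(\tau) \propto A + B \cos\left(2\omega\tau - \Delta\phi_{2q} - \Delta\theta_{\mathrm{at}}\right),$$

    where $\omega$ is the IR laser frequency, $\Delta\phi_{2q}$ is the phase difference between the two neighbouring harmonics, and $\Delta\theta_{\mathrm{at}}$ is the atomic two-photon phase from which the photoionization time delays discussed above are extracted.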