
    Micro Signal Extraction and Analytics

    This dissertation studies the extraction of signals whose magnitudes are typically one order of magnitude or more smaller than those of the dominating signals, or of signals whose topological scale is smaller than what conventional algorithms resolve. We name such a problem the micro signal extraction problem. The problem is challenging due to the relatively low signal strength. In terms of relative magnitude, the micro signal of interest may well be considered one signal within a group of many types of tiny, nuisance signals, such as sensor noise and quantization noise. This group of nuisance signals is usually treated as the "noise" component, in contrast to the "signal" component dominating the multimedia content. To extract a micro signal that has a much smaller magnitude than the dominating signal, and simultaneously to protect it from being corrupted by other nuisance signals, one usually has to tackle the problem with extra caution: the modeling assumptions behind a proposed extraction algorithm need to be closely calibrated with the behavior of the multimedia data. In this dissertation, we tackle three micro signal extraction problems by synergistically applying and adapting signal processing theories and techniques. In the first part of the dissertation, we use mobile imaging to extract a collection of directions of microscopic surfaces as a unique identifier for authentication and counterfeit detection purposes. This is the first work showing that 3-D structure at the microscopic level can be precisely estimated using techniques related to photometric stereo. By enabling the mobile imaging paradigm, we have significantly reduced the barriers to extending the counterfeit detection system to end users. In the second part of the dissertation, we explore the possibility of extracting the Electric Network Frequency (ENF) signal from a single image.
This problem is much more challenging than its audio and video counterparts, as the duration and the magnitude of the embedded signal are both very small. We investigate and show how the detectability of the ENF signal changes as a function of the magnitude of the embedded ENF signal. In the last part of the dissertation, we study the problem of heart-rate extraction from fitness exercise videos, which is challenging due to the presence of fitness motions. We show that a highly precise motion compensation scheme is the key to a reliable heart-rate extraction system.
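The microscopic surface-direction estimation described in the first part builds on photometric stereo. As a hedged sketch (not the dissertation's actual pipeline; the function and variable names are illustrative), classical Lambertian photometric stereo recovers per-pixel surface normals by least squares from intensities observed under several known lighting directions:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """intensities: (m, n_pixels) stack of m images (flattened);
    light_dirs: (m, 3) unit lighting directions.
    Returns unit surface normals (n_pixels, 3) and per-pixel albedo."""
    # Lambertian model: I = L @ (albedo * normal); solve in least squares.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T
    return normals, albedo

# Synthetic check: a single pixel with a known normal and unit albedo.
true_n = np.array([0.0, 0.0, 1.0])
L = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
I = (L @ true_n).reshape(3, 1)
n_est, rho = photometric_stereo(I, L)
```

With more than three lighting directions the same least-squares solve averages out sensor noise, which is what makes the microscopic normals usable as a stable identifier.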

    Revealing Information by Averaging

    We present a method for hiding images in synthetic videos and revealing them by temporal averaging. The main challenge is to develop a visual masking method that hides the input image both spatially and temporally. Our masking approach consists of temporal and spatial pixel-by-pixel variations of the frequency-band coefficients representing the image to be hidden. These variations ensure that the target image remains invisible in both the spatial and the temporal domains. In addition, by applying a temporal masking function derived from a dither matrix, we allow the video to carry a visible message that is different from the hidden image. The image hidden in the video can be revealed by software averaging or, with a camera, by long-exposure photography. The presented work may find applications in the secure transmission of digital information.
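The core reveal mechanism can be sketched in a few lines (an illustrative toy, not the paper's frequency-band masking): each frame carries the hidden image plus a zero-mean temporal variation, so any single frame looks unrelated, but the temporal average converges to the hidden image.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.uniform(0.2, 0.8, size=(4, 4))       # image to hide (toy size)
n_frames = 500

# Zero-mean masking variations: subtracting the temporal mean guarantees
# that the variations cancel exactly under averaging.
masks = rng.normal(0.0, 0.3, size=(n_frames, 4, 4))
masks -= masks.mean(axis=0, keepdims=True)

frames = hidden[None, :, :] + masks               # synthetic "video"
revealed = frames.mean(axis=0)                    # software averaging
```

The paper's actual masking varies frequency-band coefficients rather than raw pixels, which additionally controls *where* in the spectrum the hidden image energy lives; the averaging step is the same.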

    Feature-based affine-invariant detection and localization of faces

    EThOS - Electronic Theses Online Service, United Kingdom

    Reflective-Physically Unclonable Function based System for Anti-Counterfeiting

    Physically unclonable functions (PUFs) are physical security mechanisms that exploit the inherent randomness of the processes used to instantiate physical objects. In this dissertation, an extensive overview of the state of the art in implementations, accompanying definitions and their analysis is provided. The concept of the reflective PUF is presented as a product security solution. The viability of the concept, its evaluation and the requirements of such a system are explored.

    Towards On-line Domain-Independent Big Data Learning: Novel Theories and Applications

    Feature extraction is an extremely important pre-processing step for pattern recognition and machine learning problems. This thesis highlights how one can best extract features from data in a fully on-line and purely adaptive manner. The solution to this problem is given for both labeled and unlabeled datasets by presenting a number of novel on-line learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA). Further, the proposed incremental version is combined with the extreme learning machine (ELM), in which the ELM is used as a pre-processor before learning. In this first key contribution, the dynamic random expansion characteristic of ELM is combined with the proposed incremental LDA technique and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel, purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Further, the time-series expansions of echo state networks (ESN) and radial basis functions (RBF) are used as pre-processors before learning. In addition, higher-order derivatives are used as a smoothing constraint on the output signal.
Finally, an on-line extension of the generalized eigenvalue problem, derived from James Stone's criterion, is tested, evaluated and compared with the standard batch version of the slow feature analysis technique to demonstrate its comparative effectiveness. In the third contribution, light-weight extensions of the statistical technique known as canonical correlation analysis (CCA), for both twinned and multiple data streams, are derived using the same method of solving the generalized eigenvalue problem. Further, the proposed method is enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections, as used by the ESN, is placed as a pre-processor between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low-dimensional manifold in a high-dimensional data space is then presented in an incremental and adaptive manner. Finally, an on-line, locally optimized extension of Laplacian Eigenmaps is derived, termed the generalized incremental Laplacian Eigenmaps technique (GENILE). Apart from the benefit of its incremental nature, the projections produced by this manifold-based dimensionality reduction technique are shown, in most cases, to yield better classification accuracy than the standard batch versions of these techniques, on both artificial and real datasets.
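The generalized eigenvalue problem that underlies these contributions can be made concrete with the batch LDA case the incremental solvers are tracking. The sketch below (illustrative only; it shows the reference batch solution, not the thesis's differential-equation update) finds the direction w maximizing between-class over within-class scatter, i.e. solving S_b w = lambda * S_w w:

```python
import numpy as np

def lda_direction(X, y):
    """Two-class LDA projection direction from data X (n, d) and labels y."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class and between-class scatter matrices.
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    Sb = np.outer(m1 - m0, m1 - m0)
    # Generalized eigenproblem S_b w = lambda S_w w, solved via S_w^{-1} S_b;
    # the top eigenvector maximizes class separation.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
X0 = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
X1 = rng.normal([3.0, 0.0], 0.3, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
w = lda_direction(X, y)    # should point (up to sign) along the x axis
```

An incremental method updates an estimate of w per sample instead of recomputing the scatter matrices, which is what makes the on-line, domain-independent setting of the thesis feasible.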

    Recent Advances in Signal Processing

    Signal processing is a critical component of most new technological inventions and challenges in a wide variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit-rate, service time, and required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, including the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, lower-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    Digital Watermarking for Verification of Perception-based Integrity of Audio Data

    In certain application fields, digital audio recordings contain sensitive content. Examples are historical archival material in public archives that preserve our cultural heritage, or digital evidence in the context of law enforcement and civil proceedings. Because of the powerful capabilities of modern editing tools for multimedia, such material is vulnerable to doctoring of the content and forgery of its origin with malicious intent. Inadvertent data modification and mistaken origin can also be caused by human error. Hence, the credibility and provenance, in terms of an unadulterated and genuine state of such audio content, and the confidence about its origin are critical factors. To address this issue, this PhD thesis proposes a mechanism for verifying the integrity and authenticity of digital sound recordings. It is designed and implemented to be insensitive to common post-processing operations on the audio data that influence the subjective acoustic perception only marginally (if at all). Examples of such operations include lossy compression that maintains a high sound quality of the audio media, or lossless format conversions. The objective is to avoid the de facto false alarms that would be expected from standard crypto-based authentication protocols in the presence of these legitimate post-processing operations. To achieve this, a feasible combination of digital watermarking and audio-specific hashing techniques is investigated. First, a suitable secret-key-dependent audio hashing algorithm is developed. It incorporates and enhances so-called audio fingerprinting technology from the state of the art in content-based audio identification. The presented algorithm (denoted the "rMAC" message authentication code) allows "perception-based" verification of integrity: integrity breaches are classified as such only once they become audible.
As another objective, this rMAC is embedded and stored silently inside the audio media by means of audio watermarking technology. This approach allows the authentication code to survive the above-mentioned admissible post-processing operations and to remain available for integrity verification at a later date. For this, an existing secret-key-dependent audio watermarking algorithm is used and enhanced in this thesis work. To some extent, the dependency of the rMAC and of the watermarking processing on a secret key also allows authenticating the origin of a protected audio file. To elaborate on this security aspect, this work also estimates the brute-force effort of an adversary attacking the combined rMAC-watermarking approach. The experimental results show that the proposed method provides good distinction and classification performance for authentic versus doctored audio content. It also allows the temporal localization of audible data modifications within a protected audio file. The experimental evaluation finally provides recommendations about technical configuration settings of the combined watermarking-hashing approach. Beyond the main topic of perception-based data integrity and authenticity for audio, this PhD work provides new general findings in the fields of audio fingerprinting and digital watermarking. The main contributions of this PhD were published and presented mainly at conferences on multimedia security. These publications have been cited by a number of other authors and hence had some impact on their work.
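The idea of a perception-based hash can be illustrated with a deliberately simplified sketch (this is NOT the thesis's rMAC; the function and parameters are hypothetical): spectral band energies are computed per frame and compared pairwise, and the resulting sign pattern forms the hash bits. Perceptually minor changes such as a mild gain adjustment leave every comparison, and hence every bit, unchanged, while audible edits flip bits.

```python
import numpy as np

def band_hash(signal, frame=1024, n_bands=8):
    """Toy perception-oriented hash: sign of adjacent band-energy differences."""
    bits = []
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.abs(np.fft.rfft(signal[start:start + frame])) ** 2
        bands = np.array_split(spec, n_bands)
        energies = np.array([b.sum() for b in bands])
        # One bit per adjacent band pair; gain scaling cancels out here.
        bits.extend((energies[:-1] > energies[1:]).astype(int))
    return np.array(bits)

t = np.arange(8000) / 8000.0
audio = np.sin(2 * np.pi * 440 * t)
h1 = band_hash(audio)
h2 = band_hash(0.9 * audio)   # mild gain change: perceptually similar
```

The thesis's rMAC additionally makes the hash secret-key-dependent and couples it with watermark embedding, so that the verification data travels inside the audio itself.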

    Beyond the noise : high fidelity MR signal processing

    This thesis describes a variety of methods developed to increase the sensitivity and resolution of liquid-state nuclear magnetic resonance (NMR) experiments. NMR is one of the most versatile non-invasive analytical techniques, yet it often suffers from low sensitivity. The main contributor to this low sensitivity is noise, and the level of noise in a spectrum is expressed numerically as the "signal-to-noise ratio". NMR signal processing involves sensitivity and resolution enhancement achieved by noise reduction using mathematical algorithms. A singular value decomposition based reduced-rank matrix method, composite property mapping, is studied extensively in this thesis to present its advantages, limitations, and applications. In theory, when the sum of k noiseless sinusoidal decays is arranged into a specific matrix form (e.g., Toeplitz), the matrix possesses k linearly independent columns. This information becomes apparent only after a singular value decomposition of the matrix. Singular value decomposition factorises the large matrix into three smaller matrices: the right and left singular vector matrices, and one diagonal matrix containing the singular values. Were k noiseless sinusoidal decays involved, there would be only k non-zero singular values appearing in the diagonal matrix in descending order, providing the amplitude of each sinusoidal decay. The number of non-zero singular values, or the number of linearly independent columns, is known as the rank of the matrix. With real NMR data, none of the singular values equals zero and the matrix has full rank. The reduction of the rank of the matrix, and thus of the noise in the reconstructed NMR data, can be achieved by replacing all singular values except the first k with zeroes.
This noise reduction process becomes difficult when biomolecular NMR data are to be processed, because the number of resonances is unknown and a large solvent peak is present.
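The rank-reduction step described above can be sketched as a single truncation pass (a minimal illustration of the idea, not the thesis's composite property mapping; a Hankel embedding is used here, which has the same low-rank property as the Toeplitz form for sums of exponentials): embed the noisy decay in a structured matrix, keep the k largest singular values, and average the anti-diagonals back into a 1-D signal.

```python
import numpy as np

def rank_reduce(signal, k, ncols=None):
    """One rank-truncation pass over a Hankel embedding of `signal`."""
    n = len(signal)
    ncols = ncols or n // 2
    # Hankel matrix: element (i, j) is signal[i + j].
    H = np.array([signal[i:i + ncols] for i in range(n - ncols + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k approximation
    # Average each anti-diagonal back into a denoised 1-D signal.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Hk.shape[0]):
        out[i:i + ncols] += Hk[i]
        counts[i:i + ncols] += 1
    return out / counts

rng = np.random.default_rng(2)
t = np.arange(256)
clean = np.exp(-t / 100) * np.cos(2 * np.pi * 0.05 * t)
noisy = clean + rng.normal(0.0, 0.1, 256)
# One decaying cosine is a pair of complex exponentials, hence rank 2.
denoised = rank_reduce(noisy, k=2)
```

Choosing k is exactly the difficulty the text raises for biomolecular data: with an unknown number of resonances and a dominant solvent peak, there is no clean gap in the singular value spectrum to truncate at.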