
    Investigating Low-complexity Architectural Issues under UBSS

    The aim of this project is to develop a real-time chip that processes sensor signals and separates the source signals, with applications in healthcare such as autism monitoring. Autism is a disorder that affects a child's mental behaviour; by analysing signals from the brain, we can observe how effectively the condition is being treated. Analysing autism requires EEG signals from up to 128 leads on the child's scalp, which is difficult in practice, so the number of leads must be reduced while still recovering the same information as the full 128-lead setup. Solving this problem amounts to solving Underdetermined Blind Source Separation (UBSS). In some cases only one mixture signal is available (M = 1), the extreme case of UBSS, from which the unknown sources must be extracted; this is called Single-Channel Independent Component Analysis (SCICA), and with N source signals it is called ND-SCICA. A real-time UBSS or SCICA system requires a digital chip that separates the sources on the fly, so the chip must be high-speed, to suit real-time applications, and also reconfigurable, so that it can serve different applications where the frame length of the signals varies. We first investigated the architectural issues of a reconfigurable Discrete Hilbert Transform (DHT) for UBSS with M greater than one, and proposed a high-speed, reconfigurable DHT architecture design methodology targeting real-time applications, including cyber-physical systems, the Internet of Things, and remote health monitoring, where the same chipset must serve various purposes under real-time constraints. With this architecture, the DHT of any given M-point signal can be obtained by reusing an N-point DHT as a kernel, where N is a multiple of 4 and M is a multiple of N. We then provide the architecture design details and compare the proposed architecture with the conventional state-of-the-art architecture; thorough theoretical analysis and experimental comparison show that the proposed design is twice as fast while simultaneously achieving reconfigurability. After the DHT, we proposed a new algorithm for ND-FastICA, used for the extreme case of UBSS where only one mixture/sensor signal is available. The algorithm uses a CORDIC-based ND-FastICA that is reconfigurable, so the same chip can be used for differently dimensioned FastICA.
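
    The central kernel in the first contribution is the Discrete Hilbert Transform. As a purely functional reference for what the proposed hardware computes (a software sketch of the transform, not of the reconfigurable architecture itself), a minimal FFT-based N-point DHT in Python/NumPy looks as follows; the function name is illustrative:

        import numpy as np

        def discrete_hilbert(x):
            """FFT-based discrete Hilbert transform of a real sequence x.

            Returns the imaginary part of the analytic signal. Assumes
            len(x) is even (the thesis works with multiples of 4).
            """
            n = len(x)
            X = np.fft.fft(x)
            h = np.zeros(n)
            h[0] = h[n // 2] = 1.0          # keep DC and Nyquist bins
            h[1:n // 2] = 2.0               # double positive frequencies
            return np.fft.ifft(X * h).imag  # negative frequencies zeroed

        # Sanity check: the DHT of cos(wt) is sin(wt).
        t = np.arange(64)
        assert np.allclose(discrete_hilbert(np.cos(2 * np.pi * 5 * t / 64)),
                           np.sin(2 * np.pi * 5 * t / 64), atol=1e-9)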

    Adaptive signal processing algorithms for noncircular complex data

    The complex domain provides a natural processing framework for a large class of signals encountered in communications, radar, biomedical engineering and renewable energy. Statistical signal processing in C has traditionally been viewed as a straightforward extension of the corresponding algorithms in the real domain R; however, recent developments in augmented complex statistics show that, in general, this leads to under-modelling. This direct treatment of complex-valued signals has led to advances in so-called widely linear modelling and the introduction of a generalised framework for the differentiability of both analytic and non-analytic complex and quaternion functions. In this thesis, supervised and blind complex adaptive algorithms capable of processing the generality of complex and quaternion signals (both circular and noncircular) in both noise-free and noisy environments are developed; their usefulness in real-world applications is demonstrated through case studies. The focus of this thesis is on the use of augmented statistics and widely linear modelling. The standard complex least mean square (CLMS) algorithm is extended to perform optimally for the generality of complex-valued signals, and the extension is shown to outperform the CLMS algorithm. Next, extraction of latent complex-valued signals from large mixtures is addressed. This is achieved by developing several classes of complex blind source extraction algorithms based on fundamental signal properties such as smoothness, predictability and degree of Gaussianity, with analysis of the existence and uniqueness of the solutions also provided. These algorithms are shown to facilitate real-time applications, such as those in brain-computer interfacing (BCI). Due to their modified cost functions and the widely linear mixing model, this class of algorithms performs well in both noise-free and noisy environments. Next, based on a widely linear quaternion model, the FastICA algorithm is extended to the quaternion domain to provide separation of the generality of quaternion signals. The enhanced performance of the widely linear algorithms is illustrated in renewable energy and biomedical applications, in particular for the prediction of wind profiles and the extraction of artifacts from EEG recordings.
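
    Since the thesis centres on augmented statistics, a minimal NumPy sketch of a widely linear (augmented) CLMS filter may help fix ideas: the output is formed from both the input and its complex conjugate, y = h^H u + g^H u*, which is what lets the filter model noncircular signals. This is a generic textbook-style sketch with illustrative names and step size, not the thesis's code:

        import numpy as np

        def aclms(x, d, p=4, mu=0.01):
            """Augmented CLMS: widely linear adaptive filter y = h^H u + g^H u*.

            x : complex input signal, d : desired response, p : filter length.
            """
            h = np.zeros(p, dtype=complex)   # standard (strictly linear) weights
            g = np.zeros(p, dtype=complex)   # conjugate (widely linear) weights
            y = np.zeros(len(x), dtype=complex)
            for k in range(p, len(x)):
                u = x[k - p:k][::-1]                 # regressor, newest sample first
                y[k] = h.conj() @ u + g.conj() @ u.conj()
                e = d[k] - y[k]                      # a-priori estimation error
                h += mu * np.conj(e) * u             # LMS update, linear part
                g += mu * np.conj(e) * u.conj()      # LMS update, conjugate part
            return y, h, g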

    ID Photograph hashing : a global approach

    This thesis addresses the question of the authenticity of identity photographs, which form part of the documents required in access control. Since sophisticated means of reproduction are publicly available, new methods and techniques are needed to prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan; the study also examines the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the subspace projection is obtained by applying ICA and then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained after projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of identity photographs and shows that the proposed method is robust to the print-and-scan attack. By focusing on robust hashing of a restricted class of images (identity photographs), this approach differs from classical approaches that address arbitrary images.
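
    A minimal sketch of the learning/extraction split described above, with scikit-learn's FastICA standing in for the learning stage; the entropic component selection is not reproduced, and the median-threshold binarization is an illustrative choice rather than the thesis's quantizer:

        import numpy as np
        from sklearn.decomposition import FastICA

        def learn_subspace(train, k=64):
            """Learning stage: fit an ICA subspace on vectorized ID
            photographs (one image per row of `train`)."""
            return FastICA(n_components=k, random_state=0).fit(train)

        def hash_image(ica, image_vec):
            """Extraction stage: project one vectorized image onto the
            subspace, then binarize the coefficients into hash bits."""
            coeffs = ica.transform(image_vec[None, :])[0]
            return (coeffs > np.median(coeffs)).astype(np.uint8)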

    Multimodal methods for blind source separation of audio sources

    The enhancement of the performance of frequency domain convolutive blind source separation (FDCBSS) techniques when applied to the problem of separating audio sources recorded in a room environment is the focus of this thesis. This challenging application is termed the cocktail party problem, and the ultimate aim would be to build a machine that matches the ability of a human being to solve this task. Human beings exploit both their eyes and their ears in solving it, i.e. they adopt a multimodal approach exploiting both audio and video modalities. New multimodal methods for blind source separation of audio sources are therefore proposed in this work as a step towards realizing such a machine. The geometry of the room environment is initially exploited to improve the separation performance of a FDCBSS algorithm. The positions of the human speakers are monitored by video cameras, and this information is incorporated within the FDCBSS algorithm in the form of constraints added to the underlying cross-power spectral density matrix-based cost function which measures separation performance. [Continues.]
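
    The baseline that the thesis constrains is per-frequency-bin separation after an STFT: a convolutive mixture in the time domain becomes an (approximately) instantaneous complex mixture in each frequency bin. The sketch below shows that skeleton; for brevity, a second-order (AMUSE-style) method stands in for the ICA stage, and the per-bin permutation/scaling alignment, which the thesis resolves with video-derived speaker positions, is deliberately left open:

        import numpy as np
        from scipy.signal import stft

        def amuse_bin(Z, tau=1):
            """Separate one frequency bin. Z: (frames, mics), zero-mean
            complex. Returns B such that the sources are ~ Z @ B.T;
            assumes sources have distinct lag-tau autocorrelations."""
            T = len(Z)
            d, E = np.linalg.eigh(Z.conj().T @ Z / T)
            V = E @ np.diag(d ** -0.5) @ E.conj().T        # whitening matrix
            X = Z @ V.T
            C1 = X[:-tau].conj().T @ X[tau:] / (T - tau)   # lagged covariance
            _, U = np.linalg.eigh((C1 + C1.conj().T) / 2)  # Hermitian part
            return U.conj().T @ V

        def fdcbss_baseline(mixtures, fs, nperseg=1024):
            """Unconstrained FDCBSS skeleton. mixtures: (n_mics, n_samples)."""
            f, t, X = stft(mixtures, fs=fs, nperseg=nperseg)
            S = np.empty_like(X)
            for b in range(X.shape[1]):                    # bin-wise separation
                Z = X[:, b, :].T
                Z = Z - Z.mean(axis=0)
                S[:, b, :] = (Z @ amuse_bin(Z).T).T
            # Permutations/scalings across bins are still ambiguous here;
            # they must be aligned before an inverse STFT reconstructs audio.
            return f, t, S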

    Extensions of independent component analysis for natural image data

    An understanding of the statistical properties of natural images is useful for any kind of processing to be performed on them. Natural image statistics are, however, in many ways as complex as the world which they depict. Fortunately, the dominant low-level statistics of images are sufficient for many different image processing goals. A lot of research has been devoted to second-order statistics of natural images over the years. Independent component analysis is a statistical tool for analyzing higher than second-order statistics of data sets. It attempts to describe the observed data as a linear combination of independent, latent sources. Despite its simplicity, it has provided valuable insights into many types of natural data. With natural image data, it gives a sparse basis useful for efficient description of the data. Connections between this description and early mammalian visual processing have been noticed. The main focus of this work is to extend the known results of applying independent component analysis to natural images. We explore different imaging techniques, develop algorithms for overcomplete cases, and study the dependencies between the components, both by using a model that finds a topographic ordering for the components and by conditioning the statistics of a component on the activity of another. An overview is provided of the associated problem field, and it is discussed how these relatively small results may eventually be part of a more complete solution to the problem of vision.
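
    The canonical experiment behind this line of work is straightforward to reproduce in software: learn an ICA basis from natural image patches and inspect the resulting Gabor-like, sparse filters. A minimal sketch, assuming a grayscale image as a 2-D NumPy array and using scikit-learn's FastICA (the overcomplete and topographic extensions developed in the thesis are beyond this snippet):

        import numpy as np
        from sklearn.decomposition import FastICA

        def ica_image_basis(image, patch=8, n_patches=20000, n_comp=64, seed=0):
            """Learn an ICA basis from random patches of a grayscale image.
            Returns the learned filters, one per row."""
            rng = np.random.default_rng(seed)
            H, W = image.shape
            rows = rng.integers(0, H - patch, n_patches)
            cols = rng.integers(0, W - patch, n_patches)
            P = np.stack([image[r:r + patch, c:c + patch].ravel()
                          for r, c in zip(rows, cols)])
            P = P - P.mean(axis=0)                     # centre the patch data
            ica = FastICA(n_components=n_comp, whiten="unit-variance",
                          random_state=seed)
            ica.fit(P)
            return ica.components_                     # sparse, Gabor-like filters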

    Independent component analysis: algorithms and applications

    A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
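
    As this is the paper that popularized FastICA, a compact NumPy transcription of its one-unit fixed-point iteration (with the tanh nonlinearity) may be useful; the data are assumed to be centred and whitened, as the paper prescribes:

        import numpy as np

        def fastica_one_unit(X, n_iter=200, tol=1e-8, seed=0):
            """One-unit FastICA. X: (n_samples, n_features), whitened.
            Returns w such that X @ w is a maximally non-Gaussian projection."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(n_iter):
                y = X @ w                                       # current projection
                g, g_prime = np.tanh(y), 1.0 - np.tanh(y) ** 2
                w_new = X.T @ g / len(X) - g_prime.mean() * w   # fixed-point step
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < tol:             # converged up to sign
                    return w_new
                w = w_new
            return w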

    Camera-Based Heart Rate Extraction in Noisy Environments

    Remote photoplethysmography (rPPG) is a non-invasive technique that uses video to measure vital signs such as heart rate (HR). In rPPG estimation, noise can introduce artifacts that distort the rPPG signal and jeopardize accurate HR measurement. Since most rPPG studies have been conducted in lab-controlled environments, the issue of noise in realistic conditions remains open. This thesis examines the challenges of noise in rPPG estimation in realistic scenarios, specifically investigating the effect of noise arising from illumination variation and motion artifacts on the predicted rPPG HR. To mitigate the impact of noise, a modular rPPG measurement framework is developed, comprising data preprocessing, region-of-interest (RoI) selection, signal extraction, preparation, processing, and HR extraction. The proposed pipeline is tested on the public LGI-PPGI-Face-Video-Database dataset, which covers four candidates and real-life scenarios. In the RoI module, raw rPPG signals were extracted from the dataset using three machine-learning-based face detectors, namely Haarcascade, Dlib, and MediaPipe, in parallel. The collected signals then underwent preprocessing, independent component analysis, denoising, and frequency-domain conversion for peak detection. Overall, the Dlib face detector yields the most successful HR estimates for the majority of scenarios: in 50% of all scenarios and candidates, the average predicted HR for Dlib is either in line with or very close to the average reference HR, while the HRs extracted with the Haarcascade and MediaPipe architectures account for 31.25% and 18.75% of plausible results, respectively. The analysis highlights the importance of fixated facial landmarks in collecting quality raw data and reducing noise.
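
    A minimal sketch of the back half of such a pipeline, once face detection has produced per-frame mean RGB values for the region of interest: channel normalization, ICA, and frequency-domain peak picking. The function name, band limits, and component-selection rule are illustrative choices, not the exact configuration of the thesis:

        import numpy as np
        from sklearn.decomposition import FastICA

        def hr_from_rgb_trace(rgb, fs, lo=0.7, hi=4.0):
            """Estimate heart rate (BPM) from mean-RGB traces of a face RoI.
            rgb: (n_frames, 3), fs: video frame rate in Hz. The 0.7-4.0 Hz
            band corresponds to roughly 42-240 BPM."""
            x = (rgb - rgb.mean(axis=0)) / rgb.std(axis=0)    # normalize channels
            s = FastICA(n_components=3, random_state=0).fit_transform(x)
            freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
            band = (freqs >= lo) & (freqs <= hi)
            spectra = np.abs(np.fft.rfft(s, axis=0)) ** 2     # per-component power
            comp = spectra[band].max(axis=0).argmax()         # strongest in-band peak
            return 60.0 * freqs[band][spectra[band, comp].argmax()]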

    Blind Source Separation for the Processing of Contact-Less Biosignals

    (Spatio-temporal) Blind Source Separation (BSS) provides a large potential for processing distorted multichannel biosignal measurements in the context of novel contact-less recording techniques, separating distortions from the cardiac signal of interest. This potential can only be practically utilized (1) if a BSS model is applied that matches the complexity of the measurement, i.e. the signal mixture, and (2) if permutation indeterminacy is solved among the BSS output components, i.e. the component of interest can be practically selected. The present work first designs a framework to assess the efficacy of BSS algorithms in the context of the camera-based photoplethysmogram (cbPPG) and characterizes multiple BSS algorithms accordingly; algorithm selection recommendations for certain mixture characteristics are derived. Second, it develops and evaluates concepts to solve permutation indeterminacy for BSS outputs of contact-less electrocardiogram (ECG) recordings. The novel approach based on sparse coding is shown to outperform the existing concepts based on higher-order moments and frequency-domain features.
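
    For concreteness, a minimal sketch of the kind of frequency-domain baseline the sparse-coding approach is compared against: pick the BSS output whose spectral power is most concentrated in the cardiac band. This illustrates the shape of the permutation problem, not the thesis's novel sparse-coding solution:

        import numpy as np

        def select_cardiac_component(Y, fs, lo=0.7, hi=4.0):
            """Baseline channel selection after BSS. Y: (n_components,
            n_samples) output signals; returns the index of the component
            with the largest share of power in the cardiac band."""
            freqs = np.fft.rfftfreq(Y.shape[1], d=1.0 / fs)
            P = np.abs(np.fft.rfft(Y, axis=1)) ** 2
            band = (freqs >= lo) & (freqs <= hi)
            share = P[:, band].sum(axis=1) / P.sum(axis=1)   # in-band power share
            return int(share.argmax())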