
    Removing interference components in time frequency representations using morphological operators

    Time-frequency representations have been of great interest in the analysis and classification of non-stationary signals. The use of highly selective transformation techniques is a valuable tool for obtaining accurate information in studies of this type. The Wigner-Ville distribution has high time and frequency selectivity, in addition to satisfying several interesting mathematical properties. However, due to the bilinearity of the transform, interference terms emerge when it is applied to multi-component signals. In this paper, we propose a technique to remove cross-components from the Wigner-Ville transform using image processing algorithms. The proposed method exploits the advantages of non-linear morphological filters, using a spectrogram to obtain an adequate marker for the morphological processing of the Wigner-Ville transform. Unlike traditional smoothing techniques, this algorithm attenuates cross-terms while preserving time-frequency resolution. Moreover, it can also be applied to distributions with different interference geometries. The method has been applied to a set of different time-frequency transforms, with promising results. © 2011 Elsevier Inc. All rights reserved. This work was supported by the National R&D Program under Grant TEC2008-02975 (Spain), the FEDER programme and Generalitat Valenciana CMAP 340. Gómez García, S.; Naranjo Ornedo, V.; Miralles Ricós, R. (2011). Removing interference components in time frequency representations using morphological operators. Journal of Visual Communication and Image Representation, 22(1):401-410. doi:10.1016/j.jvcir.2011.03.007
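
    The core idea above, using the (nearly cross-term-free) spectrogram as a marker and the Wigner-Ville distribution as the mask in a grey-scale morphological reconstruction, can be sketched as follows. This is a minimal illustration assuming NumPy, SciPy and scikit-image; the pseudo_wvd helper, the window sizes and the marker construction are my own assumptions, not the paper's exact method.

```python
# Hedged sketch: attenuating cross-terms in a pseudo Wigner-Ville distribution (PWVD)
# by grey-scale morphological reconstruction, with the spectrogram supplying the
# marker and the PWVD acting as the mask. Illustrative parameters only.
import numpy as np
from scipy.signal import hilbert, spectrogram
from skimage.morphology import reconstruction
from skimage.transform import resize

def pseudo_wvd(x, n_freq=256):
    """Discrete pseudo Wigner-Ville distribution of a real 1-D signal."""
    z = hilbert(x)                                  # analytic signal reduces aliasing
    N = len(z)
    W = np.zeros((n_freq, N))
    for n in range(N):
        taumax = min(n, N - 1 - n, n_freq // 2 - 1)
        tau = np.arange(-taumax, taumax + 1)
        r = np.zeros(n_freq, dtype=complex)
        r[tau % n_freq] = z[n + tau] * np.conj(z[n - tau])  # instantaneous autocorrelation
        W[:, n] = np.real(np.fft.fft(r))
    return W

def suppress_cross_terms(x, fs):
    W = np.clip(pseudo_wvd(x), 0.0, None)           # keep the non-negative part
    _, _, S = spectrogram(x, fs=fs, nperseg=64, noverlap=56)
    S = resize(S, W.shape)                          # put both on the same TF grid
    marker = np.minimum(W, W.max() * S / S.max())   # marker must stay below the mask
    # Reconstruction by dilation grows the marker under the PWVD: regions supported
    # by the spectrogram (auto-terms) survive, isolated cross-terms are removed.
    return reconstruction(marker, W, method='dilation')

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 300 * t)  # two components
    print(suppress_cross_terms(x, fs).shape)
```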

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve these problems efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
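
    As a toy instance of the convex framework described above, the sketch below separates a spike train from a smooth oscillation by asking that one component be sparse in the identity basis and the other sparse in the DCT basis, then solving the resulting l1-regularised least-squares problem with cvxpy. The model, the variable names and the regularisation weight are illustrative assumptions, not taken from the article.

```python
# Hedged sketch of convex demixing: recover a spike train and a smooth oscillation
# from their sum via sparsity in two different dictionaries.
import numpy as np
import cvxpy as cp
from scipy.fft import idct

rng = np.random.default_rng(0)
n = 256

# Ground truth: a few spikes plus a signal built from two low-frequency DCT atoms
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 3.0 * rng.standard_normal(5)
D = idct(np.eye(n), norm='ortho', axis=0)            # columns are DCT synthesis atoms
coeffs = np.zeros(n)
coeffs[[3, 7]] = [4.0, -2.0]
smooth = D @ coeffs
y = spikes + smooth + 0.01 * rng.standard_normal(n)  # observed mixture

# Convex program: each component should be sparse in its own dictionary
a = cp.Variable(n)                                   # spike component
b = cp.Variable(n)                                   # DCT coefficients of smooth component
lam = 0.1
objective = cp.Minimize(cp.sum_squares(y - a - D @ b) + lam * (cp.norm1(a) + cp.norm1(b)))
cp.Problem(objective).solve()

print("spike recovery error :", np.linalg.norm(a.value - spikes))
print("smooth recovery error:", np.linalg.norm(D @ b.value - smooth))
```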

    Fish detection automation from ARIS and DIDSON SONAR data

    The goal of this thesis is to analyse SONAR files produced by ARIS and DIDSON devices manufactured by Sound Metrics Co., which are ultrasonic, monostatic, multibeam echo-sounders. They are used to capture the behaviour of Atlantic salmon, which has recently been placed on lists of endangered species. These SONARs can operate in dark conditions and provide high-resolution images thanks to their high operating frequencies, which range from 1.1 MHz to 1.8 MHz. The thesis covers extracting the data from a file, redrawing it, and visualising it in a human-friendly format. Next, the images are analysed to search for fish. The results of the analysis are saved in formats such as JSON to allow interoperability with legacy systems; the JSON output also eases future development, since JSON is supported by a multitude of programming languages. Finally, a user-friendly interface is introduced that makes the process easier. The software is tested against datasets from rivers in Finland that are rich in Atlantic salmon.
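
    A minimal sketch of the per-frame analysis stage described above: assuming the echogram frames have already been decoded from the ARIS/DIDSON file into 2-D intensity arrays (parsing the proprietary format is not shown), fish-like blobs are found by background subtraction and connected-component labelling, and the detections are serialised to JSON. The threshold, the minimum blob size and the detect_fish helper are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: blob-based fish detection on decoded sonar frames with JSON output.
import json
import numpy as np
from scipy import ndimage

def detect_fish(frames, threshold=30.0, min_area=20):
    """frames: array of shape (n_frames, height, width) holding echo intensities."""
    background = np.median(frames, axis=0)          # static-scene estimate
    detections = []
    for i, frame in enumerate(frames):
        labels, n_blobs = ndimage.label(frame - background > threshold)
        for blob_id in range(1, n_blobs + 1):
            ys, xs = np.nonzero(labels == blob_id)
            if ys.size < min_area:                  # discard speckle noise
                continue
            detections.append({"frame": i,
                               "centroid": [float(ys.mean()), float(xs.mean())],
                               "area_px": int(ys.size)})
    return detections

if __name__ == "__main__":
    # Synthetic stand-in for decoded frames: noise plus one bright blob moving right
    rng = np.random.default_rng(1)
    frames = rng.normal(0.0, 5.0, size=(10, 128, 96))
    for i in range(10):
        frames[i, 60:68, 10 + 5 * i:18 + 5 * i] += 80.0
    print(json.dumps(detect_fish(frames)[:3], indent=2))
```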

    Abnormal ECG search in long-term electrocardiographic recordings from an animal model of heart failure

    Heart failure is one of the leading causes of death in the United States; five million Americans suffer from it. Advances in portable electrocardiogram (ECG) monitoring systems and large data storage space allow the ECG to be recorded continuously for long periods. Long-term monitoring could potentially lead to better diagnosis and treatment if the progression of heart failure could be followed. The challenge is to analyze the sheer mass of data; manual analysis using classical methods is impossible. In this dissertation, a framework for the analysis of long-term ECG recordings and methods for searching for abnormal ECG are presented. The data used in this research were collected from an animal model of heart failure. Chronic heart failure was gradually induced in rats by aldosterone infusion and a high-Na, low-Mg diet. The ECG was continuously recorded over the experimental period of 11-12 weeks through radiotelemetry, with the ECG leads placed subcutaneously in a lead-II configuration. In the end, there were 80 GB of data from five animals. Besides the massive amount of data, noise and artifacts also caused problems in the analysis. The framework includes data preparation, ECG beat detection, EMG noise detection, baseline fluctuation removal, ECG template generation, feature extraction, and abnormal ECG search. The raw data were converted from their original format and stored in a database for retrieval. The beat detection technique was improved over the original algorithm so that it is less sensitive to baseline jumps and more sensitive to beat-size variation. A method for estimating a parameter required for baseline fluctuation removal is proposed and gives good results on test signals. A new algorithm for EMG noise detection was developed using morphological filters and moving variance; the resulting sensitivity and specificity are 94% and 100%, respectively. A procedure for ECG template generation was proposed to capture gradual changes in ECG morphology and to manage the matching process when numerous ECG templates are created. RR intervals and heart rate variability parameters are extracted and plotted to display progressive changes as heart failure develops. In the abnormal ECG search, premature ventricular complexes, elevated ST segments, and split-R-wave ECG are considered. New features are extracted from the ECG morphology, and Fisher linear discriminant analysis is used to classify normal and abnormal ECG. The results give a classification rate, sensitivity, and specificity of 97.35%, 96.02%, and 98.91%, respectively.
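
    The EMG noise detection step mentioned above combines morphological filtering with a moving variance. A heavily simplified sketch of that general idea is given below: a morphological opening/closing estimates the smooth part of the signal, and segments where the moving variance of the residual is high are flagged as noisy. The structuring-element length, window size, threshold factor and emg_noise_mask helper are assumptions for illustration, not the dissertation's algorithm.

```python
# Hedged sketch: EMG-noise flagging via morphological filtering plus moving variance.
import numpy as np
from scipy.ndimage import grey_closing, grey_opening, uniform_filter1d

def emg_noise_mask(ecg, fs, struct_len_s=0.08, win_s=0.2, k=4.0):
    size = max(3, int(struct_len_s * fs))
    # Average of opening and closing gives a smooth morphological "baseline" shape
    baseline = 0.5 * (grey_opening(ecg, size=size) + grey_closing(ecg, size=size))
    residual = ecg - baseline                        # QRS complexes plus EMG noise
    win = max(3, int(win_s * fs))
    mean = uniform_filter1d(residual, win)
    var = uniform_filter1d(residual ** 2, win) - mean ** 2   # moving variance
    return var > k * np.median(var)                  # True where EMG noise is likely

if __name__ == "__main__":
    fs = 500
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 63          # crude surrogate for sharp R waves
    ecg[2000:3000] += 0.3 * np.random.default_rng(2).standard_normal(1000)  # EMG burst
    print("fraction flagged as noisy:", emg_noise_mask(ecg, fs).mean())
```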

    Final Research Report on Auto-Tagging of Music

    The deliverable D4.7 concerns the work achieved by IRCAM up to M36 on the “auto-tagging of music”. The deliverable is a research report. The software libraries resulting from the research have been integrated into the Fincons/HearDis! Music Library Manager or are used by TU Berlin; the final software libraries are described in D4.5. The research work on auto-tagging has concentrated on four aspects:
    1) Further improving IRCAM's machine-learning system ircamclass. This has been done by developing the new MASSS audio features and by adding audio augmentation and audio segmentation to ircamclass. The system has then been applied to train the HearDis! “soft” features (Vocals-1, Vocals-2, Pop-Appeal, Intensity, Instrumentation, Timbre, Genre, Style). This is described in Part 3.
    2) Developing two sets of “hard” features (i.e. features related to musical or musicological concepts) as specified by HearDis! (for integration into the Fincons/HearDis! Music Library Manager) and by TU Berlin (as input for the prediction model of the GMBI attributes). Such features are either derived from previously estimated higher-level concepts (such as structure, key or the succession of chords) or obtained by developing new signal processing algorithms (such as HPSS, sketched below, or main melody estimation). This is described in Part 4.
    3) Developing audio features to characterize the audio quality of a music track. The goal is to describe the quality of the audio independently of its apparent encoding; this is then used to estimate audio degradation or the music decade, and ultimately to ensure that playlists contain tracks of similar audio quality. This is described in Part 5.
    4) Developing innovative algorithms to extract specific audio features to improve music mixes. So far, innovative techniques (based on various blind audio source separation algorithms and convolutional neural networks) have been developed for singing voice separation, singing voice segmentation, music structure boundary estimation, and DJ cue-region estimation. This is described in Part 6.
    EC/H2020/688122/EU/Artist-to-Business-to-Business-to-Consumer Audio Branding System/ABC DJ
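
    HPSS (harmonic/percussive source separation), named in aspect 2 above, is commonly implemented by median filtering a magnitude spectrogram along time and along frequency and building soft masks from the two smoothed copies (Fitzgerald, 2010). The sketch below is a generic implementation of that textbook approach under assumed parameters; it is not IRCAM's code.

```python
# Hedged sketch of median-filtering HPSS: harmonics are smooth along time,
# percussive hits are smooth along frequency.
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import istft, stft

def hpss(x, fs, nperseg=1024, kernel=17):
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    S = np.abs(Z)
    H = median_filter(S, size=(1, kernel))          # smooth along time -> harmonic
    P = median_filter(S, size=(kernel, 1))          # smooth along frequency -> percussive
    mask_h = H ** 2 / (H ** 2 + P ** 2 + 1e-12)     # soft Wiener-style mask
    _, xh = istft(Z * mask_h, fs=fs, nperseg=nperseg)
    _, xp = istft(Z * (1.0 - mask_h), fs=fs, nperseg=nperseg)
    return xh, xp

if __name__ == "__main__":
    fs = 22050
    t = np.arange(0, 2, 1 / fs)
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)        # harmonic content
    clicks = np.zeros_like(t)
    clicks[::fs // 4] = 1.0                         # percussive content
    xh, xp = hpss(tone + clicks, fs)
    print(xh.shape, xp.shape)
```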

    A Quantitative Measure of Mono-Componentness for Time-Frequency Analysis

    Joint time-frequency (TF) analysis is an ideal method for analyzing non-stationary signals, but it is challenging to use and is therefore often neglected; the exceptions are the short-time Fourier transform (STFT) and the spectrogram. Even then, the inability to achieve simultaneously high time and frequency resolution is a frustrating limitation of the STFT and spectrogram. There is, however, a family of joint TF analysis techniques that does offer simultaneously high time and frequency resolution: the quadratic TF distribution (QTFD) family. Unfortunately, QTFDs are often more troublesome than beneficial. The issue is interference (cross-terms), which makes these methods difficult to use: they require that the “proper” joint distribution be selected based on information that is typically unavailable for real-world signals. However, QTFDs do not produce cross-terms when applied to a mono-component signal, so determining the mono-componentness of a signal provides a key piece of information. Until now, the only means of determining whether a signal is a mono-component or a multi-component has been to choose a QTFD, generate the TF representation (TFR), and visually examine it. The work presented here provides a method for quantitatively determining whether a signal is a mono-component. This new capability is an important step towards finally allowing QTFDs to be used on multi-component signals while producing few or no interference terms, by enabling the use of the quadratic superposition property. The focus of this work is on establishing the legitimacy of “measuring” mono-componentness, along with its algorithmic implementation. Several applications are presented, such as quantifying the quality of the decomposition results produced by the blind decomposition algorithm Empirical Mode Decomposition (EMD). The mono-componentness measure not only provides an objective means of validating the outcome of a decomposition algorithm, it also provides a practical, quantitative metric for comparing such algorithms. More importantly, this quantitative measurement encapsulates mono-componentness in a form that can be incorporated into the design of decomposition algorithms as a condition or constraint, so that true mono-components can be extracted. Incorporating the mono-componentness measure into a decomposition algorithm will eventually allow interference-free TFRs to be calculated from multi-component signals without requiring prior knowledge.
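
    The dissertation's actual mono-componentness measure is not reproduced here. Purely to illustrate how a quantitative score of this kind can be computed and consumed (for example, to rank the intrinsic mode functions produced by EMD), the sketch below uses a crude stand-in: the average fraction of per-frame spectrogram energy concentrated around the dominant frequency. The parameters and the monocomponent_score helper are assumptions.

```python
# Hedged stand-in for a mono-componentness score (NOT the measure proposed in the
# dissertation): per spectrogram frame, take the fraction of energy inside a narrow
# band around the dominant frequency and average across frames. A single ridge
# (mono-component) scores near 1; several simultaneous components score lower.
import numpy as np
from scipy.signal import spectrogram

def monocomponent_score(x, fs, band_bins=3):
    _, _, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    scores = []
    for frame in S.T:
        total = frame.sum()
        if total <= 0:
            continue
        k = int(np.argmax(frame))
        lo, hi = max(0, k - band_bins), min(len(frame), k + band_bins + 1)
        scores.append(frame[lo:hi].sum() / total)
    return float(np.mean(scores))

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 1, 1 / fs)
    chirp = np.sin(2 * np.pi * (50 * t + 100 * t ** 2))                # one component
    two_tone = np.sin(2 * np.pi * 80 * t) + np.sin(2 * np.pi * 300 * t)
    print("chirp   :", monocomponent_score(chirp, fs))
    print("two-tone:", monocomponent_score(two_tone, fs))
```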

    Development of Advanced Mathematical Morphology Algorithms and their Application to the Detection of Disturbances in Power Systems

    This thesis is concerned with the development of mathematical morphology (MM)-based algorithms and their applications to signal processing in power systems, including typical power quality disturbances such as low-frequency oscillations (LFO) and harmonics. Traditional morphological operators are extended in the thesis to advanced ones, including multi-resolution morphological gradient (MMG) algorithms, envelope-extraction morphological filters (MF), LFO-extraction MF and convolved morphological filters (CMF). These advanced morphological operators are applied to the detection and classification of power disturbances, the detection of continuous and damped LFO, and the detection and removal of harmonics in power systems.
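
    The basic building block behind the MMG is the morphological gradient, the difference between a dilation and an erosion, which responds strongly to abrupt changes in a waveform. The sketch below applies it at two structuring-element scales to a synthetic voltage sag; the scales, the sag parameters and the way the two scales are combined are illustrative assumptions, not the thesis's algorithms.

```python
# Hedged sketch of a morphological-gradient disturbance detector on a power-system
# waveform, combined at two structuring-element scales for a crude multi-resolution
# flavour.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_gradient(x, size):
    return grey_dilation(x, size=size) - grey_erosion(x, size=size)

if __name__ == "__main__":
    fs = 3200                                        # 64 samples per cycle at 50 Hz
    t = np.arange(0, 0.2, 1 / fs)
    v = np.sin(2 * np.pi * 50 * t)
    v[300:460] *= 0.6                                # voltage sag with abrupt edges
    g = morph_gradient(v, 3) * morph_gradient(v, 9)  # combine two scales
    print("disturbance edge detected near sample", int(np.argmax(g)))
```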

    Multi-scale texture segmentation of synthetic aperture radar images

    EThOS - Electronic Theses Online Service, United Kingdom