
    Blind source separation for clutter and noise suppression in ultrasound imaging: review for different applications

    Blind source separation (BSS) refers to a family of signal processing techniques that decompose a signal into several 'source' signals. In recent years, BSS has been increasingly employed for the suppression of clutter and noise in ultrasound imaging. In particular, its ability to separate sources based on measures of independence, rather than on their temporal or spatial frequency content, makes BSS a powerful filtering tool for data in which the desired and undesired signals overlap in the spectral domain. The purpose of this work was to review existing BSS methods and their potential in ultrasound imaging. Furthermore, we tested and compared the effectiveness of these techniques in contrast-ultrasound super-resolution, contrast quantification, and speckle tracking. For all applications, this was done in silico, in vitro, and in vivo. We found that the critical step in BSS filtering is the identification of the components containing the desired signal, and we highlight the value of a priori domain knowledge in defining effective criteria for signal component selection.
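
    To make the filtering idea concrete, below is a minimal sketch of one common BSS-style clutter filter: a singular value decomposition of the spatiotemporal (Casorati) matrix of a frame stack, discarding low-rank clutter and high-rank noise components. The function name and the rank thresholds lo and hi are illustrative assumptions; choosing them is precisely the component-selection step the review highlights.

        # Minimal SVD clutter-filter sketch (illustrative thresholds).
        import numpy as np

        def svd_clutter_filter(frames, lo=2, hi=40):
            """frames: (nz, nx, nt) stack of beamformed frames."""
            nz, nx, nt = frames.shape
            casorati = frames.reshape(nz * nx, nt)   # space x time matrix
            u, s, vt = np.linalg.svd(casorati, full_matrices=False)
            keep = np.zeros_like(s)
            keep[lo:hi] = s[lo:hi]                   # drop clutter (low-rank) and noise (high-rank)
            return ((u * keep) @ vt).reshape(nz, nx, nt)

        # Usage on synthetic data:
        rng = np.random.default_rng(0)
        filtered = svd_clutter_filter(rng.standard_normal((64, 64, 100)))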

    Improved frequency domain decomposition and stochastic subspace identification algorithms for operational modal analysis

    The accuracy of the modal damping ratios estimated in operational modal analysis (OMA) remains an open issue, and the estimates are often affected by large errors. The modal damping ratio is considered a good practical parameter for structural damage detection because it is more sensitive and responsive to damage than the natural frequency and mode shape. An accurate estimate of the modal damping ratio will therefore assist in developing an effective modal-based structural damage detection approach. This research focuses on improving the frequency domain decomposition (FDD) and stochastic subspace identification (SSI) algorithms, particularly for estimating the modal damping ratio. These methods have attracted considerable attention compared to other OMA methods because of their ability to estimate modal parameters. However, FDD has difficulty with high damping levels, while SSI struggles to handle harmonic components; both shortcomings cause large errors in the estimated modal damping ratio. Difficulties also arise in automating SSI, since several parameters must be predefined at start-up for each analysis. This study introduces an iterative optimization loop that enhances the classical FDD algorithm by optimizing the modal assurance criterion (MAC) index and the selection of the correct time window on the auto-correlation function, which is the most challenging part of the algorithm. The study also develops an SSI framework for automated OMA with a harmonic removal method based on image-based feature extraction combined with empirical mode decomposition. Image-based feature extraction is used to cluster and classify harmonic components apart from structural poles, and to identify modal parameters without any calibration or user-defined parameters at start-up. The proposed approach is assessed through numerical simulation and experimental analysis. In the numerical simulations, the proposed optimized FDD estimates the modal damping ratio with high accuracy and consistency, with an average percentage deviation (error) below 5.50%, compared to the classical FDD and a benchmark refined FDD. Errors in the classical FDD reach an average of up to 15%, while the refined FDD averages around 10%. In the experimental verification, the proposed approach shows a reasonable average percentage deviation of about 5.75%, whereas the classical FDD algorithm overestimates the damping ratio, averaging about 29% across all cases. For the proposed automated SSI, the estimated modal damping ratios in the numerical simulations have an average error below 2.5%, compared to other SSI methods, which on average exceed 3.2%. In the experimental verification, the proposed approach shows very satisfactory agreement, with an average percentage deviation below 4.20%, compared to other SSI methods, which on average exceed 14%. Furthermore, the proposed automated harmonic removal in the SSI framework, applied to existing online experimental data sets, estimates the modal damping ratio with high accuracy and consistency after removing harmonic components, showing an average percentage deviation below 7.22%, compared to orthogonal-projection and linear-interpolation smoothing approaches, whose average percentage deviations exceed 9%.
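
    For reference, here is a minimal sketch of the classical FDD baseline that the study improves on: the output cross-spectral density matrix is assembled channel by channel and decomposed by SVD at each frequency line, with peaks of the first singular value indicating modes; a MAC helper is included since the proposed method optimizes the MAC index. The nperseg value and the peak-prominence threshold are illustrative assumptions, not parameters from the paper.

        # Classical FDD peak picking (baseline sketch, not the optimized variant).
        import numpy as np
        from scipy.signal import csd, find_peaks

        def fdd_peaks(acc, fs, nperseg=1024):
            """acc: (n_channels, n_samples) output-only records."""
            n_ch = acc.shape[0]
            freqs, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
            G = np.empty((len(freqs), n_ch, n_ch), dtype=complex)
            for i in range(n_ch):
                for j in range(n_ch):
                    _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
            # First singular value at each frequency line; peaks indicate modes.
            s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
            peaks, _ = find_peaks(s1, prominence=0.1 * s1.max())
            return freqs[peaks], s1

        def mac(phi1, phi2):
            """Modal assurance criterion between two mode-shape vectors."""
            return abs(np.vdot(phi1, phi2)) ** 2 / (
                np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)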

    Comparison of blind source separation methods in fast somatosensory-evoked potential detection

    Blind source separation (BSS) is a promising method for extracting the somatosensory-evoked potential (SEP). Although various BSS algorithms are available for SEP extraction, few studies have addressed the performance differences between them. In this study, we compared the performance of several typical BSS algorithms on SEP extraction in both computer simulations and a clinical experiment. The algorithms compared were second-order blind identification (SOBI), estimation of signal parameters via rotational invariance technique (ESPRIT), the algorithm for multiple unknown signals extraction (AMUSE), joint approximate diagonalization of eigenmatrices (JADE), extended Infomax, and fast independent component analysis (FastICA). The performance of each BSS algorithm was measured by the correlation coefficients between the true and the extracted SEP signals. The simulation study revealed significant differences in performance between the various BSS algorithms. In summary, second-order blind identification using six covariance matrices (SOBI6) is recommended as the most appropriate BSS method for fast SEP extraction from noisy backgrounds. Copyright © 2011 by the American Clinical Neurophysiology Society.
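
    A small sketch of the evaluation criterion used above: since BSS recovers sources only up to permutation and sign, each true SEP source is matched to the extracted component with the highest absolute correlation. The matching convention (maximum over components) is an assumption on our part, not a detail stated in the abstract.

        # Correlation-based scoring of extracted components against true sources.
        import numpy as np

        def match_correlation(true_sources, estimates):
            """Both inputs: (n_sources, n_samples). Returns best |r| per source."""
            scores = []
            for s in true_sources:
                r = [abs(np.corrcoef(s, e)[0, 1]) for e in estimates]
                scores.append(max(r))
            return np.array(scores)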

    Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids

    Numerical continuation methods for deterministic dynamical systems have been among the most successful tools in applied dynamical systems theory. Continuation techniques have been employed in all branches of the natural sciences as well as in engineering to analyze ordinary, partial, and delay differential equations. Here we show that the deterministic continuation algorithm for equilibrium points can be extended to track information about metastable equilibrium points of stochastic differential equations (SDEs). We stress that we do not develop a new technical tool; rather, we combine results and methods from probability theory, dynamical systems, numerical analysis, optimization, and control theory into an algorithm that augments classical equilibrium continuation methods. In particular, we use ellipsoids defining regions of high concentration of sample paths. It is shown that these ellipsoids, and the distances between them, can be efficiently calculated using iterative methods that take advantage of the numerical continuation framework. We apply our method to a bistable neural competition model and a classical predator-prey system. Furthermore, we show how global assumptions on the flow can be incorporated, if they are available, by relating numerical continuation, Kramers' formula, and Rayleigh iteration. Comment: 29 pages, 7 figures [Fig. 7 reduced in quality due to arXiv size restrictions]; v2: added Section 9 on Kramers' formula, additional computations, corrected typos, improved explanation.
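
    A minimal sketch of the covariance computation behind such ellipsoids, under the standard assumption of additive noise and a stable linearization: near an equilibrium x* of dX = f(X) dt + sigma dW with Jacobian A = Df(x*), the stationary covariance C solves the Lyapunov equation A C + C A^T + sigma^2 I = 0, and level sets of (x - x*)^T C^{-1} (x - x*) give concentration ellipsoids for sample paths. The paper's continuation machinery and iterative solvers are not reproduced here.

        # Lyapunov-equation covariance for a concentration ellipsoid (sketch).
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def concentration_covariance(A, sigma):
            """A: stable Jacobian at the equilibrium; returns covariance C."""
            n = A.shape[0]
            # Solves A C + C A^T = -sigma^2 I.
            return solve_continuous_lyapunov(A, -sigma**2 * np.eye(n))

        # Example: a linearly stable node with eigenvalues -1 and -3.
        A = np.array([[-1.0, 0.0], [0.0, -3.0]])
        C = concentration_covariance(A, sigma=0.1)
        axes_len, axes_dir = np.linalg.eigh(C)   # ellipsoid axes from eigenpairs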

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
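
    As a concrete illustration of the tensor train format emphasized above, here is a minimal sketch of the classic TT-SVD procedure: sequential reshapes and truncated SVDs factor an order-d tensor into a chain of third-order cores. The max_rank truncation parameter is illustrative; the monograph's own algorithms and contraction schemes are far more general.

        # TT-SVD sketch: factor a dense tensor into tensor-train cores.
        import numpy as np

        def tt_svd(tensor, max_rank):
            dims = tensor.shape
            cores, r_prev = [], 1
            mat = tensor.reshape(r_prev * dims[0], -1)
            for k in range(len(dims) - 1):
                u, s, vt = np.linalg.svd(mat, full_matrices=False)
                r = min(max_rank, len(s))               # rank truncation
                cores.append(u[:, :r].reshape(r_prev, dims[k], r))
                mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
                r_prev = r
            cores.append(mat.reshape(r_prev, dims[-1], 1))
            return cores

        # Usage: compress a random order-4 tensor to TT rank 3.
        cores = tt_svd(np.random.default_rng(1).standard_normal((4, 5, 6, 7)), 3)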

    Blind signal separation: a comparative analysis of algorithms

    Blind signal processing is currently one of the most prominent areas of signal processing. Blind techniques make use of neither a training sequence nor any information about the mixing system to which the signals were subjected, and blind separation is one of the main branches of blind processing. The blind separation, or blind signal (source) separation, problem consists of recovering a set of unknown signals or sources from observations of mixtures of these signals made by sensors. It is without doubt a problem of great interest in the signal processing area, since solving it requires satisfying only a set of mildly restrictive hypotheses. Consequently, blind source separation techniques find numerous applications: data set processing, multi-user communications, voice and image recognition, and biomedical signal processing. By means of separation techniques one can therefore recover one or all of the sources based only on the observations or measurements made by a set of sensors. The adjective 'blind' was adopted to characterize the lack of information inherent in the separation process. To compensate for this lack of information, some properties of the nature of the sources, the mixture, and the noise are taken into account in the separation process. In this way, blind separation algorithms try to restore at the output of the separation system a known property of the sources. One of the main tools used to solve the blind separation problem has been independent component analysis (ICA). It is worth mentioning that the terms blind separation and independent component analysis are often confused or used as synonyms, since they refer to a similar or identical model and are solved with similar or identical algorithms under the restriction that the original sources are statistically independent. However, particularly in real problems, the goals of ICA and blind separation differ slightly: the goal of blind separation is to estimate the original signals even if they are not completely independent, whereas the goal of ICA is to determine a transformation that ensures that the estimated signals are as independent as possible. One may also observe that ICA methods use, in most cases, higher-order statistics, while blind separation methods may use only second-order statistics. Based on these considerations, this dissertation presents a state-of-the-art review of the main techniques that address the separation problem, by means of a comparison of three algorithms: AMUSE, JADE, and FLEXICA. The algorithms are compared by applying them to test signals, telecommunication signals, and real biomedical signals.
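
    Of the three algorithms compared, AMUSE is the simplest to sketch, and it illustrates the second-order-statistics approach discussed above: whiten the mixtures, then diagonalize a single time-lagged covariance matrix. This is a minimal sketch; the lag tau and the demo mixing matrix are illustrative assumptions.

        # AMUSE sketch: second-order blind separation via one lagged covariance.
        import numpy as np

        def amuse(X, tau=1):
            """X: (n_sources, n_samples) mixtures. Returns estimated sources."""
            X = X - X.mean(axis=1, keepdims=True)
            C0 = X @ X.T / X.shape[1]                 # zero-lag covariance
            d, E = np.linalg.eigh(C0)
            W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
            Z = W @ X
            Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
            Ct = (Ct + Ct.T) / 2                      # symmetrized lagged covariance
            _, U = np.linalg.eigh(Ct)
            return U.T @ Z                            # sources, up to permutation/scale

        # Usage: unmix a sine and a square wave.
        t = np.linspace(0, 1, 2000)
        S = np.vstack([np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))])
        S_hat = amuse(np.array([[1.0, 0.6], [0.4, 1.0]]) @ S)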

    Multivariate contributions to the decomposition of a time series

    One of the goals of time series analysis is to extract essential features from the series for exploratory or predictive purposes. Singular spectrum analysis (SSA) is a method used for this purpose; it transforms the original series into a Hankel matrix, also called the trajectory matrix, whose only parameter is the so-called window length. The singular value decomposition of the trajectory matrix allows the separation of the series components, since the structure in terms of singular values and vectors is associated with the trend, the oscillatory components, and the noise. However, the visualization of the steps of this method is little explored and often lacks interpretability. In this work, we take advantage of a particular singular value decomposition computed with the NIPALS algorithm to implement a graphical display of the principal components using HJ-biplots, naming the method the SSA-HJ-biplot. It is an exploratory tool whose main objective is to increase the visual interpretability of SSA, facilitating the grouping step and, consequently, the identification of characteristics of the time series. By exploiting the properties of HJ-biplots and setting the window length to half the series length, the rows and columns of the trajectory matrix can be represented simultaneously and optimally in the same SSA-HJ-biplot. To circumvent the potential problem of structural changes in the time series, which can make it difficult to visualize the separation of the components, we propose a methodology that detects change points and applies the SSA-HJ-biplot within homogeneous intervals, that is, between change points. This detection approach is based on sudden changes in the direction of the principal components, which are evaluated by a distance metric created for this purpose. Finally, we developed another SSA-based visualization method that estimates the dominant periodicities of a time series through geometric patterns, which we call the SSA area biplot. In this part of the research, we implemented an R package called areabiplot, available on the Comprehensive R Archive Network (CRAN).
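
    For context, here is a minimal sketch of the classical SSA pipeline the abstract builds on (not the authors' NIPALS/HJ-biplot variant): embed the series in a Hankel trajectory matrix, decompose it by SVD, group selected singular triples, and reconstruct by diagonal averaging. The window length L = N/2 mirrors the choice discussed above; the grouping indices are illustrative.

        # Classical SSA sketch: embed, decompose, group, diagonally average.
        import numpy as np

        def ssa_reconstruct(series, L, indices):
            """series: 1-D array; L: window length; indices: triples to group."""
            N = len(series)
            K = N - L + 1
            X = np.column_stack([series[j:j + L] for j in range(K)])  # Hankel matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in indices)
            rec = np.zeros(N)
            counts = np.zeros(N)
            for i in range(L):                 # diagonal averaging (Hankelization)
                for j in range(K):
                    rec[i + j] += Xg[i, j]
                    counts[i + j] += 1
            return rec / counts

        # Usage: extract a trend-like component with the leading triple.
        t = np.arange(200)
        y = 0.05 * t + np.sin(2 * np.pi * t / 20)
        trend = ssa_reconstruct(y, L=100, indices=[0])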