340 research outputs found

    A comparative study of signal processing methods for structural health monitoring

    Get PDF
    In this paper, four non-parametric and five parametric signal processing techniques are reviewed and their performances are compared through application to a synthetic exponentially damped signal with closely spaced frequencies, representative of the ambient response of structures. The non-parametric methods are the Fourier transform, the periodogram estimate of power spectral density, the wavelet transform, and empirical mode decomposition with Hilbert spectral analysis (the Hilbert-Huang transform). The parametric methods are the pseudospectrum estimate using multiple signal classification (MUSIC), the empirical wavelet transform, the approximate Prony method, the matrix pencil method, and estimation of signal parameters via rotational invariance techniques (ESPRIT). The performances of the different methods are studied statistically using Monte Carlo simulation, and the results are presented in terms of the average errors over multiple sample analyses.
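    The non-parametric/parametric trade-off described above can be reproduced in a few lines. The sketch below (an illustration with assumed signal parameters, not the paper's test cases) builds a noisy two-mode exponentially damped signal with closely spaced frequencies and compares a non-parametric periodogram against a parametric ESPRIT-style subspace estimate; only the NumPy/SciPy helpers are standard, everything else is an assumption.

```python
# A minimal sketch (not the paper's implementation) comparing a non-parametric
# periodogram with a parametric ESPRIT-style subspace estimate on a synthetic
# exponentially damped two-mode signal with closely spaced frequencies.
import numpy as np
from scipy.signal import periodogram

fs = 100.0                           # sampling rate [Hz], assumed for illustration
t = np.arange(0, 10, 1 / fs)
f_true = np.array([2.00, 2.06])      # closely spaced modal frequencies [Hz]
sigma_true = np.array([0.05, 0.07])  # exponential decay rates [1/s]
rng = np.random.default_rng(0)
x = sum(np.exp(-s * t) * np.cos(2 * np.pi * f * t)
        for f, s in zip(f_true, sigma_true))
x += 0.02 * rng.standard_normal(t.size)          # measurement noise

# Non-parametric: periodogram PSD -- at this record length the two modes
# typically merge into a single broad peak.
f_psd, Pxx = periodogram(x, fs=fs)
print("periodogram peak [Hz]:", f_psd[np.argmax(Pxx)])

# Parametric (ESPRIT): the shift invariance of the signal subspace gives the
# discrete-time poles, hence frequencies and decay rates, directly.
p = 2                                 # number of real modes -> 2p complex poles
L = len(x) // 2
H = np.lib.stride_tricks.sliding_window_view(x, L).T    # L x (N-L+1) data matrix
U, _, _ = np.linalg.svd(H, full_matrices=False)
Us = U[:, :2 * p]                                       # signal subspace
Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]   # rotational invariance
z = np.linalg.eigvals(Phi)                              # discrete-time poles
f_est = np.abs(np.angle(z)) * fs / (2 * np.pi)
sigma_est = -np.log(np.abs(z)) * fs
print("ESPRIT frequencies [Hz]:", np.sort(np.unique(np.round(f_est, 3))))
print("ESPRIT decay rates [1/s]:", np.sort(np.unique(np.round(sigma_est, 3))))
```

    In this configuration the subspace estimate typically recovers both frequencies and their decay rates, whereas the periodogram reports a single dominant peak, which is the kind of behaviour the Monte Carlo comparison in the paper quantifies.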

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Full text link
    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and in the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages.
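    As a concrete illustration of the core idea, the sketch below (assumed, self-contained NumPy code rather than any toolbox accompanying the monograph) implements the basic TT-SVD procedure: a higher-order tensor is split into a train of third-order cores by sequential truncated SVDs, and the cores are contracted back to check the approximation error.

```python
# A minimal TT-SVD sketch (assumed illustration): compress an order-4 tensor
# into tensor-train (TT) cores and rebuild it to measure the approximation error.
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k) via sequential truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    # split the allowed Frobenius-norm error evenly across the d-1 truncations
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(tensor)
    cores, r_prev = [], 1
    C = tensor.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        # smallest rank whose discarded singular-value tail stays below delta
        tail = np.sqrt(np.cumsum(S[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])   # drop the boundary ranks of size 1

# nearly rank-one test tensor, so the TT ranks stay small
rng = np.random.default_rng(1)
a, b, c, e = (rng.standard_normal(n) for n in (8, 9, 10, 11))
T = np.einsum('i,j,k,l->ijkl', a, b, c, e) + 1e-3 * rng.standard_normal((8, 9, 10, 11))
cores = tt_svd(T, eps=1e-2)
print("TT ranks:", [G.shape[2] for G in cores[:-1]])
print("relative reconstruction error:",
      np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))
```

    For tensors with genuinely low TT ranks, the cores store far fewer entries than the full array, which is the super-compression the abstract refers to.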

    Aeroelastic Wing Flutter Testing and Analysis

    Get PDF
    Integrating new under-wing stores on an aircraft modifies the wing's mass distribution (center of gravity) and moment of inertia. This effect, combined with the contribution of the aerodynamic loads, causes the natural vibration modes and frequencies to vary with dynamic pressure (a function of flight speed and altitude). This strongly nonlinear phenomenon means that, under certain dynamic-pressure conditions, two or more vibration modes that are initially orthogonal to each other couple in frequency (self-sustained resonance). This aeroelastic phenomenon is known as flutter and, unless the flight conditions change, leads to the loss of the aircraft and of the pilot's life. Furthermore, integrating new under-wing stores requires a series of processes leading to a new flight envelope within which the aircraft is guaranteed to fly safely. This study requires theoretical calculations to predict the flutter conditions, followed by validation through flight testing, known as envelope expansion. Carrying out this task safely requires highly qualified, specialized personnel and resources, whose associated costs are extraordinarily high. As a consequence, specialized companies perform these tests and keep the results as industrial secrets. All of the above explains why it is very difficult to find validated methods for processing flight data and extracting vibration parameters at different dynamic pressures. Among the published methods for identifying vibration parameters from flutter flight tests, the great majority have been verified only against theoretical models; many of them yield results that are mutually inconsistent or, when validated with real data, produce incoherent results. For this reason, the main objective was to develop robust, consistent, and repeatable techniques for processing flutter flight data. The author of this study had access to a database of flutter flight tests, courtesy of the Spanish Air Force, and is authorized by the Air Force Communications Office to publish the results of research on those data. This thesis develops two methods for processing flutter flight-test data from a sine-dwell excitation. The first is based on a mathematical model and optimization techniques; the second on deep learning. The development of both techniques began with an initial verification of different techniques documented in the scientific literature, followed by the training of the following neural networks: multilayer perceptrons, deep neural networks, and convolutional neural networks. Once a comparison baseline was established, a classical technique (based on a theoretical model and optimization) was selected which, according to its bibliographic source, had been validated with real flutter flight-test data, together with one of the trained neural networks. Building on the lessons learned, an innovative technique based on the classical theoretical-model-plus-optimization approach was developed, verified with synthetic data, and compared against the two techniques selected above.
    Finally, the three techniques were validated with real flutter flight-test data. The results are highly satisfactory and meet the objectives set at the outset. The presented techniques have been verified with synthetic data, compared with independently validated models from the literature, and validated in this study with real data. The results are consistent with expectations. The processing speed allows the data to be analyzed in real time, increasing the situational awareness of the test director and supporting the decision to continue or stop the test under hazardous conditions with greater safety.
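    To make the model-plus-optimization idea concrete, the sketch below (a simplified illustration with assumed signal parameters, not the thesis's algorithm) fits a single-mode exponentially damped sinusoid to the free decay that follows a sine-dwell excitation and extracts the modal frequency and damping ratio. Tracking how the estimated damping ratio trends toward zero as dynamic pressure increases is what flutter flight testing is ultimately after.

```python
# A minimal sketch of the model-plus-optimization idea (not the thesis's method):
# fit an exponentially damped sinusoid to the free-decay portion that follows a
# sine-dwell excitation and read off modal frequency and damping ratio.
# All signal parameters below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

fs = 200.0                        # sample rate [Hz] (assumed)
t = np.arange(0, 4, 1 / fs)
f0, zeta = 6.3, 0.035             # "true" modal frequency [Hz] and damping ratio
wn = 2 * np.pi * f0
rng = np.random.default_rng(2)
decay = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t + 0.4)
y = decay + 0.05 * rng.standard_normal(t.size)    # accelerometer-like response

def damped_cos(t, A, f, z, phi):
    """Free-decay model of a single underdamped mode."""
    w = 2 * np.pi * f
    return A * np.exp(-z * w * t) * np.cos(w * np.sqrt(1 - z**2) * t + phi)

# Initial guesses from the data: dominant FFT bin for f, small damping.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_guess = freqs[np.argmax(np.abs(np.fft.rfft(y)))]
p0 = [y[0], f_guess, 0.02, 0.0]
popt, _ = curve_fit(damped_cos, t, y, p0=p0,
                    bounds=([-10, 0.5, 0.0, -np.pi], [10, 50, 0.5, np.pi]))
A_hat, f_hat, zeta_hat, _ = popt
print(f"estimated frequency: {f_hat:.3f} Hz, damping ratio: {zeta_hat:.4f}")
# The flutter boundary is approached as the estimated damping ratio trends
# toward zero with increasing dynamic pressure, hence the value of fast,
# repeatable estimates during the test itself.
```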

    Spectral analysis of phonocardiographic signals using advanced parametric methods

    Get PDF

    Characterization, Classification, and Genesis of Seismocardiographic Signals

    Get PDF
    Seismocardiographic (SCG) signals are the acoustic and vibration signals induced by cardiac activity, measured non-invasively at the chest surface. These signals may offer a method for diagnosing and monitoring heart function. Successful classification of SCG signals in health and disease depends on accurate signal characterization and feature extraction. In this study, SCG signal features were extracted in the time, frequency, and time-frequency domains. Different methods for estimating time-frequency features of SCG were investigated. Results suggested that the polynomial chirplet transform outperformed the wavelet and short-time Fourier transforms. Many factors may contribute to increasing intrasubject SCG variability, including subject posture and respiratory phase. In this study, the effect of respiration on SCG signal variability was investigated. Results suggested that SCG waveforms can vary with lung volume, respiratory flow direction, or a combination of these criteria. SCG events were classified into groups belonging to these different respiration phases using classifiers including artificial neural networks, support vector machines, and random forests. Categorizing SCG events into groups containing similar events allows more accurate estimation of SCG features. SCG feature points were also identified from simultaneous measurements of SCG and other well-known physiologic signals, including electrocardiography, phonocardiography, and echocardiography. Future work may use this information to gain more insight into the genesis of SCG.
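    A minimal, fully synthetic sketch of the feature-then-classify pipeline is given below. It substitutes a plain short-time Fourier transform for the polynomial chirplet transform used in the study and simulated events for real SCG recordings, so every signal parameter is an assumption; it only shows the shape of the workflow (time-frequency features feeding a random forest that separates two respiration phases).

```python
# Synthetic illustration only: time-frequency features of simulated SCG-like
# events, classified into two respiration phases with a random forest.
import numpy as np
from scipy.signal import stft
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

fs = 500.0                          # sample rate [Hz] (assumed)
t = np.arange(0, 0.6, 1 / fs)       # one SCG event window of ~0.6 s
rng = np.random.default_rng(3)

def simulated_event(phase):
    """Toy SCG event: a short damped oscillation whose frequency and amplitude
    shift slightly with respiration phase (0 = inspiration, 1 = expiration)."""
    f_c = 22.0 + 3.0 * phase + rng.normal(0, 0.8)     # center frequency [Hz]
    amp = 1.0 - 0.25 * phase + rng.normal(0, 0.05)
    return (amp * np.exp(-8 * t) * np.sin(2 * np.pi * f_c * t)
            + 0.05 * rng.standard_normal(t.size))

def tf_features(x):
    """Flatten the STFT magnitude into a fixed-length feature vector."""
    _, _, Z = stft(x, fs=fs, nperseg=64)
    return np.abs(Z).ravel()

X, y = [], []
for label in (0, 1):
    for _ in range(150):
        X.append(tf_features(simulated_event(label)))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```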

    Preserved Edge Convolutional Neural Network for Sensitivity Enhancement of Deuterium Metabolic Imaging (DMI)

    Full text link
    Purpose: Common to most MRSI techniques, the spatial resolution and the minimal scan duration of Deuterium Metabolic Imaging (DMI) are limited by the achievable SNR. This work presents a deep learning method for sensitivity enhancement of DMI. Methods: A convolutional neural network (CNN) was designed to estimate the 2H-labeled metabolite concentrations from low SNR and distorted DMI FIDs. The CNN was trained with synthetic data that represent a range of SNR levels typically encountered in vivo. The estimation precision was further improved by fine-tuning the CNN with MRI-based edge-preserving regularization for each DMI dataset. The proposed processing method, PReserved Edge ConvolutIonal neural network for Sensitivity Enhanced DMI (PRECISE-DMI), was applied to simulation studies and in vivo experiments to evaluate the anticipated improvements in SNR and investigate the potential for inaccuracies. Results: PRECISE-DMI visually improved the metabolic maps of low SNR datasets, and quantitatively provided higher precision than the standard Fourier reconstruction. Processing of DMI data acquired in rat brain tumor models resulted in more precise determination of 2H-labeled lactate and glutamate + glutamine levels, at increased spatial resolution (from >8 to 2 μL) or shortened scan time (from 32 to 4 min) compared to standard acquisitions. However, rigorous SD-bias analyses showed that overuse of the edge-preserving regularization can compromise the accuracy of the results. Conclusion: PRECISE-DMI allows a flexible trade-off between enhancing the sensitivity of DMI and minimizing the inaccuracies. With typical settings, the DMI sensitivity can be improved by 3-fold while retaining the capability to detect local signal variations.
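    The sketch below illustrates only the general idea of training a small CNN on synthetic low-SNR FIDs to regress metabolite amplitudes; it is not PRECISE-DMI (in particular it omits the MRI-based edge-preserving fine-tuning), and all resonance frequencies, linewidths, and noise levels are assumed for illustration.

```python
# Assumed PyTorch sketch: train a small 1-D CNN on synthetic noisy FIDs to
# regress the underlying resonance amplitudes (a stand-in for metabolite levels).
import numpy as np
import torch
import torch.nn as nn

n_pts, n_met = 256, 3                        # FID length, number of resonances
freqs = torch.tensor([30.0, 75.0, 120.0])    # resonance frequencies [Hz] (assumed)
t = torch.arange(n_pts) / 1000.0             # dwell time 1 ms (assumed)

def synth_batch(batch):
    """Complex FIDs = sum of decaying resonances with random amplitudes + noise."""
    amps = torch.rand(batch, n_met)                        # regression targets
    phases = 2 * np.pi * freqs[None, :, None] * t[None, None, :]
    decay = torch.exp(-t / 0.05)[None, None, :]            # T2*-like decay (assumed)
    fid = (amps[:, :, None] * decay * torch.polar(torch.ones_like(phases), phases)).sum(dim=1)
    fid = fid + 0.3 * torch.complex(torch.randn(batch, n_pts), torch.randn(batch, n_pts))
    x = torch.stack([fid.real, fid.imag], dim=1)           # 2-channel real input
    return x.float(), amps

model = nn.Sequential(
    nn.Conv1d(2, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_met),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(400):                       # train on freshly synthesized data
    x, target = synth_batch(64)
    opt.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    opt.step()

x_test, target_test = synth_batch(256)
with torch.no_grad():
    err = (model(x_test) - target_test).abs().mean()
print(f"mean absolute amplitude error on held-out synthetic FIDs: {err.item():.3f}")
```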