
    Computing the Partial Correlation of ICA Models for Non-Gaussian Graph Signal Processing

    [EN] Conventional partial correlation coefficients (PCC) were extended to the non-Gaussian case, in particular to independent component analysis (ICA) models of the observed multivariate samples. The usual methods that define the pairwise connections of a graph from the precision matrix were extended accordingly. The basic idea is to replace the implicit linear estimation of conventional PCC with a nonlinear estimation (the conditional mean) under an ICA model. This better removes the correlation between a given pair of nodes that is induced by the remaining nodes, so the specific connectivity weights can be estimated more accurately. Synthetic and real data examples illustrate the approach in a graph signal processing context. This research was funded by the Spanish Administration and the European Union under grants TEC2014-58438-R and TEC2017-84743-P. Belda, J.; Vergara Domínguez, L.; Safont Armero, G.; Salazar Afanador, A. (2019). Computing the Partial Correlation of ICA Models for Non-Gaussian Graph Signal Processing. Entropy, 21(1), 1-16. https://doi.org/10.3390/e21010022
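    As a point of reference, the conventional (linear, Gaussian) partial correlation that the article generalizes is computed directly from the precision matrix. A minimal numpy sketch follows; the function name and the toy chain-graph data are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def partial_correlation(X):
        """Conventional (Gaussian) partial correlation matrix from samples.

        X: (n_samples, n_nodes) data matrix. Returns the matrix of pairwise
        partial correlations rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj),
        where Theta is the precision (inverse covariance) matrix.
        """
        cov = np.cov(X, rowvar=False)
        theta = np.linalg.inv(cov)         # precision matrix
        d = np.sqrt(np.diag(theta))
        rho = -theta / np.outer(d, d)      # normalize by the diagonal
        np.fill_diagonal(rho, 1.0)         # convention: unit diagonal
        return rho

    # Toy chain graph x0 -> x1 -> x2: x0 and x2 are marginally correlated
    # but (nearly) uncorrelated once x1 is conditioned out.
    rng = np.random.default_rng(0)
    x0 = rng.standard_normal(5000)
    x1 = x0 + 0.5 * rng.standard_normal(5000)
    x2 = x1 + 0.5 * rng.standard_normal(5000)
    P = partial_correlation(np.column_stack([x0, x1, x2]))
    ```

    The vanishing entry P[0, 2] is what identifies the missing edge between x0 and x2; the article's contribution is to keep this construction meaningful when the data are non-Gaussian by replacing the implicit linear conditional estimates with nonlinear (conditional-mean) estimates under ICA.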

    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, and more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. Comment: 15 pages, 2 PDF figures.
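    The log-mel front end mentioned above can be sketched in a few lines of numpy: frame the waveform, take magnitude FFTs, project onto a triangular mel filterbank, and take the log. The parameter values below (512-point FFT, 40 mel bands, 16 kHz) are typical choices, not prescribed by the article.

    ```python
    import numpy as np

    def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
        """Minimal log-mel feature extraction sketch (typical defaults)."""
        # Frame and window the signal, then take magnitude spectra.
        n_frames = 1 + (len(signal) - n_fft) // hop
        frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
        spectrum = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))

        # Triangular mel filterbank (HTK-style mel scale).
        def hz_to_mel(f):
            return 2595.0 * np.log10(1.0 + f / 700.0)
        def mel_to_hz(m):
            return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

        mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
        fbank = np.zeros((n_mels, n_fft // 2 + 1))
        for m in range(1, n_mels + 1):
            l, c, r = bins[m - 1], bins[m], bins[m + 1]
            fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
            fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

        # Small floor avoids log(0); result is (n_frames, n_mels).
        return np.log(spectrum @ fbank.T + 1e-10)

    # One second of a 440 Hz tone as a smoke test.
    sr = 16000
    t = np.arange(sr) / sr
    feats = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
    ```

    In practice, libraries such as librosa or torchaudio provide equivalent, better-optimized implementations; the sketch only makes explicit the pipeline the review refers to.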

    Movie Description

    Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset of transcribed ADs that are temporally aligned to full-length movies. In addition, we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. In total, the Large Scale Movie Description Challenge (LSMDC) contains a parallel corpus of 118,114 sentences and video clips from 202 movies. First, we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are indeed more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in a challenge organized in the context of the workshop "Describing and Understanding Video & The Large Scale Movie Description Challenge (LSMDC)" at ICCV 2015.