31 research outputs found

    On the minimal ranks of matrix pencils and the existence of a best approximate block-term tensor decomposition

    Full text link
    Under the action of the general linear group with tensor structure, the ranks of matrices $A$ and $B$ forming an $m \times n$ pencil $A + \lambda B$ can change, but in a restricted manner. Specifically, with every pencil one can associate a pair of minimal ranks, which is unique up to a permutation. This notion can be defined for matrix pencils and, more generally, for matrix polynomials of arbitrary degree. In this paper, we provide a formal definition of the minimal ranks, discuss their properties and the natural hierarchy they induce in a pencil space. We then show how the minimal ranks of a pencil can be determined from its Kronecker canonical form. For illustration, we classify the orbits according to their minimal ranks (under the action of the general linear group) in the case of real pencils with $m, n \le 4$. Subsequently, we show that real regular $2k \times 2k$ pencils having only complex-valued eigenvalues, which form an open positive-volume set, do not admit a best approximation (in the norm topology) on the set of real pencils whose minimal ranks are bounded by $2k-1$. Our results can be interpreted from a tensor viewpoint, where the minimal ranks of a degree-$(d-1)$ matrix polynomial characterize the minimal ranks of matrices constituting a block-term decomposition of an $m \times n \times d$ tensor into a sum of matrix-vector tensor products. Comment: This work was supported by the European Research Council under the European Programme FP7/2007-2013, Grant AdG-2013-320594 "DECODA".
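As a quick numerical illustration of the pencils in the non-existence result above (not taken from the paper), the sketch below builds a real regular $2k \times 2k$ pencil with $k = 1$ whose eigenvalues are all complex-valued; for a regular pencil with invertible $B$, the eigenvalues are those of $-B^{-1}A$:

```python
import numpy as np

# A real regular 2x2 pencil A + lam*B with only complex eigenvalues
# (the 2k x 2k situation of the abstract, here with k = 1).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # det(A + lam*B) = lam**2 + 1
B = np.eye(2)

# For invertible B, the pencil is singular exactly at the eigenvalues of -B^{-1} A.
lam = np.linalg.eigvals(-np.linalg.solve(B, A))  # purely imaginary pair, +i and -i
```

Since $\det(A + \lambda B) = \lambda^2 + 1$ has no real roots, this pencil lies in the open set of real regular pencils with only complex-valued eigenvalues discussed above.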

    Joint Majorization-Minimization for Nonnegative Matrix Factorization with the β\beta-divergence

    Full text link
    This article proposes new multiplicative updates for nonnegative matrix factorization (NMF) with the $\beta$-divergence objective function. Our new updates are derived from a joint majorization-minimization (MM) scheme, in which an auxiliary function (a tight upper bound of the objective function) is built for the two factors jointly and minimized at each iteration. This is in contrast with the classic approach, in which a majorizer is derived for each factor separately. Like that classic approach, our joint MM algorithm results in multiplicative updates that are simple to implement. However, they yield a significant reduction in computation time (for equally good solutions), in particular for some $\beta$-divergences of important applicative interest, such as the squared Euclidean distance and the Kullback-Leibler or Itakura-Saito divergences. We report experimental results using diverse datasets: face images, an audio spectrogram, hyperspectral data and song play counts. Depending on the value of $\beta$ and on the dataset, our joint MM approach can yield CPU time reductions from about $13\%$ to $78\%$ in comparison with the classic alternating scheme.
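For context, the classic alternating baseline that the joint MM scheme is compared against can be sketched as follows. This is a textbook implementation of the standard $\beta$-divergence multiplicative updates, not the authors' joint algorithm; the initialization and the small `eps` safeguard are illustrative choices:

```python
import numpy as np

def nmf_beta_mu(V, rank, beta=2.0, n_iter=200, seed=0):
    """Classic alternating multiplicative updates for beta-divergence NMF,
    V ~ W @ H with W, H nonnegative. For beta=2 this is the familiar
    Lee-Seung update for the squared Euclidean distance."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-12  # guards against division by zero
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H
```

Because each update is a ratio of nonnegative terms, nonnegativity of `W` and `H` is preserved automatically at every iteration, which is what makes multiplicative schemes so simple to implement.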

    Statistical efficiency of structured CPD estimation applied to Wiener-Hammerstein modeling

    Get PDF
    Accepted for publication in the Proceedings of the European Signal Processing Conference (EUSIPCO) 2015. The computation of a structured canonical polyadic decomposition (CPD) is useful to address several important modeling problems in real-world applications. In this paper, we consider the identification of a nonlinear system by means of a Wiener-Hammerstein model, assuming a high-order Volterra kernel of that system has been previously estimated. Such a kernel, viewed as a tensor, admits a CPD with banded circulant factors which comprise the model parameters. To estimate them, we formulate specialized estimators based on recently proposed algorithms for the computation of structured CPDs. Then, considering the presence of additive white Gaussian noise, we derive a closed-form expression for the Cramer-Rao bound (CRB) associated with this estimation problem. Finally, we assess the statistical performance of the proposed estimators via Monte Carlo simulations, by comparing their mean-square error with the CRB.
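To make the structure concrete, the following sketch (illustrative, not the paper's estimator) builds a third-order CP tensor whose factors are all circulant; the paper's kernels have banded circulant factors, but the generative mechanism is the same:

```python
import numpy as np

def circulant(c):
    """Square circulant matrix whose first column is c."""
    n = len(c)
    return np.column_stack([np.roll(c, k) for k in range(n)])

def cpd_tensor(A, B, C):
    """Rank-R CP tensor T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(1)
n = 4
A = circulant(rng.standard_normal(n))
B = circulant(rng.standard_normal(n))
C = circulant(rng.standard_normal(n))
T = cpd_tensor(A, B, C)  # a tensor admitting an all-circulant CPD
```

A structured estimator then searches only over the first columns of the factors (here $3n$ parameters) instead of the full $3n^2$ factor entries, which is what makes structured CPD estimation statistically and computationally attractive.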

    On the Accuracy of Hotelling-Type Asymmetric Tensor Deflation: A Random Tensor Analysis

    Full text link
    This work introduces an asymptotic study of Hotelling-type tensor deflation in the presence of noise, in the regime of large tensor dimensions. Specifically, we consider a low-rank asymmetric tensor model of the form $\sum_{i=1}^r \beta_i \mathcal{A}_i + \mathcal{W}$, where $\beta_i \geq 0$ and the $\mathcal{A}_i$'s are unit-norm rank-one tensors such that $\left| \langle \mathcal{A}_i, \mathcal{A}_j \rangle \right| \in [0, 1]$ for $i \neq j$, and $\mathcal{W}$ is an additive noise term. Assuming that the dominant components are successively estimated from the noisy observation and subsequently subtracted, we leverage recent advances in random tensor theory in the regime of asymptotically large tensor dimensions to analytically characterize the estimated singular values and the alignment of estimated and true singular vectors at each step of the deflation procedure. Furthermore, this result can be used to construct estimators of the signal-to-noise ratios $\beta_i$ and of the alignments between the estimated and true rank-1 signal components. Comment: Accepted at IEEE CAMSAP 2023. See also companion paper arXiv:2304.10248 for the symmetric case. arXiv admin note: text overlap with arXiv:2211.0900
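The deflation procedure analyzed above can be sketched in a few lines: estimate a rank-1 term by higher-order power iteration, subtract it, and repeat. This is a standard finite-dimensional sketch of Hotelling-type deflation, not the paper's asymptotic random-tensor analysis:

```python
import numpy as np

def rank1_power(T, n_iter=100, seed=0):
    """Alternating (higher-order) power iteration for a rank-1 term
    beta * u (x) v (x) w of an order-3 tensor T."""
    rng = np.random.default_rng(seed)
    u, v, w = (rng.standard_normal(s) for s in T.shape)
    u, v, w = (x / np.linalg.norm(x) for x in (u, v, w))
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    beta = np.einsum('ijk,i,j,k->', T, u, v, w)
    return beta, u, v, w

def deflate(T, r):
    """Hotelling-type deflation: estimate and subtract r rank-1 terms."""
    comps = []
    for _ in range(r):
        beta, u, v, w = rank1_power(T)
        comps.append((beta, u, v, w))
        T = T - beta * np.einsum('i,j,k->ijk', u, v, w)
    return comps
```

In the noisy regime studied above, each subtraction is only approximate, and the paper characterizes exactly how the estimated $\beta_i$ and the vector alignments degrade from step to step.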

    A study on tensor and matrix models for super-resolution fluorescence microscopy

    Get PDF
    Super-resolution techniques for fluorescence microscopy are invaluable tools for studying phenomena that take place at sub-cellular scales, thanks to their capability of overcoming light diffraction. Yet, achieving sufficient temporal resolution for imaging live-cell processes remains a challenging problem. Exploiting the temporal fluctuations (blinking) of fluorophores is a promising approach that allows employing standard equipment and harmless excitation levels. In this work, we study a novel constrained tensor modeling approach that takes this temporal diversity into account to estimate the spatial distribution of fluorophores and their overall intensities. We compare this approach with an also novel matrix-based formulation which promotes structured sparsity via a continuous approximation of the cardinality function, as well as with other state-of-the-art methods.

    First Brazilian Guideline on Pediatric Cardio-Oncology of the Sociedade Brasileira de Cardiologia

    Get PDF
    Sociedade Brasileira de Oncologia Pediátrica; Universidade Federal de São Paulo (UNIFESP), Instituto de Oncologia Pediátrica GRAACC; Universidade Federal de São Paulo (UNIFESP); Universidade de São Paulo, Faculdade de Medicina, Instituto do Coração do Hospital das Clínicas; Universidade Federal do Rio Grande do Sul, Hospital de Clínicas de Porto Alegre; Instituto Materno-Infantil de Pernambuco; Hospital de Base de Brasília; Universidade de Pernambuco, Hospital Universitário Oswaldo Cruz; Hospital A.C. Camargo; Hospital do Coração; Sociedade Brasileira de Cardiologia, Departamento de Cardiopatias Congênitas e Cardiologia Pediátrica; Instituto Nacional de Câncer; Hospital Pequeno Príncipe; Santa Casa de Misericórdia de São Paulo; Instituto do Câncer do Estado de São Paulo; Universidade Federal de São Paulo (UNIFESP), Departamento de Patologia; Hospital Infantil Joana de Gusmão

    The semiological profile of the patient with upper gastrointestinal bleeding

    Get PDF
    OBJECTIVE: This study aimed to describe the semiology of the patient with upper gastrointestinal bleeding, considered decisive in the evaluation of potential hemorrhagic foci. METHODOLOGY: Searches were carried out on the SciELO, LILACS, PubMed, Scopus and Google Scholar platforms using the descriptors gastrointestinal bleeding, peptic ulcerous disease and varicose hemorrhage; 35 studies were identified, of which 13 full articles were included. Of these, 5 evaluated the main etiologies, 2 the emergence of new diagnostic tests, 2 analyzed epidemiological aspects, and 1 the symptomatology associated with upper gastrointestinal bleeding. An abundance of conceptual information about the bleeding was initially observed, describing it as a common clinical disorder accompanied by numerous manifestations, since the hemorrhagic focus may occur in any portion of the gastrointestinal tract. In this study, all selected publications reported a semiological picture composed of abdominal pain, signs of hypovolemic shock and tachycardia; some reported abrupt drops in blood pressure, odynophagia, emesis, nausea and jaundice. Chronically affected patients had already presented previous episodes; given the recurrent character of the condition, it is essential to investigate the presence of varices, aortoenteric fistula, angiodysplasia and ulcerous disease. CONCLUSION: Upper gastrointestinal bleeding is the main cause of gastrointestinal tract bleeding; it most often manifests as hematemesis or melena, with a symptomatic picture that helps assess its severity and point to potential bleeding foci, contributing to the dissemination of information and to future interventions.

    Efficient derivation and use of reference Volterra filters for the evaluation of non-linear formalisms

    No full text
    The mathematical modeling of physical systems is essential for several digital signal processing (DSP) applications. In many problems faced in this context, if a model is to be useful, it must represent its physical analog with precision and possess characteristics that favour implementation, such as stability and compactness. In order to obtain a model that meets those requirements, it is indispensable to choose an appropriate formalism. 
    Regarding the modeling of (significantly) nonlinear systems, this decision is particularly challenging, since many formalisms with different properties have been proposed in the literature. Fundamentally, this is due to the absence of a complete and general theory for nonlinear systems, unlike in the linear case. In several works that deal with applications in which it is necessary to model nonlinear devices, some representation is adopted without clear, physically motivated justification. Instead, this important aspect is discussed only superficially, based on informal or heuristic reasoning. Additionally, the definition of certain structural characteristics of a model which have great influence on its performance is frequently done in a non-systematic manner, which hinders a precise comprehension of the potential of the underlying formalism. Aiming to assist the choice of an adequate formalism in DSP applications, in this work we propose a methodology for evaluating the performance of nonlinear formalisms that relies on physical considerations. For this purpose, a physical model of the system of interest is used as a reference. Specifically, the adopted strategy is based on using the Carleman bilinearization method to obtain a set of reference Volterra kernels from that model, considering typical parameter values. Once the reference kernels are obtained, we can estimate, for instance, the minimal order and memory length that a conventional Volterra filter must have in order to achieve the desired precision level, which allows us to assess whether using models of this type is feasible in terms of computational cost. When this is not the case, the information provided by the kernels may be exploited to choose another representation, such as a modular structure or an alternative Volterra structure. 
    Furthermore, the reference kernels are also useful for quantitatively evaluating the performance of the chosen structure and for comparing it with that of a conventional Volterra filter. To compute the reference kernels, an efficient algorithm implementing the Carleman method is proposed. This algorithm, together with the basic idea of the developed methodology, constitutes the main contribution of this work. As a case study, a physical model for loudspeakers available in the literature is employed to assess the suitability of several structures for modeling devices of this kind. With this example, we demonstrate the utility of the reference kernels for the aforementioned purposes.
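The conventional structure evaluated against the reference kernels can be sketched as follows: a second-order Volterra filter with memory length M, whose kernels would come from, e.g., Carleman bilinearization of a physical model. This is a generic sketch, not the thesis' algorithm:

```python
import numpy as np

def volterra2(x, h1, h2):
    """Second-order Volterra filter with memory length M:
    y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j],
    with zero initial conditions (x[n] = 0 for n < 0)."""
    M = len(h1)
    y = np.zeros(len(x))
    xp = np.concatenate([np.zeros(M - 1), x])  # zero-padded past samples
    for n in range(len(x)):
        win = xp[n:n + M][::-1]  # [x[n], x[n-1], ..., x[n-M+1]]
        y[n] = h1 @ win + win @ h2 @ win
    return y
```

The cost of the quadratic term grows as $M^2$ per sample (and as $M^p$ for order-$p$ kernels), which is exactly why estimating the minimal admissible order and memory length from reference kernels matters for assessing computational feasibility.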

    A general iterative imputation scheme with feedback control for tensor completion (IFCTC)

    No full text
    Tensors and tensor decompositions are very useful mathematical tools for representing and analyzing multidimensional data. The problem of estimating missing data in a tensor of measurements, named tensor completion, plays an important role in numerous applications. In this paper, to solve this problem, we propose a general iterative imputation scheme including a first-order feedback mechanism, aiming to improve algorithm performance. Two particularizations of this scheme, in which we apply soft and hard thresholding operators based on the Tucker model, are discussed. Then, simulation results are presented to illustrate their performance.
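A minimal version of the hard-thresholding particularization, without the first-order feedback mechanism proposed in the paper, can be sketched as follows; the truncated-HOSVD projector and the iteration count are illustrative choices:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    """Hard-thresholding operator: project T onto a Tucker model of the
    given multilinear ranks via truncated HOSVD."""
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        Us.append(U[:, :r])
    out = T
    for mode, U in enumerate(Us):  # project onto the mode subspaces
        P = U @ U.T
        out = np.moveaxis(np.tensordot(P, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

def impute(T_obs, mask, ranks, n_iter=200):
    """Plain iterative imputation: fill the holes with the low-rank
    projection, then re-impose the observed entries, and repeat."""
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iter):
        L = hosvd_truncate(X, ranks)
        X = np.where(mask, T_obs, L)  # keep data, impute the holes
    return X
```

The feedback mechanism of the paper modifies how the imputed entries are updated from one iteration to the next; in this plain sketch each iteration simply overwrites the missing entries with the current Tucker projection.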

    An algebraic solution for the Candecomp/PARAFAC decomposition with circulant factors

    Get PDF
    The Candecomp/PARAFAC decomposition (CPD) is an important mathematical tool used in several fields of application. Yet, its computation is usually performed with iterative methods which are subject to reaching local minima and to exhibiting slow convergence. In some practical contexts, the data tensors of interest admit decompositions constituted by matrix factors with particular structure. Often, such structure can be exploited for devising specialized algorithms with superior properties in comparison with general iterative methods. In this paper, we propose a novel approach for computing a circulant-constrained CPD (CCPD), i.e., a CPD of a hypercubic tensor whose factors are all circulant (and possibly tall). To this end, we exploit the algebraic structure of such a tensor, showing that the elements of its frequency-domain counterpart satisfy homogeneous monomial equations in the eigenvalues of square circulant matrices associated with its factors, which we can therefore estimate by solving these equations. Then, we characterize the sets of solutions admitted by such equations under Kruskal's uniqueness condition. Simulation results are presented, validating our approach and showing that it can help avoid typical disadvantages of iterative methods.
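The frequency-domain reformulation rests on the classical fact that every square circulant matrix is diagonalized by the DFT, so its eigenvalues are the DFT of its first column. A quick numerical check of that property (not the paper's algorithm):

```python
import numpy as np

# A square circulant C with first column c satisfies C = F^{-1} diag(fft(c)) F,
# where F is the DFT matrix; hence its eigenvalues are fft(c). The algebraic
# CCPD approach estimates precisely such eigenvalues in the frequency domain.
rng = np.random.default_rng(0)
n = 5
c = rng.standard_normal(n)
C = np.column_stack([np.roll(c, k) for k in range(n)])  # circulant from c
eig_fft = np.fft.fft(c)         # claimed eigenvalues
eig_np = np.linalg.eigvals(C)   # reference eigenvalues
```

Working with these n eigenvalues per factor (instead of the n^2 entries of each circulant matrix) is what turns the structured CPD computation into solving monomial equations.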