14,100 research outputs found

    Generalized current distribution rule

    This method determines, by inspection, the branch currents of a parallel-series network in relation to the total input current. It is particularly useful for circuits with many elements when the branch elements are described as admittances. If the element values are variable, they may be expressed as admittances to find the currents in the desired branches readily.
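    The admittance form of current division described above can be sketched as follows (a minimal illustration with hypothetical values, not the paper's generalized rule): each parallel branch carries a share of the total input current proportional to that branch's admittance.

```python
def branch_currents(i_total, admittances):
    """Split a total input current among parallel branches by admittance.

    i_total: total input current (A)
    admittances: list of branch admittances (S); complex values are allowed
    """
    y_sum = sum(admittances)
    # Each branch current is the total current scaled by the branch's
    # share of the total admittance: I_k = I_total * Y_k / sum(Y).
    return [i_total * y / y_sum for y in admittances]

# Example: 6 A into three parallel conductances of 1 S, 2 S, and 3 S.
currents = branch_currents(6.0, [1.0, 2.0, 3.0])
print(currents)  # [1.0, 2.0, 3.0]
```

    Because the rule is linear in the admittances, symbolic or variable element values can be substituted directly, which is the convenience the abstract points to.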

    RED: Deep Recurrent Neural Networks for Sleep EEG Event Detection

    The brain's electrical activity presents several short events during sleep that can be observed as distinctive micro-structures in the electroencephalogram (EEG), such as sleep spindles and K-complexes. These events have been associated with biological processes and neurological disorders, making them a research topic in sleep medicine. However, manual detection limits their study because it is time-consuming and affected by significant inter-expert variability, motivating automatic approaches. We propose a deep learning approach based on convolutional and recurrent neural networks for sleep EEG event detection called Recurrent Event Detector (RED). RED uses one of two input representations: a) the time-domain EEG signal, or b) a complex spectrogram of the signal obtained with the Continuous Wavelet Transform (CWT). Unlike previous approaches, a fixed time window is avoided and temporal context is integrated to better emulate the visual criteria of experts. When evaluated on the MASS dataset, our detectors outperform the state of the art in both sleep spindle and K-complex detection, with mean F1-scores of at least 80.9% and 82.6%, respectively. Although the CWT-domain model obtained performance similar to its time-domain counterpart, the former in principle allows a more interpretable input representation due to the use of a spectrogram. The proposed approach is event-agnostic and can be used directly to detect other types of sleep events. Comment: 8 pages, 5 figures. In proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020).
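    The complex CWT spectrogram used as the second input representation can be sketched roughly as below. This is a generic complex Morlet CWT in plain NumPy (the frequencies, number of cycles, and normalization are illustrative assumptions, not the paper's exact preprocessing): convolving the EEG trace with complex wavelets at several analysis frequencies yields a complex scales-by-time array that a CNN can consume.

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, n_cycles=7.0):
    """Complex Morlet CWT of a 1-D signal at the given analysis frequencies."""
    out = np.empty((len(freqs), len(signal)), dtype=complex)
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)           # Gaussian width in seconds
        t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        # Complex sinusoid tapered by a Gaussian envelope.
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()             # simple amplitude normalization
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

# Example: a 12 Hz "spindle-like" burst sampled at 200 Hz.
fs = 200
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 1) ** 2) / 0.05)
spec = morlet_cwt(x, fs, freqs=np.arange(8, 17))
print(spec.shape)  # (9, 400): one row per analysis frequency, one column per sample
```

    The magnitude of `spec` peaks around the 12 Hz row during the burst, which is the kind of localized time-frequency structure that makes spindles visually salient in a spectrogram.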

    A quasi-Newton approach to optimization problems with probability density constraints

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints are eliminated by working with the squares of the variables, and the resulting problem is solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
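    The squared-variable trick can be illustrated with a rough sketch (a hypothetical example, not the paper's algorithm): writing x_i = y_i**2 makes nonnegativity automatic, and normalizing by sum(y**2) additionally absorbs the sum-to-one constraint, so an off-the-shelf quasi-Newton method (here SciPy's BFGS) can run unconstrained in y. The paper instead keeps the equality constraint explicit and applies Tapia's constrained quasi-Newton theory.

```python
import numpy as np
from scipy.optimize import minimize

def minimize_on_simplex(f, n, seed=0):
    """Minimize f over the probability simplex via the squared-variable substitution."""
    rng = np.random.default_rng(seed)

    def to_simplex(y):
        x = y**2                      # nonnegativity is automatic
        return x / x.sum()            # normalization enforces sum-to-one

    # Unconstrained quasi-Newton (BFGS) in the y variables.
    res = minimize(lambda y: f(to_simplex(y)), rng.standard_normal(n), method="BFGS")
    return to_simplex(res.x)

# Example: nearest point on the simplex to a target probability vector.
target = np.array([0.5, 0.3, 0.2])
x_opt = minimize_on_simplex(lambda x: np.sum((x - target) ** 2), 3)
print(np.round(x_opt, 3))  # approximately [0.5, 0.3, 0.2]
```

    Note that the reparametrization makes the objective invariant to rescaling y, so the Hessian in y is singular along that direction; this is one reason a dedicated constrained quasi-Newton treatment, as in the paper, can be preferable.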

    Invariant and polynomial identities for higher rank matrices

    We exhibit explicit expressions, in terms of components, for discriminants, determinants, characteristic polynomials, and polynomial identities for matrices of higher rank. We define permutation tensors and, in terms of them, construct discriminants and the determinant as the discriminant of order d, where d is the dimension of the matrix. The characteristic polynomials and the Cayley-Hamilton theorem for higher-rank matrices are obtained therefrom.
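    For ordinary (rank-two) matrices, a permutation-tensor construction of the determinant reduces to the classical Levi-Civita expression, sketched here in our own notation (not necessarily the paper's):

```latex
\[
  \det A \;=\; \frac{1}{d!}\,
  \epsilon_{i_1 \cdots i_d}\,\epsilon_{j_1 \cdots j_d}\,
  A_{i_1 j_1} \cdots A_{i_d j_d}
\]
```

    Here \epsilon is the totally antisymmetric Levi-Civita symbol on d indices and repeated indices are summed. Roughly speaking, the higher-rank generalization replaces the two-index array A_{ij} by a tensor with more indices, contracted against a correspondingly larger number of permutation tensors.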