
    Learned Factor Graphs for Inference from Stationary Time Sequences

    The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative, model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are trained to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationarity of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable from small training sets, or alternatively be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to learn accurate inference from small training sets, both for sleep stage detection using the Sleep-EDF dataset and for symbol detection in digital communications with unknown channels.
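    Below is a minimal, self-contained sketch (not the authors' implementation) of the idea behind learned stationary factor graphs: a small neural network learns the per-timestep factor relating consecutive latent states and the current observation, and the standard sum-product (forward-backward) recursion is then run over the learned factors, so the same trained network handles sequences of any length. The class name FactorNet, the state cardinality and the network sizes are illustrative assumptions.

        # A minimal sketch of sum-product inference over a learned stationary
        # factor graph (illustrative only, not the authors' implementation).
        # A small neural network -- here called FactorNet, a hypothetical name --
        # replaces a hand-designed factor f(s_{t-1}, s_t, y_t); the same network
        # is reused at every time step, so the forward-backward recursion below
        # runs on sequences of arbitrary length.
        import torch
        import torch.nn as nn

        NUM_STATES = 4  # assumed cardinality of the latent state s_t


        class FactorNet(nn.Module):
            """Maps an observation y_t to a (NUM_STATES x NUM_STATES) factor table."""

            def __init__(self, obs_dim: int = 1, hidden: int = 32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(obs_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, NUM_STATES * NUM_STATES),
                )

            def forward(self, y_t: torch.Tensor) -> torch.Tensor:
                # Softmax keeps the learned factor non-negative and normalized.
                logits = self.net(y_t).flatten()
                return torch.softmax(logits, dim=0).view(NUM_STATES, NUM_STATES)


        def sum_product(factor_net: FactorNet, observations: torch.Tensor) -> torch.Tensor:
            """Forward-backward (sum-product) marginals over the latent states.

            observations: (T, obs_dim) tensor; returns (T, NUM_STATES) marginals.
            """
            T = observations.shape[0]
            factors = [factor_net(observations[t]) for t in range(T)]  # f_t[s_{t-1}, s_t]

            fwd = [torch.full((NUM_STATES,), 1.0 / NUM_STATES)]  # uniform prior
            for t in range(T):
                msg = fwd[-1] @ factors[t]            # sum over s_{t-1}
                fwd.append(msg / msg.sum())

            bwd = [torch.ones(NUM_STATES)]
            for t in reversed(range(T)):
                msg = factors[t] @ bwd[0]             # sum over s_t
                bwd.insert(0, msg / msg.sum())

            marginals = torch.stack([fwd[t + 1] * bwd[t + 1] for t in range(T)])
            return marginals / marginals.sum(dim=1, keepdim=True)


        if __name__ == "__main__":
            net = FactorNet()
            y = torch.randn(10, 1)                    # a length-10 observed sequence
            print(sum_product(net, y).shape)          # torch.Size([10, 4])

    In practice the factor network would be trained from labeled state sequences; once trained, the same recursion applies unchanged to longer or shorter observation sequences, which is the property the abstract highlights.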

    Analysis of Signal Decomposition and Stain Separation methods for biomedical applications

    Nowadays, biomedical signal processing and classification and medical image interpretation play an essential role in the detection and diagnosis of several human diseases. The problem of the high variability and heterogeneity of information extracted from digital data can be addressed with signal decomposition and stain separation techniques, which are useful approaches for highlighting hidden patterns or rhythms in biological signals and specific cellular structures in histological color images, respectively. This thesis is divided into two macro-sections. In the first part (Part I), a novel cascaded RNN model based on long short-term memory (LSTM) blocks is presented with the aim of classifying sleep stages automatically. A general workflow based on single-channel EEG signals is developed to improve the low performance in staging N1 sleep without reducing performance in the other sleep stages (i.e. Wake, N2, N3 and REM). In the same context, several signal decomposition techniques and time-frequency representations are deployed for the analysis of EEG signals. All extracted features are analyzed using a novel correlation-based timestep feature selection method, and the selected features are then fed to a bidirectional RNN model. In the second part (Part II), a fully automated method named SCAN (Stain Color Adaptive Normalization) is proposed for the separation and normalization of staining in digital pathology. This normalization system digitally and automatically standardizes, in a few seconds, the color intensity of a tissue slide with respect to that of a target image, in order to improve the pathologist’s diagnosis and increase the accuracy of computer-assisted diagnosis (CAD) systems. Multiscale evaluation and multi-tissue comparison are performed to assess the robustness of the proposed method. In addition, a stain normalization method based on a novel mathematical technique named ICD (Inverse Color Deconvolution) is developed for immunohistochemical (IHC) staining in histopathological images. In conclusion, the proposed techniques achieve satisfactory results compared to state-of-the-art methods in the same research field. The workflow proposed in this thesis and the developed algorithms can be employed for the analysis and interpretation of other biomedical signals and for digital medical image analysis.
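    As a rough illustration of Part I (not the thesis code), the sketch below defines a bidirectional LSTM that classifies a sequence of per-epoch EEG feature vectors into the five sleep stages; the feature dimension, hidden size and layer count are assumptions.

        # A minimal sketch of a bidirectional LSTM sleep stager (illustrative,
        # not the thesis implementation). It assumes each 30-second EEG epoch
        # has already been reduced to a feature vector (e.g. time-frequency
        # features) and predicts one of the five stages per epoch.
        import torch
        import torch.nn as nn

        STAGES = ["Wake", "N1", "N2", "N3", "REM"]


        class BiLSTMSleepStager(nn.Module):
            def __init__(self, feat_dim: int = 20, hidden: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                                    batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, len(STAGES))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, epochs, feat_dim) -> per-epoch stage logits
                out, _ = self.lstm(x)
                return self.head(out)


        if __name__ == "__main__":
            model = BiLSTMSleepStager()
            features = torch.randn(2, 100, 20)    # 2 recordings, 100 epochs each
            logits = model(features)              # shape (2, 100, 5)
            print(logits.argmax(dim=-1).shape)    # predicted stage index per epoch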
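    For Part II, the following sketch shows classical color deconvolution (Ruifrok & Johnston), the standard stain-separation step on which stain-normalization schemes build; the hematoxylin/eosin/residual stain matrix used here is a commonly cited reference matrix, not the adaptively estimated one described for SCAN or ICD.

        # A minimal sketch of classical stain separation by color deconvolution
        # (Ruifrok & Johnston). The stain matrix below is a standard reference
        # matrix, not the thesis' adaptively estimated one.
        import numpy as np

        # Rows: RGB optical-density directions of hematoxylin, eosin and a residual channel.
        HE_STAIN_MATRIX = np.array([
            [0.650, 0.704, 0.286],   # hematoxylin
            [0.072, 0.990, 0.105],   # eosin
            [0.268, 0.570, 0.776],   # residual
        ])


        def separate_stains(rgb: np.ndarray) -> np.ndarray:
            """Split an (H, W, 3) uint8 RGB tile into per-stain density maps."""
            od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)     # optical density
            densities = od.reshape(-1, 3) @ np.linalg.inv(HE_STAIN_MATRIX)
            return densities.reshape(rgb.shape)                      # (H, W, 3) stain maps


        if __name__ == "__main__":
            tile = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
            stains = separate_stains(tile)
            print(stains.shape)   # (64, 64, 3): hematoxylin, eosin, residual densities

    Normalization methods then remap the separated density maps to the statistics of a target image before recomposing the RGB tile, which is the role the abstract assigns to SCAN.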