
    Intestinal Parasites Classification Using Deep Belief Networks

    Currently, approximately 4 billion people are infected by intestinal parasites worldwide. Diseases caused by such infections constitute a public health problem in most tropical countries, leading to physical and mental disorders, and even death in children and immunodeficient individuals. Although prone to high error rates, human visual inspection still accounts for the vast majority of clinical diagnoses. In recent years, some works have addressed intelligent computer-aided intestinal parasite classification, but they usually suffer from misclassification due to similarities between parasites and fecal impurities. In this paper, we introduce Deep Belief Networks to the context of automatic intestinal parasite classification. Experiments conducted over three datasets composed of eggs, larvae, and protozoa provided promising results, even in the presence of unbalanced classes and fecal impurities.
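
    A minimal sketch of the general approach (stacked restricted Boltzmann machines pretrained greedily, then a supervised classifier on top) is shown below. It illustrates the technique only, not the authors' architecture; the synthetic data, layer sizes, and the use of scikit-learn are assumptions.

        # Hypothetical DBN-style pipeline: two stacked RBMs pretrained layer by layer,
        # then a logistic-regression classifier. X/y are synthetic stand-ins for flattened,
        # [0, 1]-scaled microscopy patches and parasite class labels.
        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        X = rng.random((500, 64 * 64))
        y = rng.integers(0, 3, size=500)

        dbn = Pipeline([
            ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.01, n_iter=20, random_state=0)),
            ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.01, n_iter=20, random_state=0)),
            # class_weight="balanced" is one simple way to cope with unbalanced classes
            ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
        ])
        dbn.fit(X, y)
        print("training accuracy:", dbn.score(X, y))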

    The Impact of 18 Ancestral and Horizontally-Acquired Regulatory Proteins upon the Transcriptome and sRNA Landscape of Salmonella enterica serovar Typhimurium

    We know a great deal about the genes used by the model pathogen Salmonella enterica serovar Typhimurium to cause disease, but less about global gene regulation. New tools for studying transcripts at the single-nucleotide level now offer an unparalleled opportunity to understand the bacterial transcriptome, and the expression of the small RNAs (sRNAs) and coding genes responsible for the establishment of infection. Here, we define the transcriptomes of 18 mutants lacking virulence-related global regulatory systems that modulate the expression of the SPI1 and SPI2 Type 3 secretion systems of S. Typhimurium strain 4/74. Using infection-relevant growth conditions, we identified a total of 1257 coding genes that are controlled by one or more regulatory systems, including a sub-class of genes that reflects a new level of cross-talk between SPI1 and SPI2. We directly compared the roles played by the major transcriptional regulators in the expression of sRNAs, and discovered that the RpoS (σ38) sigma factor modulates the expression of 23% of sRNAs, many more than other regulatory systems. The impact of the RNA chaperone Hfq upon the steady-state levels of 280 sRNA transcripts is described, and we found 13 sRNAs that are co-regulated with SPI1 and SPI2 virulence genes. We report the first example of an sRNA, STnc1480, that is subject to silencing by H-NS and subsequent counter-silencing by PhoP and SlyA. The data for these 18 regulatory systems are now available to the bacterial research community in a user-friendly online resource, SalComRegulon.

    WiSeBE: Window-based Sentence Boundary Evaluation

    Sentence Boundary Detection (SBD) has been a major research topic since Automatic Speech Recognition transcripts have been used for further Natural Language Processing tasks like Part-of-Speech Tagging, Question Answering or Automatic Summarization. But what about evaluation? Are standard evaluation metrics like precision, recall, F-score or classification error, and, more importantly, evaluating an automatic system against a single reference, enough to conclude how well an SBD system performs given the final application of the transcript? In this paper we propose Window-based Sentence Boundary Evaluation (WiSeBE), a semi-supervised metric for evaluating Sentence Boundary Detection systems based on multi-reference (dis)agreement. We evaluate and compare the performance of different SBD systems over a set of YouTube transcripts using WiSeBE and standard metrics. This double evaluation gives an understanding of how WiSeBE is a more reliable metric for the SBD task. Comment: In proceedings of the 17th Mexican International Conference on Artificial Intelligence (MICAI), 2018.
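
    A rough sketch of the multi-reference, window-based scoring idea is given below. It is an illustration only, not the exact WiSeBE formulation from the paper: the boundary positions, window size, and weighting scheme are assumptions.

        # Toy window-based agreement score: each reference boundary is weighted by the
        # fraction of annotators that placed a boundary within +/- `window` tokens of it,
        # and the hypothesis is scored by how much of that agreement weight it recovers.
        from typing import List, Set

        def window_agreement_score(hypothesis: Set[int],
                                   references: List[Set[int]],
                                   window: int = 2) -> float:
            union = set().union(*references)

            def weight(pos: int) -> float:
                # fraction of references with a boundary within the window of `pos`
                return sum(any(abs(pos - b) <= window for b in ref)
                           for ref in references) / len(references)

            weights = {pos: weight(pos) for pos in union}
            if not weights:
                return 0.0
            recovered = sum(w for pos, w in weights.items()
                            if any(abs(pos - h) <= window for h in hypothesis))
            return recovered / sum(weights.values())

        # Three human references and one automatic SBD output (token indices of boundaries)
        refs = [{10, 25, 40}, {10, 26, 41}, {11, 40}]
        hyp = {9, 27, 39}
        print(round(window_agreement_score(hyp, refs), 3))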

    Backprojection for Training Feedforward Neural Networks in the Input and Feature Spaces

    After the tremendous development of neural networks trained by backpropagation, it is a good time to develop other algorithms for training neural networks to gain more insights into networks. In this paper, we propose a new algorithm for training feedforward neural networks which is faster than backpropagation. This method is based on projection and reconstruction where, at every layer, the projected data and reconstructed labels are forced to be similar and the weights are tuned accordingly, layer by layer. The proposed algorithm can be used for both input and feature spaces, named backprojection and kernel backprojection, respectively. This algorithm gives insight into networks from a projection-based perspective. The experiments on synthetic datasets show the effectiveness of the proposed method. Comment: Accepted (to appear) in International Conference on Image Analysis and Recognition (ICIAR) 2020, Springer.
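
    A heavily simplified, linear toy version of the layer-wise project-data/backproject-labels idea is sketched below. It is not the paper's algorithm (which handles nonlinear layers and a kernel variant); the two-layer setup, the alternating least-squares updates, and the synthetic data are all assumptions.

        # Toy alternating scheme on a two-layer linear network: backproject the labels
        # through the later layer to get a hidden-layer target, fit the earlier layer to
        # that target, then refit the later layer on the projected data.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))                                       # input data
        Y = X @ rng.normal(size=(10, 3)) + 0.1 * rng.normal(size=(200, 3))   # targets

        d_hidden = 6
        W2 = rng.normal(size=(d_hidden, 3))     # last layer, randomly initialised

        for _ in range(10):
            H_target = Y @ np.linalg.pinv(W2)                  # backprojected labels
            W1, *_ = np.linalg.lstsq(X, H_target, rcond=None)  # tune layer 1
            H = X @ W1                                         # projected data
            W2, *_ = np.linalg.lstsq(H, Y, rcond=None)         # tune layer 2

        print("relative fit error:", np.linalg.norm(X @ W1 @ W2 - Y) / np.linalg.norm(Y))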

    CubeNet: Equivariance to 3D Rotation and Translation

    3D Convolutional Neural Networks are sensitive to transformations applied to their input. This is a problem because a voxelized version of a 3D object, and its rotated clone, will look unrelated to each other after passing through the final layer of a network. Instead, an idealized model would preserve a meaningful representation of the voxelized object, while explaining the pose-difference between the two inputs. An equivariant representation vector has two components: the invariant identity part, and a discernible encoding of the transformation. Models that can't explain pose-differences risk "diluting" the representation in pursuit of optimizing a classification or regression loss function. We introduce a Group Convolutional Neural Network with linear equivariance to translations and right-angle rotations in three dimensions. We call this network CubeNet, reflecting its cube-like symmetry. By construction, this network helps preserve a 3D shape's global and local signature as it is transformed through successive layers. We apply this network to a variety of 3D inference problems, achieving state-of-the-art on the ModelNet10 classification challenge, and comparable performance on the ISBI 2012 Connectome Segmentation Benchmark. To the best of our knowledge, this is the first 3D rotation-equivariant CNN for voxel representations. Comment: Preprint.
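
    A minimal sketch of the underlying group-convolution idea, restricted to the 4 right-angle rotations about one axis (a small subgroup of the 24 cube rotations), is shown below. It is an illustration only, not the CubeNet implementation; the tensor shapes and the use of PyTorch are assumptions.

        # Lifting layer of a toy group convolution: convolve with rotated copies of one
        # filter and stack the responses along a group axis. Rotating the input then
        # rotates the response maps and permutes them along that axis, rather than
        # changing them arbitrarily; pooling over the group axis gives invariance.
        import torch
        import torch.nn.functional as F

        def z_rotations(filt: torch.Tensor):
            # filt: (out_c, in_c, D, H, W); rotate the H-W plane by 0/90/180/270 degrees
            return [torch.rot90(filt, k, dims=(3, 4)) for k in range(4)]

        def group_lift_conv(x: torch.Tensor, filt: torch.Tensor) -> torch.Tensor:
            # x: (N, in_c, D, H, W) -> (N, 4, out_c, D', H', W')
            return torch.stack([F.conv3d(x, f) for f in z_rotations(filt)], dim=1)

        x = torch.randn(1, 1, 8, 8, 8)
        filt = torch.randn(4, 1, 3, 3, 3)
        y = group_lift_conv(x, filt)
        invariant = y.max(dim=1).values      # orientation pooling
        print(y.shape, invariant.shape)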

    Physics-Informed Echo State Networks for Chaotic Systems Forecasting

    We propose a physics-informed Echo State Network (ESN) to predict the evolution of chaotic systems. Compared to conventional ESNs, the physics-informed ESNs are trained to solve supervised learning tasks while ensuring that their predictions do not violate physical laws. This is achieved by introducing an additional loss function during the training of the ESNs, which penalizes non-physical predictions without the need for any additional training data. This approach is demonstrated on a chaotic Lorenz system, where the physics-informed ESNs improve the predictability horizon by about two Lyapunov times as compared to conventional ESNs. The proposed framework shows the potential of using machine learning combined with prior physical knowledge to improve the time-accurate prediction of chaotic dynamical systems. Comment: 7 pages, 3 figures.
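
    A sketch of the kind of physics-based penalty described here, for the Lorenz system, is given below. The paper's exact loss and training scheme are not reproduced; the finite-difference residual, the parameter values, and the Euler sanity check are assumptions.

        # Physics penalty sketch: compare the finite-difference derivative of a predicted
        # Lorenz trajectory against the Lorenz right-hand side; the mean squared residual
        # can be added to the usual ESN training loss.
        import numpy as np

        def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state[..., 0], state[..., 1], state[..., 2]
            return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=-1)

        def physics_loss(pred, dt):
            """pred: (T, 3) predicted trajectory sampled every dt time units."""
            dpred_dt = (pred[1:] - pred[:-1]) / dt          # forward-difference derivative
            residual = dpred_dt - lorenz_rhs(pred[:-1])
            return np.mean(residual ** 2)

        # Sanity check: a trajectory integrated with the same scheme has (near-)zero penalty
        traj, dt = [np.array([1.0, 1.0, 1.0])], 1e-3
        for _ in range(1000):
            traj.append(traj[-1] + dt * lorenz_rhs(traj[-1]))   # Euler integration
        print("physics penalty:", physics_loss(np.array(traj), dt))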

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g., of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
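
    A toy, predictive-coding-style recognizer for a generative RNN is sketched below, in the spirit of the rRNN described here. It is not the paper's derived Bayesian update: the fixed gain K, the network sizes, and the synthetic data are assumptions rather than quantities computed from the model's uncertainties.

        # Generative RNN: x_{t+1} = tanh(W x_t), observed through y_t = C x_t + noise.
        # The recognizer alternates a prediction message and an error-driven correction.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 20, 5
        W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # generative RNN weights
        C = rng.normal(size=(m, n))                           # observation matrix

        def generate(x0, T, obs_noise=0.05):
            xs, ys = [x0], []
            for _ in range(T):
                xs.append(np.tanh(W @ xs[-1]))
                ys.append(C @ xs[-1] + obs_noise * rng.normal(size=m))
            return np.array(xs[1:]), np.array(ys)

        def recognize(ys, K=0.1):
            x_hat, estimates = np.zeros(n), []
            for y in ys:
                x_pred = np.tanh(W @ x_hat)       # prediction message
                err = y - C @ x_pred              # prediction-error message
                x_hat = x_pred + K * (C.T @ err)  # error-driven correction
                estimates.append(x_hat)
            return np.array(estimates)

        x_true, y_obs = generate(rng.normal(size=n), T=200)
        x_est = recognize(y_obs)
        print("mean squared decoding error:", np.mean((x_est - x_true) ** 2))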

    Evolutionary connectionism: algorithmic principles underlying the evolution of biological organisation in evo-devo, evo-eco and evolutionary transitions

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by adapting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.
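
    The connectionist mechanism invoked here can be made concrete with a minimal associative-memory sketch: simple Hebbian adjustment of pairwise connections between individually-simple units produces system-level pattern recall. The sketch below is illustrative only and is not taken from the paper; the sizes, patterns, and recall loop are assumptions.

        # Hopfield-style associative memory: Hebbian learning strengthens connections
        # between co-active units; repeated updates then recover a stored system-level
        # pattern from a corrupted probe.
        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_patterns = 100, 5
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

        W = np.zeros((n_units, n_units))
        for p in patterns:
            W += np.outer(p, p) / n_units     # incremental Hebbian update
        np.fill_diagonal(W, 0.0)

        probe = patterns[0].copy()
        probe[:20] *= -1                      # corrupt 20% of the units
        state = probe
        for _ in range(10):
            state = np.sign(W @ state)
            state[state == 0] = 1
        print("overlap with stored pattern:", np.mean(state == patterns[0]))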