191 research outputs found

    Fault Tolerant Training for Optimal Interpolative Nets

    The optimal interpolative (OI) classification network is extended to include fault tolerance, making the network more robust to the loss of a neuron. The OI net has the characteristic that the training data are fit with no more neurons than necessary. Fault tolerance further reduces the number of neurons generated during the learning procedure while maintaining the generalization capabilities of the network. The learning algorithm for the fault-tolerant OI net is presented in a recursive format, allowing for relatively short training times. A simulated fault-tolerant OI net is tested on a navigation satellite selection problem.

    A Fault-Tolerant Optimal Interpolative Net

    The optimal interpolative (OI) classification network is extended to include fault tolerance, making the network more robust to the loss of a neuron. The OI Net has the characteristic that the training data are fit with no more neurons than necessary. Fault tolerance further reduces the number of neurons generated during the learning procedure while maintaining the generalization capabilities of the network. The learning algorithm for the fault-tolerant OI Net is presented in a recursive format, allowing for relatively short training times. A simulated fault-tolerant OI Net is tested on a navigation satellite selection problem.
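
    The recursive construction is not spelled out in these abstracts, so the following is only a minimal illustrative sketch of the general idea in Python, not the published OI net algorithm: a toy interpolative classifier that recruits a hidden (prototype) neuron only when the current network misclassifies a training sample, plus a drop-one-neuron test in the spirit of the fault-tolerance criterion. All names, the Gaussian response, and the width parameter are assumptions for illustration.

        import numpy as np

        def rbf(x, c, width=1.0):
            # Gaussian hidden-neuron response of prototype c to input x.
            return np.exp(-np.sum((x - c) ** 2) / (2.0 * width ** 2))

        def train_greedy(X, y, width=1.0):
            # Recruit a prototype neuron only for samples the current
            # network misclassifies, so the data are fit with few neurons.
            protos, labels = [], []
            for x, t in zip(X, y):
                if protos:
                    scores = [rbf(x, c, width) for c in protos]
                    if labels[int(np.argmax(scores))] == t:
                        continue  # already correct: no new neuron needed
                protos.append(np.asarray(x, dtype=float))
                labels.append(t)
            return np.array(protos), np.array(labels)

        def predict(x, protos, labels, width=1.0):
            return labels[int(np.argmax([rbf(x, c, width) for c in protos]))]

        def accuracy_without_neuron(X, y, protos, labels, drop, width=1.0):
            # Drop-one-neuron check: how much does accuracy degrade if a
            # single hidden neuron is lost?
            keep = [i for i in range(len(protos)) if i != drop]
            if not keep:
                return 0.0
            p, l = protos[keep], labels[keep]
            return float(np.mean([predict(x, p, l, width) == t for x, t in zip(X, y)]))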

    Navigation Satellite Selection Using Neural Networks

    The application of neural networks to optimal satellite subset selection for navigation use is discussed. The methods presented in this paper are general enough to be applicable regardless of how many satellite signals are being processed by the receiver. The optimal satellite subset is chosen by minimizing a quantity known as Geometric Dilution of Precision (GDOP), which is given by the trace of the inverse of the measurement matrix. An artificial neural network learns the functional relationships between the entries of a measurement matrix and the eigenvalues of its inverse, and thus generates GDOP without inverting a matrix. Simulation results are given, and the computational benefit of neural network-based satellite selection is discussed.
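
    For reference, the conventional computation that the network is trained to avoid can be sketched in a few lines of Python. This assumes the usual GPS formulation, in which GDOP is the square root of the trace of (H^T H)^(-1) for a geometry matrix H whose rows are [unit line-of-sight vector, 1], one row per visible satellite; the brute-force subset search below is the baseline that neural selection is meant to speed up.

        from itertools import combinations
        import numpy as np

        def gdop(los_units):
            # los_units: (n, 3) array of receiver-to-satellite unit vectors.
            H = np.hstack([los_units, np.ones((len(los_units), 1))])
            return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

        def best_subset(los_units, k=4):
            # Exhaustively score every k-satellite subset and keep the one
            # with the lowest GDOP -- the costly step a trained net can skip.
            best_g, best_idx = np.inf, None
            for idx in combinations(range(len(los_units)), k):
                try:
                    g = gdop(los_units[list(idx)])
                except np.linalg.LinAlgError:  # degenerate geometry, skip
                    continue
                if g < best_g:
                    best_g, best_idx = g, idx
            return best_idx, best_g

        # Example: choose the best 4 of 6 randomly placed satellites.
        rng = np.random.default_rng(0)
        v = rng.normal(size=(6, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        print(best_subset(v))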

    Time series prediction using supervised learning and tools from chaos theory

    A thesis submitted to the Faculty of Science and Computing, University of Luton, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. In this work, methods for performing time series prediction on complex real-world time series are examined. In particular, series exhibiting non-linear or chaotic behaviour are selected for analysis. A range of methodologies based on Takens' embedding theorem are considered and compared with more conventional methods. A novel combination of methods for determining the optimal embedding parameters is employed and tried out with multivariate financial time series data and with a complex series derived from an experiment in biotechnology. The results show that this combination of techniques provides accurate results while dramatically reducing the time required to produce predictions and analyses, and eliminating a range of parameters that had hitherto been fixed empirically. The architecture and methodology of the prediction software developed are described along with design decisions and their justification. Sensitivity analyses are employed to justify the use of this combination of methods, and comparisons are made with more conventional predictive techniques and trivial predictors, showing the superiority of the results generated by the work detailed in this thesis.
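
    As a concrete illustration of the embedding step, here is a minimal Python sketch of Takens-style time-delay reconstruction together with a nearest-neighbour one-step forecaster of the kind usable as a simple baseline. The delay tau and dimension m are taken as given here; determining them automatically is precisely what the thesis's combination of methods addresses, so the defaults below are purely illustrative.

        import numpy as np

        def delay_embed(series, m, tau):
            # Reconstruct state vectors [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]
            # from a scalar series, per Takens' embedding theorem.
            n = len(series) - (m - 1) * tau
            return np.stack([series[i * tau : i * tau + n] for i in range(m)], axis=1)

        def knn_forecast(series, m=3, tau=2, k=3):
            # Predict the next value by averaging what followed the k past
            # states most similar to the current reconstructed state.
            s = np.asarray(series, dtype=float)
            X = delay_embed(s, m, tau)
            query, history = X[-1], X[:-1]
            targets = s[(m - 1) * tau + 1 :]  # value following each past state
            d = np.linalg.norm(history - query, axis=1)
            return float(targets[np.argsort(d)[:k]].mean())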

    Graph Priors, Optimal Transport, and Deep Learning in Biomedical Discovery

    Recent advances in biomedical data collection allow the assembly of massive datasets measuring thousands of features in thousands to millions of individual cells. These data have the potential to advance our understanding of biological mechanisms at a previously impossible resolution. However, there are few methods for understanding data of this scale and type. While neural networks have made tremendous progress on supervised learning problems, much work remains to make them useful for discovery in data whose supervision is more difficult to represent. The flexibility and expressiveness of neural networks are sometimes a hindrance in these less supervised domains, as is the case when extracting knowledge from biomedical data. One type of prior knowledge that is more common in biological data comes in the form of geometric constraints. In this thesis, we aim to leverage this geometric knowledge to create scalable and interpretable models for understanding such data. Encoding geometric priors into neural network and graph models allows us to characterize the models’ solutions as they relate to the fields of graph signal processing and optimal transport. These links allow us to understand and interpret this datatype. We divide this work into three sections. The first borrows concepts from graph signal processing to construct more interpretable and performant neural networks by constraining and structuring the architecture. The second borrows from the theory of optimal transport to perform anomaly detection and trajectory inference efficiently and with theoretical guarantees. The third examines how to compare distributions over an underlying manifold, which can be used to understand how different perturbations or conditions relate; for this we design an efficient approximation of optimal transport based on diffusion over a joint cell graph. Together, these works utilize our prior understanding of the data geometry to create more useful models of the data. We apply these methods to molecular graphs, images, single-cell sequencing, and health record data.
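
    As a rough illustration of the last idea, the Python sketch below compares two distributions supported on a shared k-nearest-neighbour graph by diffusing both with a random-walk operator and summing scale-weighted L1 differences, avoiding an explicit transport solve. The graph construction, scales, and weights are illustrative assumptions, not the thesis's actual algorithm.

        import numpy as np
        from scipy.spatial import cKDTree

        def knn_diffusion_operator(points, k=8):
            # Row-stochastic random-walk operator of a symmetrized k-NN graph.
            _, idx = cKDTree(points).query(points, k=k + 1)  # col 0 = the point itself
            n = len(points)
            A = np.zeros((n, n))
            for i, nbrs in enumerate(idx):
                A[i, nbrs[1:]] = 1.0
            A = np.maximum(A, A.T)
            return A / A.sum(axis=1, keepdims=True)

        def multiscale_diffusion_distance(points, mu, nu, scales=(1, 2, 4, 8), k=8):
            # Diffuse both distributions to each scale and accumulate weighted
            # L1 differences -- a cheap surrogate for earth mover's distance
            # on the graph.
            P = knn_diffusion_operator(points, k)
            a, b, done, dist = mu.copy(), nu.copy(), 0, 0.0
            for t in scales:
                for _ in range(t - done):
                    a, b = a @ P, b @ P
                done = t
                dist += 2.0 ** -t * np.abs(a - b).sum()
            return dist

        # Example: two cell populations in a shared feature space.
        rng = np.random.default_rng(1)
        pts = rng.normal(size=(200, 2))
        mu = np.zeros(200); mu[:100] = 1 / 100
        nu = np.zeros(200); nu[100:] = 1 / 100
        print(multiscale_diffusion_distance(pts, mu, nu))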

    Parametric Human Movements: Learning, Synthesis, Recognition, and Tracking


    Dynamic Fuzzy Rule Interpolation
