
    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving some problems from matrix algebra, with particular reference to the computation of generalized inverses and the solution of matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished. In this way, computational advantages over existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and a comparison with the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared to the Zhang model.
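    To illustrate the general idea of neural dynamics for constant-matrix inversion, here is a minimal sketch (not the dissertation's specific ZNN or hybrid models, which are built in Matlab Simulink): a gradient-type neural dynamic dX/dt = -γ Aᵀ(AX - I), whose equilibrium is X* = A⁻¹, integrated with a simple Euler discretization. The gain `gamma`, step `dt`, and iteration count are illustrative choices.

```python
import numpy as np

def gnn_matrix_inverse(A, gamma=50.0, dt=1e-3, steps=2000):
    """Euler-integrate the gradient neural dynamics
        dX/dt = -gamma * A.T @ (A @ X - I),
    whose unique equilibrium for invertible A satisfies A @ X = I."""
    n = A.shape[0]
    X = np.zeros((n, n))       # neural state, initialized at zero
    I = np.eye(n)
    for _ in range(steps):
        X = X - dt * gamma * (A.T @ (A @ X - I))
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = gnn_matrix_inverse(A)      # converges toward inv(A) = [[0.3, -0.1], [-0.2, 0.4]]
```

Stability of the Euler scheme requires `dt * gamma` to be small relative to the largest eigenvalue of AᵀA; the continuous-time dynamics converge for any γ > 0, which is the parallel, real-time appeal of such models.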

    Essays on the economics of networks

    Networks (collections of nodes or vertices and graphs capturing their linkages) are a common object of study across a range of fields including economics, statistics and computer science. Network analysis is often based around capturing the overall structure of the network by some reduced set of parameters. Canonically, this has focused on the notion of centrality. There are many measures of centrality, mostly based around statistical analysis of the linkages between nodes on the network. However, another common approach has been through the use of eigenfunction analysis of the centrality matrix. My thesis focuses on eigencentrality as a property, with particular focus on equilibrium behaviour when the network structure is fixed. This occurs when nodes are either passive, as in web searches or queueing models, or when they represent active optimizing agents in network games. The major contribution of my thesis is the application of relatively recent innovations in matrix derivatives to centrality measurements and to equilibria within games that are functions of those measurements. I present a series of new results on the stability of eigencentrality measures and provide some examples of applications to a number of real-world problems.
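    For concreteness, eigencentrality is the dominant eigenvector of the adjacency matrix, conventionally computed by power iteration. The sketch below is a generic illustration of that computation, not the thesis's methods; the shift by the identity is a standard trick (same eigenvectors, strictly dominant top eigenvalue) that makes the iteration converge even on bipartite graphs.

```python
import numpy as np

def eigencentrality(adj, tol=1e-10, max_iter=1000):
    """Eigenvector centrality via power iteration.

    Iterating on (adj + I) rather than adj preserves the eigenvectors
    while making the dominant eigenvalue strictly largest in magnitude,
    so the iteration also converges on bipartite graphs (where adj has
    eigenvalues +lambda_max and -lambda_max of equal magnitude)."""
    n = adj.shape[0]
    M = adj + np.eye(n)
    x = np.ones(n) / np.sqrt(n)        # uniform starting vector
    for _ in range(max_iter):
        y = M @ x
        y = y / np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return x

# Path graph 0-1-2: the middle node should be the most central.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
c = eigencentrality(A)   # approaches (1, sqrt(2), 1) / 2
```

The stability questions the thesis studies concern how such a vector moves as entries of the adjacency matrix are perturbed, which is where matrix derivatives enter.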

    Efficient Resource Allocation and Spectrum Utilisation in Licensed Shared Access Systems


    Quantitative analysis of algorithms for compressed signal recovery

    Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled nonadaptive linear measurements taken at a rate proportional to the signal's true information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been established, both theoretically and empirically, that certain optimization algorithms are able to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007), which is the focus of this thesis, is an established CS recovery algorithm which is known to be effective in practice, both in terms of recovery performance and computational efficiency. However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case recovery conditions have not yet been quantified in terms of the sparsity/undersampling trade-off, and there is a need for average-case analysis in order to understand the behaviour of the algorithm in practice. In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed. Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting the realistic average-case assumption that the underlying signal and measurement matrix are independent. We obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT (NIHT) (Blumensath and Davies, 2010).
    For both stepsize schemes, comparison with previous results within this framework shows a substantial quantitative improvement. We also extend our analysis to a related algorithm which exploits the assumption that the underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010). We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional asymptotic framework, deriving bounds on the oversampling rate relative to the sparsity for which recovery is guaranteed. Our results, which are the first in the phase transition framework for tree-based CS, show a further significant improvement over results for the standard sparsity model. We also propose a dynamic programming algorithm which is guaranteed to compute an exact tree projection in low-order polynomial time.

    Model-Oriented Data Analysis; Proceedings of the 3rd International Workshop in Petrodvorets, Russia, May 25-30 1992

    This volume contains the majority of papers presented at the Third Model-Oriented Data Analysis Workshop/Conference (MODA3) in Petrodvorets, Russia on 25-30 May 1992. As with the previous two workshops in 1987 and 1990, the conference covers theoretical and applied statistics with a heavy emphasis on experimental design. Under these broad headings other specialised topics can be mentioned, particularly quality improvement and optimization. This proceedings volume consists of three main parts: I. Optimal Design, II. Statistical Applications, III. Stochastic Optimization. A constant theme at MODA conferences is the subject of optimal experimental design. This was well represented at MODA3, and readers will find important contributions. In recent years the models investigated under this heading have become progressively more complex and adaptive.

    Selected Topics in Gravity, Field Theory and Quantum Mechanics

    Quantum field theory has achieved some extraordinary successes over the past sixty years; however, it retains a set of challenging problems. It is not yet able to describe gravity in a mathematically consistent manner. CP violation remains unexplained. Grand unified theories have been eliminated by experiment, and a viable unification model has yet to replace them. Even the highly successful quantum chromodynamics, despite significant computational achievements, struggles to provide theoretical insight into the low-energy regime of quark physics, where the nature and structure of hadrons are determined. The only proposal for resolving the fine-tuning problem, low-energy supersymmetry, has been eliminated by results from the LHC. Since mathematics is the true and proper language for quantitative physical models, we expect new mathematical constructions to provide insight into physical phenomena and fresh approaches for building physical theories.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume