290 research outputs found

    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question---correlation, predictability, predictive cost, observer synchronization, and the like---induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic. Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
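    As a concrete illustration of the Drazin inverse mentioned above, the following NumPy sketch (not the paper's meromorphic-calculus construction) computes A^D for a small singular, nondiagonalizable matrix via the known identity A^D = A^l (A^(2l+1))^+ A^l for any l >= ind(A), and checks the defining relations. The example matrix and the helper name drazin are ours for illustration.

```python
import numpy as np

# Minimal numerical sketch: Drazin inverse via the identity
#   A^D = A^l (A^{2l+1})^+ A^l   for any l >= ind(A);
# here we simply take l = n, the matrix dimension.
def drazin(A):
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

# A singular, nondiagonalizable example: eigenvalue 0 has index 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = drazin(A)

k = 3  # any k >= ind(A)
print(np.allclose(A @ AD, AD @ A))                      # commutes with A
print(np.allclose(AD @ A @ AD, AD))                     # {2}-inverse property
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                  np.linalg.matrix_power(A, k)))        # A^{k+1} A^D = A^k
```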

    Beyond the Spectral Theorem: Spectrally Decomposing Arbitrary Functions of Nondiagonalizable Operators

    Nonlinearities in finite dimensions can be linearized by projecting them into infinite dimensions. Unfortunately, often the linear operator techniques that one would then use simply fail since the operators cannot be diagonalized. This curse is well known. It also occurs for finite-dimensional linear operators. We circumvent it by developing a meromorphic functional calculus that can decompose arbitrary functions of nondiagonalizable linear operators in terms of their eigenvalues and projection operators. It extends the spectral theorem of normal operators to a much wider class, including circumstances in which poles and zeros of the function coincide with the operator spectrum. By allowing the direct manipulation of individual eigenspaces of nonnormal and nondiagonalizable operators, the new theory avoids spurious divergences. As such, it yields novel insights and closed-form expressions across several areas of physics in which nondiagonalizable dynamics are relevant, including memoryful stochastic processes, open nonunitary quantum systems, and far-from-equilibrium thermodynamics. The technical contributions include the first full treatment of arbitrary powers of an operator. In particular, we show that the Drazin inverse, previously only defined axiomatically, can be derived as the negative-one power of singular operators within the meromorphic functional calculus, and we give a general method to construct it. We provide new formulae for constructing projection operators and delineate the relations between projection operators, eigenvectors, and generalized eigenvectors. By way of illustrating its application, we explore several rather distinct examples. Comment: 29 pages, 4 figures, expanded historical citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/bst.ht
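    The decomposition described above can be illustrated on a small matrix whose spectral projectors are obvious by inspection. The sketch below is a special case, not the paper's general construction of projection operators: it evaluates f(A) = exp(A) as the sum over eigenvalues of f(λ)P_λ + f'(λ)N_λ, where P_λ are spectral projectors and N_λ the nilpotent parts, and compares the result with SciPy's matrix exponential. The block-diagonal example matrix is ours.

```python
import numpy as np
from scipy.linalg import expm

# For a matrix with known block structure, f(A) decomposes as
#   f(A) = sum_lambda [ f(lambda) P_lambda + f'(lambda) N_lambda + ... ],
# with P_lambda the spectral projectors and N_lambda = (A - lambda I) P_lambda.
l1, l2 = 2.0, -1.0
A = np.array([[l1, 1.0, 0.0],
              [0.0, l1, 0.0],
              [0.0, 0.0, l2]])

# Spectral projectors, read off from the block-diagonal structure.
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])
N1 = (A - l1 * np.eye(3)) @ P1        # nilpotent part of the Jordan block

f, df = np.exp, np.exp                # f = exp, f' = exp
fA = f(l1) * P1 + df(l1) * N1 + f(l2) * P2

print(np.allclose(fA, expm(A)))       # True: matches the matrix exponential
```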

    Generalizations and Some Applications of Kronecker and Hadamard Products of Matrices

    In this thesis, generalizations of the Kronecker, Hadamard and usual products (sums) that depend on the partitioning of matrices are defined and studied, namely the Tracy-Singh, Khatri-Rao, box, strong Kronecker, block Kronecker, block Hadamard and restricted Khatri-Rao products (sums), which extend the meaning of the Kronecker, Hadamard and usual products (sums). The matrix convolution products, namely the matrix convolution, Kronecker convolution and Hadamard convolution products of matrices with entries in a set of functions, are also considered. The connections among them are derived and their most useful properties are studied in order to find new applications of the Tracy-Singh and Khatri-Rao products (sums). These applications are: a family of generalized inverses, a family of coupled singular matrix problems, a family of matrix inequalities and a family of geometric means. In the theory of generalized inverses of matrices and their applications, five generalized inverses, namely the Moore-Penrose, weighted Moore-Penrose, Drazin, weighted Drazin and group inverses, together with their expressions and properties, are studied. Moreover, numerous new matrix expressions involving these generalized inverses and weighted matrix norms of Tracy-Singh product matrices are derived. In addition, we establish necessary and sufficient conditions for the reverse-order law of the Drazin and weighted Drazin inverses. These results play a central role in our applications and many others. In the field of system identification and matrix products, we propose several algorithms for computing the solutions of coupled matrix differential equations, coupled matrix convolution differential equations, coupled matrix equations, restricted coupled singular matrix equations, coupled matrix least-squares problems and weighted least-squares problems, based on the idea of Kronecker (Hadamard) and Tracy-Singh (Khatri-Rao) products (sums) of matrices. These constructions transform the coupled matrix problems and coupled matrix differential equations into forms for which solutions may be readily computed. The common vector exact solutions of these coupled systems are presented and subsequently used to construct computationally efficient solutions of coupled matrix linear least-squares problems and nonhomogeneous coupled matrix differential equations. We give new applications of the representations of the weighted Drazin, Drazin and Moore-Penrose inverses of Kronecker products to the solutions of restricted singular matrix equations and coupled matrix equations. The analysis indicates that the Kronecker (Hadamard) structure method achieves good efficiency, and that the Hadamard structure method is more efficient when the unknown matrices are diagonal. Several special cases of these systems, including the well-known coupled Sylvester matrix equations, are also considered and solved, and we prove the existence and uniqueness of the solution in each case. We also show that the solutions of nonhomogeneous matrix differential equations can be written in convolution form. The analysis further indicates that the algorithms can easily find the common exact solutions of the coupled matrix and matrix differential equations for partitioned matrices by using the connections between the Tracy-Singh, block Kronecker and Khatri-Rao products, the partitioned vector row (column), and our definition of the so-called partitioned diagonal extraction operators. Unlike matrix algebra, which is based on matrices, analysis must deal with estimates.
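    A minimal sketch of the vectorization idea behind such algorithms, using the plain Kronecker product on an ordinary Sylvester equation AX + XB = C rather than the thesis's Tracy-Singh/Khatri-Rao machinery for partitioned matrices; the dimensions and random data are ours for illustration.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# The Sylvester equation AX + XB = C becomes an ordinary linear system via the
# Kronecker product, since vec(AXB) = (B^T kron A) vec(X) for column-stacked vec.
rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))    # (I kron A) + (B^T kron I)
x = np.linalg.solve(K, C.flatten(order="F"))            # solve for vec(X)
X = x.reshape((n, m), order="F")

print(np.allclose(A @ X + X @ B, C))                    # True
print(np.allclose(X, solve_sylvester(A, B, C)))         # matches SciPy's solver
```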
    In other words, inequalities lie at the core of analysis, so it is of great importance to give bounds and inequalities involving matrices. Here the results are organized in the following five ways. First, we find extensions and generalizations of inequalities involving Khatri-Rao products of positive (semi)definite matrices, and we turn to results relating Khatri-Rao and Tracy-Singh powers to usual powers, extending and generalizing the work of previous authors. Second, we derive new attractive inequalities involving Khatri-Rao products of positive (semi)definite matrices; we remark that several known inequalities and many other new interesting inequalities can easily be obtained by our approaches. Third, we study sufficient and necessary conditions under which these inequalities become equalities. Fourth, counterexamples are given to show that some of the inequalities do not hold in the general case. Fifth, we find Hölder-type inequalities for Tracy-Singh and Khatri-Rao products of positive (semi)definite matrices. The results lead, as special cases, to inequalities involving Hadamard and Kronecker products, including well-known inequalities for the Hadamard product of matrices such as Kantorovich-type inequalities and a generalization of Styan's inequality. We exploit the commutativity of the Hadamard product (sum) where possible to develop and improve interesting inequalities that do not follow simply from earlier work, for example Visick's inequality. Finally, a family of geometric means for two positive definite matrices is studied; we discuss possible definitions of the geometric mean of positive definite matrices and use the geometric means of two positive definite matrices to arrive at definitions of weighted operator means of positive definite matrices. By means of several examples, we show that no known definition is completely satisfactory. We have succeeded in finding many new desirable properties and connections for geometric means related to Tracy-Singh products, which yield new estimates for the Khatri-Rao (Tracy-Singh) products of several positive definite matrices.
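    For the geometric means discussed above, here is a minimal sketch of the standard two-matrix definition A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}, one of the candidate definitions rather than the thesis's Tracy-Singh-based estimates; the helper name geometric_mean and the random test matrices are ours.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

# Standard geometric mean of two symmetric positive definite matrices:
#   A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
def geometric_mean(A, B):
    Ah = sqrtm(A).real            # principal square root; real for SPD input
    Ahi = inv(Ah)
    return Ah @ sqrtm(Ahi @ B @ Ahi).real @ Ah

rng = np.random.default_rng(1)
R = rng.standard_normal((4, 4)); A = R @ R.T + 4 * np.eye(4)   # SPD
R = rng.standard_normal((4, 4)); B = R @ R.T + 4 * np.eye(4)   # SPD

G = geometric_mean(A, B)
print(np.allclose(G, geometric_mean(B, A)))      # symmetric in A and B
# Matrix AM-GM inequality: (A + B)/2 - A#B is positive semidefinite.
D = (A + B) / 2 - (G + G.T) / 2
print(np.all(np.linalg.eigvalsh(D) > -1e-8))
```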

    Essays on the economics of networks

    Networks (collections of nodes or vertices and graphs capturing their linkages) are a common object of study across a range of fields including economics, statistics and computer science. Network analysis is often based on capturing the overall structure of the network by some reduced set of parameters. Canonically, this has focused on the notion of centrality. There are many measures of centrality, mostly based on statistical analysis of the linkages between nodes on the network; another common approach has been through eigenfunction analysis of the centrality matrix. My thesis focuses on eigencentrality as a property, paying particular attention to equilibrium behaviour when the network structure is fixed. This occurs when nodes are either passive, as in web searches or queueing models, or when they represent active optimizing agents in network games. The major contribution of my thesis is the application of relatively recent innovations in matrix derivatives to centrality measurements and to equilibria within games that are functions of those measurements. I present a series of new results on the stability of eigencentrality measures and give applications to a number of real-world examples.
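    The sketch below shows the basic construct studied above, not the thesis's perturbation or equilibrium results: the eigencentrality vector as the principal eigenvector of an adjacency matrix, computed by power iteration on a small made-up graph of our own.

```python
import numpy as np

# Eigenvector centrality: the principal eigenvector of the (nonnegative)
# adjacency matrix, computed here by power iteration.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

x = np.ones(A.shape[0])
for _ in range(200):                 # power iteration
    x = A @ x
    x /= np.linalg.norm(x)

print(np.round(x, 3))                # node 1 (degree 3) scores highest
lam = x @ A @ x                      # Rayleigh quotient ~ largest eigenvalue
print(np.allclose(A @ x, lam * x, atol=1e-6))
```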

    On the perturbation and subproper splittings for the generalized inverse A^(2)_{T,S} of a rectangular matrix A

    In this paper, the perturbation and subproper splittings for the generalized inverse A^(2)_{T,S}, the unique matrix X such that XAX = X, R(X) = T and N(X) = S, are considered. We present lower and upper bounds for the perturbation of A^(2)_{T,S}. Convergence of subproper splittings for computing the special solution A^(2)_{T,S} b of the restricted rectangular linear system Ax = b, x ∈ T, is studied, and we develop a characterization of this solution. We thereby give a unified treatment of related problems considered in the literature by Ben-Israel, Berman, Hanke, Neumann, Plemmons, and others.
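    A quick numerical illustration of the defining relations quoted above, using the best-known special case: the Moore-Penrose inverse is A^(2)_{T,S} with T = R(A*) and S = N(A*). This is only a sanity check on our own random example, not the paper's perturbation bounds or splitting results.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4, rank <= 3
X = np.linalg.pinv(A)

print(np.allclose(X @ A @ X, X))            # X A X = X  ({2}-inverse property)

# R(X) = R(A^T): X A is the orthogonal projector onto that common range.
P = X @ A
print(np.allclose(P, P.T), np.allclose(P @ P, P), np.allclose(P @ A.T, A.T))

# N(X) = N(A^T): the projector Q = I - A X onto that common null space
# satisfies X Q = 0 and A^T Q = 0.
Q = np.eye(A.shape[0]) - A @ X
print(np.allclose(X @ Q, 0), np.allclose(A.T @ Q, 0))
```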

    Recurrent neural networks for solving matrix algebra problems

    The aim of this dissertation is the application of recurrent neural networks (RNNs) to solving problems from matrix algebra, with particular reference to the computation of generalized inverses and to solving matrix equations with constant (time-invariant) matrices. We examine the ability to exploit the correlation between the dynamic state equations of recurrent neural networks for computing generalized inverses and the integral representations of these generalized inverses. Recurrent neural networks are composed of independent parts (sub-networks). These sub-networks can work simultaneously, so parallel and distributed processing can be accomplished; in this way, computational advantages over existing sequential algorithms can be attained in real-time applications. We investigate and exploit an analogy between the scaled hyperpower family (SHPI family) of iterative methods for computing the matrix inverse and the discretization of Zhang Neural Network (ZNN) models. On the basis of the discovered analogy, a class of ZNN models corresponding to the family of hyperpower iterative methods for computing generalized inverses is defined. The Matlab Simulink implementation of the introduced ZNN models is described for the scaled hyperpower methods of orders 2 and 3. We present the Matlab Simulink model of a hybrid recursive neural implicit dynamics and give a simulation and comparison with the existing Zhang dynamics for real-time matrix inversion. Simulation results confirm the superior convergence of the hybrid model compared with the Zhang model.
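    As background for the hyperpower methods of orders 2 and 3 mentioned above, here is a sketch of the classical order-2 member, the Newton-Schulz iteration X_{k+1} = X_k(2I - AX_k), applied to a constant nonsingular matrix; it is not the dissertation's ZNN or Simulink models, and the example matrix is ours.

```python
import numpy as np

# Newton-Schulz (order-2 hyperpower) iteration for A^{-1}; it converges
# whenever the initial residual satisfies ||I - A X_0|| < 1.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)       # comfortably invertible

X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))   # safe start
I = np.eye(5)
for _ in range(50):
    X = X @ (2 * I - A @ X)

print(np.allclose(X, np.linalg.inv(A)))                # True
```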

    A Note on Computing the Generalized Inverse A^(2)_{T,S} of a Matrix A

    The generalized inverse A^(2)_{T,S} of a matrix A is a {2}-inverse of A with prescribed range T and null space S. A representation for A^(2)_{T,S} was recently developed under the condition σ(GA|_T) ⊂ (0, ∞), where G is a matrix with R(G) = T and N(G) = S. In this note, we remove that condition. Three types of iterative methods for A^(2)_{T,S} are presented when σ(GA|_T) is a subset of the open right half-plane; they are extensions of existing computational procedures for A^(2)_{T,S}, including special cases such as the weighted Moore-Penrose inverse A^†_{M,N} and the Drazin inverse A^D. Numerical examples are given to illustrate our results.
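    One of the special cases named above, the weighted Moore-Penrose inverse A^†_{M,N}, can be checked numerically via the classical square-root reduction A^†_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})^† M^{1/2}. The sketch below verifies the four defining conditions on random data of our own; it is not one of the note's three iterative methods.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(4)
m, n = 5, 3
A = rng.standard_normal((m, n))
Rm = rng.standard_normal((m, m)); M = Rm @ Rm.T + np.eye(m)   # SPD weight
Rn = rng.standard_normal((n, n)); N = Rn @ Rn.T + np.eye(n)   # SPD weight

# Weighted Moore-Penrose inverse via the square-root reduction.
Mh, Nh = sqrtm(M).real, sqrtm(N).real
X = inv(Nh) @ np.linalg.pinv(Mh @ A @ inv(Nh)) @ Mh

# The four defining conditions of the weighted Moore-Penrose inverse:
print(np.allclose(A @ X @ A, A))                 # A X A = A
print(np.allclose(X @ A @ X, X))                 # X A X = X
print(np.allclose(M @ A @ X, (M @ A @ X).T))     # M A X is symmetric
print(np.allclose(N @ X @ A, (N @ X @ A).T))     # N X A is symmetric
```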