33 research outputs found

    Linear prediction of stationary vector sequences

    The class of all linear predictors of minimal order for a stationary vector-valued process is specified in terms of linear transformations on the associated Hankel covariance matrix. Two particular transformations, yielding computationally efficient construction schemes, are proposed
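The construction itself is stated abstractly; as a minimal illustration of the underlying normal equations (not the paper's Hankel-based schemes), the sketch below fits a first-order linear predictor to a simulated stationary vector process. The system matrix `A`, the lag covariances `C0`, `C1`, and the predictor `P` are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary 2-dimensional AR(1) process x[t] = A x[t-1] + noise.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
T = 20000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.normal(size=2)

# Sample covariances C(k) = E[x[t+k] x[t]^T] for lags 0 and 1.
C0 = (x[:-1].T @ x[:-1]) / (T - 1)
C1 = (x[1:].T @ x[:-1]) / (T - 1)

# One-step predictor of order 1: x_hat[t] = P x[t-1],
# from the normal equations P C0 = C1.
P = C1 @ np.linalg.inv(C0)

print(np.round(P, 2))
```

For this simulated process the recovered predictor approximates the generating matrix `A`, since the true lag-1 covariance satisfies C1 = A C0.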

    Ground-state coding in partially connected neural networks

    Patterns over (-1,0,1) define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail
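As a hedged illustration of the local-stability claim for orthogonal subcodes (a generic outer-product, Hopfield-style construction over a single fully connected subnetwork, not the paper's partially connected architecture), the sketch below checks that mutually orthogonal code words are fixed points of the sign update rule:

```python
import numpy as np

# Three mutually orthogonal +-1 patterns over N = 8 neurons
# (illustrative Hadamard-type rows, not the paper's codes).
patterns = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

# Hebbian (outer-product) weights with zero self-connections.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

# Each stored pattern is a fixed point of the sign update rule:
# orthogonality makes the cross-talk terms cancel exactly.
for p in patterns:
    assert np.array_equal(np.sign(W @ p), p)
print("all stored patterns are stable")
```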

    Polynomial compensation, inversion, and approximation of discrete time linear systems

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation
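A minimal sketch of the least-squares idea, under illustrative assumptions (a scalar system `h`, a pure-delay target `d`, and FIR order `L` chosen arbitrarily; the paper's recursive solution of the normal equation is not reproduced):

```python
import numpy as np

# Impulse response of the given (scalar) system and a desired response: a pure delay.
h = np.array([1.0, 0.6, 0.25, 0.1])
L = 8                        # order of the compensating polynomial (FIR) system
T = len(h) + L - 1           # length of the convolution h * p
d = np.zeros(T); d[2] = 1.0  # desired: unit impulse delayed by 2 samples

# Convolution matrix H such that H @ p == np.convolve(h, p).
H = np.zeros((T, L))
for j in range(L):
    H[j:j + len(h), j] = h

# Least-squares polynomial coefficients, i.e. the solution of the
# normal equations H^T H p = H^T d.
p, *_ = np.linalg.lstsq(H, d, rcond=None)

err = np.linalg.norm(H @ p - d)
print(f"residual = {err:.4f}")
```

Because this particular `h` is minimum phase, a short FIR compensator approximates the delayed inverse well and the residual is small.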

    Estimability and regulability of linear systems

    A linear state-space system will be said to be estimable if in estimating its state from its output the posterior error covariance matrix is strictly smaller than the prior covariance matrix. It will be said to be regulable if the quadratic cost of state feedback control is strictly smaller than the cost when no feedback is used. These properties, which are shown to be dual, are different from the well known observability and controllability properties of linear systems. Necessary and sufficient conditions for estimability and regulability are derived for time variant and time invariant systems, in discrete and continuous time
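The "strictly smaller" requirement can be illustrated with a single standard Kalman-style measurement update (a generic sketch, not the paper's necessary-and-sufficient conditions); the prior `P_prior`, measurement map `H`, and noise covariance `R` are made-up values chosen so that only one state direction is measured:

```python
import numpy as np

# Prior state covariance and a measurement y = H x + v, v ~ N(0, R).
P_prior = np.eye(2)
H = np.array([[1.0, 0.0]])   # only the first state component is measured
R = np.array([[0.5]])

# Standard measurement update for the error covariance.
S = H @ P_prior @ H.T + R
K = P_prior @ H.T @ np.linalg.inv(S)
P_post = P_prior - K @ H @ P_prior

# The update shrinks the covariance only along measured directions:
# eig(P_prior - P_post) has one positive and one zero eigenvalue here,
# so the posterior is smaller but not *strictly* smaller.
gap = np.linalg.eigvalsh(P_prior - P_post)
print(np.round(gap, 3))
```

This is exactly the distinction the abstract draws: an observable system need not be estimable in the strict-inequality sense.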

    Recursive inversion of externally defined linear systems

    The approximate inversion of an internally unknown linear system, given by its impulse response sequence, by an inverse system having a finite impulse response, is considered. The recursive least squares procedure is shown to have an exact initialization, based on the triangular Toeplitz structure of the matrix involved. The proposed approach also suggests solutions to the problems of system identification and compensation
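A minimal sketch of the triangular Toeplitz structure that the exact initialization rests on (illustrative numbers, not the paper's recursive least-squares procedure): solving the lower-triangular system recovers the leading samples of the exact inverse system:

```python
import numpy as np

# Externally defined system, given only by its impulse response (h[0] != 0).
h = np.array([1.0, 0.8, 0.3, 0.1])
T = 6  # number of inverse-response samples to recover
hp = np.concatenate([h, np.zeros(T - len(h))])

# Lower-triangular Toeplitz matrix of the convolution over the first T samples.
Hm = np.zeros((T, T))
for j in range(T):
    Hm[j:, j] = hp[:T - j]

# Solving Hm @ g = e0 gives the first T samples of the exact inverse.
e0 = np.zeros(T); e0[0] = 1.0
g = np.linalg.solve(Hm, e0)

# Check: convolving h with g reproduces a unit impulse over the first T samples.
assert np.allclose(np.convolve(hp, g)[:T], e0)
print(np.round(g, 3))
```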

    Obstacle detection by recognizing binary expansion patterns

    This paper describes a technique for obstacle detection, based on the expansion of the image-plane projection of a textured object, as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that essential information is conveyed by single-bit vector components, representing the signs of the relevant derivatives. We use two recently developed, high capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns
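A hedged sketch of the single-bit derivative patterns (synthetic 1-D frames and the masking threshold are illustrative assumptions; the paper's neural classifiers are not reproduced): under pure expansion, the sign of the temporal derivative opposes the sign of `x * Ix`, which is the kind of regularity the sign bits carry:

```python
import numpy as np

# Two synthetic 1-D intensity profiles: the second is a slightly expanded
# (looming) version of the first, as for an approaching textured surface.
x = np.linspace(-1.0, 1.0, 101)
frame0 = np.cos(6.0 * x)            # texture at time t
frame1 = np.cos(6.0 * x / 1.05)     # same texture expanded by 5% at time t+1

# First-order spatial and temporal derivatives of image intensity.
Ix = np.gradient(frame0, x)
It = frame1 - frame0

# Single-bit components: the signs of the derivatives form the binary pattern.
bits = np.stack([np.sign(Ix), np.sign(It)])

# For pure expansion the flow moves outward, so sign(It) = -sign(x * Ix)
# wherever the derivatives are not near zero.
mask = np.abs(x * Ix) > 0.1
agreement = np.mean(np.sign(It[mask]) == -np.sign(x[mask] * Ix[mask]))
print(f"sign-pattern agreement: {agreement:.2f}")
```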

    Information, consistent estimation and dynamic system identification.

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1977. Bibliography: leaves 126-129. Microfiche copy available in Archives and Engineering.

    Orthogonal patterns in binary neural networks

    A binary neural network that stores only mutually orthogonal patterns is shown to converge, when probed by any pattern, to a pattern in the memory space--the space spanned by the stored patterns. The latter are shown to be the only members of the memory space under a certain coding condition, which allows maximal storage of M = (2N+ patterns, where N is the number of neurons. The stored patterns are shown to have basins of attraction of radius N/(2M), within which errors are corrected with probability 1 in a single update cycle. When the probe falls outside these regions, the error correction probability can still be increased to 1 by repeatedly running the network with the same probe
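A minimal sketch of the single-cycle error correction for orthogonal patterns (N = 8 and M = 2 are chosen for illustration, giving an attraction radius of N/(2M) = 2 bits):

```python
import numpy as np

# Two mutually orthogonal +-1 patterns over N = 8 neurons.
p1 = np.array([ 1,  1,  1,  1,  1,  1,  1,  1])
p2 = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
W = np.outer(p1, p1) + np.outer(p2, p2)
np.fill_diagonal(W, 0)

# Probe with one bit of p1 flipped -- inside the N/(2M) = 2-bit radius.
probe = p1.copy()
probe[0] = -probe[0]

# A single synchronous sign update restores the stored pattern.
recovered = np.sign(W @ probe)
assert np.array_equal(recovered, p1)
print("error corrected in one update")
```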