17 research outputs found

    Polynomial Solutions to the Matrix Equation X − AX^TB = C

    Solutions are constructed for the Kalman-Yakubovich-transpose equation X − AX^TB = C. The solutions are stated as polynomials in the parameters of the matrix equation. One of the polynomial solutions is expressed in terms of the symmetric operator matrix, the controllability matrix, and the observability matrix. Moreover, an explicit solution is proposed for the case where the Kalman-Yakubovich-transpose matrix equation has a unique solution. The approach does not require the coefficient matrices to be in canonical form. In addition, a numerical example is given to illustrate the effectiveness of the derived method. Some applications in control theory are discussed at the end of the paper.
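The equation is linear in the entries of X, so when it has a unique solution that solution can be recovered directly by vectorisation. Below is a minimal numpy sketch of this generic approach (not the paper's polynomial construction; the helper names are mine), using the identity vec(A X^T B) = kron(A, B^T) K vec(X) for row-major vec, where K is the commutation matrix:

```python
import numpy as np

def commutation_matrix(n):
    """K such that K @ vec(X) == vec(X.T) for row-major vec of an n x n X."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i * n + j, j * n + i] = 1.0
    return K

def solve_kyt(A, B, C):
    """Solve X - A @ X.T @ B = C by vectorisation (assumes a unique solution).

    With row-major vec, vec(A @ X.T @ B) = kron(A, B.T) @ K @ vec(X),
    so the equation becomes (I - kron(A, B.T) @ K) vec(X) = vec(C).
    """
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A, B.T) @ commutation_matrix(n)
    return np.linalg.solve(M, C.reshape(-1)).reshape(n, n)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 3, 3)) * 0.3   # small norm -> unique solution
X_true = rng.standard_normal((3, 3))
C = X_true - A @ X_true.T @ B                 # build a C with known solution
X = solve_kyt(A, B, C)
```

This costs O(n^6) and is only a correctness baseline; the point of closed-form or polynomial solutions is to avoid forming the n^2 x n^2 system.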

    Toward Solution of Matrix Equation X=Af(X)B+C

    This paper studies the solvability, existence of a unique solution, closed-form solutions, and numerical solutions of the matrix equation X = Af(X)B + C with f(X) = X^T, f(X) = X̄, and f(X) = X^H, where X is the unknown. It is proven that the solvability of these equations is equivalent to the solvability of some auxiliary standard Stein equations of the form W = 𝒜Wℬ + 𝒞, where the dimensions of the coefficient matrices 𝒜, ℬ, and 𝒞 are the same as those of the original equation. Closed-form solutions of the equation X = Af(X)B + C can then be obtained by utilizing standard results on the standard Stein equation. On the other hand, some generalized Stein iterations and accelerated Stein iterations are proposed to obtain numerical solutions of the equation X = Af(X)B + C. Necessary and sufficient conditions are established to guarantee the convergence of the iterations.
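For the case f(X) = X^T the reduction can be seen directly: substituting the equation into its own transpose gives the auxiliary standard Stein equation W = (AB^T)W(A^T B) + (AC^T B + C), and the generalized Stein iteration X_{k+1} = AX_k^T B + C converges when the associated operator is contractive. A numpy sketch under that contractivity assumption (names and test matrices are illustrative, not from the paper):

```python
import numpy as np

def stein_transpose_iteration(A, B, C, iters=200):
    """Generalized Stein iteration X_{k+1} = A @ X_k.T @ B + C.

    Converges when the fixed-point map is a contraction,
    e.g. when ||A|| * ||B|| < 1.
    """
    X = np.zeros_like(C)
    for _ in range(iters):
        X = A @ X.T @ B + C
    return X

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 3, 3)) * 0.2   # small norms: contraction
C = rng.standard_normal((3, 3))
X = stein_transpose_iteration(A, B, C)

# The limit also satisfies the auxiliary standard Stein equation
# obtained by substituting the equation into itself:
W_rhs = (A @ B.T) @ X @ (A.T @ B) + (A @ C.T @ B + C)
```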

    Upscaling message passing algorithms

    The development of Approximate Message Passing (AMP) has become a precedent demonstrating the potential of Message Passing (MP) algorithms in solving large-scale linear inverse problems of the form y = Ax + w. Not only is AMP provably convergent and Bayes-optimal, but it is also a first-order iterative method that can leverage Plug-and-Play (PnP) denoisers for recovering complex data like natural images. Unfortunately, all of these properties have been shown to hold only when the measurement operator A is, roughly speaking, an i.i.d. random matrix, which severely limits the applicability of the algorithm. A promising extension of AMP, Vector AMP (VAMP), can handle a much broader range of A while preserving most advantages of AMP, but the algorithm requires inverting a large-scale matrix at each iteration, which makes it computationally intractable. As a result, a wide range of ideas has been proposed for upscaling VAMP while preserving its optimality and generality, and in this thesis we share our contributions in this regard. The first contribution is a stable and accelerated version of Conjugate Gradient (CG) VAMP (CG-VAMP) -- the VAMP algorithm in which the matrix inversion is approximated by the CG algorithm. The originally proposed version of CG-VAMP exhibits unstable dynamics when even a moderately large number of CG iterations is used, and in those regimes where CG-VAMP is stable, the resulting fixed point of the algorithm might be much worse than that of VAMP. To allow CG-VAMP to use an order of magnitude more CG iterations and approximate VAMP with almost arbitrary accuracy, we constructed a series of rigorous tools that have a negligible computational cost and that lead to stable performance of the algorithm. Additionally, we developed a combination of stopping criteria for CG that ensures efficient operation of CG-VAMP and faster time-wise convergence without sacrificing the estimation accuracy.
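The inner solver at the heart of CG-VAMP can be sketched as plain conjugate gradients with a relative-residual stopping rule and an optional warm start. This is generic textbook CG, not the thesis's stabilized variant or its actual stopping criteria:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iters=100):
    """CG for a symmetric positive-definite A, stopping when
    ||b - A x|| <= tol * ||b||. x0 allows a warm start, as in WS-CG."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(max_iters):
        if np.sqrt(rs) <= tol * b_norm:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p      # conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)          # well-conditioned SPD matrix
b = rng.standard_normal(20)
x = conjugate_gradient(A, b)
```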
    Next, we considered an alternative way of pushing the performance of CG-VAMP closer to VAMP's and developed warm-started CG (WS-CG), which reuses the information generated at the previous outer-loop iterations of the MP algorithm. We show that when the matrix inverse in VAMP is approximated by WS-CG, a fixed point of WS-CG VAMP (WS-CG-VAMP) is a fixed point of VAMP and, therefore, is conjectured to be Bayes-optimal. Importantly, this result is invariant with respect to the number of WS-CG iterations, and the resulting algorithm can have the computational cost of AMP while being as general and optimal as VAMP. We extend the tools developed for CG to WS-CG and numerically demonstrate the stability and efficiency of WS-CG-VAMP. The final contribution is the development of alternative methods for estimating the divergence of a PnP denoiser used within MP algorithms. This divergence plays a crucial role in stabilizing MP algorithms and ensuring their optimality and predictability. So far, the only suggested method for constructing an estimate of the divergence of a PnP denoiser has been the Black-Box Monte Carlo method. The main drawback of this method is that it requires executing the denoiser an additional time, which effectively doubles the cost of most MP algorithms. In this thesis we propose two rigorous divergence estimation methods that avoid this problem and utilize only the information circulated in every MP algorithm.
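The Black-Box Monte Carlo estimator mentioned above can be sketched as a finite-difference probe with a random Rademacher vector; note the extra denoiser call, which is exactly the cost the thesis's alternative estimators avoid. Function names and the toy denoiser are illustrative:

```python
import numpy as np

def bb_mc_divergence(denoiser, x, delta=1e-4, rng=None):
    """Single-probe Black-Box Monte Carlo divergence estimate:
        div D(x) ~= eps^T (D(x + delta*eps) - D(x)) / delta
    with a Rademacher probe eps. Costs one extra denoiser call."""
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.choice([-1.0, 1.0], size=x.shape)
    return eps @ (denoiser(x + delta * eps) - denoiser(x)) / delta

# Sanity check on a toy "denoiser" D(x) = 0.5 * x, whose divergence
# (trace of the Jacobian) is exactly 0.5 * len(x).
rng = np.random.default_rng(3)
x = rng.standard_normal(50)
est = bb_mc_divergence(lambda v: 0.5 * v, x)
```

For this linear toy map the single probe is exact; for a nonlinear denoiser one would average over several probes to reduce the Monte Carlo variance.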

    Shuffled total least squares

    Linear regression with shuffled labels and with a noisy latent design matrix arises in many correspondence recovery problems. We propose a total least-squares approach to the problem of estimating the underlying true permutation and provide an upper bound on the normalized Procrustes quadratic loss of the estimator. We also provide an iterative algorithm to approximate the estimator and demonstrate its performance on simulated data.
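As a hypothetical baseline for the correspondence-recovery step (not the paper's total least-squares estimator): when an estimate of the noiseless design is available, the permutation can be approximated by linear assignment on pairwise squared distances:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_rows(Y, Z):
    """Match each row of Y to a distinct row of Z by minimizing the
    total squared distance (a linear assignment problem)."""
    cost = ((Y[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols   # cols[i] = index of the Z row assigned to Y row i

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 5))
B = rng.standard_normal((5, 5))
perm = rng.permutation(30)
Y = (X @ B)[perm] + 0.01 * rng.standard_normal((30, 5))  # shuffled, noisy labels
est_perm = match_rows(Y, X @ B)
```

In the shuffled-regression setting neither B nor the clean design is known, which is what makes the joint estimation problem studied in the paper hard; this snippet only illustrates the matching subproblem.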

    A homotopy algorithm for digital optimal projection control GASD-HADOC

    The linear-quadratic-Gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-form solution methods exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially pronounced in control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require the initializing reduced-order controller to be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter-optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
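The two algebraic equations mentioned at the start are Riccati equations. For a discrete-time plant the full-order LQG gains can be sketched with scipy as below; the plant and weights are illustrative, and the point to notice is that the compensator's state dimension equals the plant order n, which is exactly the deficiency the paper addresses:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(5)
n, m, p = 4, 2, 2                                  # states, inputs, measurements
A = rng.standard_normal((n, n))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))    # scale to a stable plant
B = rng.standard_normal((n, m))
Cm = rng.standard_normal((p, n))
Q, R = np.eye(n), np.eye(m)                        # LQR state / control weights
W, V = np.eye(n), np.eye(p)                        # process / measurement noise

# Regulator Riccati equation and its dual (estimator) Riccati equation.
P = solve_discrete_are(A, B, Q, R)
S = solve_discrete_are(A.T, Cm.T, W, V)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)          # feedback gain
L = np.linalg.solve(V + Cm @ S @ Cm.T, Cm @ S @ A.T).T     # estimator gain

# LQG compensator (n states, same order as the plant):
#   xhat_{k+1} = A xhat_k + B u_k + L (y_k - Cm xhat_k),  u_k = -K xhat_k
```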

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.