
    On Integer Images of Max-plus Linear Mappings

    Let us extend the pair of operations (max,+) over the real numbers to matrices in the same way as in conventional linear algebra. We study integer images of max-plus linear mappings. The question of whether Ax (in the max-plus algebra) is an integer vector for at least one x has been studied for some time, but polynomial solution methods seem to exist only in special cases. In the terminology of combinatorial matrix theory, this question reads: is it possible to add constants to the columns of a given matrix so that all row maxima are integer? This problem is motivated by attempts to solve a class of job-scheduling problems. We present two polynomially solvable special cases, aiming to move closer to a polynomial solution method in the general case.
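
    To make the question concrete, here is a minimal Python sketch (illustrative helper names and matrix values are our own, not the paper's algorithm) of the max-plus product and the integrality check; choosing the vector x is exactly the same as adding the constant x[j] to column j of A.

```python
import numpy as np

def maxplus_product(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

def has_integer_image(A, x, tol=1e-9):
    """Is A ⊗ x an integer vector? Equivalently: after adding x[j] to
    column j of A, are all row maxima integer?"""
    y = maxplus_product(A, x)
    return bool(np.all(np.abs(y - np.round(y)) < tol))

A = np.array([[0.3, 1.0],
              [0.7, 0.4]])
x = np.array([0.7, 0.0])              # shift column 0 by 0.7
print(maxplus_product(A, x))          # [1.  1.4]
print(has_integer_image(A, x))        # False: the second row maximum is 1.4
```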

    Throughput-Distortion Computation Of Generic Matrix Multiplication: Toward A Computation Channel For Digital Signal Processing Systems

    The generic matrix multiply (GEMM) function is the core element of high-performance linear algebra libraries used in many computationally demanding digital signal processing (DSP) systems. We propose an acceleration technique for GEMM based on dynamically adjusting the imprecision (distortion) of computation. Our technique applies adaptive scalar companding and rounding to input matrix blocks, followed by two forms of packing in floating point that allow for the concurrent calculation of multiple results. Since the adaptive companding process controls the increase of concurrency (via packing), the increase in processing throughput (and the corresponding increase in distortion) depends on the input data statistics. To demonstrate this, we derive the optimal throughput-distortion control framework for GEMM for the broad class of zero-mean, independent and identically distributed input sources. Our approach converts matrix multiplication in programmable processors into a computation channel: as the processing throughput increases, the output noise (error) increases due to (i) coarser quantization and (ii) computational errors caused by exceeding the machine-precision limitations. We show that, under certain distortion in the GEMM computation, the proposed framework can significantly surpass 100% of the peak performance of a given processor. The practical benefits of our proposal are shown in a face recognition system and a multi-layer perceptron system trained for metadata learning from a large music feature database.
    Comment: IEEE Transactions on Signal Processing, vol. 60, 2012.
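
    To illustrate the packing idea, the following minimal sketch (our own assumption-laden toy, not the authors' implementation) packs two quantized operands into one IEEE double so that a single multiply yields two products; pushing the shift past the 53-bit mantissa is precisely the machine-precision error source mentioned above.

```python
# Two 8-bit quantized operands share one double. The shift S must keep the
# two partial products from overlapping and keep everything below 2**53;
# beyond that, unpacking incurs the computational error that the
# throughput-distortion framework accounts for.
S = float(2 ** 20)                   # shift between the two packed "lanes"

def pack(a0, a1):
    return a0 + a1 * S               # both operands now live in one double

def unpack(prod):
    hi = prod // S                   # high lane: a1 * c
    lo = prod - hi * S               # low lane:  a0 * c
    return lo, hi

a0, a1, c = 113.0, 200.0, 251.0      # quantized inputs in [0, 255]
lo, hi = unpack(pack(a0, a1) * c)    # one multiply, two results
assert (lo, hi) == (a0 * c, a1 * c)
```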

    Polynomial Norms

    In this paper, we study polynomial norms, i.e., norms that are the $d^{\text{th}}$ root of a degree-$d$ homogeneous polynomial $f$. We first show that a necessary and sufficient condition for $f^{1/d}$ to be a norm is for $f$ to be strictly convex, or equivalently, convex and positive definite. Though not all norms come from $d^{\text{th}}$ roots of polynomials, we prove that any norm can be approximated arbitrarily well by a polynomial norm. We then investigate the computational problem of testing whether a form gives a polynomial norm. We show that this problem is strongly NP-hard already when the degree of the form is 4, but can always be answered by testing feasibility of a semidefinite program (of possibly large size). We further study the problem of optimizing over the set of polynomial norms using semidefinite programming. To do this, we introduce the notion of $r$-sos-convexity and extend a result of Reznick on sum of squares representations of positive definite forms to positive definite biforms. We conclude with some applications of polynomial norms to statistics and dynamical systems.
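
    For a concrete instance: $f(x) = x_1^4 + x_2^4$ is a convex, positive definite quartic form, so $f^{1/4}$ (the familiar 4-norm) is a polynomial norm. The short Python check below, a numeric sketch of our own rather than the paper's semidefinite machinery, samples random points to confirm the norm axioms.

```python
import numpy as np

def f(x):
    """A convex, positive definite quartic form: f(x) = x1^4 + x2^4."""
    return float(np.sum(x ** 4))

def poly_norm(x, d=4):
    """The polynomial norm f^(1/d); here it coincides with the 4-norm."""
    return f(x) ** (1.0 / d)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # triangle inequality and absolute homogeneity, up to rounding error
    assert poly_norm(x + y) <= poly_norm(x) + poly_norm(y) + 1e-12
    assert abs(poly_norm(3.0 * x) - 3.0 * poly_norm(x)) < 1e-9
```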

    Deciding Conditional Termination

    We address the problem of conditional termination, which is that of defining the set of initial configurations from which a given program always terminates. First we define the dual set, of initial configurations from which a non-terminating execution exists, as the greatest fixpoint of the function that maps a set of states into its pre-image with respect to the transition relation. This definition allows us to compute the weakest non-termination precondition if at least one of the following holds: (i) the transition relation is deterministic, (ii) the descending Kleene sequence overapproximating the greatest fixpoint converges in finitely many steps, or (iii) the transition relation is well founded. We show that this is the case for two classes of relations, namely octagonal and finite monoid affine relations. Moreover, since the closed forms of these relations can be defined in Presburger arithmetic, we obtain the decidability of the termination problem for such loops.
    Comment: 61 pages, 6 figures, 2 tables.
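
    On a finite transition relation the descending Kleene sequence always converges, so the construction can be sketched directly (illustrative Python with made-up states; the paper's contribution is handling infinite-state octagonal and affine relations):

```python
def pre(X, R):
    """Pre-image of X under the transition relation R (a set of pairs)."""
    return {s for (s, t) in R if t in X}

def nonterminating_states(states, R):
    """Greatest fixpoint of X -> pre(X), computed as the descending
    Kleene sequence states ⊇ pre(states) ⊇ pre(pre(states)) ⊇ ..."""
    X = set(states)
    while True:
        Y = pre(X, R)
        if Y == X:
            return X                 # weakest non-termination precondition
        X = Y

# 0 -> 1 -> 2 (halt), with a self-loop at state 1:
R = {(0, 1), (1, 2), (1, 1)}
print(nonterminating_states({0, 1, 2}, R))   # {0, 1}: can loop forever at 1
```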

    From the arrow of time in Badiali's quantum approach to the dynamic meaning of Riemann's hypothesis

    The novelty of Jean-Pierre Badiali's last scientific works stems from a quantum approach based on both (i) a return to the notion of trajectories (Feynman paths) and (ii) the irreversibility of quantum transitions. These iconoclastic choices recover the Hilbertian and von Neumann algebraic points of view by dealing with statistics over loops. This approach confers an external thermodynamic origin on the notion of a quantum unit of time (the Rovelli-Connes thermal time). This notion, the basis for quantization, appears here as a mere criterion separating the quantum regime from the thermodynamic regime. The purpose of this note is to unfold the content of the last five years of scientific exchanges aiming to link, in a coherent scheme, Jean-Pierre's choices and works with the works of the authors of this note, based on hyperbolic geodesics and the associated role of Riemann zeta functions. While these options unveil no contradictions, they nevertheless give rise to an intrinsic arrow of time different from the thermal time. The question of the physical meaning of the Riemann hypothesis as a basis of quantum mechanics, which was at the heart of our last exchanges, is the backbone of this note.
    Comment: 13 pages, 2 figures.

    Development of symbolic algorithms for certain algebraic processes

    This study investigates the problem of computing the exact greatest common divisor (GCD) of two polynomials relative to an orthogonal basis, defined over the rational number field. The main objective of the study is to design and implement an effective and efficient symbolic algorithm for the general class of dense polynomials, given the rational number defining terms of their basis. From a general algorithm using the comrade matrix approach, nonmodular and modular techniques are prescribed. If the coefficients of the generalized polynomials are multiprecision integers, multiprecision arithmetic is required in the construction of the comrade matrix and the corresponding systems coefficient matrix. In addition, the application of the nonmodular elimination technique to this coefficient matrix makes extensive use of multiprecision rational number operations. The modular technique is employed to minimize the complexity involved in such computations. A divisor test algorithm that enables the detection of an unlucky reduction is a crucial device for an effective implementation of the modular technique. With the bound of the true solution not known a priori, the test is devised and carefully incorporated into the modular algorithm. The results show that the modular algorithm performs best for the class of relatively prime polynomials. The empirical computing time results show that the modular algorithm is markedly superior to the nonmodular algorithms in the case of sufficiently dense Legendre basis polynomials with a small GCD solution. In the case of dense Legendre basis polynomials with a big GCD solution, the modular algorithm is significantly superior to the nonmodular algorithms for higher-degree polynomials. For more definitive conclusions, the computing time functions of the algorithms presented in this report have been worked out, and further investigations have been suggested.
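
    As a small point of reference for the problem statement (not the comrade-matrix or modular algorithm developed in the study), one can build Legendre-basis polynomials with rational coefficients in SymPy and compare against its exact GCD over the rationals; the polynomials below are our own toy example.

```python
import sympy as sp

x = sp.symbols('x')

def from_legendre(coeffs):
    """Polynomial sum_k coeffs[k] * P_k(x), expanded into the monomial basis."""
    return sp.expand(sum(c * sp.legendre(k, x) for k, c in enumerate(coeffs)))

# Two polynomials sharing the factor (1/2)*P_2(x) + 1 = (3/4)*(x**2 + 1):
common = sp.Rational(1, 2) * sp.legendre(2, x) + 1
a = sp.expand(common * from_legendre([0, 1]))       # common * P_1
b = sp.expand(common * from_legendre([1, 0, 1]))    # common * (P_0 + P_2)
print(sp.gcd(a, b, x))              # x**2 + 1: the GCD, up to a rational unit
```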