76 research outputs found

    An Axiomatic Setup for Algorithmic Homological Algebra and an Alternative Approach to Localization

    Full text link
    In this paper we develop an axiomatic setup for algorithmic homological algebra of Abelian categories. This is done by exhibiting all existential quantifiers entering the definition of an Abelian category, which for the sake of computability need to be turned into constructive ones. We do this explicitly for the often-studied example of the Abelian category of finitely presented modules over a so-called computable ring $R$, i.e., a ring with an explicit algorithm to solve one-sided (in)homogeneous linear systems over $R$. For a finitely generated maximal ideal $\mathfrak{m}$ in a commutative ring $R$ we show how solving (in)homogeneous linear systems over $R_{\mathfrak{m}}$ can be reduced to solving associated systems over $R$. Hence, the computability of $R$ implies that of $R_{\mathfrak{m}}$. As a corollary we obtain the computability of the category of finitely presented $R_{\mathfrak{m}}$-modules as an Abelian category, without the need for a Mora-like algorithm. The reduction also yields, as a by-product, a complexity estimation for the ideal membership problem over local polynomial rings. Finally, in the case of localized polynomial rings we demonstrate the computational advantage of our homologically motivated alternative approach in comparison to an existing implementation of Mora's algorithm. Comment: Fixed a typo in the proof of Lemma 4.3 spotted by Sebastian Posu
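    As a toy illustration of the kind of computation such a setup makes algorithmic, the following sketch (not taken from the paper, and over a global rather than a localized polynomial ring) tests ideal membership with a Gröbner basis in SymPy; the generators and the candidate polynomial are made up for the example.

```python
# A minimal sketch (not the paper's algorithm): testing ideal membership over
# Q[x, y] with a Groebner basis in SymPy. The paper's point is that linear
# systems over the localization R_m can be reduced to systems over R itself;
# here we only illustrate the plain global membership test that such
# computations ultimately rest on. Generators and the candidate are made up.
from sympy import symbols, groebner

x, y = symbols("x y")
generators = [x**2 + y**2 - 1, x*y - 1]        # hypothetical ideal generators
f = x**3 + x*y**2 - x                          # equals x*(x**2 + y**2 - 1)

G = groebner(generators, x, y, order="lex")
_, remainder = G.reduce(f)                     # divide f by the Groebner basis
print("f lies in the ideal:", remainder == 0)  # True iff the remainder vanishes
```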

    High-order balanced multiwavelets: theory, factorization, and design

    Get PDF
    This correspondence deals with multiwavelets, a recent generalization of wavelets arising in the context of time-varying filter banks, and with their applications to signal processing, especially compression. By their inherent structure, multiwavelets are well suited to processing multichannel signals, which is the main issue we are interested in here. The outline of the correspondence is as follows. First, we will review material on multiwavelets and their links with multifilter banks and, especially, time-varying filter banks. Then, we will take a close look at the problems encountered when using multiwavelets in applications, and we will propose new solutions for the design of multiwavelet filter banks by introducing the so-called balanced multiwavelets.
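    For orientation only, here is a scalar (single-wavelet) analogue of the filter banks involved: one level of a two-channel Haar analysis/synthesis pair with perfect reconstruction in NumPy. Balanced multiwavelets replace the scalar filters below with matrix-valued ones acting on multichannel signals; the test signal is arbitrary.

```python
# A scalar analogue, not the paper's multiwavelet design: one level of a
# two-channel Haar filter bank with perfect reconstruction. The test signal
# is arbitrary.
import numpy as np

def haar_analysis(x):
    """Split an even-length signal into lowpass and highpass channels."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def haar_synthesis(low, high):
    """Recombine the two channels into the original signal."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
low, high = haar_analysis(signal)
print(np.allclose(haar_synthesis(low, high), signal))  # True: perfect reconstruction
```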

    Linear Algebra for Computing Gröbner Bases of Linear Recursive Multidimensional Sequences

    Get PDF
    Sakata generalized the Berlekamp–Massey algorithm to n dimensions in 1988. The Berlekamp–Massey–Sakata (BMS) algorithm can be used for finding a Gröbner basis of a 0-dimensional ideal of relations verified by a table. We investigate this problem using linear algebra techniques, with motivations such as accelerating change of basis algorithms (FGLM) or improving their complexity. We first define and characterize multidimensional linear recursive sequences for 0-dimensional ideals. Under genericity assumptions, we propose a randomized preprocessing of the table that corresponds to performing a linear change of coordinates on the polynomials associated with the linear recurrences. This technique then essentially reduces our problem to using the efficient 1-dimensional Berlekamp–Massey (BM) algorithm. However, the number of probes to the table in this scheme may be large. We thus consider the table in the black-box model: we assume probing the table is expensive, and we minimize the number of probes to the table in our complexity model. We produce an FGLM-like algorithm for finding the relations in the table, which lets us use linear algebra techniques. Under some additional assumptions, we make this algorithm adaptive and further reduce the number of table probes. This number can be estimated by counting the number of distinct elements in a multi-Hankel matrix (a multivariate generalization of Hankel matrices); we can relate this quantity to the geometry of the final staircase. Hence, in favorable cases such as convex ones, the complexity is essentially linear in the size of the output. Finally, when using the lex ordering, we can make use of fast structured linear algebra, similarly to the Hankel interpretation of Berlekamp–Massey.
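    The 1-dimensional building block the abstract refers to is the classical Berlekamp–Massey algorithm. As a reference point only (this is the textbook scalar algorithm, not the FGLM-like or adaptive variants of the paper), the sketch below recovers the minimal linear recurrence of a sequence over the rationals.

```python
# Textbook scalar Berlekamp-Massey over the rationals; shown only as the
# 1-dimensional building block, not the multidimensional algorithms of the paper.
from fractions import Fraction

def berlekamp_massey(seq):
    """Return the minimal connection polynomial c (with c[0] = 1) such that
    sum(c[j] * seq[n - j] for j in range(len(c))) == 0 for len(c) - 1 <= n < len(seq)."""
    s = [Fraction(v) for v in seq]
    c, b = [Fraction(1)], [Fraction(1)]   # current and previous connection polynomials
    L, m, delta_b = 0, 1, Fraction(1)
    for n in range(len(s)):
        # discrepancy between the next term and its prediction
        delta = s[n] + sum(c[j] * s[n - j] for j in range(1, L + 1))
        if delta == 0:
            m += 1
            continue
        update_L = 2 * L <= n
        old_c = c[:]
        coef = delta / delta_b
        c = c + [Fraction(0)] * (len(b) + m - len(c))
        for j, bj in enumerate(b):
            c[j + m] -= coef * bj
        if update_L:
            L, b, delta_b, m = n + 1 - L, old_c, delta, 1
        else:
            m += 1
    return c

# Fibonacci satisfies s[n] - s[n-1] - s[n-2] = 0, so we expect [1, -1, -1].
print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21]))
```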

    Joint shape and motion estimation from echo-based sensor data

    Get PDF
    Given a set of time-series data collected from echo-based ranging sensors, we study the problem of jointly estimating the shape and motion of the target under observation when the sensor positions are also unknown. Using an approach first described by Stuff et al., we model the target as a point configuration in Euclidean space and estimate geometric invariants of the configuration. The geometric invariants allow us to estimate the target shape, from which we can estimate the motion of the target relative to the sensor position. This work will unify the various geometric-invariant-based shape and motion estimation literature under a common framework, and extend that framework to include results for passive, bistatic sensor systems.
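    A minimal sketch of the underlying invariance idea (not the estimator developed in this work): the pairwise distance matrix of a point configuration is unchanged by rotations and translations, so it can serve as a motion-independent description of shape. The configuration, rotation angle, and translation below are invented for illustration.

```python
# Illustration only: pairwise distances are invariant under rigid motion,
# which is the basic fact behind geometric-invariant shape estimation.
import numpy as np

def pairwise_distances(points):
    """Euclidean distance matrix of an (n, d) array of points."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

target = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])    # hypothetical shape
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = target @ R.T + np.array([5.0, -3.0])               # rotate and translate

print(np.allclose(pairwise_distances(target), pairwise_distances(moved)))  # True
```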

    Computational Methods for Computer Vision : Minimal Solvers and Convex Relaxations

    Get PDF
    Robust fitting of geometric models is a core problem in computer vision. The most common approach is to use a hypothesize-and-test framework, such as RANSAC. In these frameworks the model is estimated from as few measurements as possible, which minimizes the risk of selecting corrupted measurements. These estimation problems are called minimal problems, and they can often be formulated as systems of polynomial equations. In this thesis we present new methods for building so-called minimal solvers or polynomial solvers, which are specialized code for solving such systems. On several minimal problems we improve on the state of the art with respect to both numerical stability and execution time.

    In many computer vision problems low-rank matrices naturally occur. The rank can serve as a measure of model complexity, and typically a low rank is desired. Optimization problems containing rank penalties or constraints are in general difficult. Recently, convex relaxations, such as the nuclear norm, have been used to make these problems tractable. In this thesis we present new convex relaxations for rank-based optimization which avoid drawbacks of previous approaches and provide tighter relaxations. We evaluate our methods on a number of real and synthetic datasets and show state-of-the-art results.
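    As a small, hedged illustration of the nuclear-norm relaxation mentioned above (not the tighter relaxations proposed in the thesis), the following NumPy snippet applies singular value thresholding, the proximal operator of the nuclear norm, to a noisy low-rank matrix; the sizes, rank, and threshold are arbitrary.

```python
# Singular value thresholding: the proximal operator of the nuclear norm,
# a basic building block of convex low-rank methods. Sizes and threshold
# are arbitrary; this is not the relaxation developed in the thesis.
import numpy as np

def svt(M, tau):
    """Solve argmin_X 0.5*||X - M||_F^2 + tau*||X||_* by shrinking singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))  # rank 3
noisy = low_rank + 0.1 * rng.standard_normal((20, 15))

denoised = svt(noisy, tau=1.0)
print(np.linalg.matrix_rank(denoised, tol=1e-6))  # typically close to 3
```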

    Localization using Distance Geometry : Minimal Solvers and Robust Methods for Sensor Network Self-Calibration

    Get PDF
    In this thesis, we focus on the problem of estimating receiver and sender node positions given some form of distance measurements between them. This kind of localization problem has several applications, e.g., global and indoor positioning, sensor network calibration, molecular conformations, data visualization, graph embedding, and robot kinematics. More concretely, this thesis makes contributions in three different areas.

    First, we present a method for simultaneously registering and merging maps. The merging problem occurs when multiple maps of an area have been constructed and need to be combined into a single representation. If there are no absolute references and the maps are in different coordinate systems, they also need to be registered.

    In the second part, we construct robust methods for sensor network self-calibration using both Time of Arrival (TOA) and Time Difference of Arrival (TDOA) measurements. One of the difficulties is that corrupt measurements, so-called outliers, are present and should be excluded from the model fitting. To achieve this, we use hypothesize-and-test frameworks together with minimal solvers, resulting in methods that are robust to noise, outliers, and missing data. Several new minimal solvers are introduced to accommodate a range of receiver and sender configurations in 2D and 3D space. These solvers are formulated as polynomial equation systems which are solved using methods from algebraic geometry.

    In the third part, we focus specifically on the problems of trilateration and multilateration, and we present a method that approximates the Maximum Likelihood (ML) estimator for different noise distributions. The proposed approach reduces to an eigendecomposition problem for which there are good solvers. This results in a method that is faster and more numerically stable than the state of the art, while still being easy to implement. Furthermore, we present a robust trilateration method that incorporates a motion model. This enables the removal of outliers in the distance measurements while drift in the motion model is simultaneously canceled.
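    For context, here is a bare-bones trilateration baseline (not the ML-approximating eigendecomposition method of the thesis): with known sender positions and TOA ranges, subtracting one squared-range equation from the others yields a linear least-squares problem for the receiver position. The sender layout and receiver position are made up.

```python
# Linearized trilateration baseline from squared-range differences.
# Illustration only; sender positions and the true receiver are invented.
import numpy as np

def trilaterate(senders, dists):
    """Linear least-squares receiver estimate from ||x - a_i||^2 = d_i^2."""
    a0, d0 = senders[0], dists[0]
    A = 2.0 * (senders[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(senders[1:]**2, axis=1) - np.sum(a0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

senders = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
receiver = np.array([3.0, 7.0])
dists = np.linalg.norm(senders - receiver, axis=1)  # noise-free TOA ranges

print(trilaterate(senders, dists))  # ~ [3. 7.]
```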

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    Get PDF
    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons and also further approaches, such as data-parallel arithmetic and actors. We have implemented skeletons for divide-and-conquer algorithms and some special parallel loops that we call ‘repeated computation with a possibility of premature termination’. We introduce in this thesis a rational data-parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages (one for the implementation and one for the interface) for our implementation of computer algebra algorithms.

    Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the ‘parallel penalty’. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods.

    We developed divide-and-conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm, and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Especially for our implementation of Strassen's algorithm we have designed and implemented a divide-and-conquer skeleton based on actors. We have implemented the parallel fast Fourier transform, and not only did we use new divide-and-conquer skeletons, but we also developed a map-and-transpose skeleton. It enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide-and-conquer programs.

    This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, like parMap, farm, and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test. We parallelised both with our approach. We analysed the task distribution and stated the fitting configurations of the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.

    The parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are our original contributions. The data-parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handled the common factors of the numerator or denominator of the fraction with the modulus in a novel manner. This is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gauß elimination. As always, we have performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data-parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
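    To make the divide-and-conquer structure concrete, here is a plain sequential Karatsuba recursion in Python (the thesis implements it with skeletons in Eden, not Python); the three recursive products are independent, which is exactly what a divide-and-conquer skeleton exploits.

```python
# Sequential sketch of the Karatsuba divide-and-conquer recursion; the three
# half-size subproblems are independent and hence parallelisable by a skeleton.
def karatsuba(x, y):
    """Multiply non-negative integers via three half-size recursive products."""
    if x < 10 or y < 10:
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    base = 1 << half
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    hi = karatsuba(x_hi, y_hi)
    lo = karatsuba(x_lo, y_lo)
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo
    return hi * base * base + mid * base + lo

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)  # True
```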

    Essays on strategic trading

    Get PDF
    This dissertation discusses various aspects of strategic trading using both analytical modeling and numerical methods. Strategic trading, in short, encompasses models of trading, most notably models of optimal execution and portfolio selection, in which one seeks to rigorously consider the various costs, both explicit and implicit, stemming from the act of trading itself. The strategic trading approach, rooted in the market microstructure literature, contrasts with many classical finance models in which markets are assumed to be frictionless and traders can, for the most part, take prices as given. Introducing trading costs to dynamic models of financial markets tends to complicate matters. First, the objectives of the traders become more nuanced, since overtrading now leads to poor outcomes due to increased trading costs. Second, when trades affect prices and there are multiple traders in the market, the traders start to behave in a more calculated fashion, taking into account both their own objectives and the perceived actions of others. Acknowledging this strategic behavior is especially important when the traders are asymmetrically informed. These new features allow the models discussed to better reflect aspects of real-world trading, such as intraday trading patterns, and enable one to ask and answer new questions, for instance about the interactions between different traders. To efficiently analyze the models put forth, numerical methods must be utilized. This is, as is to be expected, the price one must pay for the added complexity. However, it also opens an opportunity to have a closer look at the numerical approaches themselves. This opportunity is capitalized on, and various novel computational procedures influenced by the growing field of numerical real algebraic geometry are introduced and employed. These procedures are utilizable beyond the scope of this dissertation and enable one to sharpen the analysis of dynamic equilibrium models.

    This dissertation discusses strategic trading using both analytical and numerical methods. Models of strategic trading, in particular optimal trade execution and portfolio selection, aim to account precisely for the explicit and implicit costs arising from trading itself. This distinguishes strategic trading models from classical frictionless models. Accounting for costs in the dynamic analysis of financial markets makes the models more complex. First, traders' objectives become more subtle, because overly active trading leads to high trading costs and poor returns. Second, the assumption that traders' chosen actions affect prices leads to game-theoretic behavior when there are several traders in the market. Accounting for this game behavior is of primary importance if information is asymmetric among the traders. Owing to these features, the models discussed in this dissertation allow a more precise examination of abstracted financial markets, for example with respect to intraday trading. In addition, the models can be used to answer new questions, such as what the mutual interactions of traders look like in dynamic markets. Numerical methods are employed to analyze the complex models. This opens up the possibility of examining these methods in more detail, an opportunity that is exploited by considering computational solutions from a fresh perspective that draws on numerical real algebraic geometry. The new computational solutions presented in the dissertation are widely applicable, and they make it possible to sharpen the analysis of dynamic equilibrium models.

    Tests certifiés pour la stabilité structurelle de systèmes multidimensionnels

    Get PDF
    In this paper, we present new computer-algebra-based methods for testing the structural stability of n-D discrete linear systems (with n >= 2). More precisely, we show that the standard characterization of the structural stability of a multivariate rational transfer function (namely, the denominator of the transfer function does not have solutions in the unit polydisc of $\mathbb{C}^n$) is equivalent to the fact that a certain system of polynomials does not have real solutions. We then use state-of-the-art computer algebra algorithms to check this last condition, and thus the structural stability of multidimensional systems.

    We present in this paper new methods, based on computer algebra techniques, for testing the structural stability of n-D linear discrete systems (with n >= 2). More precisely, we first show that the classical structural stability condition for a multivariate rational transfer function (namely, that its denominator has no zeros inside the unit polydisc of $\mathbb{C}^n$) is equivalent to the fact that certain systems of polynomial equations, obtained via certain transformations, have no real zeros. We then use algorithms for solving algebraic systems to check this last condition, and thus the structural stability of multidimensional systems.
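    As a rough, non-certified contrast to the certified tests described above, the following NumPy sketch merely samples the closed unit polydisc of $\mathbb{C}^2$ and checks that an example denominator stays bounded away from zero there; the polynomial and the grid resolution are arbitrary.

```python
# Naive numerical sanity check (not a certified test): sample the closed unit
# polydisc of C^2 on a grid and take the minimum modulus of an example
# denominator. The polynomial below is illustrative only.
import numpy as np

def min_abs_on_polydisc(p, samples=24):
    """Minimum of |p(z1, z2)| over a grid of the closed unit polydisc of C^2."""
    r = np.linspace(0.0, 1.0, samples)
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    radius, angle = np.meshgrid(r, t)
    disc = (radius * np.exp(1j * angle)).ravel()   # grid of the closed unit disc
    z1, z2 = np.meshgrid(disc, disc)
    return np.abs(p(z1, z2)).min()

p = lambda z1, z2: 4.0 + z1 * z2 + 0.5 * z1 + 0.5 * z2  # |p| >= 2 on the polydisc
print(min_abs_on_polydisc(p))  # stays well above zero on the sampled grid
```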

    Software Engineering and Petri Nets

    Get PDF
    This booklet contains the proceedings of the Workshop on Software Engineering and Petri Nets (SEPN), held on June 26, 2000. The workshop was held in conjunction with the 21st International Conference on Application and Theory of Petri Nets (ICATPN-2000), organised by the CPN group of the Department of Computer Science, University of Aarhus, Denmark. The SEPN workshop papers are available in electronic form via the web page: http://www.daimi.au.dk/pn2000/proceeding