
    Representations and evaluation strategies for feasibly approximable functions

    A famous result due to Ko and Friedman (Theoretical Computer Science 20 (1982) 323–352) asserts that the problems of integration and maximisation of a univariate real function are computationally hard in a well-defined sense. Yet, both functionals are routinely computed at great speed in practice. We aim to resolve this apparent paradox by studying classes of functions which can be feasibly integrated and maximised, together with representations for these classes of functions which encode the information necessary to uniformly compute the integral and the maximum in polynomial time. The theoretical framework for this is the second-order complexity theory for operators in analysis which was introduced by Kawamura and Cook (ACM Transactions on Computation Theory 4(2) (2012) 5). The representations we study are based on approximation by polynomials, piecewise polynomials, and rational functions. We compare these representations with respect to polytime reducibility. We show that the representation based on approximation by piecewise polynomials is polytime equivalent to the representation based on approximation by rational functions. With this representation, all terms in a certain language, which is expressive enough to contain the maximum and integral of most functions of practical interest, can be evaluated in polynomial time. By contrast, both the representation based on polynomial approximation and the standard representation based on function evaluation, which implicitly underlies the Ko-Friedman result, require exponential time to evaluate certain terms in this language. We confirm our theoretical results by an implementation in Haskell, which provides some evidence that second-order polynomial-time computability is as closely tied to practical feasibility as its first-order counterpart.
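
    As a rough illustration of the representation-by-approximation idea (not the representations or complexity bounds of the paper), the following Python sketch stores a function as a Chebyshev polynomial approximation on [-1, 1], from which an approximate integral and maximum can be read off by coefficient manipulation rather than by repeated function evaluation. The function names and the example function are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_approx(f, degree):
    """Fit a degree-`degree` Chebyshev approximation to f on [-1, 1]."""
    xs = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))  # Chebyshev nodes
    return C.chebfit(xs, f(xs), degree)

def integral(coeffs):
    """Integral of the approximation over [-1, 1], via the antiderivative."""
    antideriv = C.chebint(coeffs)
    return C.chebval(1.0, antideriv) - C.chebval(-1.0, antideriv)

def maximum(coeffs):
    """Maximum of the approximation over [-1, 1], via critical points."""
    crit = C.chebroots(C.chebder(coeffs))
    crit = crit[np.isreal(crit)].real
    candidates = np.concatenate(([-1.0, 1.0], crit[(crit >= -1.0) & (crit <= 1.0)]))
    return float(np.max(C.chebval(candidates, coeffs)))

f = lambda x: np.exp(np.sin(3.0 * x))   # an illustrative analytic function
c = cheb_approx(f, 40)
print(integral(c), maximum(c))
```

    This only illustrates why approximation data, once available, makes integration and maximisation cheap; the paper's contribution concerns which representations allow such data to be computed and composed in polynomial time.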

    A Structural Approach to the Design of Domain Specific Neural Network Architectures

    This is a master's thesis concerning the theoretical ideas of geometric deep learning. Geometric deep learning aims to provide a structured characterization of neural network architectures, specifically focused on the ideas of invariance and equivariance of data with respect to given transformations. This thesis aims to provide a theoretical evaluation of geometric deep learning, compiling theoretical results that characterize the properties of invariant neural networks with respect to learning performance. Comment: 94 pages, 16 figures. Upload of my master's thesis; not peer reviewed and may contain errors.
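
    As a concrete toy example of the invariance property studied in the thesis (not taken from the thesis itself), the following sketch builds a permutation-invariant, Deep-Sets-style network: because the elements of the input set are pooled by summation, reordering them cannot change the output. All names, shapes, and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.standard_normal((5, 8))    # per-element feature map (phi)
W_rho = rng.standard_normal((8, 3))    # readout applied after pooling (rho)

def invariant_net(X):
    """X: (n_elements, 5) set of feature vectors. Sum pooling makes the
    output independent of the ordering of the rows of X."""
    h = np.tanh(X @ W_phi)             # applied to each element independently
    pooled = h.sum(axis=0)             # permutation-invariant aggregation
    return np.tanh(pooled @ W_rho)

X = rng.standard_normal((7, 5))
perm = rng.permutation(7)
print(np.allclose(invariant_net(X), invariant_net(X[perm])))   # True
```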

    Fault-tolerant feature-based estimation of space debris motion and inertial properties

    The growing needs of modern society and the accompanying development of space technologies have led to extensive use of low Earth orbits for placing artificial satellites. The current overpopulation of these orbits has also increased the interest of the major space agencies in technologies for removing at least the largest spacecraft that have reached the end of their operational life or have failed their mission. One of the key functionalities required in a mission for removing a non-cooperative spacecraft is the assessment of its kinematics and inertial properties. In a few cases, this information can be approximated by ground observations. However, a re-assessment after the rendezvous phase is of critical importance for refining the capture strategies and preventing accidents. The CADET program (CApture and DE-orbiting Technologies), funded by Regione Piemonte and led by Aviospace s.r.l., involved Politecnico di Torino in the search for solutions to this issue. This dissertation proposes methods and algorithms for estimating the location of the center of mass, the angular rate, and the moments of inertia of a passive object. These methods require that the chaser spacecraft be capable of tracking several features of the target through passive vision sensors. Because of the harsh lighting conditions of the space environment, feature-based methods must tolerate temporary failures in detecting features. The principal works on this topic do not consider this important aspect, which is therefore a characteristic trait of the proposed methods. Compared to typical treatments of the estimation problem, the proposed techniques do not depend solely on state observers: methods for recovering missing information, such as compressive sampling techniques, are used to preprocess the input data and support the efficient use of state observers. Simulation results showed accuracy comparable to that of the best-known methods proposed in the literature. The developed algorithms were also tested in CADETLab, the laboratory set up by Aviospace s.r.l. The results of the experimental tests suggest that such algorithms are practically applicable for supporting a real active removal mission.
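
    A minimal sketch of one ingredient of such feature-based estimation, not the dissertation's algorithms: given feature positions tracked in two frames, a rigid rotation can be fitted by the standard Kabsch/SVD procedure while simply discarding features whose detection failed in either frame, and an angular rate follows from the fitted rotation angle divided by the frame interval. All names and conventions are illustrative.

```python
import numpy as np

def estimate_rotation(P0, P1, valid):
    """P0, P1: (n, 3) feature positions in two frames; valid: boolean mask of
    features detected in both frames. Returns the best-fit rotation matrix R
    mapping the centred points of frame 0 onto those of frame 1."""
    A = P0[valid] - P0[valid].mean(axis=0)   # centring removes translation and
    B = P1[valid] - P1[valid].mean(axis=0)   # the centre-of-rotation offset
    U, _, Vt = np.linalg.svd(A.T @ B)        # Kabsch fit via SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def angular_rate(R, dt):
    """Rotation angle of R divided by the frame interval dt."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return angle / dt

# Illustrative use: features rotated by 0.1 rad about z, with one lost detection.
rng = np.random.default_rng(1)
P0 = rng.standard_normal((6, 3))
w = 0.1
Rz = np.array([[np.cos(w), -np.sin(w), 0.0],
               [np.sin(w),  np.cos(w), 0.0],
               [0.0,        0.0,       1.0]])
P1 = P0 @ Rz.T
valid = np.array([True, True, True, True, True, False])   # one detection failure
print(angular_rate(estimate_rotation(P0, P1, valid), dt=1.0))   # ~0.1
```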

    Quantum Algorithms for Scientific Computing and Approximate Optimization

    Quantum computation appears to offer significant advantages over classical computation, and this has generated tremendous interest in the field. In this thesis we study the application of quantum computers to computational problems in science and engineering, and to combinatorial optimization problems. We outline the results below.

    Algorithms for scientific computing require modules, i.e., building blocks, implementing elementary numerical functions that have well-controlled numerical error, are uniformly scalable and reversible, and that can be implemented efficiently. We derive quantum algorithms and circuits for computing square roots, logarithms, and arbitrary fractional powers, and derive worst-case error and cost bounds. We describe a modular approach to quantum algorithm design as a first step towards numerical standards and mathematical libraries for quantum scientific computing.

    A fundamental but computationally hard problem in physics is to solve the time-independent Schrödinger equation. This is accomplished by computing the eigenvalues of the corresponding Hamiltonian operator. The eigenvalues describe the different energy levels of a system. The cost of classical deterministic algorithms computing these eigenvalues grows exponentially with the number of system degrees of freedom. The number of degrees of freedom is typically proportional to the number of particles in a physical system. We show an efficient quantum algorithm for approximating a constant number of low-order eigenvalues of a Hamiltonian using a perturbation approach. We apply this algorithm to a special case of the Schrödinger equation and show that our algorithm succeeds with high probability, and has cost that scales polynomially with the number of degrees of freedom and the reciprocal of the desired accuracy. This improves and extends earlier results on quantum algorithms for estimating the ground state energy.

    We consider the simulation of quantum mechanical systems on a quantum computer. We show a novel divide and conquer approach for Hamiltonian simulation. Using the Hamiltonian structure, we can obtain faster simulation algorithms. Considering a sum of Hamiltonians, we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yields refined bounds on the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under mild assumptions.

    We turn to combinatorial optimization problems. An important open question is whether quantum computers provide advantages for the approximation of classically hard combinatorial problems. A promising recently proposed approach of Farhi et al. is the Quantum Approximate Optimization Algorithm (QAOA). We study the application of QAOA to the Maximum Cut problem, and derive analytic performance bounds for the lowest circuit-depth realization, for both general and special classes of graphs. Along the way, we develop a general procedure for analyzing the performance of QAOA for other problems, and show an example demonstrating the difficulty of obtaining similar results for greater depth. We show a generalization of QAOA and its application to wider classes of combinatorial optimization problems, in particular, problems with feasibility constraints.

    We introduce the Quantum Alternating Operator Ansatz, which utilizes more general unitary operators than the original QAOA proposal. Our framework facilitates low-resource implementations for many applications which may be particularly suitable for early quantum computers. We specify design criteria, and develop a set of results and tools for mapping diverse problems to explicit quantum circuits. We derive constructions for several important prototypical problems including Maximum Independent Set, Graph Coloring, and the Traveling Salesman problem, and show appealing resource cost estimates for their implementations.
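
    As an illustration of the QAOA setting described above (a plain statevector simulation, not the thesis's analytic bounds), the following sketch evaluates depth-1 QAOA for Max Cut on a small graph and grid-searches the two angles; the graph, grid, and angle range are illustrative.

```python
import numpy as np

def maxcut_qaoa_p1(edges, n, gamma, beta):
    """Expected cut value of the depth-1 QAOA state for Max Cut on n qubits,
    computed by a brute-force statevector simulation."""
    dim = 1 << n
    z = np.arange(dim)
    # Cut value of every computational-basis bitstring.
    cost = np.zeros(dim)
    for (u, v) in edges:
        cost += ((z >> u) & 1) ^ ((z >> v) & 1)
    # Start from |+>^n, apply the phase separator exp(-i * gamma * C).
    psi = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
    psi = psi * np.exp(-1j * gamma * cost)
    # Apply the transverse-field mixer exp(-i * beta * X_q) on each qubit.
    for q in range(n):
        flipped = psi[z ^ (1 << q)]
        psi = np.cos(beta) * psi - 1j * np.sin(beta) * flipped
    return float(np.real(np.vdot(psi, cost * psi)))

# Illustrative run: the 4-cycle, with a coarse grid search over both angles.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
grid = np.linspace(0.0, np.pi, 64)
best = max(maxcut_qaoa_p1(edges, 4, g, b) for g in grid for b in grid)
print(best)   # close to 3.0, i.e. 3/4 of the 4 edges at depth 1
```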

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    Computational Meta-Theory in Cognitive Science


    Can the basic principles of rationality be defended rationally?

    It is argued that if theories of rationality are to avoid logical difficulties (i.e., self-referential inconsistencies or logical paradoxes), then these theories require a minimal dogmatism, that is, some basic presuppositions that are unjustifiable without intuitions, vicious circularity, or infinite regress. Thus, it is argued that if rationality is characterized as the search for justifications, then a comprehensive rationality is in the end an impossible and self-contradictory task; while if rationality is characterized by the criticizability of any position, then a rationality that intends to be comprehensive leads to self-referential logical paradox. It is also argued that there are logical limits to the justifiability of the most basic scientific methodological rules and aims; that is, to justify our various scientific methods and aims we are required to assume, without a bona fide justification (i.e., dogmatically), some basic cognitive methods and goals. These conclusions are illustrated by exploring the theories of rationality of the Popperian tradition and the theories of scientific method of various naturalised scientific meta-methodologies, in particular by critically examining the alleged justification of our scientific methods provided by 'evolutionary epistemology' and Laudan's normative naturalism.

    Algorithmic Developments in Two-Stage Robust Scheduling

    This thesis considers the modelling and solving of a range of scheduling problems, with a particular focus on the use of robust optimisation for scheduling in two-stage decision-making contexts. One key contribution of this thesis is the development of a new compact robust counterpart for the resource-constrained project scheduling problem with uncertain activity durations. Resource conflicts must be resolved under the assumption of budgeted uncertainty, but start times can be determined once the activity durations become known. This formulation is also applied to the multi-mode version of the problem. In both cases, computational results show the clear dominance of the new formulation over the prior decomposition-based state-of-the-art methods. This thesis also demonstrates the first application of the recoverable robust framework to single machine scheduling. Two variants of this problem are considered, in which a first-stage schedule is constructed subject to uncertain job processing times but can be amended in some limited way once these processing times are realised. The first of these problems is considered under general polyhedral uncertainty. Key results concerning the second-stage subproblem are derived, resulting in three formulations of the full problem which are compared computationally. The second of these problems considers interval uncertainty but allows a more general recovery action. A 2-approximation is derived and the performance of a proposed greedy algorithm is examined in a series of computational experiments. In addition to these results on two-stage robust scheduling problems, a new deterministic resource-constrained project scheduling model is developed which, for the first time, combines both generalised precedence constraints and flexible resource allocation. This model is introduced specifically for the application of scheduling the decommissioning of the Sellafield nuclear site. A genetic algorithm is proposed to solve this model, and its performance is compared against a mixed-integer programming formulation.
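
    As a small illustration of the budgeted uncertainty set mentioned above (not one of the thesis's formulations), the following sketch evaluates the worst-case total completion time of a fixed single machine sequence when at most Gamma jobs may take their maximum deviation; all data are illustrative.

```python
def worst_case_total_completion(sequence, p, dev, gamma):
    """sequence: job order; p[j]: nominal processing time; dev[j]: maximum
    deviation; gamma: uncertainty budget. The job in position i (0-indexed)
    appears in n - i completion times, so for a fixed sequence the adversary
    inflates the gamma jobs with the largest positional weight * deviation."""
    n = len(sequence)
    weight = {j: n - i for i, j in enumerate(sequence)}   # multiplier of p[j]
    nominal = sum(weight[j] * p[j] for j in sequence)
    impacts = sorted((weight[j] * dev[j] for j in sequence), reverse=True)
    return nominal + sum(impacts[:gamma])

p   = {1: 4, 2: 2, 3: 6, 4: 3}     # nominal processing times
dev = {1: 2, 2: 5, 3: 1, 4: 4}     # maximum deviations
print(worst_case_total_completion([2, 4, 1, 3], p, dev, gamma=2))   # 31 + 32 = 63
```

    Because the objective is linear in the deviations for a fixed sequence, the worst case over the budgeted uncertainty set is attained by simply taking the gamma largest impacts; the hard part, which the thesis addresses, is optimising the first-stage decisions against this worst case.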