169 research outputs found

    Optimizing source and receiver placement in multistatic sonar networks to monitor fixed targets

    Get PDF
    17 USC 105 interim-entered record; under review. The article of record as published may be found at https://doi.org/10.1016/j.ejor.2018.02.006. Multistatic sonar networks consisting of non-collocated sources and receivers are a promising development in sonar systems, but they present distinct mathematical challenges compared to the monostatic case, in which each source is collocated with a receiver. This paper is the first to consider the optimal placement of both sources and receivers to monitor a given set of target locations. Prior publications have only considered the optimal placement of one type of sensor, given a fixed placement of the other type. We first develop two integer linear programs capable of optimally placing both sources and receivers within a discrete set of locations. Although these models can place both sources and receivers to any degree of optimality desired by the user, their computation times may be unacceptably long for some applications. To address this issue, we then develop a two-step heuristic, Adapt-LOC, that quickly selects positions for both sources and receivers, but with no guarantee of optimality. Based on this, we also create an iterative approach, Iter-LOC, which leads to a locally optimal placement of both sources and receivers, at the cost of longer computation times relative to Adapt-LOC. Finally, we perform computational experiments demonstrating that the newly developed algorithms constitute a powerful portfolio of tools, enabling the user to select an appropriate level of solution quality given the time available for computation. Our experiments include three real-world case studies. Funded by the Office of Naval Research.
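
    To make the coverage-placement idea above concrete, here is a minimal, hypothetical sketch of a set-cover-style integer linear program that places sources and receivers from a shared set of candidate sites so that every target is monitored by at least one source-receiver pair. The bistatic detection rule (product of the two ranges below a threshold), the data, and the use of PuLP are illustrative assumptions, not the formulation from the paper.

```python
# Hypothetical sketch only: minimize the number of sensors placed so that every
# target is covered by at least one (source, receiver) pair. The detection rule
# d(source, t) * d(receiver, t) <= rho^2 is an assumed stand-in for the sonar model.
import itertools
import math
import pulp

targets = [(2.0, 2.0), (6.0, 5.0), (9.0, 1.0)]
sites = [(x, y) for x in range(0, 11, 2) for y in range(0, 7, 3)]  # candidate positions
rho_sq = 30.0                                                      # "range of the day", squared

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

prob = pulp.LpProblem("multistatic_placement", pulp.LpMinimize)
s = [pulp.LpVariable(f"s_{i}", cat="Binary") for i in range(len(sites))]  # source at site i?
r = [pulp.LpVariable(f"r_{j}", cat="Binary") for j in range(len(sites))]  # receiver at site j?
z = {}                                                                    # pair (i, j) covers target t
for t, tgt in enumerate(targets):
    for i, j in itertools.product(range(len(sites)), repeat=2):
        if dist(sites[i], tgt) * dist(sites[j], tgt) <= rho_sq:
            z[i, j, t] = pulp.LpVariable(f"z_{i}_{j}_{t}", cat="Binary")
            prob += z[i, j, t] <= s[i]   # a pair is usable only if its source is placed
            prob += z[i, j, t] <= r[j]   # ... and its receiver is placed
for t in range(len(targets)):
    prob += pulp.lpSum(z[i, j, tt] for (i, j, tt) in z if tt == t) >= 1  # cover every target

prob += pulp.lpSum(s) + pulp.lpSum(r)    # objective: total number of sensors placed
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("sources:  ", [sites[i] for i in range(len(sites)) if s[i].value() > 0.5])
print("receivers:", [sites[j] for j in range(len(sites)) if r[j].value() > 0.5])
```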

    Robustness - a challenge also for the 21st century: A review of robustness phenomena in technical, biological and social systems as well as robust approaches in engineering, computer science, operations research and decision aiding

    Get PDF
    Notions of robustness exist in many facets. They come from different disciplines and reflect different worldviews, and consequently they often contradict each other, which makes the term less applicable in a general context. Robustness approaches are often limited to the specific problems for which they were developed. This means that notions and definitions may turn out to be wrong when moved to another domain of validity, i.e. context: a definition might be correct in one context but need not hold in another. Therefore, in order to speak of robustness we need to specify the domain of validity, i.e. the system, property, and uncertainty of interest. As proved by Ho et al. in an optimization context with finite and discrete domains, without prior knowledge about the problem there exists no solution whatsoever that is more robust than any other. As with the results of the No Free Lunch Theorems of Optimization (NFLTs), we have to exploit the problem structure in order to make a solution more robust. This optimization problem is directly linked to a robustness/fragility tradeoff that has been observed in many contexts, e.g. the 'robust, yet fragile' property of HOT (Highly Optimized Tolerance) systems. Another issue is that robustness is tightly bound to other phenomena, such as complexity, which themselves lack a clear definition or theoretical framework. Consequently, this review tries to find common aspects across many different approaches and phenomena rather than to build a general theorem for robustness, which might not exist anyway, because complex phenomena often need to be described from a pluralistic view to address as many aspects of a phenomenon as possible. First, many different robustness problems from many different disciplines are reviewed. Second, common aspects are discussed, in particular the relationship between functional and structural properties. This paper argues that robustness phenomena are also a challenge for the 21st century. Robustness is a useful quality of a model or system in terms of the 'maintenance of some desired system characteristics despite fluctuations in the behaviour of its component parts or its environment' (see [Carlson and Doyle, 2002], p. 2). We define robustness phenomena as solutions with balanced tradeoffs, and robust design principles and robustness measures as means to balance tradeoffs.

    Analyzing Prospects for Quantum Advantage in Topological Data Analysis

    Full text link
    Lloyd et al. were the first to demonstrate the promise of quantum algorithms for computing Betti numbers, a way to characterize topological features of data sets. Here, we propose, analyze, and optimize an improved quantum algorithm for topological data analysis (TDA) with reduced scaling, including a method for preparing Dicke states based on inequality testing, a more efficient amplitude estimation algorithm using Kaiser windows, and an optimal implementation of eigenvalue projectors based on Chebyshev polynomials. We compile our approach to a fault-tolerant gate set and estimate constant factors in the Toffoli complexity. Our analysis reveals that super-quadratic quantum speedups are only possible for this problem when targeting a multiplicative-error approximation and the Betti number grows asymptotically. Further, we propose a dequantization of the quantum TDA algorithm showing that exponentially large dimension and Betti number are necessary, but not sufficient, conditions for super-polynomial advantage. We then introduce and analyze specific problem examples with parameters in the regime where super-polynomial advantages may be achieved, and argue that quantum circuits with tens of billions of Toffoli gates can solve seemingly classically intractable instances. Comment: 54 pages, 7 figures. Added a number of theorems and lemmas to clarify the findings, as well as a discussion in the main text and a new appendix about variants of our problems with high Betti numbers that are challenging for recent classical algorithms.
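
    For readers unfamiliar with the quantity being estimated, the following is a tiny, purely classical illustration of Betti numbers computed from boundary-matrix ranks for a hollow triangle. It is unrelated to the paper's quantum algorithm and is included only to make the target quantity concrete.

```python
# Betti numbers of a hollow triangle (vertices {0,1,2}, edges {01,02,12}, no face),
# computed classically from boundary-matrix ranks. Illustration only; the quantum
# TDA algorithm above estimates Betti numbers of much larger complexes.
import numpy as np

# Boundary map from edges to vertices, columns ordered (01, 02, 12).
d1 = np.array([
    [-1, -1,  0],
    [ 1,  0, -1],
    [ 0,  1,  1],
])
rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0  # no 2-simplices, so the next boundary map is zero

n_vertices, n_edges = d1.shape
betti_0 = n_vertices - rank_d1               # number of connected components
betti_1 = (n_edges - rank_d1) - rank_d2      # number of independent 1-dimensional holes
print(betti_0, betti_1)                      # prints: 1 1
```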

    Matrix product states for critical spin chains: finite size scaling versus finite entanglement scaling

    Get PDF
    We investigate the use of matrix product states (MPS) to approximate ground states of critical quantum spin chains with periodic boundary conditions (PBC). We identify two regimes in the (N,D) parameter plane, where N is the size of the spin chain and D is the dimension of the MPS matrices. In the first regime, MPS can be used to perform finite size scaling (FSS). In the complementary regime, the MPS simulations show instead the clear signature of finite entanglement scaling (FES). In the thermodynamic limit (or large-N limit), only MPS in the FSS regime maintain a finite overlap with the exact ground state. This observation has implications for how to correctly perform FSS with MPS, as well as for the performance of recent MPS algorithms for systems with PBC. It also gives clear evidence that critical models can actually be simulated very well with MPS by using the right scaling relations; in the appendix, we give an alternative derivation of the result of Pollmann et al. [Phys. Rev. Lett. 102, 255701 (2009)] relating the bond dimension of the MPS to an effective correlation length. Comment: 18 pages, 13 figures
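
    For context, the finite-entanglement scaling relation of Pollmann et al. cited above relates the bond dimension D to an effective correlation length. Stated from memory (so worth checking against the original reference), it reads:

```latex
% Effective correlation length induced by a finite bond dimension D,
% for a critical chain with central charge c (Pollmann et al., PRL 102, 255701):
\xi_D \propto D^{\kappa}, \qquad \kappa = \frac{6}{c\left(\sqrt{12/c} + 1\right)}
```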

    Fault-tolerance in two-dimensional topological systems

    Get PDF
    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers, much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.
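
    As a toy illustration of the decoding step described above (an integer program that finds the most likely, i.e. lowest-weight, error pattern consistent with the observed syndrome), the following sketch decodes a 5-bit repetition code. The data and the use of PuLP are my own assumptions; the thesis applies the same principle to the far richer space-time syndrome structure of 4.8.8 color codes.

```python
# Toy syndrome decoding as an integer program: choose the minimum-weight error
# pattern consistent with the measured parity checks. A 5-bit repetition code
# stands in for the color-code space-time structure analyzed in the thesis.
import pulp

n = 5
checks = [(i, i + 1) for i in range(n - 1)]                  # adjacent-pair parity checks
true_error = [0, 0, 1, 0, 0]                                 # hidden from the decoder
syndrome = [(true_error[a] + true_error[b]) % 2 for a, b in checks]

prob = pulp.LpProblem("syndrome_decoding", pulp.LpMinimize)
e = [pulp.LpVariable(f"e_{i}", cat="Binary") for i in range(n)]                      # guessed error
slack = [pulp.LpVariable(f"k_{c}", cat="Integer", lowBound=0) for c in range(len(checks))]
for c, (a, b) in enumerate(checks):
    prob += e[a] + e[b] - 2 * slack[c] == syndrome[c]        # parity constraint, mod 2 via slack
prob += pulp.lpSum(e)                                        # objective: total error weight
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in e])                           # recovers [0, 0, 1, 0, 0]
```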

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Get PDF
    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Taming Big Data By Streaming

    Get PDF
    Data streams have emerged as a natural computational model for numerous applications of big data processing. In this model, algorithms are assumed to have access to a limited amount of memory and can only make a single pass (or a few passes) over the data, but need to produce sufficiently accurate answers for some objective functions on the dataset. This model captures various real-world applications and stimulates new scalable tools for solving important problems in the big data era. This dissertation focuses on the following two aspects of the streaming model. 1. Understanding the capability of the streaming model. For a vector aggregation stream, i.e., when the stream is a sequence of updates to an underlying $n$-dimensional vector $v$ (for very large $n$), we establish nearly tight space bounds on streaming algorithms approximating functions of the form $\sum_{i=1}^n g(v_i)$ for nearly all one-variable functions $g$, and $l(v)$ for all symmetric norms $l$. These results provide a deeper understanding of the streaming computation model. 2. Tighter upper bounds. We provide better streaming $k$-median clustering algorithms for dynamic point streams, i.e., streams of insertions and deletions of points in a discrete Euclidean space $[\Delta]^d$ (for sufficiently large $\Delta$ and $d$). Our algorithms use $k \cdot \mathrm{poly}(d \log \Delta)$ space and update time and maintain, with high probability, an approximate $k$-median solution to the streaming dataset. All previous algorithms for computing an approximation to the $k$-median problem over dynamic data streams required space and update time exponential in $d$.
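
    As a small, self-contained illustration of the update model described above (a stream of updates (i, delta) to an underlying $n$-dimensional vector $v$, processed with sublinear memory), here is a Count-Min sketch. It is a textbook structure chosen purely for familiarity, not one of the dissertation's algorithms, and its guarantee assumes the vector stays entrywise nonnegative (the strict turnstile model).

```python
# Count-Min sketch: summarizes a stream of updates (i, delta) to a huge vector v
# using a small table; estimate(i) overestimates v_i by at most eps * ||v||_1
# with high probability, provided v stays entrywise nonnegative. Illustration only.
import random

class CountMinSketch:
    def __init__(self, width=256, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        self.salts = [rng.getrandbits(64) for _ in range(depth)]  # one hash function per row

    def _bucket(self, row, i):
        return hash((self.salts[row], i)) % self.width

    def update(self, i, delta):          # process one stream element
        for row in range(self.depth):
            self.table[row][self._bucket(row, i)] += delta

    def estimate(self, i):               # point query for v_i
        return min(self.table[row][self._bucket(row, i)] for row in range(self.depth))

sketch = CountMinSketch()
for i, delta in [(7, 3), (42, 1), (7, 2), (42, -1)]:   # tiny example stream
    sketch.update(i, delta)
print(sketch.estimate(7), sketch.estimate(42))          # close to 5 and 0 (collisions unlikely at this width)
```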