
    Model Order Reduction of Non-Linear Magnetostatic Problems Based on POD and DEI Methods

    In the domain of numerical computation, Model Order Reduction approaches are increasingly applied in mechanics and have proven efficient in reducing computation time and memory storage requirements. One of these approaches, the Proper Orthogonal Decomposition (POD), can be very efficient for linear problems but encounters limitations in the non-linear case. In this paper, the Discrete Empirical Interpolation Method (DEIM) coupled with the POD method is presented. This is an interesting alternative for reducing the large-scale systems that derive from the discretization of non-linear magnetostatic problems coupled with an external electrical circuit.
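    The core of POD can be sketched in a few lines: collect snapshots of the full-order solution, take a thin SVD, and keep the leading left singular vectors as a reduced basis. This is a minimal illustration of the POD step only (it omits the DEIM treatment of the non-linearity; all names here are ours, not the paper's):

    ```python
    import numpy as np

    def pod_basis(snapshots, r):
        """Extract the first r POD modes from a snapshot matrix.

        snapshots: (n, k) array whose columns are full-order solutions.
        Returns an (n, r) orthonormal basis Phi; a reduced state q in R^r
        approximates the full state as x ~= Phi @ q.
        """
        # Thin SVD of the snapshot matrix; the left singular vectors are the modes.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]

    # Toy example: snapshots of a 100-dimensional system that actually
    # lives on a 3-dimensional subspace, so 3 modes capture it exactly.
    rng = np.random.default_rng(0)
    modes = rng.standard_normal((100, 3))
    coeffs = rng.standard_normal((3, 50))
    X = modes @ coeffs
    Phi = pod_basis(X, 3)
    # Projection error of the snapshots onto the reduced basis is ~0.
    err = np.linalg.norm(X - Phi @ (Phi.T @ X))
    ```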

    Efficient and long-lived quantum memory with cold atoms inside a ring cavity

    Quantum memories are regarded as one of the fundamental building blocks of linear-optical quantum computation and long-distance quantum communication. A long-standing goal towards scalable quantum information processing is to build a long-lived and efficient quantum memory. Significant effort has been devoted to this goal; however, only memories that are either efficient but short-lived or long-lived but inefficient have been demonstrated so far. Here we report a high-performance quantum memory in which long lifetime and high retrieval efficiency meet for the first time. By placing a ring cavity around an atomic ensemble, employing a pair of clock states, creating a long-wavelength spin wave, and arranging the setup in the gravitational direction, we realize a quantum memory with an intrinsic spin-wave-to-photon conversion efficiency of 73(2)% together with a storage lifetime of 3.2(1) ms. This realization provides an essential tool towards scalable linear-optical quantum information processing. Comment: 6 pages, 4 figures

    A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization

    We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex optimization problems in continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the time and memory complexity of the sampling to O(mn), where n is the number of decision variables. When n is large (e.g., n > 1000), even relatively small values of m (e.g., m = 20, 30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time. Comment: Genetic and Evolutionary Computation Conference (GECCO 2014)
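    The O(mn) sampling idea can be illustrated by keeping the Cholesky factor implicit as a product of rank-one transforms over the m stored direction vectors, so that no n-by-n matrix is ever materialized. This is a simplified sketch of the sampling step only, not the paper's exact reconstruction formula; all names are ours:

    ```python
    import numpy as np

    def sample_candidate(mean, sigma, dirs, coeffs, rng):
        """Draw one candidate solution in O(m*n) time.

        The Cholesky factor A is kept implicitly as a product of rank-one
        transforms A = (I + c_m v_m v_m^T) ... (I + c_1 v_1 v_1^T), so the
        product A @ z is applied vector-by-vector instead of building an
        n x n matrix (a simplification of the LM-CMA reconstruction).
        """
        z = rng.standard_normal(mean.shape[0])
        y = z
        for v, c in zip(dirs, coeffs):   # m passes, O(n) each
            y = y + c * v * (v @ y)      # apply one rank-one transform
        return mean + sigma * y

    # Usage: m = 3 stored directions in a 10-dimensional search space.
    rng = np.random.default_rng(0)
    n = 10
    dirs = [rng.standard_normal(n) for _ in range(3)]
    coeffs = [0.1, 0.2, 0.3]
    x = sample_candidate(np.zeros(n), 0.5, dirs, coeffs, rng)
    ```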

    Cache-Oblivious Peeling of Random Hypergraphs

    The computation of a peeling order in a randomly generated hypergraph is the most time-consuming step in a number of constructions, such as perfect hashing schemes, random r-SAT solvers, error-correcting codes, and approximate set encodings. While there exists a straightforward linear time algorithm, its poor I/O performance makes it impractical for hypergraphs whose size exceeds the available internal memory. We show how to reduce the computation of a peeling order to a small number of sequential scans and sorts, and analyze its I/O complexity in the cache-oblivious model. The resulting algorithm requires O(sort(n)) I/Os and O(n log n) time to peel a random hypergraph with n edges. We experimentally evaluate the performance of our implementation of this algorithm in a real-world scenario by using the construction of minimal perfect hash functions (MPHF) as our test case: our algorithm builds an MPHF of 7.6 billion keys in less than 21 hours on a single machine. The resulting data structure is both more space-efficient and faster than that obtained with the current state-of-the-art MPHF construction for large-scale key sets.
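    The straightforward in-memory peeling algorithm the paper starts from (repeatedly remove a vertex of degree 1 together with its incident hyperedge, until either all edges are peeled or a non-empty 2-core remains) can be sketched as follows; the paper's contribution is performing this computation I/O-efficiently, which this illustration does not attempt:

    ```python
    from collections import defaultdict, deque

    def peel(edges):
        """Return a peeling order of hyperedge indices, or None if the
        hypergraph has a non-empty 2-core.

        edges: list of vertex tuples. Repeatedly removes a vertex of
        degree 1 together with its unique incident edge.
        """
        incident = defaultdict(set)
        for i, e in enumerate(edges):
            for v in e:
                incident[v].add(i)
        alive = [True] * len(edges)
        queue = deque(v for v in incident if len(incident[v]) == 1)
        order = []
        while queue:
            v = queue.popleft()
            if len(incident[v]) != 1:
                continue           # degree changed since enqueueing
            (i,) = incident[v]
            alive[i] = False
            order.append(i)
            for u in edges[i]:     # remove the edge from all its vertices
                incident[u].discard(i)
                if len(incident[u]) == 1:
                    queue.append(u)
        return order if not any(alive) else None
    ```

    A hypergraph whose edges form a cycle-free structure peels completely, while a graph triangle (every vertex of degree 2) has no degree-1 vertex to start from and is its own 2-core.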

    Spatial mode storage in a gradient echo memory

    Three-level atomic gradient echo memory (lambda-GEM) is a proposed candidate for efficient quantum storage and for linear optical quantum computation with time-bin multiplexing. In this paper we investigate the spatial multimode properties of a lambda-GEM system. Using a high-speed triggered CCD, we demonstrate the storage of complex spatial modes and images. We also present an in-principle demonstration of spatial multiplexing by showing selective recall of spatial elements of a stored spin wave. Using our measurements, we consider the effect of diffusion within the atomic vapour and investigate its role in spatial decoherence. Our measurements allow us to quantify the spatial distortion due to both diffusion and inhomogeneous control field scattering and compare these to theoretical models. Comment: 11 pages, 9 figures

    Improved Deterministic Connectivity in Massively Parallel Computation

    A long line of research about connectivity in the Massively Parallel Computation model has culminated in the seminal works of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. They provide a randomized algorithm for low-space MPC with a conjecturally optimal round complexity of O(log D + log log_{m/n} n) and O(m) space, for graphs on n vertices with m edges and diameter D. Surprisingly, a recent result of Coy and Czumaj [STOC'22] shows how to achieve the same deterministically. Unfortunately, however, their algorithm suffers from large local computation time. We present a deterministic connectivity algorithm that matches all the parameters of the randomized algorithm and, in addition, significantly reduces the local computation time to nearly linear. Our derandomization method is based on reducing the amount of randomness needed to allow for a simpler efficient search. While similar randomness reduction approaches have been used before, our result is not only strikingly simpler, but it is the first to have efficient local computation. This is why we believe it can serve as a starting point for the systematic development of computation-efficient derandomization approaches in low-memory MPC.
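    As a rough illustration of the round-synchronous style of such algorithms, the following sketch computes connected components by minimum-label propagation, which stabilises after O(D) rounds for diameter D; the paper's algorithm is far more involved and achieves O(log D + log log_{m/n} n) rounds:

    ```python
    def min_label_propagation(n, edges):
        """Connected components via synchronous minimum-label propagation.

        Every vertex starts with its own id as label; in each round it
        adopts the minimum label among itself and its neighbours. After
        O(D) rounds, two vertices share a label iff they are connected.
        """
        label = list(range(n))
        changed = True
        while changed:
            changed = False
            new = label[:]
            for u, v in edges:      # one synchronous round
                if label[v] < new[u]:
                    new[u] = label[v]
                    changed = True
                if label[u] < new[v]:
                    new[v] = label[u]
                    changed = True
            label = new
        return label

    # Path 0-1-2 plus the separate edge 3-4: two components.
    labels = min_label_propagation(5, [(0, 1), (1, 2), (3, 4)])
    ```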

    Arya: Nearly linear-time zero-knowledge proofs for correct program execution

    There have been tremendous advances in reducing interaction, communication and verification time in zero-knowledge proofs, but it remains an important challenge to make the prover efficient. We construct the first zero-knowledge proof of knowledge for the correct execution of a program on public and private inputs where the prover computation is nearly linear time. This saves a polylogarithmic factor in asymptotic performance compared to current state-of-the-art proof systems. We use the TinyRAM model to capture general purpose processor computation. An instance consists of a TinyRAM program and public inputs. The witness consists of additional private inputs to the program. The prover can use our proof system to convince the verifier that the program terminates with the intended answer within given time and memory bounds. Our proof system has perfect completeness, statistical special honest-verifier zero-knowledge, and computational knowledge soundness assuming linear-time computable collision-resistant hash functions exist. The main advantage of our new proof system is asymptotically efficient prover computation. The prover's running time is only a superconstant factor larger than the program's running time in an apples-to-apples comparison where the prover uses the same TinyRAM model. Our proof system is also efficient on the other performance parameters; the verifier's running time and the communication are sublinear in the execution time of the program, and we only use a log-logarithmic number of rounds.

    Incremental Control Synthesis in Probabilistic Environments with Temporal Logic Constraints

    In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study in which we compare our approach to the classical non-incremental approach in terms of computation time and memory usage. Comment: Extended version of the CDC 2012 paper
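    Maximizing the probability of satisfying a co-safe LTL formula ultimately reduces to a reachability computation on a product model. As a self-contained illustration of that underlying step (not the paper's incremental procedure; all names are ours), the following sketch computes reachability probabilities in a Markov chain by value iteration:

    ```python
    import numpy as np

    def reach_prob(P, targets, iters=1000):
        """Probability of eventually reaching a target state.

        P: (n, n) row-stochastic transition matrix of a Markov chain.
        targets: set of accepting state indices. Iterates the fixed-point
        equation x = P @ x with x pinned to 1 on the target states.
        """
        n = P.shape[0]
        x = np.zeros(n)
        x[list(targets)] = 1.0
        for _ in range(iters):
            y = P @ x
            y[list(targets)] = 1.0   # targets stay satisfied once reached
            x = y
        return x

    # State 0 moves to the accepting state 1 or the rejecting sink 2,
    # each with probability 1/2, so its reachability probability is 0.5.
    P = np.array([[0.0, 0.5, 0.5],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    probs = reach_prob(P, {1})
    ```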