
    Discord and quantum computational resources

    Full text link
    Discordant states appear in a large number of quantum phenomena and seem to be a good indicator of divergence from classicality. While there is evidence that they are essential for a quantum algorithm to have an advantage over a classical one, their precise role is unclear. We examine the role of discord in quantum algorithms using the paradigmatic framework of 'restricted distributed quantum gates' and show that manipulating discordant states using local operations has an associated cost in terms of entanglement and communication resources. Changing discord reduces the total correlations, and reversible operations on discordant states usually require non-local resources. Discord alone is, however, not enough to determine the need for entanglement. A more general family of similar quantities, which we call K-discord, is introduced as a further constraint on the kinds of operations that can be performed without entanglement resources. Comment: Closer to published version
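    For readers who want a concrete handle on "discordant states", the following is a minimal numpy sketch (not from the paper) of how quantum discord is commonly computed for a two-qubit state: the quantum mutual information minus the classical correlations, maximized over projective measurements on one qubit by a coarse grid search over the Bloch sphere. The Werner state used at the end is an assumed example, chosen because it is separable for z <= 1/3 yet still has nonzero discord.

```python
# Minimal sketch: quantum discord D = I(A:B) - max_M J(A|B) for two qubits,
# with the maximization over projective measurements on B done by grid search.
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits, ignoring zero eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ptrace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep = 0 (A) or 1 (B)."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def discord(rho, steps=40):
    rho_a, rho_b = ptrace(rho, 0), ptrace(rho, 1)
    mutual = entropy(rho_a) + entropy(rho_b) - entropy(rho)
    best = -np.inf
    for theta in np.linspace(0.0, np.pi, steps):
        for phi in np.linspace(0.0, 2 * np.pi, 2 * steps):
            # Orthonormal measurement basis {v, v_perp} on qubit B.
            v = np.array([np.cos(theta / 2),
                          np.exp(1j * phi) * np.sin(theta / 2)])
            v_perp = np.array([-np.conj(v[1]), np.conj(v[0])])
            cond = 0.0
            for u in (v, v_perp):
                m = np.kron(np.eye(2), np.outer(u, u.conj()))
                p = np.real(np.trace(m @ rho))
                if p > 1e-12:
                    cond += p * entropy(ptrace(m @ rho @ m / p, 0))
            best = max(best, entropy(rho_a) - cond)
    return mutual - best

# Werner state: separable for z <= 1/3 yet still discordant.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
z = 0.3
rho = z * np.outer(bell, bell) + (1 - z) * np.eye(4) / 4
print(f"discord of Werner(z={z}): {discord(rho):.3f}")
```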

    Trading classical and quantum computational resources

    Full text link
    We propose examples of a hybrid quantum-classical simulation where a classical computer assisted by a small quantum processor can efficiently simulate a larger quantum system. First we consider sparse quantum circuits such that each qubit participates in O(1) two-qubit gates. It is shown that any sparse circuit on n+k qubits can be simulated by sparse circuits on n qubits and classical processing that takes time 2^{O(k)} poly(n). Secondly, we study Pauli-based computation (PBC), where the allowed operations are non-destructive eigenvalue measurements of n-qubit Pauli operators. The computation begins by initializing each qubit in the so-called magic state. This model is known to be equivalent to the universal quantum computer. We show that any PBC on n+k qubits can be simulated by PBCs on n qubits and classical processing that takes time 2^{O(k)} poly(n). Finally, we propose a purely classical algorithm that can simulate a PBC on n qubits in time 2^{cn} poly(n), where c ≈ 0.94. This improves upon the brute-force simulation method, which takes time 2^n poly(n). Our algorithm exploits the fact that n-fold tensor products of magic states admit a low-rank decomposition into n-qubit stabilizer states. Comment: 14 pages, 4 figures
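    To make the PBC model concrete, here is a hedged numpy sketch of its primitive operation only: a non-destructive eigenvalue measurement of an n-qubit Pauli operator, applied to a register of magic states. It uses a dense 2^n state vector (so it scales as the brute-force 2^n method, not the paper's 2^{cn} algorithm), and the particular measurement sequence is an invented example.

```python
# Sketch of the PBC primitive: sample an eigenvalue of a Pauli operator P
# and project the state with (I + outcome * P) / 2.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = {"I": np.eye(2, dtype=complex), "X": X, "Y": 1j * X @ Z, "Z": Z}

def pauli_op(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZI'."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULI[ch])
    return op

def measure_pauli(state, label, rng):
    """Non-destructive measurement: return (eigenvalue, projected state)."""
    p = pauli_op(label)
    proj_plus = (np.eye(len(state)) + p) / 2
    prob_plus = float(np.real(np.vdot(state, proj_plus @ state)))
    if rng.random() < prob_plus:
        new, outcome = proj_plus @ state, +1
    else:
        new, outcome = (np.eye(len(state)) - p) / 2 @ state, -1
    return outcome, new / np.linalg.norm(new)

# Register of n = 3 magic states |A> = (|0> + e^{i pi/4} |1>) / sqrt(2).
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
state = np.array([1.0 + 0j])
for _ in range(3):
    state = np.kron(state, magic)

rng = np.random.default_rng(7)
for label in ["XZI", "IXX", "ZZZ"]:   # an arbitrary illustrative sequence
    outcome, state = measure_pauli(state, label, rng)
    print(label, "->", outcome)
```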

    Computational aeroelasticity challenges and resources

    Get PDF
    In the past decade, there has been much activity in the development of computational methods for the analysis of unsteady transonic aerodynamics about airfoils and wings. Significant features that must be addressed in the treatment of computational transonic unsteady aerodynamics are illustrated. The flow regimes for an aircraft are indicated on a plot of lift coefficient vs. Mach number, the sequence of events occurring in air combat maneuvers is illustrated, and further features of transonic flutter are shown. Also illustrated are several types of aeroelastic response which were encountered and which offer challenges for computational methods. Four cases illustrate problem areas encountered near the boundaries of aircraft envelopes, as operating conditions change from high-speed, low-angle conditions to lower-speed, higher-angle conditions.

    Parallel memetic algorithms for independent job scheduling in computational grids

    Get PDF
    In this chapter we present parallel implementations of Memetic Algorithms (MAs) for the problem of scheduling independent jobs in computational grids. Scheduling in computational grids is known to be highly demanding in computational time. In this work we exploit the intrinsically parallel nature of MAs, as well as the fact that computational grids offer a large amount of resources, a part of which could be used to compute an efficient allocation of jobs to grid resources. The parallel models exploited in this work for MAs include both fine-grained and coarse-grained parallelization and their hybridization. The resulting schedulers have been tested on different grid scenarios generated by a grid simulator to match different possible configurations of computational grids in terms of size (number of jobs and resources) and computational characteristics of resources. All in all, the results of this work show that parallel MAs are very good alternatives for meeting different performance requirements on fast scheduling of jobs to grid resources.
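    As a rough illustration of the coarse-grained (island) model the chapter refers to, here is a hedged single-process sketch: several island populations evolve independently on an invented ETC (expected time to compute) matrix, each generation is followed by a hill-climbing local search (the "memetic" step), and the best schedule migrates around a ring of islands. All parameters and the fitness model are assumptions, not the chapter's setup.

```python
# Island-model memetic algorithm for independent-job scheduling (makespan).
import random

JOBS, MACHINES = 40, 8
random.seed(1)
ETC = [[random.uniform(1, 20) for _ in range(MACHINES)] for _ in range(JOBS)]

def makespan(schedule):
    load = [0.0] * MACHINES
    for job, machine in enumerate(schedule):
        load[machine] += ETC[job][machine]
    return max(load)

def local_search(schedule, tries=30):
    """Memetic step: hill-climb by reassigning single jobs."""
    best, best_fit = schedule[:], makespan(schedule)
    for _ in range(tries):
        cand = best[:]
        cand[random.randrange(JOBS)] = random.randrange(MACHINES)
        fit = makespan(cand)
        if fit < best_fit:
            best, best_fit = cand, fit
    return best

def evolve_island(pop, generations=20):
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        cut = random.randrange(JOBS)                                 # crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(JOBS)] = random.randrange(MACHINES)   # mutation
        child = local_search(child)                                  # refinement
        pop.sort(key=makespan)
        if makespan(child) < makespan(pop[-1]):
            pop[-1] = child
    return pop

islands = [[[random.randrange(MACHINES) for _ in range(JOBS)]
            for _ in range(10)] for _ in range(4)]
for epoch in range(5):
    islands = [evolve_island(pop) for pop in islands]
    bests = [min(pop, key=makespan) for pop in islands]
    for i, pop in enumerate(islands):    # ring migration of best schedules
        pop[-1] = bests[(i - 1) % len(islands)][:]
    print("epoch", epoch, "best makespan",
          round(min(makespan(b) for b in bests), 2))
```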

    Distributed Feature Extraction Using Cloud Computing Resources

    Get PDF
    The need to expand the computational resources in a massive surveillance network is clear, but the traditional approach of purchasing new equipment for short-term tasks every year is wasteful. In this work I provide evidence in support of utilizing a cloud computing infrastructure to perform computationally intensive feature extraction tasks on data streams. Efficient off-loading of computational tasks to cloud resources requires minimizing the time needed to expand the cloud resources, an efficient model of communication, and a study of the interplay between the in-network computational resources and remote resources in the cloud. This report provides strong evidence that the use of cloud computing resources in ASAP, a near real-time distributed sensor network surveillance system, is feasible. A face detection web service operating on an Amazon EC2 instance is shown to provide processing of 10-15 frames per second.
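    The shape of such an off-loading client is simple. The sketch below is hypothetical, not the report's API: the endpoint URL, the multipart field name, and the JSON response format are all invented for illustration; only the pattern (POST an encoded frame, collect detections, measure throughput) reflects the architecture described.

```python
# Hedged sketch: stream JPEG frames to a remote face-detection web service.
import time
import requests  # pip install requests

DETECT_URL = "http://ec2-example.compute.amazonaws.com/detect"  # placeholder

def detect_remote(jpeg_bytes, timeout=2.0):
    """POST one encoded frame; assume a JSON list of face boxes comes back."""
    resp = requests.post(
        DETECT_URL,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=timeout)
    resp.raise_for_status()
    return resp.json()

def process_stream(frames):
    """Send frames sequentially and report achieved frames per second."""
    start, done = time.time(), 0
    for jpeg in frames:
        try:
            boxes = detect_remote(jpeg)
            done += 1
        except requests.RequestException:
            continue  # drop the frame rather than stall the live stream
    elapsed = time.time() - start
    print(f"processed {done} frames at {done / elapsed:.1f} fps")
```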

    Complexity-Aware Scheduling for an LDPC Encoded C-RAN Uplink

    Full text link
    Centralized Radio Access Network (C-RAN) is a new paradigm for wireless networks that centralizes the signal processing in a computing cloud, allowing commodity computational resources to be pooled. While C-RAN improves utilization and efficiency, the computational load occasionally exceeds the available resources, creating a computational outage. This paper provides a mathematical characterization of the computational outage probability for low-density parity-check (LDPC) codes, a common class of error-correcting codes. For tractability, a binary erasure channel is assumed. Using the concept of density evolution, the computational demand is determined for a given ensemble of codes as a function of the erasure probability. The analysis reveals a trade-off: aggressively signaling at a high rate stresses the computing pool, while conservatively backing off the rate can avoid computational outages. Motivated by this trade-off, an effective computationally aware scheduling algorithm is developed that balances demands for high throughput and low outage rates. Comment: Conference on Information Sciences and Systems (CISS) 2017, to appear
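    Density evolution on the binary erasure channel is standard enough to sketch. For a regular (dv, dc) LDPC ensemble the erasure probability at iteration l obeys x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1). The snippet below uses iteration count as a crude proxy for decoding effort; the specific tolerance and parameters are assumptions, not the paper's model, but they illustrate why demand explodes near the decoding threshold (about 0.4294 for the (3,6) ensemble).

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the BEC.
def density_evolution(eps, dv=3, dc=6, tol=1e-10, max_iter=10_000):
    """Return (converged, iterations) for channel erasure probability eps."""
    x = eps
    for it in range(1, max_iter + 1):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True, it
    return False, max_iter

# Iterations (a proxy for computational demand) blow up near threshold.
for eps in (0.30, 0.40, 0.42, 0.428, 0.43, 0.44):
    ok, iters = density_evolution(eps)
    status = "converged" if ok else "stuck"
    print(f"eps={eps:.3f}: {status:9s} after {iters} iterations")
```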

    Computation in Classical Mechanics

    Full text link
    There is a growing consensus that physics majors need to learn computational skills, but many departments still have no computation in their physics curriculum. Some departments may lack the resources or commitment to create a dedicated course or program in computational physics. One way around this difficulty is to include computation in a standard upper-level physics course. An intermediate classical mechanics course is particularly well suited for including computation. We discuss the ways we have used computation in our classical mechanics courses, focusing on how computational work can improve students' understanding of physics as well as their computational skills. We present examples of computational problems that serve these two purposes. In addition, we provide information about resources for instructors who would like to include computation in their courses. Comment: 6 pages, 3 figures, submitted to American Journal of Physics
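    A hedged sketch of the kind of exercise such a course might assign (this particular problem is an assumption, not taken from the article): projectile motion with quadratic air drag, which has no closed-form range, integrated with fixed-step RK4 and compared against the drag-free result.

```python
# Projectile with quadratic drag: dv/dt = -g yhat - c |v| v, RK4 integration.
import math

G = 9.81    # m/s^2
C = 0.005   # drag coefficient per unit mass, 1/m (assumed value)

def deriv(state):
    x, y, vx, vy = state
    v = math.hypot(vx, vy)
    return (vx, vy, -C * v * vx, -G - C * v * vy)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def range_with_drag(v0, angle_deg, dt=1e-3):
    th = math.radians(angle_deg)
    state = (0.0, 0.0, v0 * math.cos(th), v0 * math.sin(th))
    while True:
        nxt = rk4_step(state, dt)
        if nxt[1] < 0:                                  # crossed the ground
            frac = state[1] / (state[1] - nxt[1])       # linear interpolation
            return state[0] + frac * (nxt[0] - state[0])
        state = nxt

v0, angle = 30.0, 45.0
ideal = v0 ** 2 * math.sin(math.radians(2 * angle)) / G
print(f"vacuum range {ideal:.1f} m, with drag {range_with_drag(v0, angle):.1f} m")
```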

    Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

    Full text link
    Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing a random sample of parameters as input. This data-independent discretization, however, might miss pockets of nearly optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise-constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
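    To see why piecewise-constant structure helps, consider the one-dimensional case. The sketch below is a heavy simplification, not the paper's algorithm: if each instance's performance is piecewise constant in a single real parameter, then the breakpoints collected from a sample of instances partition the line into intervals on which average performance is also constant, so one evaluation per interval recovers a finite set of promising parameters.

```python
# Learning promising parameters from piecewise-constant performance functions.
import random

random.seed(0)

def random_instance(num_pieces=5, lo=0.0, hi=1.0):
    """An instance = a piecewise-constant performance function on [lo, hi]."""
    cuts = sorted(random.uniform(lo, hi) for _ in range(num_pieces - 1))
    vals = [random.uniform(0, 1) for _ in range(num_pieces)]
    def perf(p):
        return vals[sum(c <= p for c in cuts)]
    return perf, cuts

instances = [random_instance() for _ in range(50)]

# Breakpoints across the sample; one midpoint per interval suffices because
# the *average* performance is constant between consecutive breakpoints.
bps = sorted({0.0, 1.0, *(c for _, cuts in instances for c in cuts)})
candidates = [(a + b) / 2 for a, b in zip(bps, bps[1:])]

def avg_perf(p):
    return sum(perf(p) for perf, _ in instances) / len(instances)

top = sorted(candidates, key=avg_perf, reverse=True)[:5]
print("promising parameters:", [round(p, 3) for p in top])
```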

    Real-time co-ordinated resource management in a computational environment

    Get PDF
    Design co-ordination is an emerging engineering design management philosophy with an emphasis on timeliness and appropriateness. A key element of design co-ordination has been identified as resource management, the aim of which is to facilitate the optimised use of resources throughout a dynamic and changeable process. An approach to operational design co-ordination has been developed, which incorporates the appropriate techniques to ensure that the aim of co-ordinated resource management can be fulfilled. This approach has been realised within an agent-based software system, called the Design Coordination System (DCS), such that a computational design analysis can be managed in a coherent and co-ordinated manner. The DCS is applied to a computational analysis for turbine blade design provided by industry. The application of the DCS involves resources, i.e. workstations within a computer network, being utilised to perform the computational analysis using a suite of software tools to calculate stress and vibration characteristics of turbine blades. The application of the system shows that the utilisation of resources can be optimised throughout the computational design analysis despite the variable nature of the computer network.
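    The core of such resource management can be sketched as list scheduling over a heterogeneous pool of workstations. The following is a hedged toy sketch, not the DCS itself: task costs, workstation speeds, and the greedy largest-task-first policy are all invented for illustration.

```python
# Greedy list scheduling: assign each task (largest first) to the
# workstation that becomes free earliest, tracked with a priority queue.
import heapq

def schedule(tasks, workstations):
    """tasks: {name: cost}; workstations: {name: speed (cost units/sec)}.
    Returns (makespan, {task: workstation})."""
    heap = [(0.0, name) for name in workstations]   # (free_time, machine)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        free, machine = heapq.heappop(heap)          # earliest-free machine
        free += cost / workstations[machine]
        assignment[task] = machine
        heapq.heappush(heap, (free, machine))
    return max(t for t, _ in heap), assignment

# Toy turbine-blade analysis: stress and vibration runs of varying cost
# on a small heterogeneous network (all numbers invented).
tasks = {"stress_blade_A": 8.0, "stress_blade_B": 6.0,
         "vibration_A": 5.0, "vibration_B": 7.0, "postprocess": 2.0}
workstations = {"ws1": 1.0, "ws2": 1.5, "ws3": 0.8}
makespan, plan = schedule(tasks, workstations)
print(f"makespan {makespan:.2f}")
for task, ws in plan.items():
    print(f"  {task} -> {ws}")
```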