
    Quantum picturalism for topological cluster-state computing

    Topological quantum computing is a way of allowing precise quantum computations to run on noisy and imperfect hardware. One implementation uses surface codes created by forming defects in a highly-entangled cluster state. Such a method of computing is a leading candidate for large-scale quantum computing. However, there has been a lack of sufficiently powerful high-level languages to describe computing in this form without resorting to single-qubit operations, which quickly become prohibitively complex as the system size increases. In this paper we apply the category-theoretic work of Abramsky and Coecke to the topological cluster-state model of quantum computing to give a high-level graphical language that enables direct translation between quantum processes and physical patterns of measurement in a computer - a "compiler language". We give the equivalence between the graphical and topological information flows, and show the applicable rewrite algebra for this computing model. We show that this gives us a native graphical language for the design and analysis of topological quantum algorithms, and finish by discussing the possibilities for automating this process on a large scale. Comment: 18 pages, 21 figures. Published in New J. Phys. special issue on topological quantum computing.
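
    As an illustration of the kind of rewrite algebra such a graphical language carries, here is a minimal sketch of one well-known rewrite from the ZX-style diagrammatic tradition that the Abramsky-Coecke line of work underlies: spider fusion, where adjacent same-colour nodes merge and their phases add. The encoding below is a hypothetical toy, not the paper's compiler language.

```python
import math

# A toy encoding of a ZX-style diagram: spiders are nodes with a
# colour ('Z' or 'X') and a phase; edges are plain wires.
spiders = {
    0: ("Z", math.pi / 2),
    1: ("Z", math.pi / 4),
    2: ("X", 0.0),
}
edges = {(0, 1), (1, 2)}

def fuse(spiders, edges):
    """Spider fusion: adjacent same-colour spiders merge into one,
    and their phases add (mod 2*pi)."""
    changed = True
    while changed:
        changed = False
        for (a, b) in sorted(edges):
            if spiders[a][0] == spiders[b][0]:              # same colour
                colour, phase_a = spiders[a]
                phase = (phase_a + spiders[b][1]) % (2 * math.pi)
                spiders[a] = (colour, phase)
                # redirect b's remaining wires to a, then delete b
                edges = {(a if x == b else x, a if y == b else y)
                         for (x, y) in edges if (x, y) != (a, b)}
                edges = {e for e in edges if e[0] != e[1]}  # drop self-loops
                del spiders[b]
                changed = True
                break
    return spiders, edges

spiders, edges = fuse(spiders, edges)
print(spiders, edges)   # spiders 0 and 1 fuse; the phase is pi/2 + pi/4
```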

    Investigation of cluster and cluster queuing system

    Clusters have become the main platform for parallel and distributed high-performance computing. Following the development of high-performance computer architectures, more and more branches of natural science benefit from this huge and efficient computational power: for instance bioinformatics, climate science, computational physics, computational chemistry, and marine science. Efficient and reliable computing power not only expands the demand from existing high-performance computing users but also attracts more and more new ones. Efficiency and performance are the main factors in high-performance computing, and most high-performance computers exist as computer clusters; clustering is the popular mainstream of high-performance computing. Assessing the efficiency of a high-performance computer or cluster is very interesting and never finished, as it really depends on the individual users, so monitoring and tuning high-performance or cluster facilities is always necessary. This project focuses on high-performance computer monitoring, comparing queuing status and workload across the different computing nodes of a cluster. As power consumption is a main issue nowadays, the project also tries to estimate the power consumption of these sites and to support the estimation method used. (Master's thesis in network and system administration.)
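
    A minimal sketch of the kind of monitoring and power estimation described above might look as follows. The per-node figures and the idle/peak wattages are illustrative values; in practice they would come from the queuing system (e.g. its qstat-style status output) and from measured hardware power draw.

```python
# Compare queue status and load across nodes and estimate power with a
# simple linear idle-to-peak model.  All numbers here are placeholders.

IDLE_WATTS = 120.0   # assumed draw of an idle node
PEAK_WATTS = 280.0   # assumed draw at full CPU utilisation

nodes = {
    "node01": {"queued_jobs": 4, "cpu_util": 0.92},
    "node02": {"queued_jobs": 0, "cpu_util": 0.10},
    "node03": {"queued_jobs": 7, "cpu_util": 0.85},
}

def node_power(cpu_util):
    """Linear interpolation between idle and peak draw."""
    return IDLE_WATTS + cpu_util * (PEAK_WATTS - IDLE_WATTS)

total = 0.0
for name, stats in sorted(nodes.items()):
    watts = node_power(stats["cpu_util"])
    total += watts
    print(f"{name}: {stats['queued_jobs']} queued, "
          f"{stats['cpu_util']:.0%} busy, ~{watts:.0f} W")

print(f"estimated cluster draw: ~{total:.0f} W")
```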

    Cluster state preparation using gates operating at arbitrary success probabilities

    Several physical architectures allow for measurement-based quantum computing using sequential preparation of cluster states by means of probabilistic quantum gates. In such an approach, the order in which partial resources are combined to form the final cluster state turns out to be crucially important. We determine the influence of this classical decision process on the expected size of the final cluster. Extending earlier work, we consider different quantum gates operating at various probabilities of success. For finite resources, we employ a computer algebra system to obtain the provably optimal classical control strategy and derive symbolic results for the expected final size of the cluster. We identify two regimes: when the success probability of the elementary gates is high, the influence of the classical control strategy is found to be negligible. In that case, other figures of merit become more relevant. In contrast, for small probabilities of success, the choice of an appropriate strategy is crucial. Comment: 7 pages, 9 figures, contribution to special issue of New J. Phys. on "Measurement-Based Quantum Information Processing". Replaced with published version.
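
    The influence of the classical control strategy can already be seen in a toy Monte Carlo model. The sketch below assumes a simplified failure model (a successful gate merges two chains; a failure removes one end qubit from each) and a fixed join-the-two-longest strategy; it is illustrative only, not the provably optimal strategy the paper derives symbolically.

```python
import random

def final_cluster_size(n_pairs, p, trials=20000, seed=1):
    """Toy Monte Carlo: start from n_pairs two-qubit chains and keep
    joining the two longest chains with a gate that succeeds with
    probability p.  Success merges the chains; failure costs one end
    qubit from each.  Returns the mean final chain length."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        chains = [2] * n_pairs
        while len(chains) > 1:
            chains.sort()
            a, b = chains.pop(), chains.pop()   # two longest chains
            if rng.random() < p:
                chains.append(a + b)            # gate succeeded
            else:
                a, b = a - 1, b - 1             # lose an end qubit each
                chains.extend(c for c in (a, b) if c >= 2)
        total += chains[0] if chains else 0
    return total / trials

for p in (0.9, 0.5, 0.25):
    print(p, final_cluster_size(16, p))
```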

    Privacy-Preserving and Outsourced Multi-User k-Means Clustering

    Many techniques for privacy-preserving data mining (PPDM) have been investigated over the past decade. Often, the entities involved in the data mining process are end-users or organizations with limited computing and storage resources. As a result, such entities may want to refrain from participating in the PPDM process. To overcome this issue and to take advantage of the many other benefits of cloud computing, outsourcing PPDM tasks to the cloud environment has recently gained special attention. We consider the scenario where n entities outsource their databases (in encrypted form) to the cloud and ask the cloud to perform the clustering task on their combined data in a privacy-preserving manner. We term this process privacy-preserving and outsourced distributed clustering (PPODC). In this paper, we propose a novel and efficient solution to the PPODC problem based on the k-means clustering algorithm. The main novelty of our solution lies in avoiding the secure division operations required in computing cluster centers altogether, through an efficient transformation technique. Our solution builds the clusters securely in an iterative fashion and returns the final cluster centers to all entities when a pre-determined termination condition holds. The proposed solution protects the data confidentiality of all participating entities under the standard semi-honest model. To the best of our knowledge, ours is the first work to discuss and propose a comprehensive solution to the PPODC problem that incurs negligible cost on the participating entities. We theoretically estimate both the computation and communication costs of the proposed protocol and also demonstrate its practical value through experiments on a real dataset. Comment: 16 pages, 2 figures, 5 tables.
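
    The division-avoiding idea can be illustrated in plaintext: keep each cluster as a (sum, count) pair instead of a centre, and compare scaled squared distances by cross-multiplication, so the assignment step never divides. The sketch below shows only this arithmetic on unencrypted integers; the paper's protocol performs the corresponding computation securely on encrypted data.

```python
# Division-free k-means assignment: with centre mu_j = s_j / n_j,
# ||n_j*x - s_j||^2 equals n_j^2 * ||x - mu_j||^2, and the comparison
# d_j / n_j^2 < d_k / n_k^2 becomes n_k^2 * d_j < n_j^2 * d_k.

def scaled_sq_dist(x, s, n):
    """||n*x - s||^2, computed without any division."""
    return sum((n * xi - si) ** 2 for xi, si in zip(x, s))

def assign(x, clusters):
    """Index of the nearest cluster; clusters is a list of (sum, count)."""
    best = 0
    for j in range(1, len(clusters)):
        s_b, n_b = clusters[best]
        s_j, n_j = clusters[j]
        if n_b ** 2 * scaled_sq_dist(x, s_j, n_j) < \
           n_j ** 2 * scaled_sq_dist(x, s_b, n_b):
            best = j
    return best

clusters = [([10, 2], 2), ([40, 39], 3)]   # centres (5, 1) and (13.3, 13)
print(assign([6, 2], clusters))            # -> 0, the nearer cluster
```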

    Scheduling Distributed Clusters of Parallel Machines: Primal-Dual and LP-based Approximation Algorithms

    The Map-Reduce computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs. As datasets approach the exabyte scale, a single job may need distributed processing not only on multiple machines, but on multiple clusters. We consider a scheduling problem to minimize the weighted average completion time of n jobs on m distributed clusters of parallel machines. In keeping with the scale of the problems motivating this work, we assume that (1) each job is divided into m "subjobs" and (2) distinct subjobs of a given job may be processed concurrently. When each cluster is a single machine, this is the NP-hard concurrent open shop problem. A clear limitation of such a model is that a serial processing assumption sidesteps the issue of how different tasks of a given subjob might be processed in parallel. Our algorithms explicitly model clusters as pools of resources and effectively overcome this issue. Under a variety of parameter settings, we develop two constant-factor approximation algorithms for this problem. The first algorithm uses an LP relaxation tailored to this problem from prior work; this LP-based algorithm provides strong performance guarantees. Our second algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster; this mapping-based algorithm is combinatorial and extremely fast. These are the first constant-factor approximations for this problem.
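
    The mapping to one machine per cluster can be sketched in a few lines: collapse each cluster of k identical machines into a single machine on which a subjob of total work w occupies time w/k, then schedule the resulting concurrent open shop instance. The ordering rule below is a simple WSPT-style heuristic stand-in with no approximation guarantee of its own; the paper's combinatorial algorithm is what carries the constant-factor bound.

```python
def weighted_completion(jobs, machines_per_cluster):
    """jobs: list of (weight, [work on cluster 0, cluster 1, ...]).
    Returns the total weighted completion time of the mapped schedule,
    where a job completes when its last subjob finishes."""
    m = len(machines_per_cluster)
    # map each cluster to one machine: divide work by its machine count
    mapped = [(w, [p / k for p, k in zip(work, machines_per_cluster)])
              for w, work in jobs]
    # heuristic order: smallest total mapped work per unit weight first
    order = sorted(mapped, key=lambda job: sum(job[1]) / job[0])
    loads = [0.0] * m
    total = 0.0
    for w, work in order:
        for c in range(m):
            loads[c] += work[c]
        total += w * max(loads)   # completion time of this job
    return total

jobs = [(3, [4.0, 2.0]), (1, [1.0, 5.0]), (2, [2.0, 2.0])]
print(weighted_completion(jobs, machines_per_cluster=[2, 1]))
```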

    Realfast: Real-Time, Commensal Fast Transient Surveys with the Very Large Array

    Radio interferometers have the ability to precisely localize and better characterize the properties of sources. This ability is having a powerful impact on the study of fast radio transients, where a few milliseconds of data is enough to pinpoint a source at cosmological distances. However, recording interferometric data at millisecond cadence produces a terabyte-per-hour data stream that strains networks, computing systems, and archives. This challenge mirrors that of other domains of science, where the science scope is limited by the computational architecture as much as by the physical processes at play. Here, we present a solution to this problem in the context of radio transients: realfast, a commensal, fast transient search system at the Jansky Very Large Array. Realfast uses a novel architecture to distribute fast-sampled interferometric data to a 32-node, 64-GPU cluster for real-time imaging and transient detection. By detecting transients in situ, we can trigger the recording of data for those rare, brief instants when the event occurs and reduce the recorded data volume by a factor of 1000. This makes it possible to commensally search a data stream that would otherwise be impossible to record. This system will search for millisecond transients in more than 1000 hours of data per year, potentially localizing several Fast Radio Bursts, pulsars, and other sources of impulsive radio emission. We describe the science scope for realfast, the system design, expected outcomes, and ways real-time analysis can help in other fields of astrophysics. Comment: Accepted to ApJS Special Issue on Data; 11 pages, 4 figures.
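
    The triggered-recording pattern at the heart of the data-volume saving can be sketched as follows: hold the fast-sampled stream in a short ring buffer, compute a cheap detection statistic in real time, and write data out only when an event fires. The noise model, threshold, and buffer length below are placeholder choices, not the realfast GPU pipeline.

```python
from collections import deque
import random
import statistics

BUFFER_LEN = 512        # samples kept in memory; only these are ever
                        # recorded, not the full stream
THRESHOLD_SIGMA = 5.0   # detection threshold, in standard deviations

ring = deque(maxlen=BUFFER_LEN)
recorded = []

rng = random.Random(0)
stream = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
stream[7000] += 12.0    # injected impulsive "transient"

for sample in stream:
    ring.append(sample)
    if len(ring) == BUFFER_LEN:
        mu = statistics.fmean(ring)
        sd = statistics.pstdev(ring)
        if sd > 0 and abs(sample - mu) > THRESHOLD_SIGMA * sd:
            recorded.append(list(ring))   # dump the buffer around the event
            ring.clear()                  # re-arm: avoid re-triggering

print(f"kept {len(recorded)} buffer(s) of {BUFFER_LEN} samples "
      f"out of {len(stream)} streamed")
```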