
    Numerical and analytical research of the impact of decoherence on quantum circuits

    Three different levels of noisy quantum circuit modelling are considered: state vectors, density matrices, and Choi-Jamiolkowski related states. The implementations for personal computers and supercomputers are described, and the corresponding results are shown. For the density-matrix level, we present a fixed-rank approximation technique and give analytical estimates of the fidelity.
    Comment: 11 pages, 9 figures, report for the International Symposium "Quantum Informatics-2014" (QI-2014), Zvenigorod, Moscow region, October 06-10, 201
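    The density-matrix level mentioned above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's code): a single-qubit state passed through a depolarizing noise channel, with the fidelity against the ideal pure state computed afterwards.

    ```python
    import numpy as np

    # Pauli matrices used to build the depolarizing channel.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def depolarize(rho, p):
        """Apply a depolarizing channel with error probability p."""
        return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

    def fidelity(rho, psi):
        """Fidelity <psi|rho|psi> of a mixed state against a pure target."""
        return float(np.real(psi.conj() @ rho @ psi))

    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ideal |+> state
    rho = np.outer(psi, psi.conj())                     # its density matrix
    rho_noisy = depolarize(rho, 0.05)
    print(fidelity(rho_noisy, psi))  # fidelity drops below 1
    ```

    The density matrix stays trace-one throughout, which is why this level of modelling (unlike pure state vectors) can represent decoherence directly.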

    Reduction of co-simulation runtime through parallel processing

    During the design phase of modern digital and mixed-signal devices, simulations are run to determine the fitness of the proposed design. Some of these simulations can take large amounts of time, slowing the time to manufacture of the system prototype. One typical simulation is an integration simulation, which simulates the hardware and software at the same time. Most simulators used for this task are monolithic. Some simulators can interface with external libraries and other simulators, but the setup can be tedious. This thesis proposes, implements and evaluates a distributed simulator, PDQScS, that speeds up the simulation to reduce this bottleneck in the design cycle without tedious manual separation and linking by the user. Using multiple processes on SMP machines, a reduction in simulation run time was found
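    The core idea of spreading simulation work across processes on an SMP machine can be sketched with the standard-library multiprocessing module. This is an illustration only; the names and workload below are hypothetical and not part of PDQScS.

    ```python
    from multiprocessing import Pool

    def simulate_block(seed):
        """Stand-in for one simulator partition: a cheap deterministic workload."""
        state = seed
        for _ in range(10_000):
            # Linear congruential step as a placeholder for real simulation work.
            state = (state * 1103515245 + 12345) % 2**31
        return state

    if __name__ == "__main__":
        # Run four independent partitions in parallel instead of sequentially.
        with Pool(processes=4) as pool:
            results = pool.map(simulate_block, range(4))
        print(results)
    ```

    The speed-up comes only when the partitions are genuinely independent; tightly coupled co-simulation additionally needs inter-process communication, which is where the tedious "separation and linking" the thesis automates comes in.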

    Feasibility of a low-cost computing cluster in comparison to a high-performance computing cluster: A developing country perspective

    In recent years, many organisations have used high-performance computing clusters to perform, within a few days, complex simulations and calculations that would otherwise have taken years, even lifetimes, on a single computer. However, these high-performance computing clusters can be very expensive to purchase and maintain. For developing countries, these costs are barriers that slow the development of the computing platforms needed to solve complex, real-world problems. From previous studies, it was unclear whether an off-the-shelf personal computer (single computer) or a low-cost computing cluster is a feasible alternative to a high-performance computing cluster for smaller scientific problems. The aim of this study was to investigate this gap in the literature since, to our knowledge, this kind of study has not been conducted before. The study used the High Performance Linpack (HPL) benchmark to collect quantitative data comparing the time-to-complete, operational costs and computational efficiency of a single computer, a low-cost computing cluster and a high-performance cluster. The benchmark used the HPL main algorithm, with matrix sizes for the n x n dense linear system ranging from 10 000 to 60 0000. The costs of the low-cost computing cluster were kept to a minimum (USD 4000.00), and the cluster was constructed from locally available computer hardware components. For the cases we studied, we found that a low-cost computing cluster is a viable alternative to a high-performance cluster when costs must be kept to a minimum. We concluded that for smaller scientific problems, both the single computer and the low-cost computing cluster are better alternatives to a high-performance cluster.
    However, for large scientific problems, where performance and time matter more than cost, a high-performance cluster is still the best solution, offering the best efficiency for both theoretical energy consumption and computation
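    The kind of comparison the study reports can be quantified as follows: HPL derives performance from the nominal flop count of a dense LU solve, (2/3)n^3 + (3/2)n^2, divided by wall-clock time, and efficiency as achieved performance over theoretical peak. The numbers below are illustrative placeholders, not the study's measurements.

    ```python
    def hpl_gflops(n, seconds):
        """HPL performance in Gflop/s for an n x n dense linear solve."""
        flops = (2.0 / 3.0) * n**3 + (3.0 / 2.0) * n**2
        return flops / seconds / 1e9

    def efficiency(achieved_gflops, peak_gflops):
        """Fraction of theoretical peak actually sustained."""
        return achieved_gflops / peak_gflops

    # Hypothetical run: n = 10 000 solved in two minutes on a 50 Gflop/s-peak cluster.
    perf = hpl_gflops(10_000, 120.0)
    print(round(perf, 2), round(efficiency(perf, 50.0), 3))
    ```

    Comparing this efficiency figure across the single computer, the low-cost cluster and the high-performance cluster is what separates "cheapest" from "fastest" in the study's conclusions.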

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects to middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware
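    The task-graph structure described above, with discrete tasks and explicit input/output dependencies as edges, can be sketched with a toy dispatcher that runs tasks in a dependency-respecting order. This is an illustration of the MTC model, not Blue Waters middleware.

    ```python
    from graphlib import TopologicalSorter

    # Each task names the tasks whose outputs it consumes (hypothetical names).
    deps = {
        "preprocess": set(),
        "sim_a": {"preprocess"},
        "sim_b": {"preprocess"},
        "reduce": {"sim_a", "sim_b"},
    }

    # A valid dispatch order: every task runs after all of its dependencies.
    order = list(TopologicalSorter(deps).static_order())
    print(order)
    ```

    A real MTC runtime would dispatch `sim_a` and `sim_b` concurrently once `preprocess` finishes; keeping that dispatch overhead small relative to very short task durations is exactly the engineering constraint the report highlights.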

    Modern computing: Vision and challenges

    Over the past six decades, the field of computing systems has experienced significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress

    HI Lightcones for LADUMA using Gadget-3 : performance profiling and application of an HPC code

    Includes bibliographical references.
    This project concerns the investigation, performance profiling and optimisation of the high-performance cosmological code GADGET-3. This code was used to develop a synthetic field-of-view, or lightcone, for the MeerKAT telescope, replicating what it will observe when it conducts the LADUMA ultra-deep HI survey. This lightcone will assist in the planning process of the survey. The deliverables for this project are summarised as follows:
    * Provide an up-to-date performance evaluation and optimisation report for the cosmological simulation code GADGET-3.
    * Use GADGET-3 to produce a sufficiently high-resolution simulation of a region of the Universe.
    * Develop a Python code to produce a lightcone which represents the MeerKAT telescope's field-of-view, by post-processing simulation output snapshots.
    * Extract relevant metadata from the simulation snapshots to provide additional insight into the simulated observation.
    * Produce an efficiently written and well-documented software package to enable other researchers to produce synthetic lightcones
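    The lightcone post-processing step can be sketched as a geometric cut on a snapshot's comoving particle positions: keep only particles inside a comoving-distance shell and within the telescope's angular field of view about a chosen line of sight. The function and numbers below are illustrative assumptions, not the project's actual pipeline.

    ```python
    import numpy as np

    def in_lightcone(pos, r_min, r_max, fov_deg):
        """Boolean mask: particles inside shell [r_min, r_max] and within a
        half-angle fov_deg cone about the z-axis line of sight."""
        r = np.linalg.norm(pos, axis=1)
        in_shell = (r >= r_min) & (r <= r_max)
        # Cosine of the angle between each particle and the line of sight.
        cos_theta = pos[:, 2] / np.maximum(r, 1e-12)
        in_cone = cos_theta >= np.cos(np.radians(fov_deg))
        return in_shell & in_cone

    rng = np.random.default_rng(0)
    pos = rng.uniform(-100.0, 100.0, size=(10_000, 3))  # mock snapshot positions
    mask = in_lightcone(pos, r_min=20.0, r_max=80.0, fov_deg=10.0)
    print(mask.sum(), "of", len(pos), "particles land in the cone")
    ```

    A full lightcone stitches many such shells together, each drawn from the snapshot whose redshift matches that comoving distance, which is why the deliverables above pair the cut with per-snapshot metadata extraction.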