
    Review of Elements of Parallel Computing

    As the title clearly states, this book is about parallel computing. Modern computers are no longer characterized by a single, fully sequential CPU. Instead, they have one or more multicore/manycore processors. The purpose of such parallel architectures is to enable the simultaneous execution of instructions in order to achieve faster computations. In high-performance computing, clusters of parallel processors are used to achieve PFLOPS performance, which is necessary for scientific and Big Data applications. Mastering parallel computing means having deep knowledge of parallel architectures, parallel programming models, parallel algorithms, parallel design patterns, and performance analysis and optimization techniques. The design of parallel programs requires a lot of creativity, because there is no universal recipe that allows one to achieve the best possible efficiency for every problem. The book presents the fundamental concepts of parallel computing from the point of view of algorithmic and implementation patterns. The idea is that, while the hardware keeps changing, the same principles of parallel computing are reused. The book surveys some key algorithmic structures and programming models, together with an abstract representation of the underlying hardware. Parallel programming patterns are purposely not illustrated using the formal design-patterns approach, in order to keep an informal and friendly presentation that is suited to novices.
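    As a minimal illustration of the kind of pattern such a book covers (this sketch is not taken from the book itself, and the function names and chunk size are illustrative choices), the following Python snippet splits independent work across processes in a simple map/reduce style:

        from concurrent.futures import ProcessPoolExecutor

        def partial_sum(chunk):
            # Each worker sums its own slice independently; no shared state.
            return sum(x * x for x in chunk)

        def parallel_sum_of_squares(data, workers=4):
            # Split the input into roughly equal chunks, one per worker.
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                # map() runs partial_sum on the chunks concurrently, then the
                # partial results are reduced with an ordinary sum().
                return sum(pool.map(partial_sum, chunks))

        if __name__ == "__main__":
            print(parallel_sum_of_squares(list(range(1_000_000))))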

    Limits on Fundamental Limits to Computation

    An indispensable part of our lives, computing has also become essential to industries and governments. Steady improvements in computer hardware have been supported by the periodic doubling of transistor densities in integrated circuits over the last fifty years. Such Moore scaling now requires increasingly heroic efforts, stimulating research in alternative hardware and stirring controversy. To help evaluate emerging technologies and enrich our understanding of integrated-circuit scaling, we review fundamental limits to computation: in manufacturing, energy, physical space, design and verification effort, and algorithms. To outline what is achievable in principle and in practice, we recall how some limits were circumvented and compare loose and tight limits. We also point out that engineering difficulties encountered by emerging technologies may indicate yet-unknown limits. Comment: 15 pages, 4 figures, 1 table
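    To make the "periodic doubling" concrete, here is a back-of-the-envelope sketch; the two-year doubling period and fifty-year horizon are illustrative assumptions, not figures taken from the paper:

        # Illustrative only: assume transistor density doubles every 2 years
        # over a 50-year horizon.
        doubling_period_years = 2
        horizon_years = 50
        doublings = horizon_years / doubling_period_years      # 25 doublings
        growth_factor = 2 ** doublings                          # roughly 3.4e7x
        print(f"{doublings:.0f} doublings -> {growth_factor:.2e}x density increase")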

    Big data and the SP theory of intelligence

    This article is about how the "SP theory of intelligence" and its realisation in the "SP machine" may, with advantage, be applied to the management and analysis of big data. The SP system -- introduced in the article and fully described elsewhere -- may help to overcome the problem of variety in big data: it has potential as "a universal framework for the representation and processing of diverse kinds of knowledge" (UFK), helping to reduce the diversity of formalisms and formats for knowledge and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central to the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualisation of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it. Comment: Accepted for publication in IEEE Access
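    The SP system's own compression mechanism is not reproduced here; as a generic illustration of the point that lossless compression makes data smaller while keeping it exactly recoverable, the following sketch uses Python's standard zlib module on deliberately redundant data:

        import zlib

        # Redundant data compresses well; "lossless" means the original bytes
        # are recovered exactly after decompression.
        original = b"sensor_id=42,temp=21.5;" * 10_000
        compressed = zlib.compress(original, level=9)
        restored = zlib.decompress(compressed)

        assert restored == original                 # nothing is lost
        print(len(original), "->", len(compressed), "bytes",
              f"({len(compressed) / len(original):.1%} of original size)")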

    Schemes for Parallel Quantum Computation Without Local Control of Qubits

    Typical quantum computing schemes require transformations (gates) to be targeted at specific elements (qubits). In many physical systems, direct targeting is difficult to achieve; an alternative is to encode local gates into globally applied transformations. Here we demonstrate the minimum physical requirements for such an approach: a one-dimensional array composed of two alternating 'types' of two-state system. Each system need be sensitive only to the net state of its nearest neighbors, i.e. the number in state 1 minus the number in state 2. Additionally, we show that all such arrays can perform quite general parallel operations. A broad range of physical systems and interactions are suitable: we highlight two potential implementations. Comment: 12 pages + 3 figures. Several small corrections made
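    A small sketch of the "net state" quantity described above, as a toy classical illustration rather than a simulation of the quantum scheme: encoding state 1 as +1 and state 2 as -1 makes the net state of a cell's nearest neighbors simply their sum.

        # Toy illustration of the neighbor rule in a 1D chain.
        def net_neighbor_states(chain):
            nets = []
            for i in range(len(chain)):
                neighbors = []
                if i > 0:
                    neighbors.append(chain[i - 1])
                if i < len(chain) - 1:
                    neighbors.append(chain[i + 1])
                # (# of neighbors in state 1) - (# of neighbors in state 2)
                nets.append(sum(neighbors))
            return nets

        # The two alternating 'types' would occupy even and odd positions.
        chain = [+1, -1, -1, +1, +1, -1]
        print(net_neighbor_states(chain))   # -> [-1, 0, 0, 0, 0, 1]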