
    What is a quantum computer, and how do we build one?

    The DiVincenzo criteria for implementing a quantum computer have been seminal in focussing both experimental and theoretical research in quantum information processing. These criteria were formulated specifically for the circuit model of quantum computing. However, several new models for quantum computing (paradigms) have been proposed that do not seem to fit the criteria well. The question is therefore: what are the general criteria for implementing quantum computers? To this end, a formal operational definition of a quantum computer is introduced. It is then shown that, according to this definition, a device is a quantum computer if it obeys the following four criteria: any quantum computer must (1) have a quantum memory; (2) facilitate a controlled quantum evolution of the quantum memory; (3) include a method for cooling the quantum memory; and (4) provide a readout mechanism for subsets of the quantum memory. The criteria are met when the device is scalable and operates fault-tolerantly. We discuss various existing quantum computing paradigms and how they fit within this framework. Finally, we lay out a roadmap for selecting an avenue towards building a quantum computer. This is summarized in a decision tree intended to help experimentalists determine the most natural paradigm given a particular physical implementation.
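
    As a purely illustrative aside (not from the paper), the four criteria plus the scalability and fault-tolerance requirement can be read as a simple checklist; the sketch below encodes them in Python, with hypothetical field names and an invented example device.

        # Illustrative sketch only: the four criteria from the abstract as a checklist.
        # Field names and the example device are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class CandidateDevice:
            name: str
            has_quantum_memory: bool      # criterion 1: a quantum memory
            controlled_evolution: bool    # criterion 2: controlled quantum evolution
            can_cool_memory: bool         # criterion 3: a method for cooling the memory
            subset_readout: bool          # criterion 4: readout of memory subsets
            scalable: bool                # criteria are met only for scalable devices
            fault_tolerant: bool          # ... that operate fault-tolerantly

            def is_quantum_computer(self) -> bool:
                return (self.has_quantum_memory and self.controlled_evolution
                        and self.can_cool_memory and self.subset_readout
                        and self.scalable and self.fault_tolerant)

        # Example: a hypothetical device that is not yet fault-tolerant.
        device = CandidateDevice("hypothetical ion-trap device",
                                 True, True, True, True, True, False)
        print(device.name, "->", device.is_quantum_computer())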

    Reviewing agent-based modelling of socio-ecosystems: a methodology for the analysis of climate change adaptation and sustainability

    The integrated - environmental, economic and social - analysis of climate change calls for a paradigm shift as it is fundamentally a problem of complex, bottom-up and multi-agent human behaviour. There is a growing awareness that global environmental change dynamics and the related socio-economic implications involve a degree of complexity that requires an innovative modelling of combined social and ecological systems. Climate change policy can no longer be addressed separately from a broader context of adaptation and sustainability strategies. A vast body of literature on agent-based modelling (ABM) shows its potential to couple social and environmental models, to incorporate the influence of micro-level decision making in the system dynamics and to study the emergence of collective responses to policies. However, there are few publications which concretely apply this methodology to the study of climate change related issues. The analysis of the state of the art reported in this paper supports the idea that today ABM is an appropriate methodology for the bottom-up exploration of climate policies, especially because it can take into account adaptive behaviour and heterogeneity of the system's components.
    Keywords: Review, Agent-Based Modelling, Socio-Ecosystems, Climate Change, Adaptation, Complexity.
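
    To make the bottom-up, heterogeneous-agent idea concrete, here is a toy sketch (not any model from the paper; the thresholds, stress signal and all parameters are invented) in which agents with different adaptation thresholds respond to a rising climate stress, producing an emergent aggregate adaptation curve.

        # Toy agent-based sketch with invented parameters: heterogeneous agents
        # adapt once an external climate stress signal exceeds their own threshold.
        import random

        random.seed(1)

        class Agent:
            def __init__(self):
                self.threshold = random.uniform(0.2, 0.9)  # heterogeneity across agents
                self.adapted = False

            def step(self, stress):
                # Micro-level decision: adapt if perceived stress exceeds the threshold.
                if not self.adapted and stress > self.threshold:
                    self.adapted = True

        agents = [Agent() for _ in range(1000)]
        for t in range(10):
            stress = 0.1 * t                      # slowly rising climate stress
            for a in agents:
                a.step(stress)
            fraction = sum(a.adapted for a in agents) / len(agents)
            print(f"t={t} stress={stress:.1f} adapted fraction={fraction:.2f}")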

    Using Quantum Computers for Quantum Simulation

    Numerical simulation of quantum systems is crucial to further our understanding of natural phenomena. Many systems of key interest and importance, in areas such as superconducting materials and quantum chemistry, are thought to be described by models which we cannot solve with sufficient accuracy, neither analytically nor numerically with classical computers. Using a quantum computer to simulate such quantum systems has been viewed as a key application of quantum computation from the very beginning of the field in the 1980s. Moreover, useful results beyond the reach of classical computation are expected to be accessible with fewer than a hundred qubits, making quantum simulation potentially one of the earliest practical applications of quantum computers. In this paper we survey the theoretical and experimental development of quantum simulation using quantum computers, from the first ideas to the intense research efforts currently underway.
    Comment: 43 pages, 136 references, review article; v2: major revisions in response to referee comments; v3: significant revisions, identical to published version apart from format; arXiv version has table of contents and references in alphabetical order.
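
    A standard technique behind such simulations, not spelled out in the abstract, is Trotterization: approximating exp(-i(A+B)t) by alternating short evolutions under A and B. The sketch below checks this on a toy two-qubit Hamiltonian with NumPy; the Hamiltonian and parameters are illustrative only, not taken from the paper.

        # Minimal sketch of digital quantum simulation, emulated classically:
        # first-order Trotterization of exp(-iHt) for a toy two-qubit Hamiltonian.
        import numpy as np

        I = np.eye(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        A = np.kron(X, X)          # first Hamiltonian term
        B = np.kron(Z, I)          # second, non-commuting term
        H = A + B

        def exact_evolution(M, t):
            # exp(-iMt) via eigendecomposition of the Hermitian matrix M
            w, V = np.linalg.eigh(M)
            return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

        def trotter_evolution(A, B, t, steps):
            # (exp(-iA t/n) exp(-iB t/n))^n approximates exp(-i(A+B)t)
            Ua = exact_evolution(A, t / steps)
            Ub = exact_evolution(B, t / steps)
            U = np.eye(4, dtype=complex)
            for _ in range(steps):
                U = Ua @ Ub @ U
            return U

        t = 1.0
        for steps in (1, 10, 100):
            err = np.linalg.norm(trotter_evolution(A, B, t, steps) - exact_evolution(H, t))
            print(f"{steps:4d} Trotter steps, error {err:.2e}")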

    A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units

    Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents. The system-level behaviors emerge from the micro-level interactions of the agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU). They simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) to simulate large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern-day GPU, resulting in a substantial performance increase. We believe that our system is the first ever completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely, SugarScape and StupidModel.
    Keywords: GPGPU, Agent Based Modeling, Data Parallel Algorithms, Stochastic Simulations.
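
    The paper's GPU kernels are not reproduced here, but the general data-parallel pattern can be sketched with NumPy as a stand-in: agent state lives in flat arrays updated array-at-a-time, and replication slots are assigned with a prefix sum rather than one agent at a time (a generic technique, not the paper's stochastic allocator). All thresholds below are invented.

        # Illustrative data-parallel pattern (NumPy stand-in for a GPU kernel).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        energy = rng.uniform(0.0, 1.0, n)

        # Vectorised "step": every agent is updated simultaneously, array-at-a-time.
        energy += rng.uniform(0.0, 0.1, n)

        replicate = energy > 1.0          # agents above the threshold spawn a child
        alive = energy > 0.05             # agents below the floor die

        # A prefix sum assigns each replicating agent a unique child slot; on a GPU
        # this scatter would be a parallel write into pre-allocated memory.
        num_children = int(replicate.sum())
        child_slot = np.cumsum(replicate) - 1
        children = np.empty(num_children)
        children[child_slot[replicate]] = energy[replicate] * 0.5
        energy[replicate] *= 0.5

        energy = np.concatenate([energy[alive], children])
        print("population after one step:", energy.size)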

    On the development of slime mould morphological, intracellular and heterotic computing devices

    The use of live biological substrates in the fabrication of unconventional computing (UC) devices is steadily transcending the barriers between science fiction and reality, but efforts in this direction are impeded by ethical considerations, the field’s restrictively broad multidisciplinarity and our incomplete knowledge of fundamental biological processes. As such, very few functional prototypes of biological UC devices have been produced to date. This thesis aims to demonstrate the computational polymorphism and polyfunctionality of a chosen biological substrate — slime mould Physarum polycephalum, an arguably ‘simple’ single-celled organism — and how these properties can be harnessed to create laboratory experimental prototypes of functionally useful biological UC devices. Computing devices utilising live slime mould as their key constituent element can be developed into a) heterotic, or hybrid devices, which are based on electrical recognition of slime mould behaviour via machine-organism interfaces, b) whole-organism-scale morphological processors, whose output is the organism’s morphological adaptation to environmental stimuli (input) and c) intracellular processors wherein data are represented by energetic signalling events mediated by the cytoskeleton, a nano-scale protein network. It is demonstrated that each category of device is capable of implementing logic and, furthermore, that specific applications for each class may be engineered, such as image processing applications for morphological processors and biosensors in the case of heterotic devices. The results presented are supported by a range of computer modelling experiments using cellular automata and multi-agent modelling. We conclude that P. polycephalum is a polymorphic UC substrate insofar as it can process multimodal sensory input, and polyfunctional in its demonstrable ability to undertake a variety of computing problems. Furthermore, our results are highly applicable to the study of other living UC substrates and will inform future work in UC, biosensing, and biomedicine.

    On the computing potential of intracellular vesicles

    © 2015 Mayne, Adamatzky. Collision-based computing (CBC) is a form of unconventional computing in which travelling localisations represent data and conditional routing of signals determines the output state; collisions between localisations represent logical operations. We investigated patterns of Ca2+-containing vesicle distribution within a live organism, slime mould Physarum polycephalum, with confocal microscopy and observed them colliding regularly. Vesicles travel down cytoskeletal 'circuitry' and their collisions may result in reflection, fusion or annihilation. We demonstrate through experimental observations that naturally-occurring vesicle dynamics may be characterised as a computationally-universal set of Boolean logical operations, and present a 'vesicle modification' of the archetypal CBC 'billiard ball model' of computation. We proceed to discuss the viability of intracellular vesicles as an unconventional computing substrate, in which we delineate practical considerations for reliable vesicle 'programming' in both in vivo and in vitro vesicle computing architectures and present optimised designs for both single logical gates and combinatorial logic circuits based on cytoskeletal network conformations. The results presented here demonstrate the first characterisation of intracellular phenomena as collision-based computing and hence the viability of biological substrates for computing.
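
    In the billiard-ball spirit the abstract invokes, the presence or absence of two travelling localisations determines which output trajectories carry a signal. The sketch below is only an illustration of the classic interaction-gate mapping, not of the vesicle-specific model in the paper.

        # Toy collision gate: Boolean inputs stand in for travelling localisations.
        def collision_gate(a: bool, b: bool):
            """Return the signals on the four output paths of an interaction gate."""
            return {
                "a_passes_undeflected": a and not b,   # a present, nothing to collide with
                "b_passes_undeflected": b and not a,
                "deflected_upper": a and b,            # collision products encode AND
                "deflected_lower": a and b,
            }

        for a in (False, True):
            for b in (False, True):
                print(a, b, collision_gate(a, b))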

    Cellular automata modelling of slime mould actin network signalling

    © 2016, The Author(s). Actin is a cytoskeletal protein which forms dense, highly interconnected networks within eukaryotic cells. A growing body of evidence suggests that actin-mediated intra- and extracellular signalling is instrumental in facilitating organism-level emergent behaviour patterns which, crucially, may be characterised as natural expressions of computation. We use excitable cellular automata modelling to simulate signal transmission through cell arrays whose topology was extracted from images of Watershed transformation-derived actin network reconstructions; the actin networks sampled were from laboratory experimental observations of a model organism, slime mould Physarum polycephalum. Our results indicate that actin networks support directional transmission of generalised energetic phenomena, the amplification and trans-network speed of which are proportional to network density (whose primary determinant is the anatomical location of the network sampled). Furthermore, this model also suggests that such networks can support signal-signal interactions which may be characterised as Boolean logical operations, thus indicating that a cell’s actin network may function as a nanoscale data transmission and processing network. We conclude by discussing the role of the cytoskeleton in facilitating intracellular computing, how computation can be implemented in such a network and practical considerations for designing ‘useful’ actin circuitry.
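
    An excitable cellular automaton of the kind the abstract describes can be sketched in a few lines: each node is resting, excited or refractory, and a resting node fires when a neighbour is excited. The graph below is a hypothetical six-node branched chain, not one of the paper's actin network reconstructions.

        # Minimal excitable automaton on a small graph. States: 0 resting,
        # 1 excited, 2 refractory; a resting node fires if any neighbour is excited.
        neighbours = {
            0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4],
        }
        state = {n: 0 for n in neighbours}
        state[0] = 1  # stimulate one end of the network

        for step in range(8):
            new = {}
            for n, s in state.items():
                if s == 1:
                    new[n] = 2                                   # excited -> refractory
                elif s == 2:
                    new[n] = 0                                   # refractory -> resting
                else:
                    new[n] = 1 if any(state[m] == 1 for m in neighbours[n]) else 0
            state = new
            print(step, state)   # a wave of excitation travels along the network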

    Integrating a Non-Uniformly Sampled Software Retina with a Deep CNN Model

    We present a biologically inspired method for pre-processing images applied to CNNs that reduces their memory requirements while increasing their invariance to scale and rotation changes. Our method is based on the mammalian retino-cortical transform: a mapping between a pseudo-randomly tessellated retina model (used to sample an input image) and a CNN. The aim of this first pilot study is to demonstrate a functional retina-integrated CNN implementation, and this produced the following results: a network using the full retino-cortical transform yielded an F1 score of 0.80 on a test set during a 4-way classification task, while an identical network not using the proposed method yielded an F1 score of 0.86 on the same task. The method reduced the visual data by ×7, the input data to the CNN by 40% and the number of CNN training epochs by 64%. These results demonstrate the viability of our method and hint at the potential of exploiting functional traits of natural vision systems in CNNs.
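
    The retina model itself is not reproduced here, but the general idea of non-uniform, foveated sampling can be sketched with a generic log-polar layout (an assumption standing in for the paper's pseudo-randomly tessellated retina and cortical mapping): sample density is highest at the image centre and falls off with eccentricity, so the CNN receives far fewer values than the raw image contains.

        # Generic log-polar sampling sketch; ring and wedge counts are invented.
        import numpy as np

        def log_polar_sample(image, n_rings=32, n_wedges=64):
            h, w = image.shape[:2]
            cy, cx = h / 2.0, w / 2.0
            r_max = min(cy, cx)
            # Ring radii grow exponentially, so central rings are densely packed.
            radii = r_max * (np.exp(np.linspace(0, 1, n_rings)) - 1) / (np.e - 1)
            thetas = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
            ys = (cy + radii[:, None] * np.sin(thetas)[None, :]).astype(int)
            xs = (cx + radii[:, None] * np.cos(thetas)[None, :]).astype(int)
            ys = np.clip(ys, 0, h - 1)
            xs = np.clip(xs, 0, w - 1)
            return image[ys, xs]        # (n_rings, n_wedges) "cortical" image

        img = np.random.rand(224, 224)
        print(log_polar_sample(img).shape)   # far fewer samples than the full image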

    Active Self-Assembly of Algorithmic Shapes and Patterns in Polylogarithmic Time

    We describe a computational model for studying the complexity of self-assembled structures with active molecular components. Our model captures notions of growth and movement ubiquitous in biological systems. The model is inspired by biology's fantastic ability to assemble biomolecules that form systems with complicated structure and dynamics, from molecular motors that walk on rigid tracks and proteins that dynamically alter the structure of the cell during mitosis, to embryonic development where large-scale complicated organisms efficiently grow from a single cell. Using this active self-assembly model, we show how to efficiently self-assemble shapes and patterns from simple monomers. For example, we show how to grow a line of monomers in time and number of monomer states that are merely logarithmic in the length of the line. Our main results show how to grow arbitrary connected two-dimensional geometric shapes and patterns in expected time that is polylogarithmic in the size of the shape, plus roughly the time required to run a Turing machine deciding whether or not a given pixel is in the shape. We do this while keeping the number of monomer types logarithmic in shape size, plus those monomers required by the Kolmogorov complexity of the shape or pattern. This work thus highlights the efficiency advantages of active self-assembly over passive self-assembly and motivates experimental effort to construct general-purpose active molecular self-assembly systems.
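
    The logarithmic-time line growth rests on a simple observation: if every monomer in the line can insert a new monomer beside itself in one parallel round, the line doubles each round. The sketch below only counts rounds under that simplified assumption; it is not an implementation of the paper's model.

        # Counting parallel rounds for insertion-based (doubling) growth of a line.
        import math

        def rounds_to_length(target):
            length, rounds = 1, 0
            while length < target:
                length *= 2        # all monomers insert simultaneously
                rounds += 1
            return rounds

        for n in (10, 1_000, 1_000_000):
            print(f"target length {n:>9}: {rounds_to_length(n)} parallel rounds "
                  f"(log2 = {math.log2(n):.1f})")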

    Applications, tools and techniques on the road to exascale computing

    This volume of the book series “Advances in Parallel Computing” contains the proceedings of ParCo2011, the 14th biennial ParCo Conference, held from 31 August to 3 September 2011 in Ghent, Belgium. In an era when physical limitations have slowed down advances in the performance of single processing units, and new scientific challenges require exascale speed, parallel processing has gained momentum as a key gateway to HPC (High Performance Computing).
    Historically, the ParCo conferences have focused on three main themes: Algorithms, Architectures (both hardware and software) and Applications. Nowadays, the scenery has changed from traditional multiprocessor topologies to heterogeneous manycores, incorporating standard CPUs, GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays). These platforms are, at a higher abstraction level, integrated in clusters, grids, and clouds. This is reflected in the papers presented at the conference and the contributions included in these proceedings. An increasing number of new algorithms are optimized for heterogeneous platforms, and performance tuning is targeting extreme-scale computing. Heterogeneous platforms utilising the compute power and energy efficiency of GPGPUs (General Purpose GPUs) are clearly becoming mainstream HPC systems for a large number of applications in a wide spectrum of application areas. These systems excel in areas such as complex system simulation, real-time image processing and visualisation. High performance computing accelerators may well become the cornerstone of exascale computing applications such as 3-D turbulent combustion flows, nuclear energy simulations, brain research, and financial and geophysical modelling. The exploration of new architectures, programming tools and techniques was evidenced by the mini-symposia “Parallel Computing with FPGAs” and “Exascale Programming Models”. The need for exascale hardware and software was also stressed in the industrial session, with contributions from Cray and the European exascale software initiative.
    Our sincere appreciation goes to the keynote speakers who gave their perspectives on the impact of parallel computing today and the road to exascale computing tomorrow. Our heartfelt thanks go to the authors for their valuable scientific contributions and to the programme committee who reviewed the papers and provided constructive remarks. The international audience was inspired by the quality of the presentations. The attendance and interaction were high, and the conference has been an agora where many fruitful ideas were exchanged and explored. We wish to express our sincere thanks to the organizers for the smooth operation of the conference. The University conference centre Het Pand offered an excellent environment for the conference as it allowed delegates to interact informally and easily. A special word of thanks is due to the management and support staff of Het Pand for their proficient and friendly support. The organizers managed to put together an extensive social programme, including a reception at the medieval Town Hall of Ghent as well as a memorable conference dinner. These social events stimulated interaction amongst delegates and resulted in many new contacts being made. Finally, we wish to thank all the many supporters who assisted in the organization and successful running of the event.
    Erik D'Hollander, Ghent University, Belgium; Koen De Bosschere, Ghent University, Belgium; Gerhard R. Joubert, TU Clausthal, Germany; David Padua, University of Illinois, USA; Frans Peters, Philips Research, Netherlands