
    Programming multi-level quantum gates in disordered computing reservoirs via machine learning and TensorFlow

    Novel machine learning computational tools open new perspectives for quantum information systems. Here we adopt the open-source programming library TensorFlow to design multi-level quantum gates, including a computing reservoir represented by a random unitary matrix. In optics, the reservoir is a disordered medium or a multi-modal fiber. We show that trainable operators at the input and the readout enable one to realize multi-level gates. We study various qudit gates, including the scaling properties of the algorithms with the size of the reservoir. Despite an initial low-slope learning stage, TensorFlow turns out to be an extremely versatile resource for designing gates with complex media, including different models that use spatial light modulators with quantized modulation levels. Comment: Added a new section and a new figure about implementation of the gates by a single spatial light modulator. 9 pages and 4 figures
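
    As a concrete illustration of the training scheme described in this abstract, the following minimal TensorFlow sketch (not the authors' code) fixes a random "reservoir" unitary and trains input and readout unitaries, parametrised by Hermitian generators via a Cayley transform, so that the composite reproduces a target qudit gate; the qudit dimension, the choice of the Fourier gate as target, and all hyperparameters are illustrative assumptions.

        # Minimal sketch (assumptions: qudit dimension d=3, qudit Fourier gate as target).
        import numpy as np
        import tensorflow as tf

        d = 3
        rng = np.random.default_rng(0)

        # Fixed random "reservoir" unitary (QR factor of a complex Gaussian matrix).
        A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        U_res = tf.constant(np.linalg.qr(A)[0], dtype=tf.complex128)

        # Target multi-level gate: the qudit Fourier transform (illustrative choice).
        w = np.exp(2j * np.pi / d)
        F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
        U_target = tf.constant(F, dtype=tf.complex128)

        # Trainable real matrices generating the input and readout unitaries.
        h_in = tf.Variable(rng.normal(size=(d, d)), dtype=tf.float64)
        h_out = tf.Variable(rng.normal(size=(d, d)), dtype=tf.float64)

        def unitary(h):
            # Build a general Hermitian generator from one real matrix, then apply the
            # Cayley transform (I - iH)(I + iH)^(-1), which is unitary.
            herm = tf.complex(0.5 * (h + tf.transpose(h)), 0.5 * (h - tf.transpose(h)))
            i_h = tf.constant(1j, dtype=tf.complex128) * herm
            eye = tf.eye(d, dtype=tf.complex128)
            return tf.linalg.solve(eye + i_h, eye - i_h)

        opt = tf.keras.optimizers.Adam(learning_rate=0.05)
        for step in range(300):
            with tf.GradientTape() as tape:
                U_total = unitary(h_out) @ U_res @ unitary(h_in)
                overlap = tf.linalg.trace(tf.linalg.adjoint(U_target) @ U_total)
                loss = 1.0 - tf.abs(overlap) / d        # gate infidelity (up to a global phase)
            grads = tape.gradient(loss, [h_in, h_out])
            opt.apply_gradients(zip(grads, [h_in, h_out]))
        print("final infidelity:", float(loss))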

    Adiabatic evolution on a spatial-photonic Ising machine

    Combinatorial optimization problems are crucial for widespread applications but remain difficult to solve on a large scale with conventional hardware. Novel optical platforms, known as coherent or photonic Ising machines, are attracting considerable attention as accelerators for optimization tasks formulable as Ising models. Annealing is a well-known technique based on adiabatic evolution for finding optimal solutions in classical and quantum systems made by atoms, electrons, or photons. Although various Ising machines employ annealing in some form, adiabatic computing in optical settings has been only partially investigated. Here, we realize the adiabatic evolution of frustrated Ising models with 100 spins programmed by spatial light modulation. We use holographic and optical control to change the spin couplings adiabatically, and exploit experimental noise to explore the energy landscape. Annealing enhances the convergence to the Ising ground state and allows one to find the problem solution with probability close to unity. Our results demonstrate a photonic scheme for combinatorial optimization in analogy with adiabatic quantum algorithms and enforced by optical vector-matrix multiplications and scalable photonic technology. Comment: 9 pages, 4 figures
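
    To make the annealing protocol concrete, here is a hedged numerical sketch (a toy simulation, not the optical experiment): the coupling matrix of a 100-spin Ising model is interpolated adiabatically from a trivially ordered Hamiltonian to a frustrated target one, while noisy single-spin-flip dynamics tracks the instantaneous ground state; energies are evaluated through vector-matrix products, the operation the spatial light modulator performs in parallel. The schedule and noise amplitude are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100                                       # number of spins

        # Frustrated target couplings and an easy, fully ferromagnetic starting point.
        J_target = rng.normal(size=(N, N))
        J_target = 0.5 * (J_target + J_target.T)
        np.fill_diagonal(J_target, 0.0)
        J_start = np.ones((N, N))
        np.fill_diagonal(J_start, 0.0)

        def energy(spins, J):
            # Ising energy from a vector-matrix-vector product.
            return -0.5 * spins @ J @ spins

        spins = np.ones(N)                            # ground state of J_start
        steps, sweeps, noise = 50, 20, 0.5            # schedule parameters (assumptions)

        for k in range(steps + 1):
            s = k / steps                             # adiabatic parameter, 0 -> 1
            J = (1.0 - s) * J_start + s * J_target
            for _ in range(sweeps * N):               # noisy single-flip relaxation
                i = rng.integers(N)
                dE = 2.0 * spins[i] * (J[i] @ spins)  # energy change if spin i flips
                if dE < noise * rng.normal():         # crude stand-in for experimental noise
                    spins[i] *= -1

        print("final energy on target couplings:", energy(spins, J_target))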

    Gaussian states provide universal and versatile quantum reservoir computing

    We establish the potential of continuous-variable Gaussian states in performing reservoir computing with linear dynamical systems in classical and quantum regimes. Reservoir computing is a machine learning approach to time series processing. It exploits the computational power, high-dimensional state space and memory of generic complex systems to achieve its goal, giving it considerable engineering freedom compared to conventional computing or recurrent neural networks. We prove that universal reservoir computing can be achieved without nonlinear terms in the Hamiltonian or non-Gaussian resources. We find that encoding the input time series into Gaussian states is both a source and a means to tune the nonlinearity of the overall input-output map. We further show that reservoir computing can in principle be powered by quantum fluctuations, such as squeezed vacuum, instead of classical intense fields. Our results introduce a new research paradigm for quantum reservoir computing and the engineering of Gaussian quantum states, pushing both fields in a new direction. Comment: 13 pages, 4 figures
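
    The reservoir-computing recipe referred to here can be illustrated with a purely classical stand-in (a sketch, not the paper's Gaussian-state model): a linear dynamical system is driven by an input series, and only a linear readout is trained, by ridge regression, to recall the input a few steps in the past. Reservoir size, spectral radius and the delay task are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        N, T, washout, delay = 50, 2000, 200, 3          # reservoir size, series length (assumptions)

        W = rng.normal(size=(N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the linear dynamics stable
        w_in = rng.normal(size=N)

        u = rng.uniform(-1, 1, size=T)                   # input time series
        x = np.zeros(N)
        states = np.zeros((T, N))
        for t in range(T):
            x = W @ x + w_in * u[t]                      # purely linear update, no activation
            states[t] = x

        # Train only the readout to recall the input 'delay' steps in the past.
        X, y = states[washout:], u[washout - delay:T - delay]
        ridge = 1e-6
        w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
        print("train MSE:", np.mean((X @ w_out - y) ** 2))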

    Classification using Dopant Network Processing Units

    The Environment and Interactions of Electrons in GaAs Quantum Dots

    At the dawn of the twentieth century, the underpinnings of centuries-old classical physics were beginning to be unravelled by the advent of quantum mechanics. As well as fundamentally shifting the way we understand the very nature of reality, this quantum revolution has subsequently shaped and created entire fields, paving the way for previously unimaginable technology. The quintessential instance of such technology is the quantum computer, whose building blocks - quantum bits, or qubits - are premised on the uniquely quantum principles of superposition and entanglement. It is predicted that quantum computers will be capable of efficiently solving certain classically intractable problems. To build a quantum computer, it is necessary to find a system which exhibits these uniquely quantum phenomena. The success of silicon-based integrated circuits for classical computing made semiconductors an obvious architecture on which to focus experimental quantum computing efforts. The two-dimensional electron gas which forms at the interface of GaAs/AlGaAs heterostructures constitutes an ideal platform for isolating and controlling single electrons, encoding quantum information in their spin and charge states. This thesis broadly addresses three key challenges to quantum computing with GaAs qubits: scalability, particularly in the context of readout; unwanted interactions between fragile quantum states and their environment; and the facilitation of controllable, strong interactions between separated qubits as a means of generating entanglement. These significant, unavoidable challenges must be addressed in order for a future solid-state quantum computer to be viable.

    Thermodynamic Computing

    The hardware and software foundations laid in the first half of the 20th Century enabled the computing technologies that have transformed the world, but these foundations are now under siege. The current computing paradigm, which is the foundation of much of the standard of living that we now enjoy, faces fundamental limitations that are evident from several perspectives. In terms of hardware, devices have become so small that we are struggling to eliminate the effects of thermodynamic fluctuations, which are unavoidable at the nanometer scale. In terms of software, our ability to imagine and program effective computational abstractions and implementations is clearly challenged in complex domains. In terms of systems, currently five percent of the power generated in the US is used to run computing systems - this astonishing figure is neither ecologically sustainable nor economically scalable. Economically, the cost of building next-generation semiconductor fabrication plants has soared past $10 billion. All of these difficulties - device scaling, software complexity, adaptability, energy consumption, and fabrication economics - indicate that the current computing paradigm has matured and that continued improvements along this path will be limited. If technological progress is to continue and corresponding social and economic benefits are to continue to accrue, computing must become much more capable, energy efficient, and affordable. We propose that progress in computing can continue under a united, physically grounded, computational paradigm centered on thermodynamics. Herein we propose a research agenda to extend these thermodynamic foundations into complex, non-equilibrium, self-organizing systems and apply them holistically to future computing systems that will harness nature's innate computational capacity. We call this type of computing "Thermodynamic Computing" or TC. Comment: A Computing Community Consortium (CCC) workshop report, 36 pages

    Harnessing Evolution in-Materio as an Unconventional Computing Resource

    This thesis illustrates the use and development of physical conductive analogue systems for unconventional computing using the Evolution in-Materio (EiM) paradigm. EiM uses an Evolutionary Algorithm to configure and exploit a physical material (or medium) for computation. While EiM processors show promise, fundamental questions and scaling issues remain. Additionally, their development is hindered by slow manufacturing and physical experimentation. This work addressed these issues by implementing simulated models to speed up research efforts, followed by investigations of physically implemented novel in-materio devices. Initial work leveraged simulated conductive networks as single-substrate 'monolithic' EiM processors, performing classification by formulating the system as an optimisation problem, solved using Differential Evolution. Different material properties and algorithm parameters were isolated and investigated; this explained the capabilities of configurable parameters and showed that the ideal nanomaterial choice depends upon problem complexity. Subsequently, drawing from concepts in the wider Machine Learning field, several enhancements to monolithic EiM processors were proposed and investigated. These ensured more efficient use of training data, better classification decision boundary placement, an independently optimised readout layer, and a smoother search space. Finally, scalability and performance issues were addressed by constructing in-Materio Neural Networks (iM-NNs), where several EiM processors were stacked in parallel and operated as physical realisations of Hidden Layer neurons. Greater flexibility in system implementation was achieved by re-using a single physical substrate recursively as several virtual neurons, but this sacrificed faster parallelised execution. These novel iM-NNs were first implemented using Simulated in-Materio neurons, and trained for classification as Extreme Learning Machines, which were found to outperform artificial networks of a similar size. Physical iM-NNs were then implemented using a Raspberry Pi, a custom Hardware Interface and Lambda Diode based Physical in-Materio neurons, which were trained successfully with neuroevolution. A more complex AutoEncoder structure was then proposed and implemented physically to perform dimensionality reduction on a handwritten digits dataset, outperforming both Principal Component Analysis and artificial AutoEncoders. This work presents an approach to exploit systems with interesting physical dynamics and leverage them as a computational resource. Such systems could become low-power, high-speed, unconventional computing assets in the future.
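
    A minimal sketch of the EiM loop described above (a toy surrogate, not the thesis' hardware or simulator): a fixed random nonlinear map stands in for the physical substrate, and a plain Differential Evolution search adjusts its configuration voltages until the scalar output separates two classes. The surrogate, the dataset and all DE parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy two-class dataset: two Gaussian blobs in 2D.
        X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 2)), rng.normal(1.0, 1.0, size=(100, 2))])
        y = np.hstack([np.zeros(100), np.ones(100)])

        # Surrogate "material": fixed random mixing of 2 data inputs and 5 configuration voltages.
        W = rng.normal(size=(7, 7))

        def material_output(data, v):
            inp = np.hstack([data, np.tile(v, (len(data), 1))])  # data + configuration voltages
            return np.tanh(inp @ W).sum(axis=1)                  # scalar "output current"

        def fitness(v):
            pred = (material_output(X, v) > 0.0).astype(float)   # threshold the output current
            return np.mean(pred != y)                            # classification error

        # DE/rand/1/bin over the 5 configuration voltages.
        pop = rng.uniform(-5.0, 5.0, size=(20, 5))
        cost = np.array([fitness(v) for v in pop])
        F_scale, CR = 0.8, 0.9
        for gen in range(100):
            for i in range(len(pop)):
                others = [j for j in range(len(pop)) if j != i]
                a, b, c = pop[rng.choice(others, 3, replace=False)]
                trial = np.where(rng.random(5) < CR, a + F_scale * (b - c), pop[i])
                trial_cost = fitness(trial)
                if trial_cost <= cost[i]:                         # greedy selection
                    pop[i], cost[i] = trial, trial_cost

        print("best training error:", cost.min())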

    The 2020 magnetism roadmap

    Following the success and relevance of the 2014 and 2017 Magnetism Roadmap articles, this 2020 Magnetism Roadmap edition takes yet another timely look at newly relevant and highly active areas in magnetism research. The overall layout of this article is unchanged, given that it has proved the most appropriate way to convey the most relevant aspects of today's magnetism research in a wide variety of sub-fields to a broad readership. A different group of experts has again been selected for this article, representing both the breadth of new research areas, and the desire to incorporate different voices and viewpoints. The latter is especially relevant for this type of article, in which one's field of expertise has to be accommodated on two printed pages only, so that personal selection preferences are naturally rather more visible than in other types of articles. Most importantly, the very relevant advances in the field of magnetism research in recent years make the publication of yet another Magnetism Roadmap a very sensible and timely endeavour, allowing its authors and readers to take another broad-based, but concise look at the most significant developments in magnetism, their precise status, their challenges, and their anticipated future developments. While many of the contributions in this 2020 Magnetism Roadmap edition have significant associations with different aspects of magnetism, the general layout can nonetheless be classified in terms of three main themes: (i) phenomena, (ii) materials and characterization, and (iii) applications and devices. While these categories are unsurprisingly rather similar to the 2017 Roadmap, the order is different, in that the 2020 Roadmap considers phenomena first, even if their occurrences are naturally very difficult to separate from the materials exhibiting such phenomena. Nonetheless, the specifically selected topics seemed to be best displayed in the order presented here, in particular, because many of the phenomena or geometries discussed in (i) can be found or designed into a large variety of materials, so that the progression of the article embarks from more general concepts to more specific classes of materials in the selected order. Given that applications and devices are based on both phenomena and materials, it seemed most appropriate to close the article with the application and devices section (iii) once again. The 2020 Magnetism Roadmap article contains 14 sections, all of which were written by individual authors and experts, specifically addressing a subject in terms of its status, advances, challenges and perspectives in just two pages. Evidently, this two-page format limits the depth to which each subject can be described. Nonetheless, the most relevant and key aspects of each field are touched upon, which enables the Roadmap as a whole to give its readership an initial overview of and outlook into a wide variety of topics and fields in a fairly condensed format. Correspondingly, the Roadmap pursues the goal of giving each reader a brief reference frame of relevant and current topics in modern applied magnetism research, even if not all sub-fields can be represented here. The first block of this 2020 Magnetism Roadmap, which is focussed on (i) phenomena, contains five contributions, which address the areas of interfacial Dzyaloshinskii-Moriya interactions, and two-dimensional and curvilinear magnetism, as well as spin-orbit torque phenomena and all-optical magnetization reversal.
All of these contributions describe cutting-edge aspects of rather fundamental physical processes and properties, associated with new and improved magnetic materials' properties, together with potential developments in terms of future devices and technology. As such, they form part of a widening magnetism 'phenomena reservoir' for utilization in applied magnetism and related device technology. The final block (iii) of this article focuses on such applications and device-related fields in four contributions relating to currently active areas of research, which are of course utilizing magnetic phenomena to enable specific functions. These contributions highlight the role of magnetism or spintronics in the field of neuromorphic and reservoir computing, terahertz technology, and domain wall-based logic. One aspect common to all of these application-related contributions is that they are not yet being utilized in commercially available technology; it is currently still an open question whether or not such technological applications will be magnetism-based at all in the future, or if other types of materials and phenomena will yet outperform magnetism. This last point is actually a very good indication of the vibrancy of applied magnetism research today, given that it demonstrates that magnetism research is able to venture into novel application fields, based upon its portfolio of phenomena, effects and materials. This materials portfolio in particular defines the central block (ii) of this article, with its five contributions interconnecting phenomena with devices, for which materials and the characterization of their properties are the decisive discriminator between purely academically interesting aspects and the true viability of real-life devices, because only available materials and their associated fabrication and characterization methods permit reliable technological implementation. These five contributions specifically address magnetic films and multiferroic heterostructures for the purpose of spin electronic utilization, multi-scale materials modelling, and magnetic materials design based upon machine-learning, as well as materials characterization via polarized neutron measurements. As such, these contributions illustrate the balanced relevance of research into experimental and modelling magnetic materials, as well as the importance of sophisticated characterization methods that allow for an ever-more refined understanding of materials. As a combined and integrated article, this 2020 Magnetism Roadmap is intended to be a reference point for current, novel and emerging research directions in modern magnetism, just as its 2014 and 2017 predecessors have been in previous years.

    Complex extreme nonlinear waves: classical and quantum theory for new computing models

    The historical role of nonlinear waves in developing the science of complexity, and also their physical feature of being a widespread paradigm in optics, establishes a bridge between two diverse and fundamental fields that can open an immeasurable number of new routes. In what follows, we present our most important results on nonlinear waves in classical and quantum nonlinear optics. Regarding classical phenomenology, we lay the groundwork for establishing one uniform theory of dispersive shock waves, and for controlling complex nonlinear regimes through simple integer topological invariants. The second quantized field theory of optical propagation in nonlinear dispersive media allows us to perform numerical simulations of quantum solitons and the quantum nonlinear box problem. The complexity of light propagation in nonlinear media is here examined from all the main points of view: extreme phenomena, recurrence, control, modulation instability, and so forth. Such an analysis has one major goal: answering the question, can nonlinear waves do computation? For this purpose, our study towards the realization of an all-optical computer, able to do computation by implementing machine learning algorithms, is illustrated. The first all-optical realization of the Ising machine and the theoretical foundations of the random optical machine are here reported. We believe that this treatise is a fundamental study for the application of nonlinear waves to new computational techniques, disclosing new procedures for the control of extreme waves and for the design of new quantum sources and non-classical state generators for future quantum technologies, while also giving valuable insights into all-optical reservoir computing. Can nonlinear waves do computation? Our random optical machine draws the route towards a positive answer to this question, substituting the randomness either with the uncertainty of quantum noise effects on light propagation or with the arbitrariness of classical, extremely nonlinear regimes, as similarly done by random projection methods and extreme learning machines.
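
    The closing analogy with random projection methods and extreme learning machines can be made concrete with a short sketch (an illustration of the general idea, not the random optical machine itself): a fixed random nonlinear layer plays the role of the random medium, and only a linear readout is fitted by least squares. The toy regression task and layer width are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy regression task: learn sin(x) on [-pi, pi].
        x = rng.uniform(-np.pi, np.pi, size=(500, 1))
        y = np.sin(x).ravel()

        H = 200                                          # number of random features ("speckles")
        W, b = rng.normal(size=(1, H)), rng.normal(size=H)
        phi = np.tanh(x @ W + b)                         # fixed random nonlinear projection

        ridge = 1e-8                                     # only the linear readout is solved for
        w_out = np.linalg.solve(phi.T @ phi + ridge * np.eye(H), phi.T @ y)
        print("train MSE:", np.mean((phi @ w_out - y) ** 2))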