
    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners in the field of memristive circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages.
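
    A memristor's behavior follows directly from the state-dependent resistance described above. The following is a minimal software sketch, not taken from the paper, of a generic memristor with a linear-drift internal state; the model form and all parameter values (R_on, R_off, mu, D, the sinusoidal drive) are illustrative assumptions.

        import numpy as np

        # Minimal sketch of a generic memristor: v = R(x) * i, with an internal
        # state x in [0, 1] driven by the current (linear-drift assumption).
        # All parameter values below are illustrative, not taken from the paper.
        R_on, R_off = 100.0, 16e3      # assumed bounding resistances (ohms)
        mu, D = 1e-14, 1e-8            # assumed ion mobility (m^2/Vs) and thickness (m)
        dt = 1e-4                      # integration time step (s)

        def resistance(x):
            """Resistance as a function of the internal state variable x."""
            return R_on * x + R_off * (1.0 - x)

        def step(x, i):
            """Advance the internal state one time step under current i."""
            dx = (mu * R_on / D**2) * i * dt
            return np.clip(x + dx, 0.0, 1.0)

        # Drive with a sinusoidal voltage and record the pinched i-v loop.
        x, v_hist, i_hist = 0.1, [], []
        for t in np.arange(0.0, 2.0, dt):
            v = np.sin(2 * np.pi * 1.0 * t)
            i = v / resistance(x)
            x = step(x, i)
            v_hist.append(v)
            i_hist.append(i)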

    CMOS-compatible Ising and Potts Annealing Using Single Photon Avalanche Diodes

    Massively parallel annealing processors may offer superior performance for a wide range of sampling and optimization problems. A key component dictating the size of these processors is the neuron update circuit, ideally implemented using special stochastic nanodevices. We leverage photon statistics using single photon avalanche diodes (SPADs) and temporal filtering to generate stochastic states. This method is a powerful alternative offering unique features not currently seen in annealing processors: the ability to continuously control the computational temperature and the seamless extension to the Potts model, an n-state generalization of the two-state Ising model. SPADs also offer a considerable practical advantage since they are readily manufacturable in current standard CMOS processes. As a first step towards realizing a CMOS SPAD-based annealer, we have designed Ising and Potts models driven by an array of discrete SPADs and show that they accurately sample from their theoretical distributions.
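
    As a point of reference for the sampling task described above, here is a minimal software sketch, not the SPAD hardware itself, of single-spin Gibbs updates for an Ising model with a continuously controlled inverse temperature; the random couplings, fields, and annealing schedule are illustrative assumptions, and a pseudo-random generator stands in for the SPAD-derived randomness.

        import numpy as np

        # Software stand-in for stochastic neuron updates in an Ising annealer.
        # Couplings J, fields h, and the temperature schedule are assumptions.
        rng = np.random.default_rng(0)
        n = 16
        J = rng.normal(0.0, 1.0, (n, n))
        J = (J + J.T) / 2.0                # symmetric couplings
        np.fill_diagonal(J, 0.0)
        h = rng.normal(0.0, 1.0, n)
        s = rng.choice([-1, 1], n)         # two-state Ising spins

        def gibbs_sweep(s, beta):
            """One sweep of single-spin Gibbs updates at inverse temperature beta."""
            for i in range(n):
                local_field = J[i] @ s + h[i]
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
                s[i] = 1 if rng.random() < p_up else -1
            return s

        # Anneal by continuously raising beta (lowering the computational temperature).
        for beta in np.linspace(0.1, 3.0, 200):
            s = gibbs_sweep(s, beta)

        energy = -0.5 * s @ J @ s - h @ s  # energy of the final configuration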

    Smart Distributed Generation System Event Classification using Recurrent Neural Network-based Long Short-term Memory

    High penetration of distributed generation (DG) sources into a decentralized power system causes several disturbances, making the monitoring and operation control of the system complicated. Moreover, being passive, modern DG systems are unable to detect and report these power-quality disturbances in an intelligent manner. This paper proposes a novel intelligent technique capable of making real-time decisions on the occurrence of different DG events, such as islanding, capacitor switching, unsymmetrical faults, load switching, and loss of a parallel feeder, and of distinguishing these events from the normal mode of operation. This event classification technique was designed to diagnose the distinctive pattern of the time-domain signal representing a measured electrical parameter, such as the voltage, at the DG point of common coupling (PCC) during such events. Different power system events were then classified into their root causes using long short-term memory (LSTM), a deep learning algorithm for sequence-to-label classification. A total of 1100 events covering islanding, faults, and other DG events were generated from a model of a smart distributed generation system in a MATLAB/Simulink environment. Classifier performance was evaluated using 5-fold cross-validation. The genetic algorithm (GA) was used to determine the optimum values of the classification hyper-parameters and the best combination of features. The simulation results indicate that events were classified with high precision and specificity within ten cycles of their occurrence, achieving a validation accuracy of 99.17%. The performance of the proposed classification technique does not degrade in the presence of noise in the test data, multiple DG sources in the model, or the inclusion of a motor-starting event in the training samples.
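
    To make the sequence-to-label setup concrete, here is a minimal sketch of an LSTM event classifier in PyTorch; it is not the authors' MATLAB/Simulink implementation, and the layer sizes, six event classes, window length, and input dimensionality are illustrative assumptions.

        import torch
        import torch.nn as nn

        class EventClassifier(nn.Module):
            """LSTM that maps a measured time-domain window to an event label."""
            def __init__(self, n_features=1, hidden=64, n_classes=6):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):              # x: (batch, time, features)
                _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the window
                return self.head(h_n[-1])      # logits over the event classes

        model = EventClassifier()
        window = torch.randn(8, 200, 1)        # e.g. 8 PCC voltage windows of 200 samples
        logits = model(window)
        predicted_event = logits.argmax(dim=1)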

    Operational Research and Machine Learning Applied to Transport Systems

    The New Economy, environmental sustainability and global competitiveness drive innovations in supply chain management and transport systems. The New Economy increases the amount and types of products that can be delivered directly to homes, challenging the organisation of last-mile delivery companies. To keep up with these challenges, delivery companies are continuously seeking new innovations that allow them to pack goods faster and more efficiently. Thus, the packing problem has become a crucial factor, and solving it effectively is essential for the success of goods delivery and logistics. On land, rail transportation is known to be the most eco-friendly transport system in terms of emissions, energy consumption, land use, noise levels, and the quantities of people and goods that can be moved. It is difficult to apply innovations to the rail industry for a number of reasons: its risk-averse nature, the high level of regulation, the very high cost of infrastructure upgrades, and the natural monopoly of resources in many countries. In the UK, however, the Department for Transport published the Joint Rail Data Action Plan in 2018, opening some rail industry datasets for research purposes. In line with the above developments, this thesis focuses on machine learning and operational research techniques in two main areas: improving packing operations for logistics and improving various operations for passenger rail. In total, the research in this thesis makes six contributions, as detailed below.

    The first contribution is a new mathematical model and a new heuristic to solve the Multiple Heterogeneous Knapsack Problem, giving priority to smaller bins and considering some important container loading constraints. This problem is interesting because many companies prefer to deal with smaller bins, as they are less expensive. Moreover, giving priority to filling small bins (rather than large bins) is very important in some industries, e.g. fast-moving consumer goods.

    The second contribution is a novel strategy to hybridise operational research with machine learning to estimate whether a particular packing solution is feasible in constant O(1) computational time. Given that traditional feasibility checking for packing solutions is an NP-hard problem, this strategy is expected to save significant time and computational effort.

    The third contribution is an extended mathematical model and an algorithm that apply the packing problem to improving the seat reservation system in passenger rail. The problem is formulated as the Group Seat Reservation Knapsack Problem with Price on Seat, an extension of the Offline Group Seat Reservation Knapsack Problem. This extension introduces a profit evaluation that depends not only on the space occupied but also on the individual profit brought by each reserved seat.

    The fourth contribution is a data-driven method to infer feasible train routing strategies from open data on the United Kingdom rail network. Briefly, most of the UK network is divided into sections called berths, and the transition point from one berth to another is called a berth step. Sensors at berth steps detect the movement of a passing train. The result of the method is a directed graph, the berth graph, where each node represents a berth and each arc represents a berth step. The arcs represent the feasible routing strategies, i.e. where a train can move from one berth. A connected path between two berths represents a connected section of the network.

    The fifth contribution is a novel method to estimate the amount of time that a train will spend on a berth. It compares two different approaches, AutoRegressive Moving Average models and Recurrent Neural Networks, and analyses the pros and cons of each choice with statistical analyses. The method is tested on a real-world case study: one berth that represents a busy junction in the Merseyside region.

    The sixth contribution is an adaptive method to forecast the running time of a train journey using Gated Recurrent Units. The method exploits the TD's berth information and the berth graph. The case study adopted in the experimental tests is the train network in the Merseyside region.
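
    The berth graph of the fourth contribution can be illustrated with a small sketch; this is not the thesis code, and the (train_id, from_berth, to_berth) event format is an assumption about how train-describer berth steps might be pre-processed.

        from collections import defaultdict

        def build_berth_graph(berth_steps):
            """Directed adjacency map: berth -> set of berths reachable in one step."""
            graph = defaultdict(set)
            for train_id, from_berth, to_berth in berth_steps:
                graph[from_berth].add(to_berth)    # each observed step is a feasible arc
            return graph

        def connected_path(graph, start, goal, visited=None):
            """Depth-first check that a train can move from start to goal."""
            visited = visited or set()
            if start == goal:
                return True
            visited.add(start)
            return any(connected_path(graph, nxt, goal, visited)
                       for nxt in graph[start] if nxt not in visited)

        steps = [("1A23", "B1", "B2"), ("1A23", "B2", "B3"), ("2C45", "B2", "B4")]
        g = build_berth_graph(steps)
        assert connected_path(g, "B1", "B4")       # a connected section of the network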

    Mixed Order Hyper-Networks for Function Approximation and Optimisation

    Many systems take inputs, which can be measured and sometimes controlled, and produce outputs, which can also be measured and which depend on the inputs. Taking numerous measurements from such systems produces data, which may be used either to model the system with the goal of predicting the output associated with a given input (function approximation, or regression) or to find the input settings required to produce a desired output (optimisation, or search). Approximating or optimising a function is central to the field of computational intelligence. There are many existing methods for performing regression and optimisation based on samples of data, but they all have limitations. Multi-layer perceptrons (MLPs) are universal approximators, but they suffer from the black box problem, which means that their structure and the function they implement are opaque to the user. They also suffer from a propensity to become trapped in local minima or large plateaux in the error function during learning. A regression method with a structure that allows models to be compared, human knowledge to be extracted, optimisation searches to be guided and model complexity to be controlled is desirable. This thesis presents such a method. It presents a single framework for both regression and optimisation: the mixed order hyper network (MOHN). A MOHN implements a function f: {-1,1}^n -> R to arbitrary precision. The structure of a MOHN makes explicit the ways in which input variables interact to determine the function output, which allows human insight and complexity control that are very difficult in neural networks with hidden units. The explicit structure representation also allows efficient algorithms for searching for an input pattern that leads to a desired output. A number of learning rules for estimating the weights from a sample of data are presented, along with a heuristic method for choosing which connections to include in a model. Several methods for searching a MOHN for inputs that lead to a desired output are compared. Experiments compare a MOHN to an MLP on regression tasks. The MOHN is found to achieve a comparable level of accuracy to an MLP but suffers less from local minima in the error function and shows less variance across multiple training trials. It is also easier to interpret and to combine in an ensemble. The trade-off between the fit of a model to its training data and to an independent set of test data is shown to be easier to control in a MOHN than in an MLP. A MOHN is also compared to a number of existing optimisation methods, including those using estimation of distribution algorithms, genetic algorithms and simulated annealing. The MOHN is able to find optimal solutions in far fewer function evaluations than these methods on tasks selected from the literature.
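
    As a sketch of the explicit structure described above, and under the assumption that a MOHN output is a weighted sum of products over chosen subsets ("connections") of the {-1,+1} inputs, the following toy example fits such a model by least squares; the connection set, sample data, and target function are illustrative only.

        import numpy as np
        from itertools import combinations

        def mohn_output(x, connections, weights):
            """f(x) = sum_c w_c * prod_{i in c} x_i for x in {-1,+1}^n."""
            return sum(w * np.prod(x[list(c)]) for c, w in zip(connections, weights))

        # Connections up to order 2: the constant, all single inputs, all pairs.
        n = 4
        connections = [()] + [(i,) for i in range(n)] + list(combinations(range(n), 2))

        # A small sample of {-1,+1} inputs and a target function to approximate.
        X = np.array([[ 1, -1,  1, -1], [ 1,  1, -1, -1], [-1, -1, -1,  1], [ 1,  1,  1,  1],
                      [-1,  1, -1,  1], [-1, -1,  1,  1], [ 1, -1, -1,  1], [-1,  1,  1, -1]])
        y = np.array([x[0] * x[1] + 0.5 * x[2] for x in X])

        # Each connection is an explicit basis term, so the weights are a linear fit.
        basis = np.array([[np.prod(x[list(c)]) for c in connections] for x in X])
        weights, *_ = np.linalg.lstsq(basis, y, rcond=None)
        pred = np.array([mohn_output(x, connections, weights) for x in X])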

    Biologically inspired evolutionary temporal neural circuits

    Biological neural networks have always motivated the creation of new artificial neural networks, and in this case, of a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short- and long-term memories, as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas the synaptic connection strengths as well as delayed feedback loops (ER circuits) can constitute longer-term memories (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) through the automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards the realization of general intelligent systems that need minimal or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and by evolutionary search over architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using the Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations are given to demonstrate the above capabilities of GETnet.
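
    The moving-average/autoregressive nodes mentioned above can be illustrated with a single toy node; this sketch is not GETnet itself, and the delay lengths, fixed weights, and tanh nonlinearity are assumptions for illustration.

        import numpy as np

        def temporal_node(u, b, a):
            """y[t] = tanh( sum_k b[k]*u[t-k] + sum_m a[m]*y[t-m] )."""
            y = np.zeros_like(u)
            for t in range(len(u)):
                ma = sum(b[k] * u[t - k] for k in range(len(b)) if t - k >= 0)        # STM: delayed inputs
                ar = sum(a[m] * y[t - m] for m in range(1, len(a)) if t - m >= 0)     # LTM: delayed feedback
                y[t] = np.tanh(ma + ar)
            return y

        u = np.sin(np.linspace(0.0, 10.0, 200))              # example input signal
        y = temporal_node(u, b=[0.5, 0.3, 0.2], a=[0.0, 0.4])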

    Configurable analog hardware for neuromorphic Bayesian inference and least-squares solutions

    Sparse approximation is a Bayesian inference program with a wide range of signal processing applications, such as the Compressed Sensing recovery used in medical imaging. Previous sparse coding implementations relied on digital algorithms whose power consumption and performance scale poorly with problem size, rendering them unsuitable for portable applications and a bottleneck in high-speed applications. A novel analog architecture implementing the Locally Competitive Algorithm (LCA) was designed and programmed onto Field Programmable Analog Arrays (FPAAs), using floating-gate transistors to set the analog parameters. A network of 6 coefficients was demonstrated to converge to values similar to those of a digital sparse approximation algorithm, but with better power and performance scaling. A rate-encoded spiking algorithm was then developed and shown to converge to similar values as the LCA. A second novel architecture was designed and programmed on an FPAA, implementing the spiking version of the LCA with integrate-and-fire neurons. A network of 18 neurons converged to values similar to those of a digital sparse approximation algorithm, with even better performance and power efficiency than the non-spiking network. Novel algorithms were created to increase floating-gate programming speed by more than two orders of magnitude and to reduce programming error from device mismatch. A new FPAA chip was designed and tested, which allowed for rapid interfacing and additional improvements in accuracy. Finally, a neuromorphic chip was designed, containing 400 integrate-and-fire neurons and capable of converging on a sparse approximation solution in 10 microseconds, over 1000 times faster than the best digital solution.
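
    The Locally Competitive Algorithm realized on the FPAAs has well-known continuous-time dynamics, sketched below in software; the dictionary size, step size, and threshold are illustrative assumptions rather than the hardware parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        Phi = rng.normal(size=(32, 6))                 # dictionary: 32-d signal, 6 coefficients
        Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary elements
        x = Phi @ np.array([0.0, 1.2, 0.0, -0.8, 0.0, 0.0]) + 0.01 * rng.normal(size=32)

        lam, tau, dt = 0.1, 1.0, 0.05
        b = Phi.T @ x                                  # driving input to each neuron
        G = Phi.T @ Phi - np.eye(6)                    # lateral inhibition between neurons
        u = np.zeros(6)                                # internal (membrane) states

        def threshold(u, lam):
            """Soft threshold: only sufficiently driven neurons become active."""
            return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

        for _ in range(400):
            a = threshold(u, lam)
            u += (dt / tau) * (b - u - G @ a)          # LCA neuron dynamics

        a = threshold(u, lam)                          # sparse coefficient estimate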

    Autonomous Probabilistic Coprocessing with Petaflips per Second

    In this paper we present a concrete design for a probabilistic (p-) computer based on a network of p-bits: robust classical entities fluctuating between -1 and +1, with probabilities that are controlled through an input constructed from the outputs of other p-bits. The architecture of this probabilistic computer is similar to a stochastic neural network, with the p-bit playing the role of a binary stochastic neuron, but with one key difference: there is no sequencer used to enforce an ordering of p-bit updates, as is typically required. Instead, we explore sequencerless designs where all p-bits are allowed to flip autonomously, and demonstrate that such designs can allow ultrafast operation unconstrained by available clock speeds without compromising the solution's fidelity. Based on experimental results from a hardware benchmark of the autonomous design and benchmarked device models, we project that a nanomagnetic implementation can scale to achieve petaflips per second with millions of neurons. A key contribution of this paper is the focus on a hardware metric, flips per second, as a problem- and substrate-independent figure of merit for an emerging class of hardware annealers known as Ising Machines. Much like the shrinking feature sizes of transistors that have continually driven Moore's Law, we believe that flips per second can be continually improved in later technology generations of a wide class of probabilistic, domain-specific hardware. Comment: 13 pages, 8 figures, 1 table.
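
    The p-bit update is commonly written in the p-bit literature as m_i = sgn(tanh(beta * I_i) - r), with r drawn uniformly from (-1, 1) and the input I_i built from the other p-bits. The sketch below mimics the sequencerless idea in software by letting any p-bit update next; the couplings, bias values, beta, and update count are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 8
        J = rng.normal(0.0, 1.0, (n, n))
        J = (J + J.T) / 2.0                            # symmetric couplings
        np.fill_diagonal(J, 0.0)
        h = np.zeros(n)
        m = rng.choice([-1, 1], n)                     # p-bit states fluctuate between -1 and +1
        beta = 1.0                                     # assumed inverse pseudo-temperature
        flips = 0

        for _ in range(10_000):
            i = rng.integers(n)                        # no sequencer: any p-bit may update next
            I_i = J[i] @ m + h[i]                      # input built from the other p-bits' outputs
            new_state = np.sign(np.tanh(beta * I_i) - rng.uniform(-1.0, 1.0))
            flips += int(new_state != m[i])
            m[i] = new_state

        flips_per_update = flips / 10_000              # software analogue of "flips per second"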

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.