
    Reliability of Extreme Learning Machines

    Neumann K. Reliability of Extreme Learning Machines. Bielefeld: Bielefeld University Library; 2014. The reliable application of machine learning methods becomes increasingly important in challenging engineering domains. In particular, the application of extreme learning machines (ELMs) seems promising because of their apparent simplicity and their capacity for very efficient processing of large, high-dimensional data sets. However, the ELM paradigm is based on single hidden-layer neural networks with randomly initialized and fixed input weights and is thus inherently unreliable. This black-box character usually deters engineers from applying ELMs to potentially safety-critical tasks. The problem becomes even more severe since, in principle, only sparse and noisy data sets can be provided in such domains. The goal of this thesis is therefore to equip the ELM approach with the ability to perform reliably. This goal is approached in three respects: enhancing the robustness of ELMs to initialization, enabling ELMs to handle slow changes in the environment (i.e. input drifts), and allowing the incorporation of continuous constraints derived from prior knowledge. It is shown in several diverse scenarios that the novel ELM approach proposed in this thesis ensures safe and reliable application while sustaining the full modeling power of data-driven methods.
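    As a concrete illustration of the paradigm the thesis builds on, here is a minimal Python sketch of a basic ELM (assuming NumPy; the function names are illustrative, not from the thesis): input weights and biases are drawn randomly and kept fixed, and only the output weights are fit, by least squares.

    ```python
    import numpy as np

    def elm_fit(X, Y, n_hidden=100, seed=0):
        """Basic ELM: random, fixed input weights; least-squares output weights."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
        b = rng.normal(size=n_hidden)                 # random biases (never trained)
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # only the readout is fit
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta
    ```

    Note that the fitted readout beta, and hence the model quality, depends on the random draw of W and b; this sensitivity to initialization is exactly the unreliability the thesis sets out to address.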

    On the choice of metric in gradient-based theories of brain function

    The idea that the brain functions so as to minimize certain costs pervades theoretical neuroscience. Since a cost function by itself does not predict how the brain finds its minima, additional assumptions about the optimization method need to be made to predict the dynamics of physiological quantities. In this context, steepest descent (also called gradient descent) is often suggested as an algorithmic principle of optimization potentially implemented by the brain. In practice, researchers often treat the vector of partial derivatives as the gradient. However, the definition of the gradient and the notion of a steepest direction depend on the choice of a metric. Since the choice of the metric involves a large number of degrees of freedom, the predictive power of models based on gradient descent must be called into question unless there are strong constraints on the choice of the metric. Here we provide a didactic review of the mathematics of gradient descent, illustrate common pitfalls of using gradient descent as a principle of brain function with examples from the literature, and propose ways forward to constrain the metric.
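    The paper's central point can be made concrete in a few lines: under a metric M, the steepest-descent direction is -M⁻¹ times the vector of partial derivatives, so different, equally admissible metrics predict different dynamics. A toy Python sketch (the quadratic cost and the hand-picked metric M are hypothetical choices, not taken from the paper):

    ```python
    import numpy as np

    # Toy cost f(x) = 0.5 * x^T A x; its vector of partial derivatives is A x.
    A = np.array([[5.0, 0.0],
                  [0.0, 1.0]])
    grad = lambda x: A @ x

    M = np.array([[5.0, 0.0],        # an alternative, hand-picked metric
                  [0.0, 1.0]])

    x_euclid = np.array([1.0, 1.0])  # steepest descent w.r.t. the Euclidean metric
    x_metric = x_euclid.copy()       # steepest descent w.r.t. M

    eta = 0.1
    for _ in range(20):
        x_euclid -= eta * grad(x_euclid)                      # dx ∝ -∇f
        x_metric -= eta * np.linalg.solve(M, grad(x_metric))  # dx ∝ -M⁻¹ ∇f

    print(x_euclid, x_metric)  # same minimum, visibly different trajectories
    ```

    Both updates converge to the same minimum, but they pass through different physiological states along the way, which is why the metric matters for predictions about brain dynamics.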

    Neuromorphic Computing with Resistive Switching Devices

    Resistive switches, commonly referred to as resistive random-access memory (RRAM) devices and modeled as memristors, are an emerging nanoscale technology that can revolutionize data storage and computing approaches. Enabled by advances in nanoscale semiconductor fabrication and a detailed understanding of the physical and chemical processes occurring at the atomic scale, resistive switches offer high-speed, low-power, and extremely dense nonvolatile data storage. Further, the analog capabilities of resistive switching devices enable neuromorphic computing approaches that can achieve massively parallel computation with a power and area budget orders of magnitude lower than that of today's conventional digital approaches. This dissertation presents an investigation of tungsten-oxide-based resistive switching devices for use in neuromorphic computing applications. Device structure, fabrication, and integration are described, and physical models are developed to describe the behavior of the devices. These models are used to develop array-scale simulations in support of neuromorphic computing approaches. Several signal processing algorithms are adapted for acceleration using arrays of resistive switches. Both simulation and experimental results are reported. Finally, guiding principles and proposals for future work are discussed. PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116743/1/sheridp_1.pd
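    The "massively parallel computation" referred to above usually means analog matrix-vector multiplication in a crossbar array, where Ohm's and Kirchhoff's laws perform the arithmetic. A simplified sketch of the idealized operation (ignoring wire resistance, sneak paths, and device nonidealities; the values are illustrative, not from the dissertation):

    ```python
    import numpy as np

    # Idealized crossbar: each cross-point stores a conductance G[i, j].
    # Driving the rows with voltages V yields column currents
    # I[j] = sum_i G[i, j] * V[i]  -- a matrix-vector product in one analog step.
    rng = np.random.default_rng(1)
    G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (illustrative)
    V = np.array([0.1, 0.2, 0.0, 0.1])        # row voltages in volts

    I = G.T @ V                               # column currents in amperes
    print(I)
    ```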

    Dimensions of Timescales in Neuromorphic Computing Systems

    This article is a public deliverable of the EU project "Memory technologies with multi-scale time constants for neuromorphic architectures" (MeMScales, https://memscales.eu, Call ICT-06-2019 Unconventional Nanoelectronics, project number 871371). This arXiv version is a verbatim copy of the deliverable report, with administrative information stripped. It collects a wide and varied assortment of phenomena, models, research themes, and algorithmic techniques connected with timescale phenomena in the fields of computational neuroscience, mathematics, machine learning, and computer science, with a bias toward aspects relevant for neuromorphic engineering. It turns out that this theme is very rich indeed and spreads out in many directions that defy a unified treatment. We collected several dozen sub-themes, each of which has been investigated in specialized settings (in the neurosciences, mathematics, computer science, and machine learning) and documented in its own body of literature. The more we dived into this diversity, the clearer it became that our first effort to compose a survey must remain sketchy and partial. We conclude with a list of insights distilled from this survey, which give general guidelines for the design of future neuromorphic systems.

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that interchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or to deal with complex data the way our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own views on the current state and future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside the field and those just entering it, as well as future perspectives for those well established in the neuromorphic computing community.
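    A quick back-of-envelope check of the figures quoted above (our arithmetic, not the roadmap's): at 10^18 operations per second, a 20-30 MW budget corresponds to roughly 20-30 pJ per operation, the energy scale neuromorphic designs aim to undercut.

    ```python
    ops_per_second = 1e18              # exascale: 10^18 calculations per second
    for power_mw in (20, 30):
        joules_per_op = power_mw * 1e6 / ops_per_second
        print(f"{power_mw} MW -> {joules_per_op * 1e12:.0f} pJ per operation")
    ```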

    Shared control for natural motion and safety in hands-on robotic surgery

    In hands-on robotic surgery, the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies are used to combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While research on active constraints for rigid anatomy is well established, limited work has been done on dynamic active constraints (DACs) for deformable soft tissue, particularly on strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end-effector dynamics on the surgeon, reducing dexterity and increasing fatigue. Current control policies on these systems compensate only for gravity, ignoring other dynamic effects. This thesis presents several research contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and expand the usage of DACs to point clouds. A novel null-space-based optimization technique has been developed that minimizes the end-effector friction, mass, and inertia of redundant robots, creating a more natural motion, closer to the feeling of the tool unattached to the robot; by operating in the null space, it leaves the surgeon in full control of the procedure. A novel DACs approach has also been developed that operates on point clouds, allowing its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual-reality ultrasound scenario demonstrates a reduction in the work needed to maneuver the tool and improvements in accuracy and speed when performing virtual ultrasound scans. Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
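    The null-space idea behind the "natural motion" contribution can be sketched in a few lines: joint velocities projected into the null space of the task Jacobian produce no end-effector motion, so the surgeon's primary task is untouched. A minimal illustration of the projection for a generic redundant arm (not the thesis's actual controller or objective):

    ```python
    import numpy as np

    def null_space_command(J, qdot_secondary):
        """Project a secondary joint-velocity objective into the null space of J.

        Joint motion in the null space of the task Jacobian J produces no
        end-effector motion, so the primary (surgeon-driven) task is unaffected.
        """
        J_pinv = np.linalg.pinv(J)
        N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
        return N @ qdot_secondary

    # Illustrative 7-DOF redundant arm with a 6-D end-effector task.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(6, 7))
    qdot = null_space_command(J, rng.normal(size=7))
    print(np.linalg.norm(J @ qdot))  # ~0: the end effector does not move
    ```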