
    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine-intelligent approach to heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on fuzzy c-means clustering of spike responses from a subset of neurons (liquid states), selected using particle swarm optimization. Our approach differs from existing work by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to extended battery life for wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with spike-timing-dependent plasticity (STDP) and homeostatic scaling. A range of subjects is considered, drawn from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices.
    Comment: 51 pages, 12 figures, 6 tables, 95 references. Under submission at Elsevier Neural Networks.
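    The readout stage above rests on fuzzy c-means clustering of liquid states. As a rough illustration of that final stage only (the liquid, the learning rule, and the particle-swarm neuron selection are out of scope here), the following is a minimal NumPy fuzzy c-means sketch; the spike-count data, dimensions, and cluster-to-heart-rate mapping are invented for the example, not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on rows of X (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Toy usage: each row stands in for a spike-count vector ("liquid state")
# read from a selected neuron subset over one ECG window (synthetic data).
rng = np.random.default_rng(1)
states = np.vstack([rng.poisson(lam, (50, 8)) for lam in (2.0, 6.0, 12.0)])
centers, U = fuzzy_c_means(states.astype(float), n_clusters=3)
labels = U.argmax(axis=1)   # hard assignment; each cluster maps to a HR band
```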

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI)-based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence. Visual navigation based on AI approaches such as deep neural networks (DNNs) is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones a few cm² in size. In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft.
    Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication in the IEEE Internet of Things Journal (IEEE IoTJ).
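    For a sense of scale, the quoted figures already pin down the energy per inference; the arithmetic below is derived purely from the numbers in the abstract, not an additional measurement.

```python
# Energy per inference implied by the reported operating point:
# 64 mW average power while sustaining the 6 fps real-time constraint.
avg_power_w = 0.064
fps = 6.0
energy_per_frame_mj = avg_power_w / fps * 1e3
print(f"~{energy_per_frame_mj:.1f} mJ per frame")  # ~10.7 mJ per inference
```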

    MISAT: Designing a Series of Powerful Small Satellites Based upon Micro Systems Technology

    MISAT is a research and development cluster which will create a small satellite platform based on Micro Systems Technology (MST), aimed at innovative space as well as terrestrial applications. MISAT is part of the Dutch MicroNed program, which has established a microsystems infrastructure to fully exploit the MST knowledge chain, involving public and industrial partners alike. The cluster covers MST-related developments for the spacecraft bus and payload, as well as the satellite architecture. Particular emphasis is given to distributed systems in space to fully exploit the potential of miniaturization for future mission concepts. Examples of current developments are wireless sensor and actuator networks with plug-and-play characteristics, autonomous digital Sun sensors, re-configurable radio front ends with minimum power consumption, and micro-machined electrostatic accelerometer and gradiometer systems for scientific research in fundamental physics as well as geophysics. As a result of MISAT, a first nano-satellite will be launched in 2007 to demonstrate the next generation of Sun sensors, power subsystems and satellite architecture technology. Rapid access to in-orbit technology demonstration and verification will be provided by a series of small satellites. This will include a formation-flying mission, which will increasingly rely on MISAT technology to improve functionality and reduce size, mass and power for advanced technology demonstration and novel scientific applications.

    Design of a CMOS-Memristive Mixed-Signal Neuromorphic System with Energy and Area Efficiency in System Level Applications

    The von Neumann architecture has been the backbone of modern computers for decades. This computational framework is popular because it defines a simple and inexpensive design for the processing unit and memory. Unfortunately, this architecture faces a huge bottleneck going forward, since computational complexity now demands increased parallelism and this architecture is not efficient at parallel processing. Moreover, the post-Moore's-law era brings a constant demand for energy-efficient computing with fewer resources and less area. Hence, researchers are interested in establishing alternatives to the von Neumann architecture, and neuromorphic computing is one of the few aspiring computing architectures that contributes effectively to this research. Initially, neuromorphic computing attracted attention because of the parallelism found in bio-inspired networks, and researchers were interested in leveraging this advantage on a single chip. The need for speed in real-time performance also escalated the popularity of neuromorphic computing, and different research groups started working on hardware implementations of neural networks. In addition, neuroscience is steadily building a better understanding of biological networks, which provides opportunities for bridging the gap between biological neuronal activity and artificial neural networks. As a consequence, the idea behind neuromorphic computing has continued to gain in popularity. In this research, a memristive neuromorphic system for improved power and area efficiency is presented. This implementation introduces a mixed-signal platform to implement neural networks in a synchronous way. In addition to the mixed-signal design, a nano-scale memristive device is introduced that provides power and area efficiency for the overall system. The system design also includes synchronous digital long-term plasticity (DLTP), an online learning methodology that helps train the neural networks during the operation phase, improving learning efficiency in terms of power consumption and area overhead. This research also proposes a stochastic neuron design with a sigmoidal firing rate. The design introduces variability in the membrane capacitance to reach different membrane potentials, leading to a variable stochastic firing rate.
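    The stochastic-neuron idea in the final sentences can be illustrated behaviorally: randomizing the membrane capacitance turns a hard threshold crossing into a smooth, sigmoid-like firing probability. The sketch below is a leak-free integrate-and-fire abstraction with invented parameters, not the paper's CMOS-memristive circuit.

```python
import numpy as np

def firing_probability(i_in, n_trials=2000, c_mean=1.0, c_std=0.2,
                       v_th=1.0, t_window=1.0, seed=0):
    """Monte Carlo estimate of the firing probability of a leak-free
    integrate-and-fire neuron whose membrane capacitance C is redrawn
    per trial. It fires within the window when V = I*T/C >= v_th,
    i.e. when C <= I*T/v_th; averaging over the random C smooths the
    hard threshold into a sigmoid-like rate curve."""
    rng = np.random.default_rng(seed)
    c = rng.normal(c_mean, c_std, n_trials).clip(min=1e-6)
    return ((i_in * t_window / c) >= v_th).mean()

# Sweep the input current: the rate rises smoothly from ~0 to ~1
for i_in in np.linspace(0.2, 2.0, 7):
    print(f"I = {i_in:.2f} -> firing prob ~ {firing_probability(i_in):.2f}")
```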

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design in the future, aiming for high-speed, highly intelligent, and efficient implementations.
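    As a concrete (and entirely synthetic) flavor of the task family such a review covers, the sketch below fits a learned timing model that predicts a path's delay from cheap pre-layout features, the kind of surrogate that lets expensive signoff runs be invoked less often. The features, coefficients, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1, 20, n),    # fanout
    rng.uniform(0.1, 5, n),   # estimated wirelength (mm)
    rng.uniform(1, 8, n),     # logic depth
])
true_w = np.array([0.03, 0.12, 0.25])
y = X @ true_w + rng.normal(0, 0.05, n)   # synthetic "signoff" delay (ns)

# Ordinary least squares as the simplest possible learned timing model
A = np.c_[X, np.ones(n)]                  # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
print(f"fit RMSE: {np.sqrt(((pred - y) ** 2).mean()):.3f} ns")
```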

    Multi Detector Fusion of Dynamic TOA Estimation using Kalman Filter

    In this paper, we propose fusing dynamic TOA (time-of-arrival) estimates from multiple non-coherent detectors, such as energy detectors operating at sub-Nyquist rates, through Kalman filtering. We also show that by using several of these energy detectors, we can achieve the performance of a digital matched-filter implementation in the AWGN (additive white Gaussian noise) setting. We derive an analytical expression for the number of energy detectors needed to achieve matched-filter performance and demonstrate the validity of our analytical approach in simulation. Results indicate that the number of energy detectors needed is high at low SNRs and converges to a constant as the SNR increases. We also study the performance of the proposed strategy using the IEEE 802.15.4a CM1 channel model and show in simulation that two sub-Nyquist detectors are sufficient to match the performance of a digital matched filter.
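    The fusion step lends itself to a compact sketch: one scalar Kalman filter whose measurement vector stacks the per-detector TOA readings. This is a generic filter under stated assumptions (random-walk TOA dynamics, equal and independent detector noise), not the paper's exact formulation.

```python
import numpy as np

def fuse_toa_kalman(z, q=1e-4, r=1e-2):
    """Fuse M per-detector TOA readings per step with a scalar Kalman filter.
    State: the true TOA, modeled as a random walk with process variance q.
    z: (T, M) array of noisy TOA estimates, one column per energy detector,
    each assumed independent with measurement variance r."""
    T, M = z.shape
    H = np.ones(M)                     # every detector observes the same TOA
    R = np.eye(M) * r
    x, p = z[0].mean(), r / M          # initialize from the first fused reading
    out = np.empty(T)
    out[0] = x
    for t in range(1, T):
        p = p + q                      # predict (F = 1, random-walk dynamics)
        S = p * np.outer(H, H) + R     # innovation covariance (M x M)
        K = p * H @ np.linalg.inv(S)   # Kalman gain, shape (M,)
        x = x + K @ (z[t] - H * x)     # update with all M detectors at once
        p = (1.0 - K @ H) * p
        out[t] = x
    return out

# Toy check: a slowly drifting TOA seen by 4 noisy sub-Nyquist detectors
rng = np.random.default_rng(1)
true_toa = 1.0 + np.cumsum(rng.normal(0.0, 1e-2, 200))
z = true_toa[:, None] + rng.normal(0.0, 0.1, (200, 4))
est = fuse_toa_kalman(z, q=1e-4, r=0.01)
print(f"per-step averaging RMSE: {np.sqrt(((z.mean(1) - true_toa)**2).mean()):.4f}")
print(f"Kalman-fused RMSE:       {np.sqrt(((est - true_toa)**2).mean()):.4f}")
```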