28 research outputs found

    Performance evaluation and implementations of MFCC, SVM and MLP algorithms in the FPGA board

    One of the most difficult speech recognition tasks is the accurate recognition of human-to-human communication. Advances in deep learning over the last few years have produced major improvements in speech recognition on the representative Switchboard conversational corpus. Word error rates that just a few years ago were 14% have dropped to 8.0%, then 6.6%, and most recently 5.8%, and are now believed to be within striking range of human performance. This raises two issues: what is human performance, and how far down can we still drive speech recognition error rates? The main objective of this article is a comparative study of the performance of Automatic Speech Recognition (ASR) algorithms using a database made up of signals recorded by female and male speakers of different ages. We also develop techniques for the software and hardware implementation of these algorithms and test them on an embedded electronic card based on a reconfigurable circuit (Field-Programmable Gate Array, FPGA). We present an analysis of the classification results for the best Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) artificial neural network architectures. Following this analysis, we created NIOS II processors and tested their operation as well as their characteristics; the characteristics of each processor (cost, size, speed, power consumption, and complexity) are specified in this article. Finally, we physically implemented the architecture of the Mel-Frequency Cepstral Coefficient (MFCC) extraction algorithm as well as the classification algorithm that provided the best results.
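The MFCC front end mentioned in this abstract follows a standard pipeline: pre-emphasis, framing and windowing, power spectrum, mel filterbank, log compression, and a DCT. A minimal NumPy sketch of that generic pipeline follows; the sample rate, frame sizes, and filter counts are common defaults, not parameters taken from this article.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Return an (n_frames, n_ceps) matrix of MFCCs for a mono signal."""
    # Pre-emphasis boosts high frequencies attenuated in speech production.
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(emph) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)
    # Per-frame power spectrum.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank mapped onto FFT bins.
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate -> cepstral coefficients.
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct_basis = np.cos(np.pi * np.outer(n + 0.5, np.arange(n_ceps)) / n_mels)
    return log_mel @ dct_basis
```

Each stage here maps naturally onto fixed hardware blocks (FIR filter, windowed FFT, multiply-accumulate banks), which is what makes MFCC extraction a common target for FPGA implementation.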

    Engineering Principles Of Photosystems And Their Practical Application As Long-Lived Charge Separation In Maquettes

    Light-activated electron transfer reactions between cofactors embedded in proteins serve as the central mechanism underlying numerous biological processes essential to the survival and prosperity of most organisms on this planet. These processes range from navigation to DNA repair, metabolism, and solar energy conversion. Their proper functioning relies on the creation of charge-separated states, lasting for a necessary length of time from tens of nanoseconds to hundreds of milliseconds, by the arrays of cofactors in photosystems. In spite of decades of experiments and theoretical frameworks providing detailed and extensive descriptions of the behavior of photosystems, a coherent and systematic understanding is lacking of the underlying structural and chemical engineering principles that govern the performance of charge separation in photosystems, evaluated as the fraction of the input energy made available by the photosystem for its intended function. This thesis aims to establish a set of engineering principles for natural and man-made photosystems based on the fundamental theories of electron transfer and the biophysical and biochemical constraints imposed by the protein environment, and then to apply these principles to design and construct man-made photosystems that excel in charge separation while incurring minimal construction cost. Using the fundamental theories of electron transfer, this thesis develops an efficient computational algorithm that returns a set of guidelines for engineering optimal light-driven charge separation in cofactor-based photosystems. It then examines the validity of these guidelines in natural photosystems, discovering significant editing and updating of the guidelines imposed by the biological environment in which photosystems are engineered by nature. The thesis then organizes the two layers of engineering principles into a concise set of rules and demonstrates that they can be applied as guidelines to the practical construction of highly efficient man-made photosystems. To test these guidelines in practice, the first donor-pigment-acceptor triad is constructed in a maquette and successfully separates charges stably for >300 ms, establishing the world record in a triad. Finally, this work looks ahead to the engineering of the prescribed optimal tetrads in maquettes, identifying what is in place and what challenges yet remain.
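The "fundamental theories of electron transfer" invoked above are conventionally the non-adiabatic Marcus rate expression and, for cofactor tunneling inside proteins, its empirical Moser-Dutton form. Both are standard textbook results sketched here for context, not equations taken from this thesis:

```latex
% Non-adiabatic Marcus rate for electron transfer between two cofactors:
% H_{AB} is the electronic coupling, \lambda the reorganization energy,
% \Delta G^{\circ} the driving force.
k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,
  \frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
  \exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]

% Empirical Moser-Dutton estimate for exergonic tunneling in proteins
% (R = edge-to-edge cofactor distance in angstroms; energies in eV):
\log_{10} k_{\mathrm{ET}} \approx 15 - 0.6\,R
  - 3.1\,\frac{(\Delta G^{\circ}+\lambda)^{2}}{\lambda}
```

The exponential distance and driving-force dependences in these expressions are what make cofactor placement and redox tuning the central design variables for long-lived charge separation.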

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own views on the current state and future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.

    Neuromorphic computing using non-volatile memory

    Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review recent advances in the application of NVM devices to three computing paradigms: spiking neural networks (SNNs), deep neural networks (DNNs), and ‘Memcomputing’. In SNNs, NVM synaptic connections are updated by a local learning rule such as spike-timing-dependent plasticity, a computational approach directly inspired by biology. For DNNs, NVM arrays can represent matrices of synaptic weights, implementing the matrix–vector multiplication needed for algorithms such as backpropagation in an analog yet massively-parallel fashion. This approach could provide significant improvements in power and speed compared to GPU-based DNN training, for applications of commercial significance. We then survey recent research in which different types of NVM devices – including phase change memory, conductive-bridging RAM, filamentary and non-filamentary RRAM, and other NVMs – have been proposed, either as a synapse or as a neuron, for use within a neuromorphic computing application. The relevant virtues and limitations of these devices are assessed, in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
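In a crossbar, the matrix-vector multiplication happens in one analog step: weights are stored as device conductances, input activations are applied as row voltages, and Kirchhoff's current law sums the per-device currents on each column. A small idealized Python sketch of this principle follows; it ignores noise, nonlinearity, and device variability, and the differential-pair mapping for signed weights is one common convention, not a scheme specific to this survey.

```python
import numpy as np

def crossbar_mvm(w, v, g_min=1e-6, g_max=1e-4):
    """Idealized analog matrix-vector multiply on a differential NVM crossbar.

    Each signed weight is stored as a pair of conductances (G+, G-) in
    [g_min, g_max] siemens; applying input voltages v along the rows yields
    column currents I = (G+ - G-) @ v by Ohm's and Kirchhoff's laws.
    Assumes w contains at least one nonzero entry.
    """
    w = np.asarray(w, dtype=float)
    v = np.asarray(v, dtype=float)
    scale = (g_max - g_min) / np.max(np.abs(w))   # map |w| onto conductance range
    g_pos = g_min + scale * np.clip(w, 0.0, None)  # positive part of each weight
    g_neg = g_min + scale * np.clip(-w, 0.0, None)  # negative part of each weight
    currents = (g_pos - g_neg) @ v                 # one step, fully parallel in hardware
    return currents / scale                        # rescale back to weight units
```

Because every device in the array contributes its current simultaneously, the whole multiply costs O(1) time steps regardless of matrix size, which is the source of the power and speed advantage over digital multiply-accumulate pipelines.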