27 research outputs found

    A study on different linear and non-linear filtering techniques of speech and speech recognition

    Get PDF
    In any signal, noise is an undesired quantity; however, most of the time every signal gets mixed with noise at different stages of its processing and application, due to which the information contained in the signal gets distorted and can render the whole signal useless. A speech signal is particularly affected by acoustical noises such as babble noise, car noise, street noise, etc. To remove these noises, researchers have developed various techniques, collectively called filtering. Not all filtering techniques are suitable for every application; hence, based on the type of application, some techniques are better than others. Broadly, filtering techniques can be classified into two categories, i.e. linear filtering and non-linear filtering. In this paper a study is presented on some of the filtering techniques based on linear and nonlinear approaches. These techniques include adaptive filters based on algorithms such as LMS, NLMS, and RLS, the Kalman filter, ARMA and NARMA time-series models for filtering, and neural networks combined with fuzzy logic, i.e. ANFIS. This paper also covers the application of various features, i.e. MFCC, LPC, PLP, and gamma, for filtering and recognition
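    As a rough illustration of the adaptive filtering family surveyed above, the sketch below implements a basic LMS noise canceller in Python/NumPy. It is a minimal sketch only, not code from the paper; the signal names, filter order, and step size are illustrative assumptions.

```python
import numpy as np

def lms_filter(noisy, reference, order=8, mu=0.01):
    """Minimal LMS adaptive noise canceller (illustrative sketch).

    noisy     : speech corrupted by additive noise
    reference : noise reference correlated with the corrupting noise
    order     : number of filter taps
    mu        : step size (learning rate)
    Returns the error signal, i.e. the enhanced speech estimate.
    """
    n = len(noisy)
    w = np.zeros(order)                        # adaptive tap weights
    enhanced = np.zeros(n)
    for i in range(order, n):
        x = reference[i - order:i][::-1]       # most recent reference samples
        y = np.dot(w, x)                       # noise estimate
        e = noisy[i] - y                       # error = cleaned speech sample
        w += 2 * mu * e * x                    # LMS weight update
        enhanced[i] = e
    return enhanced

# Toy usage with synthetic data (stand-ins for real speech and noise)
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 300 * t)           # stand-in for a speech signal
noise = rng.normal(scale=0.5, size=t.size)     # broadband noise
cleaned = lms_filter(speech + noise, noise)
```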

    Machine learning techniques to forecast non-linear trends in smart environments

    Get PDF
    The abstract is provided in the attachment

    Biologically inspired evolutionary temporal neural circuits

    Get PDF
    Biological neural networks have always motivated creation of new artificial neural networks, and in this case a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short and long-term memories as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas the synaptic connection strengths as well as delayed feedback loops (ER circuits) can constitute longer-term memories (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) through automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards realization of general intelligent systems that need minimum or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and evolutionary search in terms of architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations are given to demonstrate the above stated capabilities of GETnet
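    The Mackey-Glass benchmark mentioned above comes from a well-known delay differential equation. The sketch below shows one common way to generate such a series in Python by Euler integration; the parameter values (tau = 17, beta = 0.2, gamma = 0.1, n = 10) are the conventional benchmark settings and are assumed here, not taken from the dissertation.

```python
import numpy as np

def mackey_glass(n_samples=2000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Generate a Mackey-Glass series by Euler integration (illustrative sketch)."""
    history = int(tau / dt)                    # number of delayed samples kept
    x = np.full(n_samples + history, x0)       # constant initial history
    for t in range(history, n_samples + history - 1):
        x_tau = x[t - history]                 # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[history:]

series = mackey_glass()
# 'series' can then serve as input/target pairs for one-step-ahead prediction,
# e.g. inputs series[:-1] and targets series[1:].
```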

    Reservoir Computing: computation with dynamical systems

    Get PDF
    In the research field of Machine Learning, systems are studied that can learn from examples. Within this field, recurrent neural networks form an important subgroup. These networks are abstract models of the operation of parts of the brain. They are able to solve very complex temporal problems, but are in general very difficult to train. Recently, a number of related methods have been proposed that eliminate this training problem. These methods are referred to as Reservoir Computing. Reservoir Computing combines the impressive computational power of recurrent neural networks with a simple training method. Moreover, these training methods turn out not to be limited to neural networks, but can be applied to generic dynamical systems. Why these systems work well and which properties determine their performance is, however, not yet clear. For this dissertation, the dynamical properties of generic Reservoir Computing systems were investigated. It was shown experimentally that the idea of Reservoir Computing is also applicable to non-neural networks of dynamical nodes. Furthermore, a measure was proposed that can be used to quantify the dynamical regime of a reservoir. Finally, an adaptation rule was introduced that can tune the dynamics of a broad range of reservoir types towards the desired dynamical regime. The techniques described in this dissertation are demonstrated on several academic and engineering applications
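    The core Reservoir Computing recipe described above (a fixed random recurrent reservoir with only a linear readout being trained) can be sketched in a few lines of Python/NumPy. This is a generic, minimal echo-state-style sketch with assumed sizes and scalings, not the specific systems studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir (never trained); sizes and scalings are illustrative
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.array([u_t]) + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained, here with ridge regression."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Example: one-step-ahead prediction of a toy signal
u = np.sin(np.linspace(0, 50, 1000))
X = run_reservoir(u[:-1])
w_out = train_readout(X, u[1:])
prediction = X @ w_out
```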

    Material reservoir computing devices based on single-walled carbon nanotube/porphyrin-polyoxometalate random networks: a novel approach to next-generation machine intelligence

    Get PDF
    In layman's terms, computation is defined as the execution of a given instruction through a programmable algorithm. History has it that, from the simplest calculator to the sophisticated von Neumann machine, the above definition has been followed without a flaw. Logical operations that take a human a minute to solve are a matter of fractions of a second for these gadgets. Contrastingly, when it comes to critical and analytical thinking that requires learning through observation, like the human brain, these powerful machines falter and lag behind. Thus, inspired by the brain's neural circuit, software models of neural networks (NN) integrated with high-speed supercomputers were developed as an alternative tool to implement machine-intelligence tasks such as function optimization, pattern recognition, and voice recognition. But as device downscaling and transistor performance approach the constant regime of Moore's law, due to high CMOS fabrication cost and large tunneling energy loss, training these algorithms over multiple hidden layers is turning out to be a grave concern for future applications. As a result, the interplay between faster performance and low computational power requirements for complex tasks becomes highly disproportional. Therefore, alternatives to both conventional NN models and the von Neumann architecture need to be addressed today for next-generation machine intelligence systems. Fortunately, through extensive research, an unconventional computing approach using a reservoir-based neural network platform, called in-materio reservoir computing (RC), has come to the rescue. In-materio RC uses physical, biological, chemical, cellular-automata and other inanimate dynamical systems as a source of non-linear, high-dimensional spatio-temporal information processing to construct a specific target task. RC not only has a simplified three-layer neural architecture, but also requires only a cheap, fast, and simple optimization of the readout weights with a machine-learning regression algorithm, constructing the supervised target as a weighted linear combination of the readouts. Utilizing this idea, in this work we report such an in-materio RC with a dynamical random network of a single-walled carbon nanotube/porphyrin-polyoxometalate (SWNT/Por-POM) device. We begin with Chapter 1, which introduces the literature on ANN evolution and the shortcomings of the von Neumann architecture and of ANN training models, which lead us to adopt the in-materio RC architecture. We frame the problem statement around extending the previously suggested theoretical RC model of an SWNT/POM network to an experimental one, and present the objective of fabricating a random network based on nanomaterials, as such networks closely resemble the network structure of the brain. Finally, we state the scope of this research work, aimed at validating the non-linear, high-dimensional reservoir properties of SWNT/Por-POM so that it can explicitly demonstrate the RC benchmark tasks of optimization and classification. Chapter 2 describes the methodology, including the chemical repository required for the facile synthesis of the material. The synthesis part is divided broadly into SWNT purification and its subsequent dispersion with Por-POM to form the desired complex. 
This is followed by microelectrode array fabrication and the subsequent wet-transfer thin-film deposition to give the final reservoir architecture of input-output control read pads with the SWNT/Por-POM reservoir. Finally, we briefly describe the AFM, UV-Vis spectroscopy, and FE-SEM characterization of the SWNT/Por-POM complex, along with the electrical set-up interfaced with the software algorithm used to demonstrate the RC approach of in-materio machine intelligence. In Chapter 3, we study the current dynamics as a function of voltage and time and validate the non-linear information processing ability intrinsic to the device. The study reveals that the negative differential resistance (NDR) arising from the redox nature of Por-POM results in oscillating random noise outputs, giving rise to 1/f brain-like spatio-temporal information. We compute the memory capacity (MC) and show that the device exhibits the echo state property of fading memory but remembers very little of the past information. The low MC and high non-linearity led us to choose mostly non-linear tasks, namely waveform generation, Boolean logic optimization, and one-hot-vector binary object classification, as the RC benchmarks. Chapter 4 relates to the waveform generation task. Utilizing the high-dimensional voltage readouts of varying amplitude, phase, and higher harmonic frequencies relative to the input sine wave, a regression optimization was performed to construct cosine, triangular, square, and sawtooth waves, resulting in a high accuracy of around 95%. The task complexity of function optimization was further increased in Chapter 5, where two inputs were used to construct the Boolean logic functions OR, AND, XOR, NOR, NAND, and XNOR. As with the waveforms, accuracy over 95% could be achieved due to the presence of the NDR nonlinearity. Furthermore, the device was also tested on a classification problem in Chapter 6. Here we showed an off-line binary classification of four toy objects (hedgehog, dog, block, and bus), using the grasped tactile information of these objects obtained from the Toyota Human Support Robot as inputs. A ridge regression analysis fitting the one-hot-vector supervised target was used to optimize the output weights for predicting the correct outcome. All the objects were successfully classified, owing to the 1/f information-processing factor. Lastly, we conclude in Chapter 7 with the future scope of extending the idea to fabricate a 3-D model of the same material, as it opens up the opportunity for a higher memory capacity, fruitful for future benchmark tasks of time-series prediction. Overall, our research marks a stepping stone in utilizing SWNT/Por-POM for in-materio RC for the very first time, thereby making it a desirable candidate for next-generation machine intelligence. Kyushu Institute of Technology doctoral dissertation; degree number: 生工博甲第425号; date of conferral: December 27, 2021. Contents: 1 Introduction and Literature review | 2 Methodology | 3 Reservoir dynamics emerging from an incidental structure of single-walled carbon nanotube/porphyrin-polyoxometalate complex | 4 Fourier transform waveforms via in-materio reservoir computing from single-walled carbon nanotube/porphyrin-polyoxometalate complex | 5 Room temperature demonstration of in-materio reservoir computing for optimizing Boolean function with single-walled carbon nanotube/porphyrin-polyoxometalate composite | 6 Binary object classification with tactile sensory input information via single-walled carbon nanotube/porphyrin-polyoxometalate network as in-materio reservoir computing | 7 Future scope and Conclusion. Kyushu Institute of Technology, 2021
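    The readout training described for the classification task (ridge regression onto one-hot targets) is a generic RC step that can be sketched as follows. The feature matrix and labels below are random placeholders standing in for the device voltage readouts of the four objects, so this is an illustrative sketch of the regression only, not the actual SWNT/Por-POM analysis.

```python
import numpy as np

def fit_readout(readouts, labels, n_classes, ridge=1e-3):
    """Fit linear output weights to one-hot targets via ridge regression (sketch)."""
    Y = np.eye(n_classes)[labels]                      # one-hot supervised targets
    A = readouts.T @ readouts + ridge * np.eye(readouts.shape[1])
    return np.linalg.solve(A, readouts.T @ Y)          # weights: (n_features, n_classes)

def classify(readouts, W_out):
    return np.argmax(readouts @ W_out, axis=1)         # predicted class per sample

# Placeholder data standing in for reservoir readouts of four object classes
rng = np.random.default_rng(1)
X_train = rng.normal(size=(80, 16))                    # 80 samples, 16 readout channels
y_train = rng.integers(0, 4, size=80)                  # hedgehog/dog/block/bus as 0..3
W_out = fit_readout(X_train, y_train, n_classes=4)
predictions = classify(X_train, W_out)
```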

    Scaling up integrated photonic reservoirs towards low-power high-bandwidth computing

    No full text

    Time series prediction and channel equalizer using artificial neural networks with VLSI implementation

    Get PDF
    The architecture and training procedure of a novel recurrent neural network (RNN), referred to as the multifeedback-layer neural network (MFLNN), is described in this paper. The main difference between the proposed network and available RNNs is that the temporal relations are provided by means of neurons arranged in three feedback layers, not by simple feedback elements, in order to enrich the representation capabilities of recurrent networks. The feedback layers provide local and global recurrences via nonlinear processing elements. In these feedback layers, weighted sums of the delayed outputs of the hidden and output layers are passed through certain activation functions and applied to the feedforward neurons via adjustable weights. Both online and offline training procedures based on the backpropagation through time (BPTT) algorithm are developed. The adjoint model of the MFLNN is built to compute the derivatives with respect to the MFLNN weights, which are then used in the training procedures. The Levenberg–Marquardt (LM) method with a trust-region approach is used to update the MFLNN weights. The performance of the MFLNN is demonstrated by applying it to several illustrative temporal problems, including chaotic time series prediction and nonlinear dynamic system identification, where it performed better than several networks available in the literature
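    One possible reading of the feedback-layer idea described above is sketched below: delayed hidden and output activations are passed through their own nonlinear layers before re-entering the hidden layer via adjustable weights. This is a hedged single-step interpretation with assumed sizes and activations, not the authors' exact MFLNN formulation or its BPTT/Levenberg-Marquardt training.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_fb = 2, 10, 1, 5     # illustrative layer sizes

# Feedforward weights
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))

# Feedback-layer weights: delayed hidden/output activations are passed
# through their own nonlinear layers before re-entering the hidden layer
W_hf = rng.normal(scale=0.1, size=(n_fb, n_hid))     # hidden  -> feedback layer
W_of = rng.normal(scale=0.1, size=(n_fb, n_out))     # output  -> feedback layer
W_fh = rng.normal(scale=0.1, size=(n_hid, 2 * n_fb)) # feedback -> hidden

def step(u, h_prev, y_prev):
    """One time step of a feedback-layer recurrent net (illustrative sketch)."""
    fb_h = np.tanh(W_hf @ h_prev)          # nonlinear feedback from delayed hidden layer
    fb_y = np.tanh(W_of @ y_prev)          # nonlinear feedback from delayed output layer
    h = np.tanh(W_xh @ u + W_fh @ np.concatenate([fb_h, fb_y]))
    y = W_hy @ h                           # linear output layer
    return h, y

# Unroll over a toy input sequence
h, y = np.zeros(n_hid), np.zeros(n_out)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    h, y = step(u, h, y)
```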

    Architectures and Algorithms for Intrinsic Computation with Memristive Devices

    Get PDF
    Neuromorphic engineering is the research field dedicated to the study and design of brain-inspired hardware and software tools. Recent advances in emerging nanoelectronics promote the implementation of synaptic connections based on memristive devices. Their non-volatile, modifiable conductance was shown to exhibit the synaptic properties often used in connecting and training neural layers. With their nanoscale size and non-volatile memory property, they promise a next step in designing more area- and energy-efficient neuromorphic hardware. My research deals with the challenges of harnessing memristive device properties that go beyond the behaviors utilized for synaptic weight storage. Based on devices that exhibit non-linear state changes and volatility, I present novel architectures and algorithms that can harness such features for computation. The crossbar architecture is a dense array of memristive devices placed between horizontal and vertical nanowires. The regularity of this structure does not inherently provide the means for nonlinear computation of applied input signals. By introducing a modulation scheme that relies on nonlinear memristive device properties, heterogeneous state patterns of applied spatiotemporal input data can be created within the crossbar. In this setup, the untrained and dynamically changing states of the memristive devices offer a useful platform for information processing. Based on the MNIST data set, I'll demonstrate how the temporal aspect of memristive state volatility can be utilized to reduce system size and training complexity for high-dimensional input data. With 3 times fewer neurons and 15 times fewer synapses to train compared to other memristor-based implementations, I achieve comparable classification rates of up to 93%. Exploiting dynamic state changes rather than precisely tuned stable states, this approach can tolerate device variation up to 6 times higher than reported levels. Random assemblies of memristive networks are analyzed as a substrate for intrinsic computation in connection with reservoir computing, a computational framework that harnesses observations of inherent dynamics within complex networks. Architectural and device-level considerations lead to new levels of task complexity which random memristive networks are now able to solve. A hierarchical design composed of independent random networks benefits from a diverse set of topologies and achieves prediction errors (NRMSE) on the time-series prediction task NARMA-10 as low as 0.15, as compared to 0.35 for an echo state network. Physically plausible network modeling is performed to investigate the relationship between network dynamics and energy consumption. Generally, increased network activity comes at the cost of exponentially increasing energy consumption due to the nonlinear voltage-current characteristics of memristive devices. A trade-off that allows linear scaling of energy consumption is provided by the hierarchical approach. Rather than designing individual memristive networks with high switching activity, a collection of less dynamic but independent networks can provide more diverse network activity per unit of energy. My research extends the possibilities of including emerging nanoelectronics in neuromorphic hardware. It establishes memristive devices beyond storage and motivates future research to further embrace memristive device properties that can be linked to different synaptic functions. Pursuing the functional diversity of memristive devices will lead to novel architectures and algorithms that study, rather than dictate, the behavior of such devices, with the benefit of creating robust and efficient neuromorphic hardware
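    The NARMA-10 task cited above follows a standard recurrence, and NRMSE is the usual error measure. The sketch below generates the benchmark and computes NRMSE in Python; the exact evaluation protocol of the dissertation may differ, so treat this as an assumed, generic setup.

```python
import numpy as np

def narma10(n_samples, seed=0):
    """Generate the standard NARMA-10 input/target sequences (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=n_samples)          # i.i.d. input in [0, 0.5]
    y = np.zeros(n_samples)
    for t in range(9, n_samples - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

def nrmse(y_pred, y_true):
    """Normalized root-mean-square error, as commonly used for NARMA-10."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2) / np.var(y_true))

u, y = narma10(4000)
# A reservoir (memristive network, echo state network, ...) would be driven by u,
# its readout trained to reproduce y, and then evaluated with nrmse(prediction, y).
```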