553 research outputs found

    Adaptive Domotic System in Green Buildings

    This paper presents an adaptive domotic system for green buildings. In our case study, sensor and device data were collected and controlled at the CIESOL research centre. The adaptive domotic system uses a Fuzzy Lattice Reasoning classifier to predict building energy performance depending on user conditions. Training and testing of the classifiers were carried out with temperature data acquired over four months (February, May, July and November) in the case-study building, CIESOL. The results show high accuracy rates, with a mean absolute error between 0% and 0.21%.
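The abstract does not include the classifier itself; as an illustration only, here is a minimal Python sketch of a Fuzzy Lattice Reasoning-style classifier, assuming the common hyperbox formulation (a positive valuation, a lattice join, and an inclusion measure with a vigilance threshold `rho`). All names and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def valuation(box):
    # Positive valuation of a hyperbox [lo, hi]: sum of interval lengths,
    # offset by 1 per dimension so degenerate (point) boxes stay positive.
    lo, hi = box
    return float(np.sum(1.0 + hi - lo))

def join(box, x):
    # Lattice join of a hyperbox with a point: smallest box containing both.
    lo, hi = box
    return (np.minimum(lo, x), np.maximum(hi, x))

def inclusion(x, box):
    # Degree to which point x is included in the hyperbox (sigma in FLR).
    return valuation(box) / valuation(join(box, x))

def train(samples, labels, rho=0.5):
    # Greedy hyperbox induction: extend an existing box of the same class
    # if the inclusion degree stays above the vigilance rho, else start a
    # new point box.
    boxes = []  # list of (box, label)
    for x, y in zip(samples, labels):
        x = np.asarray(x, dtype=float)
        best = None
        for i, (box, lab) in enumerate(boxes):
            if lab == y and inclusion(x, box) >= rho:
                best = i
                break
        if best is None:
            boxes.append(((x.copy(), x.copy()), y))
        else:
            box, lab = boxes[best]
            boxes[best] = (join(box, x), lab)
    return boxes

def predict(x, boxes):
    # Classify by the hyperbox with the largest inclusion degree.
    x = np.asarray(x, dtype=float)
    return max(boxes, key=lambda b: inclusion(x, b[0]))[1]

# Toy usage on a single feature (e.g. a temperature-like reading).
boxes = train([[0.1], [0.2], [0.9], [1.0]], [0, 0, 1, 1], rho=0.5)
assert predict([0.15], boxes) == 0
assert predict([0.95], boxes) == 1
```

In the paper the features would be the measured temperature conditions and the labels the energy-performance classes; the vigilance `rho` controls how far a class hyperbox is allowed to grow.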

    Neuro-fuzzy chip to handle complex tasks with analog performance

    This paper presents a mixed-signal neuro-fuzzy controller chip which, in terms of power consumption, input-output delay, and precision, performs as a fully analog implementation. However, it has much larger complexity than its purely analog counterparts. This combination of performance and complexity is achieved through a mixed-signal architecture consisting of a programmable analog core of reduced complexity, together with a strategy, and the associated mixed-signal circuitry, to cover the whole input space through dynamic programming of this core. Since errors and delays are proportional to the reduced number of fuzzy rules included in the analog core, they are much smaller than in the case where the whole rule set is implemented by analog circuitry. The area and power consumption of the new architecture are also smaller than those of its purely analog counterparts, simply because most rules are implemented through programming. The paper presents a set of building blocks associated with this architecture and gives results for an exemplary prototype. This prototype, called the multiplexing fuzzy controller (MFCON), has been realized in a 0.7 µm standard CMOS technology. It has two inputs, implements 64 rules, and features a 500 ns input-to-output delay with 16 mW of power consumption. Results from the chip in a control application with a DC motor are also provided.
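The key idea, dynamically loading into a small core only the few rules that can fire for the current input, can be illustrated in software. The sketch below is a conceptual Python analogy, not the chip's mixed-signal circuitry: an 8x8 grid of 64 triangular fuzzy rules over two inputs, where only the four rules of the grid cell enclosing the input are evaluated, yet the output equals that of evaluating all 64, because rules outside the cell have zero membership.

```python
import numpy as np

# Conceptual software analogy (assumed details, not the MFCON circuitry):
# 8 triangular labels per input -> 8 x 8 = 64 rules, overlap of 2.
CENTERS = np.linspace(0.0, 1.0, 8)
WIDTH = CENTERS[1] - CENTERS[0]

def mu(x, c):
    # Triangular membership function centred at c.
    return max(0.0, 1.0 - abs(x - c) / WIDTH)

# Rule consequents (singletons); the surface itself is arbitrary here.
CONSEQ = {(i, j): float(np.sin(CENTERS[i]) + CENTERS[j] ** 2)
          for i in range(8) for j in range(8)}

def active_rules(x1, x2):
    # With overlap-2 triangular partitions, at most the 4 rules of the
    # enclosing cell fire; these are the rules "loaded into the core".
    i = min(int(x1 / WIDTH), 6)
    j = min(int(x2 / WIDTH), 6)
    return [(a, b) for a in (i, i + 1) for b in (j, j + 1)]

def infer(x1, x2, rules):
    # Weighted-average (Takagi-Sugeno style) defuzzification.
    w = [(mu(x1, CENTERS[i]) * mu(x2, CENTERS[j]), CONSEQ[(i, j)])
         for i, j in rules]
    num = sum(wi * ci for wi, ci in w)
    den = sum(wi for wi, _ in w)
    return num / den

x1, x2 = 0.37, 0.62
# Evaluating only the 4 active rules matches evaluating all 64 rules.
assert abs(infer(x1, x2, active_rules(x1, x2))
           - infer(x1, x2, list(CONSEQ))) < 1e-12
```

This is why the chip's errors and delays scale with the small active rule subset rather than with the full rule set: the inactive rules contribute nothing to the output for the current input.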

    An efficient implementation of lattice-ladder multilayer perceptrons in field programmable gate arrays

    The implementation efficiency of electronic systems reflects conflicting requirements: growing volumes of computation and accelerating data exchange come with increasing energy consumption, forcing researchers not only to optimize the algorithm but also to implement it quickly in specialized hardware. This work therefore tackles the problem of efficient and straightforward implementation of real-time electronic intelligent systems on field-programmable gate arrays (FPGAs). The object of research is specialized FPGA intellectual property (IP) cores that operate in real time. The thesis investigates the following main aspects of the research object: implementation criteria and techniques. The aim of the thesis is to optimize the FPGA implementation process for a selected class of dynamic artificial neural networks. To solve the stated problem and reach this goal, the following main tasks are formulated: justify the selection of the Lattice-Ladder Multilayer Perceptron (LLMLP) class and its electronic intelligent system test-bed, a speaker-dependent Lithuanian speech recognizer, to be created and investigated; develop a dedicated technique for implementing the LLMLP class on FPGA, based on specialized efficiency criteria for circuit synthesis; and develop and experimentally confirm the efficiency of the optimized FPGA IP cores used in the Lithuanian speech recognizer. The dissertation contains an introduction, four chapters and general conclusions. The first chapter reviews the fundamentals of computer-aided design, artificial neural networks and speech recognition implementation on FPGAs. In the second chapter, efficiency criteria and a technique for LLMLP IP core implementation are proposed for multi-objective optimization of throughput, LLMLP complexity and resource utilization. Data flow graphs are applied to optimize the LLMLP computations, and an optimized neuron processing element is proposed. The IP cores for feature extraction and comparison developed for the Lithuanian speech recognizer are analyzed in the third chapter. The fourth chapter is devoted to experimental verification of the numerous developed LLMLP IP cores. Experiments measured isolated-word recognition accuracy and speed for different speakers, signal-to-noise ratios, feature extraction methods and accelerated comparison methods. The main results of the thesis were published in 12 scientific publications: eight were printed in peer-reviewed scientific journals (four of them in journals indexed in the Thomson Reuters Web of Science database), and four in conference proceedings. The results were presented at 17 scientific conferences.
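In an LLMLP, each synapse is a lattice-ladder filter rather than a single weight. As a hedged illustration (a plain software simulation, not the thesis's FPGA implementation), the Python sketch below runs an order-M lattice-ladder ARMA filter sample by sample; the coefficient names k (reflection) and v (ladder) follow the usual convention, and a neuron would pass the summed filter outputs through a nonlinearity.

```python
import numpy as np

def lattice_ladder(x, k, v):
    # Sample-by-sample simulation of an order-M lattice-ladder (ARMA)
    # filter: k are the M reflection coefficients, v the M+1 ladder taps.
    # The lattice part is stable whenever every |k_i| < 1.
    M = len(k)
    b_prev = np.zeros(M + 1)          # delayed backward signals b_i[n-1]
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        f = xn                        # forward signal enters at stage M
        b = np.zeros(M + 1)
        for i in range(M, 0, -1):     # propagate down through the stages
            f = f - k[i - 1] * b_prev[i - 1]
            b[i] = k[i - 1] * f + b_prev[i - 1]
        b[0] = f
        y[n] = np.dot(v, b)           # ladder: weighted sum of backward taps
        b_prev = b
    return y

# Sanity check: with all reflection coefficients zero the structure
# reduces to an FIR filter with impulse response v.
out = lattice_ladder([1.0, 0.0, 0.0], k=[0.0], v=[1.0, 0.5])
assert np.allclose(out, [1.0, 0.5, 0.0])
```

Implementing this recursion per synapse is what makes LLMLP hardware costly, and why the thesis optimizes the neuron processing element and schedules the computations with data flow graphs.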

    Artificial Neural Networks for Parametrisation of a Kinetic Monte Carlo Model of Surface Diffusion

    Surface diffusion in metals can be simulated with the atomistic kinetic Monte Carlo (KMC) method, where the evolution of a system is modeled by successive atomic jumps on a lattice. Parametrising the method requires calculating the energy barriers of the different jumps that can occur in the system, which limits its use. A promising solution is machine learning methods, such as artificial neural networks, which can be trained to predict barriers from a set of pre-calculated ones. In this work, an existing neural-network-based parametrisation scheme is enhanced by expanding the atomic environment of the jump to include more atoms. A set of surface diffusion jumps was selected and their barriers were calculated with the nudged elastic band method. Artificial neural networks were then trained on the calculated barriers. Finally, KMC simulations of nanotip flattening were run using barriers predicted by the neural networks, and the simulations were compared to the KMC results obtained with the existing scheme. The additional atoms in the jump environment caused significant changes to the barriers, which cannot be described by the existing model. The trained networks also showed good prediction accuracy. The KMC results were in some cases as realistic as or more realistic than the previous results, but in many cases worse. The quality of the results depended strongly on the selection of training barriers. We suggest that, for example, active learning methods can be used in the future to select the training data optimally.
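To make the role of the predicted barriers concrete, here is a minimal Python sketch of one rejection-free KMC step. The attempt frequency NU and the simple Arrhenius rate form are illustrative assumptions, not values from the thesis; in the actual workflow the barrier list would come from the trained neural network instead of a lookup table.

```python
import math
import random

KB = 8.617333262e-5   # Boltzmann constant in eV/K
NU = 1e13             # attempt frequency in 1/s (assumed typical value)

def kmc_step(barriers_eV, T, rng=random):
    # One rejection-free KMC step: convert barriers to Arrhenius rates,
    # pick an event with probability proportional to its rate, and draw
    # the residence time from the total rate.
    rates = [NU * math.exp(-e / (KB * T)) for e in barriers_eV]
    total = sum(rates)
    u = rng.random() * total
    chosen = len(rates) - 1           # fallback against float rounding
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u <= acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

# A 0.1 eV jump dominates a 5.0 eV jump at 300 K by many orders of
# magnitude, so the low-barrier event is selected.
rng = random.Random(0)
event, dt = kmc_step([0.1, 5.0], 300.0, rng)
assert event == 0 and dt > 0.0
```

In a full simulation this step runs in a loop: the chosen jump is applied to the atomic configuration, the affected barriers are re-predicted by the network, and the clock advances by dt.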

    Function approximation using back propagation algorithm in artificial neural networks

    Inspired by biological neural networks, artificial neural networks (ANNs) are massively parallel computing systems consisting of a large number of simple processors with many interconnections. Each unit sums its input connections to determine the strength of its output, which is the result of the sum being fed into an activation function. Based on their architecture, ANNs can be feed-forward or feedback networks. In the most common family of feed-forward networks, the multilayer perceptron, neurons are organized into layers with unidirectional connections between them. These connections are directed (from the input to the output layer) and have weights assigned to them. ANNs can approximate a function by learning from examples of that function: the internal weights are slowly adjusted so as to produce the same output as in the examples, and performance improves over time as the weights are iteratively updated. The hope is that when the ANN is shown a new set of input variables, it will give a correct output. To train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative with respect to the weights (EW), i.e. how the error changes as each weight is increased or decreased slightly. The back-propagation algorithm is the most widely used method for determining EW. We started with a program for a fixed-structure network: a four-layer network with one input, two hidden and one output layer. The input layer has 9 nodes and the output layer has 1; the hidden layers are fixed at 4 and 3 nodes. The learning rate is taken as 0.07. We wrote the program in MATLAB and obtained the output of the network, plotting the number of iterations against the mean square error. The convergence rate of the error is very good. We then moved to a network with all its parameters varying, writing the program in Visual C++ with the number of hidden layers, the number of nodes in each hidden layer and the learning rate all varying. Convergence plots for the different structures were obtained by varying these variables.
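As an illustration of the described setup, the sketch below implements the 9-4-3-1 sigmoid network with plain back-propagation at the stated learning rate of 0.07, here in Python/NumPy rather than the MATLAB original. The toy target (the mean of the nine inputs) is a hypothetical stand-in for the functions approximated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture from the text: 9 inputs, hidden layers of 4 and 3 nodes,
# 1 output; learning rate 0.07.
sizes = [9, 4, 3, 1]
LR = 0.07

W = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((m, 1)) for m in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Returns the activations of every layer, input included.
    acts = [x]
    for Wi, bi in zip(W, b):
        acts.append(sigmoid(Wi @ acts[-1] + bi))
    return acts

def backprop_step(x, y):
    # One gradient-descent update on the squared error via the chain rule.
    acts = forward(x)
    delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])  # dE/dz, output
    for i in range(len(W) - 1, -1, -1):
        grad_W = delta @ acts[i].T
        if i > 0:  # propagate the error before overwriting W[i]
            delta_prev = (W[i].T @ delta) * acts[i] * (1.0 - acts[i])
        W[i] -= LR * grad_W
        b[i] -= LR * delta
        if i > 0:
            delta = delta_prev

# Hypothetical toy task: approximate the mean of the 9 inputs.
X = [rng.random((9, 1)) for _ in range(50)]
Y = [np.array([[float(x.mean())]]) for x in X]

def mse():
    return float(np.mean([(forward(x)[-1] - y) ** 2 for x, y in zip(X, Y)]))

before = mse()
for _ in range(500):
    for x, y in zip(X, Y):
        backprop_step(x, y)
after = mse()
assert after < before  # the mean squared error decreases with training
```

Plotting `mse()` against the iteration count reproduces the kind of convergence curve described in the abstract; making `sizes` and `LR` parameters corresponds to the variable-structure version written in Visual C++.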

    Book reports


    Speech and neural network dynamics
