5 research outputs found

    An Efficient v-minimum Absolute Deviation Distribution Regression Machine

    Get PDF

    Support Vector Number Reduction: Survey and Experimental Evaluations

    Full text link

    An efficient implementation of lattice-ladder multilayer perceptrons in field programmable gate arrays

    Get PDF
    The implementation efficiency of electronic systems reflects a combination of conflicting requirements: growing volumes of computation and accelerating data exchange increase energy consumption, forcing researchers not only to optimize algorithms but also to implement them quickly in specialized hardware. This work therefore tackles the efficient and straightforward implementation of real-time electronic intelligent systems on field-programmable gate arrays (FPGAs). The object of research is specialized FPGA intellectual property (IP) cores that operate in real time. The thesis investigates two main aspects of this object: implementation criteria and implementation techniques. The aim of the thesis is to optimize the FPGA implementation process for a selected class of dynamic artificial neural networks. To solve the stated problem and reach this goal, the following main tasks are formulated: justify the selection of the lattice-ladder multilayer perceptron (LLMLP) class and of its electronic intelligent system test-bed, a speaker-dependent Lithuanian speech recognizer, to be created and investigated; develop a dedicated technique for implementing the LLMLP class on FPGA, based on specialized efficiency criteria for circuit synthesis; and develop and experimentally confirm the efficiency of the optimized FPGA IP cores used in the Lithuanian speech recognizer. The dissertation contains an introduction, four chapters, and general conclusions. The first chapter presents fundamental knowledge on computer-aided design, artificial neural networks, and speech recognition implementation on FPGA. The second chapter proposes the efficiency criteria and technique for LLMLP IP core implementation, enabling multi-objective optimization of throughput, LLMLP complexity, and resource utilization; data flow graphs are applied to optimize the LLMLP computations, and an optimized neuron processing element is proposed.
    The third chapter develops and analyzes the IP cores for feature extraction and comparison in the Lithuanian speech recognizer. The fourth chapter is devoted to experimental verification of the numerous developed LLMLP IP cores: isolated-word recognition accuracy and speed were measured for different speakers, signal-to-noise ratios, feature extraction methods, and accelerated comparison methods. The main results of the thesis were published in 12 scientific publications: eight in peer-reviewed scientific journals (four of them indexed in the Thomson Reuters Web of Science database) and four in conference proceedings. The results were presented at 17 scientific conferences.
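    To make the LLMLP building block concrete, the following minimal sketch shows one lattice-ladder IIR synapse of the kind such a neuron is built from — input passes through lattice stages with reflection coefficients k and the output is a ladder-weighted sum of the backward signals. The class name, coefficient names, and filter order are illustrative assumptions; the thesis' actual neuron model, training rules, and FPGA cores are not reproduced here.

    ```python
    class LatticeLadderSynapse:
        """One lattice-ladder IIR filter synapse of order M (a sketch;
        names and structure are assumptions, not the thesis' implementation)."""

        def __init__(self, k, v):
            self.k = list(k)                 # reflection coefficients k_1..k_M
            self.v = list(v)                 # ladder (tap) coefficients v_0..v_M
            self.M = len(k)
            self.b = [0.0] * (self.M + 1)    # delayed backward signals b_0..b_M

        def step(self, x):
            """Process one input sample, return one output sample."""
            f = x
            new_b = [0.0] * (self.M + 1)
            # lattice part: backward recursion through stages M..1,
            # using the delayed backward signals from the previous sample
            for i in range(self.M, 0, -1):
                f = f - self.k[i - 1] * self.b[i - 1]
                new_b[i] = self.b[i - 1] + self.k[i - 1] * f
            new_b[0] = f
            self.b = new_b
            # ladder part: weighted sum of the current backward signals
            return sum(vi * bi for vi, bi in zip(self.v, self.b))
    ```

    With k = [0.0] and v = [0.0, 1.0] the synapse degenerates to a unit delay, which is a convenient sanity check; in the hardware version each multiply-accumulate above would map to a DSP slice of the FPGA.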

    Reconfigurable Architectures for Hardware Acceleration of Machine Learning Classifiers

    Get PDF
    This thesis proposes universal coarse-grained reconfigurable computing architectures for the hardware implementation of decision trees (DTs), artificial neural networks (ANNs), support vector machines (SVMs), and homogeneous and heterogeneous ensemble classifiers (HHESs). Using these universal architectures, two versions of DTs, two versions of ANNs, two versions of SVMs, and seven versions of HHES machine learning classifiers have been implemented on field-programmable gate arrays (FPGAs).
    Experimental results, based on datasets from the standard UCI machine learning repository, show that the FPGA implementation provides a significant improvement (1–6 orders of magnitude) in average instance classification time compared with software implementations
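    Hardware DT engines of this kind typically flatten the tree into a node memory that a processing element walks by comparing one feature against one threshold per cycle. The sketch below models that node-memory encoding in software; the tuple layout and example tree are hypothetical, not the thesis' architecture.

    ```python
    # Hypothetical node-memory encoding: each node is
    # (feature_index, threshold, left_child, right_child, leaf_label)
    # where inner nodes have leaf_label None and leaves carry only a label.
    NODES = [
        (0, 2.5, 1, 2, None),            # node 0: x[0] <= 2.5 ? node 1 : node 2
        (None, None, None, None, "A"),   # node 1: leaf "A"
        (1, 7.0, 3, 4, None),            # node 2: x[1] <= 7.0 ? node 3 : node 4
        (None, None, None, None, "B"),   # node 3: leaf "B"
        (None, None, None, None, "C"),   # node 4: leaf "C"
    ]

    def classify(x, nodes=NODES):
        """Walk the node memory from the root until a leaf is reached,
        mimicking one compare-and-branch step per clock cycle."""
        i = 0
        while True:
            feat, thr, left, right, label = nodes[i]
            if label is not None:
                return label
            i = left if x[feat] <= thr else right
    ```

    In hardware the per-node comparison is a fixed-latency comparator, so classification time is bounded by tree depth rather than by instruction-stream overhead — one source of the large speedups reported above.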

    A support vector machine with integer parameters

    No full text
    We describe here a method for building a support vector machine (SVM) with integer parameters. Our method is based on a branch-and-bound procedure, derived from modern mixed-integer quadratic programming solvers, and is useful for implementing the feed-forward phase of the SVM in fixed-point arithmetic. This allows the implementation of the SVM algorithm on resource-limited hardware such as the computing devices used for building sensor networks, where floating-point units are rarely available. Experimental results on well-known benchmark data sets and a real-world people-detection application show the effectiveness of our approach.
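    The integer-only feed-forward phase described here can be sketched as follows for a linear kernel: with quantized features and integer weights, the decision function reduces to integer multiply-accumulates and a sign test. The function name and quantization scheme are assumptions; the paper's branch-and-bound training procedure is not shown.

    ```python
    def svm_integer_decision(x_q, sv_q, alpha_q, b_q):
        """SVM feed-forward phase using only integer arithmetic (a sketch).

        x_q     -- quantized (integer) feature vector to classify
        sv_q    -- list of quantized support vectors
        alpha_q -- integer coefficients (label sign folded in)
        b_q     -- integer bias
        """
        acc = b_q
        for a, sv in zip(alpha_q, sv_q):
            # integer dot product <sv_i, x>; no floating-point unit needed
            dot = sum(xi * si for xi, si in zip(x_q, sv))
            acc += a * dot
        return 1 if acc >= 0 else -1
    ```

    Because every intermediate value is an integer, the same computation maps directly onto fixed-point datapaths of microcontrollers or FPGA fabric without rounding-error analysis at inference time.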