4,893 research outputs found

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide non-practitioners in the field of memristive circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages
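    The defining property mentioned above (a resistance governed by an internal state variable) can be sketched with a minimal simulation. The HP-style linear-drift model, the function `simulate_memristor`, and all parameter values below are illustrative assumptions, not details taken from the paper:

    ```python
    import numpy as np

    def simulate_memristor(v, dt=1e-4, r_on=100.0, r_off=16e3, mu=1e-14, d=1e-8):
        """Euler integration of an HP-style linear-drift memristor model:
        R(w) = R_on*w + R_off*(1 - w),  dw/dt = (mu*R_on/d^2) * i(t),
        where w in [0, 1] is the internal state. All parameter values are
        illustrative assumptions."""
        w = 0.5  # assumed initial internal state
        currents = []
        for volt in v:
            r = r_on * w + r_off * (1.0 - w)  # resistance set by internal state
            i = volt / r
            # advance the internal state and clamp it to [0, 1]
            w = min(max(w + (mu * r_on / d**2) * i * dt, 0.0), 1.0)
            currents.append(i)
        return np.array(currents)

    # a sinusoidal drive traces the pinched hysteresis loop typical of memristors
    t = np.linspace(0.0, 0.02, 2000)
    i = simulate_memristor(np.sin(2 * np.pi * 50.0 * t))
    ```

    Plotting `i` against the driving voltage would show the pinched current-voltage hysteresis that distinguishes a memristor from a fixed resistor.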

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations, which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated in a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. Comment: 232 pages
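    The tensor train (TT) decomposition emphasized above can be sketched with the standard TT-SVD procedure, in which sequential truncated SVDs split a full tensor into a chain of small cores. The function names, the `max_rank` cap, and the toy tensor are illustrative assumptions, not code from the monograph:

    ```python
    import numpy as np

    def tt_decompose(tensor, max_rank=8):
        """TT-SVD sketch: sequential truncated SVDs produce TT cores
        G_k of shape (r_{k-1}, n_k, r_k). `max_rank` caps the TT ranks."""
        cores, r_prev, mat = [], 1, tensor
        for n in tensor.shape[:-1]:
            mat = mat.reshape(r_prev * n, -1)
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(u[:, :r].reshape(r_prev, n, r))
            mat = s[:r, None] * vt[:r]  # carry the remainder to the next core
            r_prev = r
        cores.append(mat.reshape(r_prev, tensor.shape[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the train of cores back into a full tensor."""
        out = cores[0]
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=1)  # join on the shared rank index
        return out.reshape([c.shape[1] for c in cores])

    x = np.random.rand(4, 5, 6, 7)
    cores = tt_decompose(x, max_rank=64)  # cap high enough for exact recovery
    x_hat = tt_reconstruct(cores)
    ```

    The compression claimed in the abstract comes from lowering `max_rank`: storage drops from the product of all mode sizes to a sum of small core sizes, at the cost of an approximation error controlled by the discarded singular values.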

    ELM for Classification of Brain Tumors in 3D MR Images

    Extreme Learning Machine (ELM), a widely adopted algorithm in the machine learning field, is proposed as a pattern classification model using 3D MRI images for identifying tissue abnormalities in brain histology. The four-class classification comprises gray matter, white matter, cerebrospinal fluid, and tumor. The 3D MRI volumes are assessed by a pathologist, who indicates the ROI, and the images are normalized. Texture features for each of the sub-regions are based on the run-length matrix, co-occurrence matrix, intensity, Euclidean distance, gradient vector, and neighbourhood statistics. A genetic algorithm is custom designed to extract and sub-select a decisive optimal bank of features, which are then used to build the ELM classifier and to select the best ELM algorithm parameters for handling sparse image data. The algorithm is explored using different activation functions and by studying the effect of the number of neurons in the hidden layer under different ratios of the number of features in the training and test data. The ELM classification achieved accuracy, sensitivity, and specificity of 93.20%, 91.6%, and 97.98% for discrimination of brain and pathological tumor tissue, outperforming state-of-the-art feature extraction methods and classifiers in the literature on the publicly available SPL dataset.
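    The core of the classifier described above — a random, untrained hidden layer whose output weights are solved in closed form — can be sketched as follows. The class `ELM`, its hyperparameters, and the toy four-class data are illustrative assumptions, not the authors' implementation or features:

    ```python
    import numpy as np

    class ELM:
        """Minimal Extreme Learning Machine sketch: random hidden-layer
        weights stay fixed; output weights come from a least-squares solve."""

        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            # random projection + nonlinear activation (tanh assumed here)
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            self.classes_ = np.unique(y)
            # one-hot targets for multi-class classification
            T = (y[:, None] == self.classes_[None, :]).astype(float)
            # output weights: least-squares solution H^+ T via pseudoinverse
            self.beta = np.linalg.pinv(self._hidden(X)) @ T
            return self

        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

    # toy four-class problem standing in for the four tissue classes
    rng = np.random.default_rng(1)
    centers = np.array([[0, 0], [5, 0], [0, 5], [5, 5]])
    X = np.vstack([c + rng.normal(scale=0.3, size=(30, 2)) for c in centers])
    y = np.repeat(np.arange(4), 30)
    model = ELM(n_hidden=60).fit(X, y)
    acc = (model.predict(X) == y).mean()
    ```

    Because only the output weights are trained, and in closed form, ELM avoids iterative backpropagation; the abstract's parameter study (activation function, hidden-layer size) corresponds to varying `n_hidden` and the activation in `_hidden`.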