Advancements in Unsupervised Learning: Mode-Assisted Quantum Restricted Boltzmann Machines Leveraging Neuromorphic Computing on the Dynex Platform
The integration of neuromorphic computing into the Dynex platform marks a transformative step in computational technology, particularly for machine learning and optimization. The platform exploits the unique attributes of neuromorphic dynamics through neuromorphic annealing, a technique that departs from conventional computing methods, to address hard problems in discrete optimization, sampling, and machine learning. Our research concentrates on improving the training of Restricted Boltzmann Machines (RBMs), a class of generative models whose gradient is traditionally intractable to compute exactly. Our proposed method, termed "quantum mode training", blends standard gradient updates with an off-gradient direction derived from RBM ground-state samples. This approach significantly improves RBM training, outperforming purely gradient-based methods in speed, stability, and converged relative entropy (KL divergence). The study not only highlights the capabilities of the Dynex platform for advancing unsupervised learning techniques but also contributes to the broader understanding and use of neuromorphic computing in complex computational tasks.
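As a rough illustration of the idea, the sketch below mixes standard contrastive-divergence (CD-1) gradient updates with occasional updates whose negative phase is replaced by the RBM's lowest-energy (mode) state. The brute-force mode search, the mixing probability `p_mode`, and all sizes are illustrative assumptions standing in for the paper's actual method and the Dynex annealer, which would supply the mode sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny RBM: 6 visible, 4 hidden units (illustrative sizes only).
n_v, n_h = 6, 4
W = rng.normal(0, 0.1, (n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v):
    p = sigmoid(v @ W + b_h)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_v(h):
    p = sigmoid(h @ W.T + b_v)
    return p, (rng.random(p.shape) < p).astype(float)

def mode_state():
    # Stand-in for the annealer call: brute-force the lowest-energy
    # joint (v, h) state. Only feasible at toy scale; on the Dynex
    # platform this sample would come from neuromorphic annealing.
    best, best_e = None, np.inf
    for i in range(2 ** n_v):
        v = np.array([(i >> k) & 1 for k in range(n_v)], float)
        for j in range(2 ** n_h):
            h = np.array([(j >> k) & 1 for k in range(n_h)], float)
            e = -v @ W @ h - b_v @ v - b_h @ h
            if e < best_e:
                best, best_e = (v, h), e
    return best

data = (rng.random((16, n_v)) < 0.5).astype(float)
lr, p_mode = 0.05, 0.2  # p_mode: fraction of mode-driven updates (assumed)

for step in range(20):
    v0 = data
    ph0, h0 = sample_h(v0)
    if rng.random() < p_mode:
        vm, hm = mode_state()          # off-gradient: mode sample
        neg = np.outer(vm, hm)
    else:
        _, v1 = sample_v(h0)           # standard CD-1 negative phase
        ph1, _ = sample_h(v1)
        neg = v1.T @ ph1 / len(v1)
    pos = v0.T @ ph0 / len(v0)
    W += lr * (pos - neg)              # biases omitted for brevity
```

The intuition is that the mode sample anchors the model's lowest-energy state to the data distribution, which is what stabilizes convergence relative to gradient-only training.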
Automatic speech feature extraction using a convolutional restricted Boltzmann machine
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science, 2017.
Restricted Boltzmann Machines (RBMs) are a statistical learning concept that can
be interpreted as Artificial Neural Networks. They are capable of learning, in an unsupervised fashion, a set of features with which to describe a data set. Connected in series, RBMs form a model called a Deep Belief Network (DBN), learning abstract feature combinations from lower layers. Convolutional RBMs (CRBMs) are a variation on the RBM architecture in which the learned features are kernels that are convolved across spatial portions of the input data to generate feature maps identifying whether a feature is detected in a portion of the input data. Features extracted from speech audio data by a trained CRBM have recently been shown to compete with the state of the art on a number of speaker identification tasks. This project implements a similar CRBM architecture in order to verify previous work, as well as to gain insight into Digital Signal Processing (DSP), generative graphical models, unsupervised pre-training of Artificial Neural Networks, and machine learning classification tasks. The CRBM architecture is trained on the TIMIT speech corpus, and the learned features are verified by using them to train a linear classifier on tasks such as speaker genetic sex classification and speaker identification. The implementation is quantitatively shown to learn and extract a useful feature representation for the given classification tasks.
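The feature-map construction the abstract describes can be sketched as follows: each learned kernel is convolved across the input and passed through a sigmoid, giving a map of feature-detection probabilities at each position. The kernel count, kernel width, and random stand-in weights are illustrative assumptions, not the dissertation's actual trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for a frame of speech audio data: 100 samples.
signal = rng.normal(size=100)

# In a trained CRBM these kernels and biases are learned; random
# placeholders here (3 features, width-9 kernels).
kernels = rng.normal(0, 0.1, size=(3, 9))
h_bias = np.zeros(3)

# Cross-correlate each kernel across the input (kernel reversed to
# turn np.convolve into correlation); the sigmoid gives, per position,
# the probability that the feature is detected there.
maps = np.array([
    sigmoid(np.convolve(signal, k[::-1], mode="valid") + b)
    for k, b in zip(kernels, h_bias)
])
print(maps.shape)  # (3, 92): one length-92 feature map per kernel
```

Stacking such layers, with each layer's feature maps feeding the next, yields the DBN-style hierarchy of increasingly abstract features mentioned above.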