Quantum Machine Learning For Classical Data
In this dissertation, we study the intersection of quantum computing and supervised machine learning algorithms; that is, we investigate quantum algorithms for supervised machine learning that operate on classical data. This area of research falls under the umbrella of quantum machine learning, a research area of computer science which has recently received wide attention. In particular, we investigate to what extent quantum computers can be used to accelerate supervised machine learning algorithms. The aim is to develop a clear understanding of the promises and limitations of the current state of the art in quantum algorithms for supervised machine learning, and also to define directions for future research in this exciting field. We start by looking at supervised quantum machine learning (QML) algorithms through the lens of statistical learning theory. In this framework, we derive novel bounds on the computational complexities of a large set of supervised QML algorithms under the requirement of optimal learning rates. Next, we give a new bound for Hamiltonian simulation of dense Hamiltonians, a major subroutine of most known supervised QML algorithms, and then derive a classical algorithm with nearly the same complexity. We then draw parallels to recent ‘quantum-inspired’ results and explain the implications of these results for quantum machine learning applications. Looking for areas which might bear larger advantages for QML algorithms, we finally propose a novel algorithm for quantum Boltzmann machines, and argue that quantum algorithms for quantum data are one of the most promising applications of QML, with a potentially exponential advantage over classical approaches.
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added
BayesNAS: A Bayesian Approach for Neural Architecture Search
One-Shot Neural Architecture Search (NAS) is a promising method to
significantly reduce search time without any separate training. It can be
treated as a Network Compression problem on the architecture parameters from an
over-parameterized network. However, there are two issues associated with most
one-shot NAS methods. First, dependencies between a node and its predecessors
and successors are often disregarded, which results in improper treatment of
zero operations. Second, pruning architecture parameters based on their
magnitude is questionable. In this paper, we employ the classic Bayesian
learning approach to alleviate these two issues by modeling architecture
parameters using hierarchical automatic relevance determination (HARD) priors.
Unlike other NAS methods, we train the over-parameterized network for only one
epoch then update the architecture. Impressively, this enabled us to find the
architecture on CIFAR-10 within only 0.2 GPU days using a single GPU.
Competitive performance can also be achieved by transferring to ImageNet. As a
byproduct, our approach can be applied directly to compress convolutional
neural networks by enforcing structural sparsity which achieves extremely
sparse networks without accuracy deterioration.
Comment: International Conference on Machine Learning 201
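The criticism of magnitude-based pruning above can be illustrated with a small sketch. This is not the BayesNAS implementation; it is a hedged toy in which each architecture parameter is summarized by a hypothetical posterior mean and standard deviation, and pruning by a Bayesian signal-to-noise criterion is contrasted with pruning by raw magnitude. The parameter names and thresholds are invented for illustration.

```python
# Toy contrast: magnitude pruning vs. uncertainty-aware (ARD-flavoured) pruning.
# Each architecture parameter is summarized by a posterior mean mu and std sigma.
# A weight with large |mu| but huge uncertainty is dropped by the Bayesian
# criterion, while a smaller but well-determined weight survives.

def magnitude_keep(params, thresh=0.5):
    """Keep parameters whose absolute posterior mean exceeds thresh."""
    return [name for name, (mu, sigma) in params.items() if abs(mu) > thresh]

def bayesian_keep(params, snr_thresh=2.0):
    """Keep parameters whose posterior signal-to-noise |mu|/sigma exceeds snr_thresh."""
    return [name for name, (mu, sigma) in params.items() if abs(mu) / sigma > snr_thresh]

# Two candidate operations: op_a has a large but very uncertain weight,
# op_b a smaller but well-determined one (values are illustrative).
params = {"op_a": (0.9, 1.5), "op_b": (0.6, 0.1)}

print(magnitude_keep(params))  # magnitude keeps both: ['op_a', 'op_b']
print(bayesian_keep(params))   # SNR keeps only the well-determined one: ['op_b']
```

The point of the sketch is only that the two criteria can disagree, which is the motivation the abstract gives for moving beyond magnitude-based pruning.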
Quantum Algorithms for Matrix Problems and Machine Learning
This dissertation presents a study of quantum algorithms for problems that can be posed as matrix function tasks. In Chapter 1 we demonstrate a simple unifying framework for implementing smooth functions of matrices on a quantum computer. This framework captures a variety of problems that can be solved by evaluating properties of some function of a matrix, and we identify speedups over classical algorithms for some problem classes. The analysis combines ideas from the classical theory of function approximation with the quantum algorithmic primitive of implementing linear combinations of unitary operators.
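The core idea, classically, is that a smooth matrix function can be approximated by a truncated series, i.e. a linear combination of easy-to-apply matrix powers (the quantum framework instead combines unitaries coherently). A minimal classical sketch, with f = exp and an illustrative symmetric matrix, is:

```python
import numpy as np

# Approximate exp(A) by a truncated Taylor series, a linear combination of
# matrix powers A^k / k!, and compare against the exact value obtained by
# diagonalizing the symmetric matrix A.

def taylor_expm(A, terms=20):
    out = np.zeros_like(A)
    term = np.eye(A.shape[0])       # current term A^k / k!, starting at k = 0
    for k in range(terms):
        out = out + term
        term = term @ A / (k + 1)
    return out

A = np.array([[0.0, 0.3], [0.3, 0.1]])
w, V = np.linalg.eigh(A)                    # exact exp(A) via eigendecomposition
exact = V @ np.diag(np.exp(w)) @ V.T
approx = taylor_expm(A)
print(np.max(np.abs(approx - exact)))       # very small for this small-norm A
```

For matrices of small norm the truncated series converges rapidly; the quantum framework's contribution is implementing such combinations on a quantum computer with favourable complexity, which this classical sketch does not capture.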
In Chapter 2 we continue this study by looking at the role of sparsity of input matrices in constructing efficient quantum algorithms. We show that classically pre-processing an input matrix by spectral sparsification can be profitable for quantum Hamiltonian simulation algorithms, without compromising the simulation error or complexity. Such preprocessing incurs a one-time cost linear in the size of the matrix, but can be exploited to exponentially speed up subsequent subroutines such as inversion.
In Chapter 3, we give an application of this theory of matrix functions to the problem of estimating the Rényi entropy of an unknown quantum state. We combine matrix function techniques with mixed-state quantum computation in the one-clean-qubit model, and are able to bound the expected runtime of our algorithm in terms of the unknown target quantity.
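For reference, the quantity the quantum algorithm estimates is the order-α Rényi entropy S_α(ρ) = log(Tr ρ^α) / (1 − α). A classical computation via full diagonalization (which the one-clean-qubit algorithm is designed to avoid) looks like this; the example state is illustrative:

```python
import numpy as np

# Classical reference computation of the Renyi entropy of a density matrix rho:
# S_alpha(rho) = log(Tr rho^alpha) / (1 - alpha), alpha != 1.

def renyi_entropy(rho, alpha):
    evals = np.linalg.eigvalsh(rho)   # eigenvalues of the Hermitian matrix rho
    evals = evals[evals > 1e-12]      # discard numerical zeros
    return np.log(np.sum(evals ** alpha)) / (1.0 - alpha)

rho = np.eye(2) / 2                   # maximally mixed qubit state
print(renyi_entropy(rho, alpha=2))    # log 2, approximately 0.6931
```

The maximally mixed state attains the maximum, log of the dimension, for every order α, which makes it a convenient sanity check.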
In addition to the theme of analysing the complexity of our algorithms, we also identify instances of practical relevance, leading us to some problems in machine learning. In Chapter 4 we investigate kernel-based learning methods using random features. We work
with the QRAM input model suitable for big data, and show how matrix functions and the quantum Fourier transform can be used to devise a quantum algorithm for sampling random features that are optimised for given input data and choice of kernel. We obtain a potential exponential speedup over the best known classical algorithm even without explicit assumptions of sparsity or low rank.
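The classical baseline here is random Fourier features in the style of Rahimi and Recht: inner products of randomized feature maps approximate a shift-invariant kernel. A minimal sketch with a Gaussian kernel and illustrative dimensions (the dissertation's quantum algorithm samples *data-optimised* features, which this plain sketch does not):

```python
import numpy as np

# Random Fourier features: z(x) . z(y) approximates the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / 2). W holds random frequencies, b random phases.

rng = np.random.default_rng(0)
d, D = 3, 5000                             # input dimension, number of features
W = rng.standard_normal((D, d))            # frequencies drawn from N(0, I)
b = rng.uniform(0, 2 * np.pi, size=D)      # phases drawn uniformly from [0, 2*pi)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = np.array([0.2, -0.1, 0.4])
y = np.array([0.0, 0.3, 0.1])
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = features(x) @ features(y)
print(exact, approx)                       # the two values nearly coincide
```

The approximation error decays as O(1/sqrt(D)), which is why a quantum procedure that samples better-adapted features for a given dataset and kernel is an attractive target for speedups.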
Finally, in Chapter 5 we consider the technique of beam search decoding used in natural language processing. We work in the query model, and show how quantum search with advice can be used to construct a quantum search decoder that can find the optimal parse (which may, for instance, be a best translation or a text-to-speech transcript) at least quadratically faster than the best known classical algorithms, and obtain super-quadratic speedups in the expected runtime.
Science and Engineering Research Board (Department of Science and Technology), Government of India
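The classical routine being sped up is beam search: at each decoding step, only the `width` highest-scoring partial sequences are kept. A minimal sketch with an invented toy distribution over two tokens per step:

```python
import math

# Classical beam-search decoding over per-step token log-probabilities.
# Each beam is a (sequence, cumulative log-prob) pair; at every step all
# one-token extensions are scored and only the top `width` survive.

def beam_search(step_logprobs, width=2):
    beams = [((), 0.0)]
    for dist in step_logprobs:
        candidates = [(seq + (tok,), score + lp)
                      for seq, score in beams
                      for tok, lp in dist.items()]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:width]
    return beams[0]

# Toy two-step model (probabilities are illustrative).
steps = [{"a": math.log(0.6), "b": math.log(0.4)},
         {"a": math.log(0.1), "b": math.log(0.9)}]
best_seq, best_score = beam_search(steps, width=2)
print(best_seq)                            # ('a', 'b'), probability 0.54
```

With a sufficiently wide beam this recovers the globally optimal parse, and it is this search over hypotheses that the quantum decoder accelerates.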
Efficiency and Accuracy Enhancement of Intrusion Detection System Using Feature Selection and Cross-layer Mechanism
The dramatic increase in the number of connected devices and the significant growth of network traffic data have led to many security vulnerabilities and cyber-attacks. Hence, developing new methods to secure the network infrastructure and protect data from malicious and unauthorized access becomes a vital aspect of communication network design. Intrusion Detection Systems (IDSs), as widely used security techniques, are critical to detect network attacks and unauthorized network access, and thus to minimize further cyber-attack damage. However, a number of weaknesses need to be addressed to make IDSs reliable for real-world applications. One fundamental challenge is the large amount of redundant and non-relevant data. Feature selection emerges as a necessary step in efficient IDS design: it overcomes the high-dimensionality problem and enhances the performance of an IDS by reducing its complexity and accelerating the detection process. Moreover, the detection algorithm has a significant impact on the performance of an IDS. Machine learning techniques, which are widely used in such systems, are studied in detail in this dissertation. One of the most destructive activities in wireless networks such as MANETs is packet dropping. Intrusive attackers are not the only cause of packet loss; packets can also be dropped because of a faulty network. Hence, to detect packet dropping caused by the malicious activity of an attacker, information from various layers of the protocol stack is needed. To this end, a novel cross-layer design for malicious packet loss detection in MANETs is proposed, using features from the physical, network and MAC layers to make a better detection decision. A trust-based mechanism is adopted in this design, and a packet-loss-free routing algorithm is presented accordingly.
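A minimal filter-style feature-selection sketch of the kind such an IDS pipeline relies on (the dissertation's actual feature set and scoring method may differ): rank candidate features by their mutual information with the attack label. The feature names and tiny dataset below are invented for illustration.

```python
import math
from collections import Counter

# Rank features by mutual information with the label:
# MI(F; Y) = H(F) + H(Y) - H(F, Y), computed from empirical frequencies.

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def mutual_information(feature, labels):
    joint = list(zip(feature, labels))
    return entropy(feature) + entropy(labels) - entropy(joint)

labels = [0, 0, 1, 1]                       # 1 = attack traffic (toy data)
features = {"dup_packets": [0, 0, 1, 1],    # perfectly predicts the label
            "port_parity": [0, 1, 0, 1]}    # independent of the label
ranked = sorted(features,
                key=lambda f: mutual_information(features[f], labels),
                reverse=True)
print(ranked)                               # ['dup_packets', 'port_parity']
```

Dropping the low-scoring features is what shrinks the input dimensionality and speeds up the downstream detection algorithm.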
Intelligent Prognostic Framework for Degradation Assessment and Remaining Useful Life Estimation of Photovoltaic Module
All industrial systems and machines are subject to degradation processes, which can be related to the operating conditions. This degradation can cause unplanned stops at any time and, occasionally, major maintenance work. Accurate prediction of the remaining useful life (RUL) is an important challenge in condition-based maintenance. Prognostic activity allows the RUL to be estimated before failure occurs, triggering actions to mitigate faults in time when needed. In this study, a new smart prognostic method for photovoltaic module health degradation was developed based on two approaches to achieve more accurate predictions: online diagnosis and data-driven prognosis. This forecasting framework integrates the strengths of real-time monitoring in the first approach and the relevance vector machine in the second. The results show that the proposed method yields good predictions of the RUL and can be effectively applied to many systems for monitoring and prognostics.
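The RUL idea itself can be shown in a stripped-down sketch. The study uses a relevance vector machine; here an ordinary least-squares trend stands in, and the degradation values, threshold and time scale are all invented: fit the monitored degradation indicator against time and extrapolate to the failure threshold.

```python
# Data-driven RUL sketch: fit a linear degradation trend by least squares,
# then solve trend(t) = threshold for the failure time and subtract "now".

def fit_line(ts, ys):
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return my - slope * mt, slope          # (intercept, slope)

ts = [0, 1, 2, 3, 4, 5]                    # inspection times (toy units)
ys = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]       # monitored degradation indicator
intercept, slope = fit_line(ts, ys)
threshold = 1.0                            # degradation level defining failure
rul = (threshold - intercept) / slope - ts[-1]
print(rul)                                 # 5.0: failure predicted at t = 10
```

A relevance vector machine replaces the straight line with a probabilistic, nonlinear regressor, which also yields uncertainty around the predicted failure time.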
Algorithms for Verification of Analog and Mixed-Signal Integrated Circuits
Over the past few decades, the tremendous growth in the complexity of analog and mixed-signal (AMS) systems has posed great challenges to AMS verification, resulting in a rapidly growing verification gap. Existing formal methods provide appealing completeness and reliability, yet they suffer from limited efficiency and scalability. Data-oriented, machine learning based methods offer efficient and scalable solutions but do not guarantee completeness or full coverage. Additionally, the trend towards shorter time to market for AMS chips urges the development of efficient verification algorithms to accelerate the joint design and testing phases.
This dissertation envisions a hierarchical and hybrid AMS verification framework by consolidating assorted algorithms to embrace efficiency, scalability and completeness in a statistical sense. Leveraging diverse advantages from various verification techniques, this dissertation develops algorithms in different categories.
In the context of formal methods, this dissertation proposes a generic and comprehensive model abstraction paradigm to model AMS content with a unifying analog representation. Moreover, an algorithm is proposed to parallelize reachability analysis by decomposing AMS systems into subsystems with lower complexity, and dividing the circuit's reachable state space exploration, which is formulated as a satisfiability problem, into subproblems with a reduced number of constraints. The proposed modeling method and the hierarchical parallelization enhance the efficiency and scalability of reachability analysis for AMS verification.
On the subject of learning-based methods, the dissertation proposes to convert the verification problem into a binary classification problem solved using support vector machine (SVM) based learning algorithms. To reduce the need for simulations in training-sample collection, an active learning strategy based on probabilistic version space reduction is proposed to perform adaptive sampling. An extension of the active learning strategy for the purpose of conservative prediction is leveraged to minimize the occurrence of false negatives.
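A minimal active-learning sketch in the spirit of this strategy (the dissertation uses probabilistic version-space reduction with an SVM; here a fixed linear scorer stands in, and the pool of unlabeled points is invented): each round, query the unlabeled sample closest to the current decision boundary, where a label is most informative.

```python
# Margin-based query selection: among unlabeled candidates, pick the point
# with the smallest |w . x|, i.e. the one the current linear classifier is
# least certain about. Simulating that point next shrinks the version space
# of classifiers consistent with the labels seen so far.

def query_most_uncertain(w, pool):
    """Return the index of the pool point with the smallest |w . x| margin."""
    margins = [abs(sum(wi * xi for wi, xi in zip(w, x))) for x in pool]
    return min(range(len(pool)), key=margins.__getitem__)

w = [1.0, -1.0]                            # current linear decision boundary
pool = [[3.0, 0.5], [1.1, 1.0], [0.2, 2.5]]
print(query_most_uncertain(w, pool))       # 1: margin 0.1 is the smallest
```

Each queried sample costs one circuit simulation, so concentrating queries near the pass/fail boundary is what cuts the simulation budget.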
Moreover, another learning-based method is proposed to characterize AMS systems with a sparse Bayesian learning regression model. An implicit feature-weighting mechanism based on the kernel method is embedded in the Bayesian learning model for concurrent quantification of the influence of circuit parameters on the targeted specification; this can be solved efficiently by an iterative method similar to the expectation-maximization (EM) algorithm. In addition, the resulting sparse parameter weighting offers favorable assistance to design analysis and test optimization.
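The mechanism behind sparse Bayesian learning can be sketched in a few lines. This is a generic automatic-relevance-determination (ARD) regression, not the dissertation's kernelized model: each weight gets a Gaussian prior with its own precision α_i, and iterating the EM-style evidence updates drives α_i toward infinity for irrelevant inputs, pruning them. The data and the assumed-known noise precision β are illustrative.

```python
import numpy as np

# ARD / sparse Bayesian linear regression with EM-style updates:
#   Sigma = (beta X^T X + diag(alpha))^-1,  mu = beta Sigma X^T y,
#   alpha_i <- gamma_i / mu_i^2  with  gamma_i = 1 - alpha_i Sigma_ii.
# Irrelevant weights get mu_i -> 0 and alpha_i -> infinity (capped here).

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])          # depends only on the first column
beta = 100.0                                # assumed-known noise precision
alpha = np.ones(2)                          # initial prior precisions

for _ in range(10):
    Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
    mu = beta * Sigma @ X.T @ y
    gamma = 1.0 - alpha * np.diag(Sigma)    # effective degrees of freedom
    alpha = np.where(mu ** 2 < 1e-12, 1e12,
                     gamma / np.maximum(mu ** 2, 1e-12))

print(np.round(mu, 3), alpha)               # second weight pruned (huge alpha)
```

The surviving precisions double as the parameter weighting: a finite α_i marks a circuit parameter that genuinely influences the specification, which is the assistance to design analysis and test optimization mentioned above.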