Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed.
Comment: v3, 33 pages; typos corrected and references added
Structured Parallel Programming Using Trees
High-level abstractions for parallel programming are still immature. Computations on complicated data structures such as pointer structures are considered irregular algorithms. General graph structures, which irregular algorithms typically deal with, are difficult to divide and conquer. Because the divide-and-conquer paradigm is essential for load balancing in parallel algorithms and a key to parallel programming, general graphs are genuinely difficult. Trees, however, lead to divide-and-conquer computations by definition and are sufficiently general and powerful as a programming tool. We therefore deal with abstractions of tree-based computations.
Our study started from Matsuzaki's work on tree skeletons. We have improved the usability of tree skeletons by enriching their implementation. Specifically, we have addressed two issues: we implemented loose coupling between skeletons and data structures, yielding a flexible tree skeleton library, and we implemented a parallelizer that transforms sequential recursive C functions into parallel programs that use tree skeletons implicitly. This parallelizer hides the complicated API of tree skeletons and lets programmers use them without extra burden. The practicality of tree skeletons, however, has not thereby improved. On the basis of observations from this practice, we deal with two application domains: program analysis and neighborhood computation.
In the domain of program analysis, compilers treat input programs as control-flow graphs (CFGs) and perform analysis on CFGs; program analysis is therefore difficult to divide and conquer. To resolve this problem, we have developed divide-and-conquer methods for program analysis in a syntax-directed manner, on the basis of Rosen's high-level approach. Specifically, we have dealt with data-flow analysis based on Tarjan's formalization and value-graph construction based on a functional formalization.
In the domain of neighborhood computations, a primary issue is locality. A naive parallel neighborhood computation without locality enhancement causes many cache misses. The divide-and-conquer paradigm is known to be useful for locality enhancement as well. We have therefore applied algebraic formalizations and a tree-segmenting technique derived from tree skeletons to the locality enhancement of neighborhood computations.
The University of Electro-Communications, 201
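The divide-and-conquer property of trees described above can be sketched minimally as a reduction skeleton: because the two subtrees of a node are independent, the recursive calls could run in parallel. The names `Node` and `tree_reduce` are illustrative assumptions, not the actual API of Matsuzaki's tree skeleton library.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical minimal binary-tree skeleton; illustrative only.
@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def tree_reduce(node, combine, leaf):
    """Divide-and-conquer reduction: the two recursive calls touch
    disjoint subtrees, so a parallel skeleton may run them concurrently."""
    if node is None:
        return leaf
    l = tree_reduce(node.left, combine, leaf)
    r = tree_reduce(node.right, combine, leaf)
    return combine(node.value, l, r)

t = Node(1, Node(2), Node(3, Node(4)))
total = tree_reduce(t, lambda v, l, r: v + l + r, 0)
print(total)  # 1 + 2 + 3 + 4 = 10
```

A real skeleton library would also balance the division of the tree across processors; the sketch only shows the algebraic shape that makes that possible.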
A Survey of Symbolic Execution Techniques
Many security and software testing applications require checking whether
certain properties of a program hold for any possible usage scenario. For
instance, a tool for identifying software vulnerabilities may need to rule out
the existence of any backdoor to bypass a program's authentication. One
approach would be to test the program using different, possibly random inputs.
As the backdoor may only be hit for very specific program workloads, automated
exploration of the space of possible inputs is of the essence. Symbolic
execution provides an elegant solution to the problem, by systematically
exploring many possible execution paths at the same time without necessarily
requiring concrete inputs. Rather than taking on fully specified input values,
the technique abstractly represents them as symbols, resorting to constraint
solvers to construct actual instances that would cause property violations.
Symbolic execution has been incubated in dozens of tools developed over the
last four decades, leading to major practical breakthroughs in a number of
prominent software reliability applications. The goal of this survey is to
provide an overview of the main ideas, challenges, and solutions developed in
the area, distilling them for a broad audience.
The present survey has been accepted for publication at ACM Computing
Surveys. If you are considering citing this survey, we would appreciate if you
could use the following BibTeX entry: http://goo.gl/Hf5Fvc
Comment: This is the authors' pre-print copy.
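The core mechanism the survey describes, representing inputs as symbols, forking at branches, and handing path constraints to a solver, can be illustrated with a toy sketch. This is not the API of any real symbolic-execution engine; the branch conditions and the brute-force "solver" are illustrative assumptions standing in for an SMT solver.

```python
import itertools

# Toy "program": a backdoor guarded by two branches on input x.
BRANCHES = [lambda x: x > 100,        # if x > 100:
            lambda x: x * 2 == 246]   #     if x * 2 == 246: backdoor()

def paths():
    """Enumerate every combination of branch outcomes as path constraints."""
    for outcomes in itertools.product([True, False], repeat=len(BRANCHES)):
        constraints = [b if take else (lambda x, b=b: not b(x))
                       for b, take in zip(BRANCHES, outcomes)]
        yield outcomes, constraints

def solve(constraints, domain=range(-500, 500)):
    """Stand-in for a constraint solver: find a concrete witness input."""
    return next((x for x in domain if all(c(x) for c in constraints)), None)

# Construct a concrete input that reaches the backdoor (both branches taken),
# without ever running the program on random inputs.
witness = next(solve(cs) for outcomes, cs in paths() if all(outcomes))
print(witness)  # 123 satisfies x > 100 and x * 2 == 246
```

Random testing would have to stumble on the single value 123; enumerating path constraints and solving them finds it directly, which is the essence of the approach the survey covers.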
A Perspective on Future Research Directions in Information Theory
Information theory is rapidly approaching its 70th birthday. What are
promising future directions for research in information theory? Where will
information theory be having the most impact in 10-20 years? What new and
emerging areas are ripe for the most impact, of the sort that information
theory has had on the telecommunications industry over the last 60 years? How
should the IEEE Information Theory Society promote high-risk new research
directions and broaden the reach of information theory, while continuing to be
true to its ideals and insisting on the intellectual rigor that makes its
breakthroughs so powerful? These are some of the questions that an ad hoc
committee (composed of the present authors) explored over the past two years.
We have discussed and debated these questions, and solicited detailed inputs
from experts in fields including genomics, biology, economics, and
neuroscience. This report is the result of these discussions.
Surrogate Search As a Way to Combat Harmful Effects of Ill-behaved Evaluation Functions
Recently, several researchers have found that cost-based satisficing search
with A* often runs into problems. Although some "work arounds" have been
proposed to ameliorate the problem, there has been little concerted effort to
pinpoint its origin. In this paper, we argue that the origins of this problem
can be traced back to the fact that most planners that try to optimize cost
also use cost-based evaluation functions (i.e., f(n) is a cost estimate). We
show that cost-based evaluation functions become ill-behaved whenever there is
a wide variance in action costs; something that is all too common in planning
domains. The general solution to this malady is what we call a surrogate search,
where a surrogate evaluation function that doesn't directly track the cost
objective, and is resistant to cost-variance, is used. We will discuss some
compelling choices for surrogate evaluation functions that are based on size
rather than cost. Of particular practical interest is a cost-sensitive version
of the size-based evaluation function, where the heuristic estimates the size of
cheap paths, as it provides attractive quality vs. speed tradeoffs.
Comment: arXiv admin note: substantial text overlap with arXiv:1103.368
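The pathology the paper describes can be reproduced on a toy state space: near-zero-cost actions flatten a cost-based f(n), so best-first search wades through cheap detours, while a size-based surrogate (plan length) goes straight to the goal. The state space and the specific evaluation functions below are illustrative assumptions, not the paper's benchmarks.

```python
import heapq

# Toy domain: state (i, junk); goal is i == N. One action makes real
# progress at cost 1; another cycles through "junk" states at cost 0.001,
# creating the wide action-cost variance the paper identifies as harmful.
N = 5

def successors(s):
    i, junk = s
    res = []
    if i < N:
        res.append(((i + 1, 0), 1.0))       # real progress, cost 1
    if junk < 20:
        res.append(((i, junk + 1), 0.001))  # cheap detour, tiny cost
    return res

def search(f):
    """Best-first search ordered by f(g, depth); returns nodes expanded."""
    frontier = [(0.0, 0.0, 0, (0, 0))]      # (f, g, depth, state)
    seen, expanded = set(), 0
    while frontier:
        _, g, d, s = heapq.heappop(frontier)
        if s in seen:
            continue
        seen.add(s)
        expanded += 1
        if s[0] == N:
            return expanded
        for s2, c in successors(s):
            heapq.heappush(frontier, (f(g + c, d + 1), g + c, d + 1, s2))

cost_based = search(lambda g, d: g)  # f tracks accumulated cost
size_based = search(lambda g, d: d)  # surrogate: number of actions
print(cost_based, size_based)        # cost-based expands far more nodes
```

The cost-based search exhausts every cheap detour at each level before paying for a unit-cost step, while the size-based surrogate ignores cost variance entirely, which is the behaviour the paper's surrogate evaluation functions exploit.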
Contributions to Time-bounded Problem Solving Using Knowledge-based Techniques
Time-bounded computations represent a major challenge for knowledge-based techniques. Being primarily non-algorithmic in nature, such techniques suffer from an obvious open-endedness, in the sense that the demands on time and other resources for a particular task cannot be predicted in advance. Consequently, the efficiency of traditional knowledge-based techniques in solving time-bounded problems is not at all guaranteed. Artificial Intelligence researchers working in real-time problem solving have generally tried to avoid this difficulty by improving the speed of computation (through code optimisation or dedicated hardware) or by using heuristics. However, most of these shortcuts are likely to be inappropriate or unsuitable in complicated real-time applications, so there is a need for more systematic and/or general measures.
We propose a two-fold improvement over traditional knowledge-based techniques for tackling this problem. Firstly, a cache-based architecture should be used for choosing the best alternative approach (when there are two or more) compatible with the time constraints. This cache differs from the traditional caches used in other branches of computer science in that it can hold not just "ready to use" values but also knowledge suggesting which AI technique will be most suitable to meet a temporal demand in a given context. The second improvement is in processing the cached knowledge itself. We propose a technique which can be called "knowledge interpolation" and which can be applied to different forms of knowledge (such as symbolic values, rules, and cases) when the keys used for cache access do not exactly match the labels of any cell of the cache. The research reported in this thesis comprises the development of the cache-based architecture and the interpolation techniques, studies of their requisites and representational issues, and their complementary roles in achieving time-bounded performance.
Ground operations control of an airport and resource allocation for short-wave radio communications are the two domains in which our proposed methods are studied.
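The "knowledge interpolation" idea can be sketched for the simplest case of numeric keys and numeric cached values: on an inexact key, estimate from the nearest cached neighbours instead of invoking the open-ended solver. The class name, the linear-interpolation rule, and the edge clamping are illustrative assumptions, not the thesis's actual design, which also covers symbolic values, rules, and cases.

```python
# Hypothetical sketch of an interpolating cache over numeric keys.
class InterpolatingCache:
    def __init__(self):
        self.entries = {}  # key (numeric context) -> cached value

    def put(self, key, value):
        self.entries[key] = value

    def get(self, key):
        if key in self.entries:
            return self.entries[key]  # exact hit
        ks = sorted(self.entries)
        lo = max((k for k in ks if k < key), default=None)
        hi = min((k for k in ks if k > key), default=None)
        if lo is None or hi is None:  # no neighbour on one side: clamp
            return self.entries[lo if hi is None else hi]
        w = (key - lo) / (hi - lo)    # inexact key: linear interpolation
        return (1 - w) * self.entries[lo] + w * self.entries[hi]

cache = InterpolatingCache()
cache.put(10, 100.0)  # e.g. context "10 s deadline" -> tuned parameter
cache.put(20, 200.0)
print(cache.get(15))  # inexact key, interpolated: 150.0
```

The point of the sketch is the miss path: a conventional cache would fall through to the expensive computation, whereas here the cached knowledge itself yields an answer within the time bound.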
Temporal Difference Learning in Complex Domains
This thesis adapts and improves on the methods of TD(λ) (Sutton 1988) that were
successfully used for backgammon (Tesauro 1994) and applies them to other complex
games that are less amenable to simple pattern-matching approaches. The games
investigated are chess and shogi, both of which (unlike backgammon) require
significant amounts of computational effort to be expended on search in order to
achieve expert play. The improved methods are also tested in a non-game domain.
In the chess domain, the adapted TD(λ) method is shown to successfully learn the
relative values of the pieces, and matches using these learnt piece values indicate that
they perform at least as well as piece values widely quoted in elementary chess books.
The adapted TD(λ) method is also shown to work well in shogi, considered by many
researchers to be the next challenge for computer game-playing, and for which there
is no standardised set of piece values.
An original method to automatically set and adjust the major control parameters used
by TD(λ) is presented. The main performance advantage comes from the learning
rate adjustment, which is based on a new concept called temporal coherence.
Experiments in both chess and a random-walk domain show that the temporal
coherence algorithm produces both faster learning and more stable values than both
human-chosen parameters and an earlier method for learning rate adjustment.
The methods presented in this thesis allow programs to learn with as little input of
external knowledge as possible, exploring the domain on their own rather than
being taught. Further experiments show that the method is capable of handling many
hundreds of weights, and that it is not necessary to perform deep searches during the
learning phase in order to learn effective weights.
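The TD(λ) baseline the thesis builds on can be sketched in its classic test bed, the random-walk domain mentioned above: five non-terminal states, termination on either end, reward 1 on the right exit. This is standard tabular TD(λ) with accumulating eligibility traces; the step size, trace decay, and episode count below are illustrative choices, not the thesis's tuned values (its contribution, temporal coherence, adjusts the learning rate automatically rather than fixing it as done here).

```python
import random

# Tabular TD(λ) with accumulating traces on the 5-state random walk.
random.seed(0)
N, alpha, lam, gamma = 5, 0.05, 0.8, 1.0
V = [0.5] * N                       # value estimates for states 0..4

for episode in range(5000):
    e = [0.0] * N                   # eligibility traces
    s = N // 2                      # start in the middle state
    while True:
        s2 = s + random.choice((-1, 1))
        if s2 < 0:                  # fell off the left end
            r, v2, done = 0.0, 0.0, True
        elif s2 >= N:               # fell off the right end
            r, v2, done = 1.0, 0.0, True
        else:
            r, v2, done = 0.0, V[s2], False
        delta = r + gamma * v2 - V[s]   # TD error
        e[s] += 1.0                     # accumulating trace
        for i in range(N):
            V[i] += alpha * delta * e[i]
            e[i] *= gamma * lam
        if done:
            break
        s = s2

print([round(v, 2) for v in V])  # true values are 1/6, 2/6, ..., 5/6
```

With a fixed step size the estimates hover noisily around the true values; a temporal-coherence-style scheme would shrink the step size where updates start to oscillate, which is the stability-and-speed gain the thesis reports.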
Proceedings of the 18th Irish Conference on Artificial Intelligence and Cognitive Science
These proceedings contain the papers accepted for publication at AICS-2007, the 18th Annual Conference on Artificial Intelligence and Cognitive Science, held at the Technological University Dublin, Dublin, Ireland, from 29 to 31 August 2007. AICS is the annual conference of the Artificial Intelligence Association of Ireland (AIAI).