12,806 research outputs found
Inference in classifier systems
Classifier systems (CSs) provide a rich framework for learning and induction, and they have been successfully applied in the artificial intelligence literature for some time. In this paper, both the architecture and the inferential mechanisms in general CSs are reviewed, and a number of limitations and extensions of the basic approach are summarized. A system based on the CS approach that is capable of quantitative data analysis is outlined and some of its peculiarities discussed.
Estimating operator norms using covering nets
We present several polynomial- and quasipolynomial-time approximation schemes for a large class of generalized operator norms. Special cases include the 2→q norm of matrices for q ≥ 2, the support function of the set of separable quantum states, finding the least noisy output of entanglement-breaking quantum channels, and approximating the injective tensor norm for a map between two Banach spaces whose factorization norm through ℓ1 is bounded. These reproduce and in some cases improve upon the performance of previous algorithms by Brandão-Christandl-Yard and follow-up work, which were based on the Sum-of-Squares hierarchy and whose analysis used techniques from quantum information such as the monogamy principle of entanglement. Our algorithms, by contrast, are based on brute-force enumeration over carefully chosen covering nets. These have the advantage of using less memory, having much simpler proofs and giving new geometric insights into the problem. Net-based algorithms for similar problems were also presented by Shi-Wu and Barak-Kelner-Steurer, but in each case with a run-time that is exponential in the rank of some matrix. We achieve polynomial or quasipolynomial runtimes by using the much smaller nets that exist in ℓ1 spaces. This principle has been used in learning theory, where it is known as Maurey's empirical method. Comment: 24 pages
Software-Architecture Recovery from Machine Code
In this paper, we present a tool, called Lego, which recovers object-oriented software architecture from stripped binaries. Lego takes a stripped binary as input, and uses information obtained from dynamic analysis to (i) group the functions in the binary into classes, and (ii) identify inheritance and composition relationships between the inferred classes. The information obtained by Lego can be used for reengineering legacy software, and for understanding the architecture of software systems that lack documentation and source code. Our experiments show that the class hierarchies recovered by Lego have a high degree of agreement---measured in terms of precision and recall---with the hierarchy defined in the source code.
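To make the idea concrete, here is a hypothetical miniature of dynamic class inference, not Lego's actual analysis: functions are grouped by the receiver objects they were observed executing on, and strict containment between method sets is flagged as a candidate inheritance edge. The trace format and all names are invented for the example.

```python
from collections import defaultdict

def infer_classes(trace):
    """Toy sketch in the spirit of dynamic class recovery: `trace` is a list
    of (function_name, object_id) pairs from an imagined dynamic analysis.
    Objects with identical observed method sets collapse into one inferred
    class; a strict subset relation between method sets becomes a candidate
    inheritance edge."""
    methods_of = defaultdict(set)            # object id -> functions seen on it
    for fn, obj in trace:
        methods_of[obj].add(fn)
    classes = {frozenset(ms) for ms in methods_of.values()}
    edges = [(tuple(sorted(a)), tuple(sorted(b)))
             for a in classes for b in classes
             if a < b]                       # '<' on frozensets: strict subset
    return classes, edges

trace = [("draw", 1), ("area", 1),                 # object 1: a base shape
         ("draw", 2), ("area", 2), ("radius", 2)]  # object 2: adds radius()
classes, edges = infer_classes(trace)
print(sorted(map(sorted, classes)))  # [['area', 'draw'], ['area', 'draw', 'radius']]
print(edges)   # one candidate edge: the smaller method set inherits into the larger
```

A real tool must additionally handle calling conventions, object-lifetime tracking, and composition versus inheritance; this sketch only shows the grouping step.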
PASS: a simple classifier system for data analysis
Let x be a vector of predictors and y a scalar response associated with it. Consider the regression problem of inferring the relationship between predictors and response on the basis of a sample of observed pairs (x,y). This is a familiar problem for which a variety of methods are available. This paper describes a new method based on the classifier system approach to problem solving. Classifier systems provide a rich framework for learning and induction, and they have been successfully applied in the artificial intelligence literature for some time. The present method enriches the simplest classifier system architecture with some new heuristics and explores its potential in a purely inferential context. A prototype called PASS (Predictive Adaptative Sequential System) has been built to test these ideas empirically. Preliminary Monte Carlo experiments indicate that PASS is able to discover the structure imposed on the data in a wide array of cases.
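As a rough illustration of the rule-population idea (not PASS itself, and omitting the genetic rule discovery and credit assignment that a full classifier system uses), the following sketch keeps a fixed population of interval rules whose predictions are updated incrementally from observed (x, y) pairs:

```python
class Rule:
    """One classifier: an interval condition on x plus a running mean
    prediction for y."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.pred, self.n = 0.0, 0

    def matches(self, x):
        return self.lo <= x < self.hi

def train(samples, n_rules=10):
    """Deliberately tiny classifier-system-flavoured regressor: each
    observation updates the running mean of every matching rule."""
    edges = [i / n_rules for i in range(n_rules + 1)]
    rules = [Rule(edges[i], edges[i + 1]) for i in range(n_rules)]
    for x, y in samples:
        for r in rules:
            if r.matches(x):
                r.n += 1
                r.pred += (y - r.pred) / r.n   # incremental mean update
    return rules

def predict(rules, x):
    active = [r for r in rules if r.matches(x)]   # the match set
    return sum(r.pred for r in active) / len(active)

# recover a step structure y = 1[x >= 0.5] from 100 noise-free samples
data = [(i / 100, 1.0 if i >= 50 else 0.0) for i in range(100)]
rules = train(data)
print(predict(rules, 0.25), predict(rules, 0.75))   # 0.0 1.0
```

The step function and the fixed interval partition are invented for the example; the point is only that a population of condition→prediction rules can recover structure from data.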
Route Planning in Transportation Networks
We survey recent advances in algorithms for route planning in transportation
networks. For road networks, we show that one can compute driving directions in
milliseconds or less even at continental scale. A variety of techniques provide
different trade-offs between preprocessing effort, space requirements, and
query time. Some algorithms can answer queries in a fraction of a microsecond,
while others can deal efficiently with real-time traffic. Journey planning on
public transportation systems, although conceptually similar, is a
significantly harder problem due to its inherent time-dependent and
multicriteria nature. Although exact algorithms are fast enough for interactive
queries on metropolitan transit systems, dealing with continent-sized instances
requires simplifications or heavy preprocessing. The multimodal route planning
problem, which seeks journeys combining schedule-based transportation (buses,
trains) with unrestricted modes (walking, driving), is even harder, relying on
approximate solutions even for metropolitan inputs. Comment: This is an updated version of the technical report MSR-TR-2014-4, previously published by Microsoft Research. This work was mostly done while the authors Daniel Delling, Andrew Goldberg, and Renato F. Werneck were at Microsoft Research Silicon Valley.
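As a baseline for the preprocessing-based speedup techniques the survey covers, a plain Dijkstra query looks like the sketch below; the five-node road fragment and its edge costs are invented for the example.

```python
import heapq

def dijkstra(graph, source, target):
    """Plain Dijkstra over an adjacency dict {node: [(neighbor, cost), ...]}.
    This is the baseline that techniques such as contraction hierarchies and
    hub labels accelerate by spending preprocessing effort up front."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# a hypothetical road fragment; edge costs = travel times in minutes
roads = {"A": [("B", 4), ("C", 2)],
         "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)],
         "D": [("E", 3)]}
print(dijkstra(roads, "A", "E"))   # 11.0, via A -> C -> B -> D -> E
```

On continental road networks this baseline is far too slow for interactive queries, which is exactly the gap the surveyed preprocessing techniques close.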
Hierarchical Path Search with Partial Materialization of Costs for a Smart Wheelchair
In this paper, the off-line path planner module of a smart wheelchair aided navigation
system is described. Environmental information is structured into a hierarchical graph (H-graph) and
used either by the user interface or the path planner module. This information structure facilitates
efficient path search and easier information access and retrieval. Special path-planning issues, such as planning between floors of a building (vertical path planning), are also addressed. The H-graph proposed
is modelled by a tree. The hierarchy of abstractions contained in the tree has several levels of detail.
Each abstraction level is a graph whose nodes can represent other graphs in a deeper level of the
hierarchy. Path planning is performed using a path skeleton which is built from the deepest
abstraction levels of the hierarchy up to the uppermost levels and completed in the last step of the algorithm. In order not to lose accuracy in the path skeleton generation, and to speed up the search, a set of optimal subpaths is stored in advance in some nodes of the H-graph (path costs are partially materialized). Finally, some experimental results are shown and compared with traditional heuristic search algorithms used in robot path planning. Comisión Interministerial de Ciencia y Tecnología TER96-2056-C02-0
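The partial-materialization idea can be sketched as follows (a simplified toy, not the authors' planner): per-cluster shortest costs between boundary nodes are precomputed and stored as overlay edges, so a query runs on the small overlay graph only. The graph, clusters, and costs are invented, and for brevity queries are restricted to boundary nodes.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """Single-source Dijkstra; adj maps node -> list of (neighbor, cost)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def build_overlay(adj, cluster):
    """Partial materialization: per cluster, precompute shortest
    intra-cluster costs between its boundary nodes and store them as
    overlay edges alongside the original inter-cluster edges."""
    intra = {u: [(v, w) for v, w in nbrs if cluster[v] == cluster[u]]
             for u, nbrs in adj.items()}
    boundary = {u for u, nbrs in adj.items()
                if any(cluster[v] != cluster[u] for v, _ in nbrs)}
    overlay = {u: [(v, w) for v, w in adj[u] if cluster[v] != cluster[u]]
               for u in boundary}
    for u, v in combinations(sorted(boundary), 2):
        if cluster[u] == cluster[v]:
            d = dijkstra(intra, u).get(v)
            if d is not None:
                overlay[u].append((v, d))     # materialized subpath cost
                overlay[v].append((u, d))
    return overlay

# two "floors": nodes a, b, c on floor 1 and d, e on floor 2
adj = {"a": [("b", 1), ("c", 9)], "b": [("a", 1), ("c", 2), ("e", 7)],
       "c": [("a", 9), ("b", 2), ("d", 3)], "d": [("c", 3), ("e", 1)],
       "e": [("d", 1), ("b", 7)]}
cluster = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2}
overlay = build_overlay(adj, cluster)
print(dijkstra(overlay, "b")["d"])   # 5 -- matches Dijkstra on the full graph
```

The overlay search visits only boundary nodes, which is the speedup; extending a query to interior start and goal nodes would add a short intra-cluster search at each end.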
City networks in cyberspace and time : using Google hyperlinks to measure global economic and environmental crises
Geographers and social scientists have long been interested in ranking and classifying the cities of the world. The cutting edge of this research is characterized by a recognition of the crucial
importance of information and, specifically, ICTs to cities’ positions in the current Knowledge Economy. This chapter builds on recent “cyberspace” analyses of the global urban system by arguing for, and demonstrating empirically, the value of Web search engine data as a means of understanding cities as situated within, and constituted by, flows of digital information. To this end, we show how the Google search engine can be used to specify a dynamic, informational
classification of North American cities based on both the production and the consumption of Web information about two prominent current issues of global scope: the global financial crisis and global climate change.
First-principles molecular structure search with a genetic algorithm
The identification of low-energy conformers for a given molecule is a
fundamental problem in computational chemistry and cheminformatics. We assess
here a conformer search that employs a genetic algorithm for sampling the
low-energy segment of the conformation space of molecules. The algorithm is
designed to work with first-principles methods, facilitated by the
incorporation of local optimization and blacklisting conformers to prevent
repeated evaluations of very similar solutions. The aim of the search is not
only to find the global minimum, but to predict all conformers within an energy
window above the global minimum. The performance of the search strategy is: (i)
evaluated for a reference data set extracted from a database with amino acid
dipeptide conformers obtained by an extensive combined force field and
first-principles search and (ii) compared to the performance of a systematic
search and a random conformer generator for the example of a drug-like ligand
with 43 atoms, 8 rotatable bonds and 1 cis/trans bond.
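A minimal sketch of the strategy, under heavy simplification (a one-dimensional toy torsion potential instead of a molecular geometry, crude descent instead of a first-principles relaxation): a genetic algorithm whose locally optimized children are blacklisted when they essentially duplicate an already-found conformer, returning every conformer within an energy window above the global minimum. All functions and parameters are invented for the example.

```python
import math, random

TWO_PI = 2 * math.pi

def energy(t):
    # toy 1-D torsion potential standing in for a first-principles energy
    # surface: wells near t = 0, 2pi/3, 4pi/3 with energies ~0.0, 0.15, 0.15
    return (1 - math.cos(3 * t)) + 0.1 * (1 - math.cos(t))

def local_opt(t, step=0.01, iters=800):
    """Crude descent standing in for the local geometry relaxation."""
    for _ in range(iters):
        for cand in ((t + step) % TWO_PI, (t - step) % TWO_PI):
            if energy(cand) < energy(t):
                t = cand
    return t

def too_close(a, b, tol=0.2):
    d = abs(a - b) % TWO_PI
    return min(d, TWO_PI - d) < tol

def ga_search(pop_size=8, gens=10, window=0.2, seed=1):
    rng = random.Random(seed)
    # stratified start: relax evenly spaced torsion angles
    pop = [local_opt(TWO_PI * i / pop_size) for i in range(pop_size)]
    blacklist = []                         # distinct conformers found so far
    for t in pop:
        if not any(too_close(t, m) for m in blacklist):
            blacklist.append(t)
    for _ in range(gens):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = local_opt(((a + b) / 2 + rng.gauss(0, 0.5)) % TWO_PI)
            # blacklisting: skip conformers essentially identical to known ones
            if not any(too_close(child, m) for m in blacklist):
                blacklist.append(child)
            children.append(child)
        pop = parents + children
    e0 = min(energy(m) for m in blacklist)
    return sorted(m for m in blacklist if energy(m) <= e0 + window)

print(len(ga_search()))   # 3: the global minimum plus two wells in the window
```

The blacklist is what prevents repeated evaluation of near-identical solutions; in the real setting each `energy` call is an expensive first-principles computation, so that saving dominates.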