Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies
An algorithm that learns from a set of examples should ideally be able to
exploit the available resources of (a) abundant computing power and (b)
domain-specific knowledge to improve its ability to generalize. Connectionist
theory-refinement systems, which use background knowledge to select a neural
network's topology and initial weights, have proven to be effective at
exploiting domain-specific knowledge; however, most do not exploit available
computing power. This weakness occurs because they lack the ability to refine
the topology of the neural networks they produce, thereby limiting
generalization, especially when given impoverished domain theories. We present
the REGENT algorithm which uses (a) domain-specific knowledge to help create an
initial population of knowledge-based neural networks and (b) genetic operators
of crossover and mutation (specifically designed for knowledge-based networks)
to continually search for better network topologies. Experiments on three
real-world domains indicate that our new algorithm is able to significantly
increase generalization compared to a standard connectionist theory-refinement
system, as well as our previous algorithm for growing knowledge-based networks.
Comment: See http://www.jair.org/ for any accompanying files
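The core loop of genetically searching the space of network topologies can be sketched in simplified form. This is an illustration of the general technique only, not REGENT itself: REGENT's crossover and mutation operators are specialized for knowledge-based networks, whereas this sketch evolves plain hidden-layer sizes under a hypothetical fitness function (standing in for validation-set accuracy).

```python
import random

def mutate(topology):
    """Grow or shrink a randomly chosen hidden layer by one unit."""
    t = list(topology)
    i = random.randrange(len(t))
    t[i] = max(1, t[i] + random.choice([-1, 1]))
    return tuple(t)

def crossover(a, b):
    """Splice two parent topologies at a random point (equal depth assumed)."""
    point = random.randrange(1, min(len(a), len(b)))
    return a[:point] + b[point:]

def genetic_topology_search(fitness, initial_population, generations=20, seed=0):
    """Evolve topologies, keeping the fitter half of the population each generation."""
    random.seed(seed)
    pop = list(initial_population)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]          # elitism: the best always survives
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(len(pop) - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness standing in for validation accuracy: prefer ~10 hidden units in total.
best = genetic_topology_search(lambda t: -abs(sum(t) - 10),
                               [(2, 2), (8, 1), (3, 5), (6, 6)])
```

Because the fittest individual is never discarded, the returned topology is at least as good as the best member of the initial population.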
Uncovering unknown unknowns: towards a Baconian approach to management decision-making
Bayesian decision theory and inference have left a deep and indelible mark on the literature on management decision-making. There is, however, an important issue that the machinery of classical Bayesianism is ill equipped to deal with, that of "unknown unknowns" or, in the cases in which they are actualised, what are sometimes called "Black Swans". This issue is closely related to the problems of constructing an appropriate state space under conditions of deficient foresight about what the future might hold, and our aim is to develop a theory and some of the practicalities of state space elaboration that addresses these problems. Building on ideas originally put forward by Bacon (1620), we show how our approach can be used to build and explore the state space, how it may reduce the extent to which organisations are blindsided by Black Swans, and how it ameliorates various well-known cognitive biases.
Zero-Shot Hashing via Transferring Supervised Knowledge
Hashing has shown its efficiency and effectiveness in facilitating
large-scale multimedia applications. Supervised knowledge (e.g. semantic labels
or pair-wise relationships) associated with data is capable of significantly
improving the quality of hash codes and hash functions. However, confronted
with the rapid growth of newly-emerging concepts and multimedia data on the
Web, existing supervised hashing approaches may easily suffer from scarce
and unreliable supervised information due to the expensive cost of manual
labelling. In this paper, we propose a novel hashing scheme, termed
\emph{zero-shot hashing} (ZSH), which compresses images of "unseen" categories
to binary codes with hash functions learned from limited training data of
"seen" categories. Specifically, we project independent data labels i.e.
0/1-form label vectors) into semantic embedding space, where semantic
relationships among all the labels can be precisely characterized and thus seen
supervised knowledge can be transferred to unseen classes. Moreover, in order
to cope with the semantic shift problem, we rotate the embedded space to more
suitably align the embedded semantics with the low-level visual feature space,
thereby alleviating the influence of semantic gap. In the meantime, to exert
positive effects on learning high-quality hash functions, we further propose to
preserve local structural property and discrete nature in binary codes.
Besides, we develop an efficient alternating algorithm to solve the ZSH model.
Extensive experiments conducted on various real-life datasets show the superior
zero-shot image retrieval performance of ZSH as compared to several
state-of-the-art hashing methods.
Comment: 11 pages
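The label-embedding transfer at the heart of this idea can be sketched minimally as follows. Everything here is an illustrative assumption (the class names, hand-made "semantic" vectors, least-squares visual-to-semantic map, and random sign projection); the actual ZSH model additionally rotates the embedding space and enforces the local-structure and discreteness properties described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic embeddings (e.g. word vectors) for four classes.
semantic = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.0, 0.9, 0.3]),
    "truck": np.array([0.1, 0.8, 0.4]),
}

# Toy visual features for "seen" training images: noisy copies of the embedding.
X = np.vstack([emb + 0.01 * rng.standard_normal(3)
               for emb in semantic.values() for _ in range(20)])
Y = np.vstack([emb for emb in semantic.values() for _ in range(20)])

# Least-squares map W from visual features into the semantic space; hash by
# taking signs of a random projection of the embedded point (8-bit codes).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
P = rng.standard_normal((3, 8))

def hash_code(x):
    return np.sign(x @ W @ P).astype(int)

# A query hashes near images of semantically close classes, because label
# relationships transfer through the shared embedding space.
code_q   = hash_code(semantic["cat"] + 0.01 * rng.standard_normal(3))
code_cat = hash_code(X[0])    # a seen cat image
code_car = hash_code(X[40])   # a seen car image
```

The Hamming distance from the query to the cat image's code should not exceed its distance to the car image's code, which is the retrieval behaviour the paper evaluates at scale.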
Proceedings of the 11th European Agent Systems Summer School Student Session
This volume contains the papers presented at the Student Session of the 11th European Agent Systems Summer School (EASSS), held on 2nd of September 2009 at Educatorio della Providenza, Turin, Italy. The Student Session, organised by students, is designed to encourage student interaction and feedback from the tutors. By providing the students with a conference-like setup, both in the presentation and in the review process, students have the opportunity to prepare their own submission, go through the selection process and present their work to their fellow students as well as to internationally leading experts in the agent field, from both the theoretical and the practical sector. Table of Contents: Andrew Koster, Jordi Sabater Mir and Marco Schorlemmer, Towards an inductive algorithm for learning trust alignment . . . 5; Angel Rolando Medellin, Katie Atkinson and Peter McBurney, A Preliminary Proposal for Model Checking Command Dialogues . . . 12; Declan Mungovan, Enda Howley and Jim Duggan, Norm Convergence in Populations of Dynamically Interacting Agents . . . 19; Akın Günay, Argumentation on Bayesian Networks for Distributed Decision Making . . . 25; Michael Burkhardt, Marco Luetzenberger and Nils Masuch, Towards Toolipse 2: Tool Support for the JIAC V Agent Framework . . . 30; Joseph El Gemayel, The Tenacity of Social Actors . . . 33; Cristian Gratie, The Impact of Routing on Traffic Congestion . . . 36; Andrei-Horia Mogos and Monica Cristina Voinescu, A Rule-Based Psychologist Agent for Improving the Performances of a Sportsman . . . 39; -- Keywords: autonomous agent, agent, artificial intelligence
Discovering Interesting Patterns for Investment Decision Making with GLOWER: A Genetic Learner Overlaid With Entropy Reduction
Prediction in financial domains is notoriously difficult for a number of reasons. First, theories tend to be
weak or non-existent, which makes problem formulation open-ended by forcing us to consider a large
number of independent variables and thereby increasing the dimensionality of the search space. Second, the
weak relationships among variables tend to be nonlinear, and may hold only in limited areas of the search
space. Third, in financial practice, where analysts conduct extensive manual
analysis of historically well-performing indicators, the key is to find the hidden interactions among variables that perform well in
combination. Unfortunately, these are exactly the patterns that the greedy search biases incorporated by
many standard rule algorithms will miss. In this paper, we describe and evaluate several variations of a new
genetic learning algorithm (GLOWER) on a variety of data sets. The design of GLOWER has been motivated
by financial prediction problems, but incorporates successful ideas from tree induction and rule learning.
We examine the performance of several GLOWER variants on two UCI data sets as well as on a standard
financial prediction problem (S&P500 stock returns), using the results to identify and use one of the better
variants for further comparisons. We introduce a new (to KDD) financial prediction problem (predicting
positive and negative earnings surprises), and experiment with GLOWER, contrasting it with tree- and rule-induction
approaches. Our results are encouraging, showing that GLOWER has the ability to uncover
effective patterns for difficult problems that have weak structure and significant nonlinearities.
Information Systems Working Papers Series
Rationality in discovery: a study of logic, cognition, computation and neuropharmacology
Part I Introduction
The specific problem addressed in this thesis is: what is the rational use of theory and experiment in the process of scientific discovery, both in theory and in the practice of drug research for Parkinson's disease? The thesis aims to answer the following specific questions: what is 1) the structure of a theory?; 2) the process of scientific reasoning?; 3) the route between theory and experiment? In the first part I further discuss issues about rationality in science as an introduction to part II, and I present an overview of my case study of neuropharmacology, for which I interviewed researchers from the Groningen Pharmacy Department, as an introduction to part III.
Part II Discovery
In this part I discuss three theoretical models of scientific discovery according to studies in the fields of Logic, Cognition, and Computation. In those fields the structure of a theory is respectively explicated as: a set of sentences; a set of associated memory chunks; and a computer program that can generate the observed data. Rationality in discovery is characterized by: finding axioms that imply observation sentences; heuristic search for a hypothesis, as part of problem solving, by applying memory chunks and production rules that represent skill; and finding the shortest program that generates the data, respectively. I further argue that reasoning in discovery includes logical fallacies, which are necessary to introduce new hypotheses. I also argue that, while human subjects often make errors in hypothesis evaluation tasks from a logical perspective, these evaluations are rational given a probabilistic interpretation.
Part III Neuropharmacology
In this last part I discuss my case study and a model of discovery in the practice of drug research for Parkinson's disease. I discuss the dopamine theory of Parkinson's disease and model its structure as a qualitative differential equation. Then I discuss the use of, and reasons for, particular experiments to both test a drug and explore the function of the brain. I describe different kinds of problems in drug research leading to a discovery. Based on that description I distinguish three kinds of reasoning tasks in discovery, inference to: the best explanation, the best prediction and the best intervention.
I further demonstrate how a part of reasoning in neuropharmacology can be
computationally modeled as qualitative reasoning, and aided by a computer-supported
discovery system.
Remote Sensing Information Sciences Research Group, Santa Barbara Information Sciences Research Group, year 3
Research continues to focus on improving the type, quantity, and quality of information which can be derived from remotely sensed data. The focus is on remote sensing and application for the Earth Observing System (Eos) and Space Station, including associated polar and co-orbiting platforms. The remote sensing research activities are being expanded, integrated, and extended into the areas of global science, georeferenced information systems, machine-assisted information extraction from image data, and artificial intelligence. The accomplishments in these areas are examined.
Identifying Mislabeled Training Data
This paper presents a new approach to identifying and eliminating mislabeled
training instances for supervised learning. The goal of this approach is to
improve classification accuracies produced by learning algorithms by improving
the quality of the training data. Our approach uses a set of learning
algorithms to create classifiers that serve as noise filters for the training
data. We evaluate single algorithm, majority vote and consensus filters on five
datasets that are prone to labeling errors. Our experiments illustrate that
filtering significantly improves classification accuracy for noise levels up to
30 percent. An analytical and empirical evaluation of the precision of our
approach shows that consensus filters are conservative at throwing away good
data at the expense of retaining bad data and that majority filters are better
at detecting bad data at the expense of throwing away good data. This suggests
that for situations in which there is a paucity of data, consensus filters are
preferable, whereas majority vote filters are preferable for situations with an
abundance of data.
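The majority-vote and consensus filters can be sketched in a leave-one-out form. The three filter classifiers (1-NN, 3-NN, and a nearest-centroid rule) and the 1-D dataset here are simplified stand-ins for the full learning algorithms and cross-validation setup evaluated in the paper.

```python
# Toy 1-D data: class 0 clusters near 0.0, class 1 near 1.0; two labels flipped.
points = [0.0, 0.1, 0.2, 0.3, 1.0, 1.1, 1.2, 1.3]
labels = [0,   0,   1,   0,   1,   0,   1,   1]   # indices 2 and 5 are mislabeled

def knn_predict(train, x, k):
    """k-nearest-neighbour vote over (point, label) training pairs."""
    votes = [lbl for _, lbl in sorted(train, key=lambda p: abs(p[0] - x))[:k]]
    return max(set(votes), key=votes.count)

def centroid_predict(train, x):
    """Assign x to the class whose mean point is nearer."""
    means = {c: sum(p for p, l in train if l == c) /
                sum(1 for _, l in train if l == c)
             for c in {l for _, l in train}}
    return min(means, key=lambda c: abs(means[c] - x))

def noise_filter(points, labels, mode="consensus"):
    """Leave-one-out filtering: flag an instance when the filter classifiers
    disagree with its label (all of them for consensus, most for majority)."""
    flagged = []
    for i, (x, y) in enumerate(zip(points, labels)):
        train = [(p, l) for j, (p, l) in enumerate(zip(points, labels)) if j != i]
        preds = [knn_predict(train, x, 1), knn_predict(train, x, 3),
                 centroid_predict(train, x)]
        wrong = sum(p != y for p in preds)
        if (mode == "consensus" and wrong == len(preds)) or \
           (mode == "majority" and wrong > len(preds) / 2):
            flagged.append(i)
    return flagged
```

On this tiny example both filters flag exactly the two flipped labels; with noisier filter classifiers the consensus filter (all must disagree) flags fewer instances than the majority filter, which is precisely the conservatism trade-off the paper measures.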