Invariant Models for Causal Transfer Learning
Methods of transfer learning try to combine knowledge from several related
tasks (or domains) to improve performance on a test task. Inspired by causal
methodology, we relax the usual covariate shift assumption and assume that it
holds true for a subset of predictor variables: the conditional distribution of
the target variable given this subset of predictors is invariant over all
tasks. We show how this assumption can be motivated from ideas in the field of
causality. We focus on the problem of Domain Generalization, in which no
examples from the test task are observed. We prove that in an adversarial
setting, using this subset for prediction is optimal in Domain Generalization;
we further provide examples in which the tasks are sufficiently diverse and
the estimator therefore outperforms pooling the data, even on average. If
examples from the test task are available, we also provide a method to transfer
knowledge from the training tasks and exploit all available features for
prediction. However, we provide no guarantees for this method. We introduce a
practical method which allows for automatic inference of the above subset and
provide corresponding code. We present results on synthetic data sets and a
gene deletion data set.
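The invariant-subset idea lends itself to a small numerical illustration. The sketch below is hypothetical (it is not the paper's estimator, and all variable names and task mechanisms are invented for the example): Y given the causal predictor X1 is stable across tasks, while the mechanism generating X2 shifts, so pooled least squares on all features transfers worse to an unseen task than least squares restricted to the invariant subset.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, shift):
    # X1 is a causal parent of Y: the conditional Y | X1 is invariant across tasks.
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + 0.3 * rng.normal(size=n)
    # X2 is an effect of Y whose mechanism shifts from task to task.
    x2 = y + shift + 0.3 * rng.normal(size=n)
    return np.column_stack([x1, x2]), y

def ols(X, y):
    # Ordinary least squares with an intercept column.
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def mse(beta, X, y):
    Xb = np.column_stack([np.ones(len(X)), X])
    return float(np.mean((Xb @ beta - y) ** 2))

# Pool three training tasks with mildly different shifts in the X2 mechanism.
train = [make_task(500, s) for s in (0.0, 0.5, -0.5)]
X_tr = np.vstack([X for X, _ in train])
y_tr = np.concatenate([y for _, y in train])

beta_all = ols(X_tr, y_tr)          # uses both features
beta_inv = ols(X_tr[:, :1], y_tr)   # uses only the invariant predictor X1

# Unseen test task with a much larger shift: the pooled model's reliance
# on X2 now hurts, while the invariant model is unaffected.
X_te, y_te = make_task(500, 5.0)
print(mse(beta_all, X_te, y_te), mse(beta_inv, X_te[:, :1], y_te))
```

In this toy setting, the invariant model's test error stays near the noise floor regardless of how far the test task's X2 mechanism drifts, whereas the pooled model's error grows with the shift.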
Invariance of visual operations at the level of receptive fields
Receptive field profiles registered by cell recordings have shown that
mammalian vision has developed receptive fields tuned to different sizes and
orientations in the image domain as well as to different image velocities in
space-time. This article presents a theoretical model by which families of
idealized receptive field profiles can be derived mathematically from a small
set of basic assumptions that correspond to structural properties of the
environment. The article also presents a theory for how basic invariance
properties to variations in scale, viewing direction and relative motion can be
obtained from the output of such receptive fields, using complementary
selection mechanisms that operate over the output of families of receptive
fields tuned to different parameters. Thereby, the theory shows how basic
invariance properties of a visual system can be obtained already at the level
of receptive fields, and we can explain the different shapes of receptive field
profiles found in biological vision from a requirement that the visual system
should be invariant to the natural types of image transformations that occur in
its environment.

Comment: 40 pages, 17 figures
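The idea of obtaining invariance from families of receptive fields plus a selection mechanism can be sketched numerically. The one-dimensional example below is only illustrative (the Gaussian-derivative profile is a standard idealized receptive-field model, but the specific selection rule, normalization exponent, and parameters here are assumptions for the sketch): filtering a blob with second-derivative-of-Gaussian fields over a range of scales and taking the argmax of the scale-normalized response selects a scale that follows the size of the image structure.

```python
import numpy as np

def gaussian_second_derivative(sigma):
    # Sampled second-derivative-of-Gaussian kernel: an idealized
    # receptive-field profile tuned to scale sigma.
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return g * (x**2 - sigma**2) / sigma**4

def selected_scale(signal, sigmas):
    # Scale-normalized response sigma^2 * |L_xx| at the signal centre;
    # the argmax over scales is a simple selection mechanism operating
    # over the output of a family of receptive fields.
    centre = len(signal) // 2
    responses = [
        s**2 * abs(np.convolve(signal, gaussian_second_derivative(s),
                               mode="same")[centre])
        for s in sigmas
    ]
    return sigmas[int(np.argmax(responses))]

x = np.arange(-200, 201, dtype=float)
sigmas = np.geomspace(1.0, 30.0, 60)
blob = lambda w: np.exp(-x**2 / (2 * w**2))  # image structure of size w

s1 = selected_scale(blob(4.0), sigmas)
s2 = selected_scale(blob(8.0), sigmas)
print(s1, s2)  # the selected scale roughly doubles when the blob doubles
```

Doubling the width of the blob roughly doubles the selected scale, which is the sense in which a scale-invariant measurement can be read out already at the level of the receptive-field family.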
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.

Comment: 20 pages, 5 figures
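The sample-based family described above can be illustrated with a minimal importance-weighting sketch (hypothetical, assuming both input densities are known Gaussians; in practice the density ratio must itself be estimated): under covariate shift the labelling function is shared, so reweighting source observations by p_target(x)/p_source(x) steers a misspecified model toward the region the target domain cares about.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mu, s):
    return np.exp(-(x - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# Covariate shift: p(y|x) is shared, only the input density p(x) moves.
f = lambda x: np.sin(x)                    # shared labelling function
xs = rng.normal(0.0, 1.0, 2000)            # source inputs (labelled)
ys = f(xs) + 0.1 * rng.normal(size=2000)
xt = rng.normal(1.0, 1.0, 2000)            # target inputs (labels unseen)

# Importance weights w(x) = p_t(x) / p_s(x); known in this toy setting.
w = gauss_pdf(xs, 1.0, 1.0) / gauss_pdf(xs, 0.0, 1.0)

def wls(x, y, w):
    # Weighted least squares for a (deliberately misspecified) linear model.
    X = np.column_stack([np.ones_like(x), x])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

b_plain = wls(xs, ys, np.ones_like(xs))    # ignores the shift
b_iw = wls(xs, ys, w)                      # importance-weighted fit

def target_mse(beta):
    # Approximation error against the true labelling function on target inputs.
    return float(np.mean((beta[0] + beta[1] * xt - f(xt))**2))

print(target_mse(b_plain), target_mse(b_iw))
```

Because the linear model cannot fit sin(x) everywhere, the unweighted fit optimizes for the source region, while the weighted fit trades source accuracy for a better approximation where the target density lies.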
Didactic Networks: A proposal for e-learning content generation
The Didactic Networks proposed in this paper are based on previous publications in the field of the RSR (Rhetorical-Semantic Relations). The RSR is a set of primitive relations used for building a specific kind of semantic network for artificial intelligence applications on the web: the RSN (Rhetorical-Semantic Networks). We bring into focus the RSR application in the field of e-learning, by defining Didactic Networks as a new set of semantic patterns oriented to the development of e-learning applications. The different lines we offer in our research fall mainly into three levels:

• The most basic one is in the field of computational linguistics and related to Logical Operations on RSR (RSR inverses and plurals, RSR combinations, etc.), once they have been created. The application of Walter Bosma's results regarding rhetorical distance, applied and treated as semantic weighted networks, is one of the important issues here.

• In parallel, we have been working on the creation of a knowledge representation and storage model and a data architecture capable of supporting the definition of knowledge networks based on RSR.

• The third strategic line is at the meso-level: the formulation of a molecular structure of knowledge based on the most frequently used patterns. The main contribution at this level is the set of Fundamental Cognitive Networks (FCN) as an application of Novak's mental maps proposal.

This paper is part of this third intermediate level, and the Fundamental Didactic Networks (FDN) are the result of the application of rhetorical theory procedures to the instructional theory. We have formulated a general set of RSR capable of building discourse, making it possible to express any concept, procedure or principle in terms of knowledge nodes and RSRs. The instructional knowledge can then be elaborated in the same way.
This network structure, expressing the instructional knowledge in terms of RSR, makes it possible to develop web-learning lessons semi-automatically, as well as any other type of utility oriented towards the exploitation of the semantic structure, such as automatic question answering systems.
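The underlying data structure, knowledge nodes joined by typed, weighted rhetorical-semantic relations, can be sketched in a few lines. This is a hypothetical illustration only (the class name, relation labels, and the use of shortest weighted paths as "rhetorical distance" are invented for the example, not the paper's model):

```python
import heapq
from collections import defaultdict

class RSNetwork:
    """Toy semantic weighted network: knowledge nodes joined by typed
    relations, with weights playing the role of rhetorical distance."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(neighbour, relation, weight)]

    def relate(self, a, b, relation, weight=1.0):
        # Stored in both directions for simplicity; a real RSN would
        # store the RSR inverse relation (e.g. cause vs consequence).
        self.edges[a].append((b, relation, weight))
        self.edges[b].append((a, relation, weight))

    def rhetorical_distance(self, start, goal):
        # Shortest weighted path between two knowledge nodes (Dijkstra).
        dist = {start: 0.0}
        queue = [(0.0, start)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == goal:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, _, w in self.edges[node]:
                nd = d + w
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(queue, (nd, nxt))
        return float("inf")

net = RSNetwork()
net.relate("photosynthesis", "chlorophyll", "elaboration", 1.0)
net.relate("photosynthesis", "plant growth", "cause", 2.0)
net.relate("chlorophyll", "leaf colour", "evidence", 1.5)

print(net.rhetorical_distance("plant growth", "leaf colour"))  # → 4.5
```

A lesson generator could then traverse such a network, emitting nodes in order of rhetorical distance from a chosen starting concept.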