Incremental learning of independent, overlapping, and graded concept descriptions with an instance-based process framework
Supervised learning algorithms make several simplifying assumptions concerning the characteristics of the concept descriptions to be learned. For example, concepts are often assumed (1) to be defined with respect to the same set of relevant attributes, (2) to be disjoint in instance space, and (3) to have uniform instance distributions. While these assumptions constrain the learning task, they unfortunately limit an algorithm's applicability. We believe that supervised learning algorithms should learn attribute relevancies independently for each concept, allow instances to be members of any subset of concepts, and represent graded concept descriptions. This paper introduces a process framework for instance-based learning algorithms that exploit only specific instance and performance feedback information to guide their concept learning processes. We also introduce Bloom, a specific instantiation of this framework. Bloom is a supervised, incremental, instance-based learning algorithm that learns relative attribute relevancies independently for each concept, allows instances to be members of any subset of concepts, and represents graded concept memberships. We describe empirical evidence to support our claims that Bloom can learn independent, overlapping, and graded concept descriptions.
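The properties claimed above can be illustrated with a toy sketch. This is an assumption-laden illustration, not Bloom itself: it keeps a separate attribute-weight vector per concept (independent relevancies) and returns graded memberships, so an instance can belong to any subset of concepts.

```python
import numpy as np

# Toy sketch (assumptions, not Bloom itself): an instance-based learner
# that stores exemplars per concept, keeps a separate attribute-weight
# vector per concept, and returns graded (not binary) memberships.

def membership(x, exemplars, weights):
    """Graded membership of x in one concept: similarity to the nearest
    stored exemplar under that concept's own attribute weights."""
    dists = [np.sqrt(np.sum(weights * (x - e) ** 2)) for e in exemplars]
    return np.exp(-min(dists))   # in (0, 1]; 1.0 means exact match

# Two concepts over the same instance space, each attending to a
# different attribute (independently learned relevancies).
concepts = {
    "red":   {"exemplars": [np.array([1.0, 0.0])], "weights": np.array([5.0, 0.0])},
    "round": {"exemplars": [np.array([0.0, 1.0])], "weights": np.array([0.0, 5.0])},
}

x = np.array([1.0, 1.0])   # an instance that is both red and round
grades = {name: membership(x, c["exemplars"], c["weights"])
          for name, c in concepts.items()}
# Both memberships are high: the concepts overlap on this instance,
# and membership is a graded degree rather than a yes/no decision.
```

Because each concept scores the instance with its own weight vector, neither concept's irrelevant attribute penalizes the match, which is the point of learning relevancies independently per concept.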
Concept Learning with Energy-Based Models
Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and capacity for language, require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given a few demonstration events, our method uses an inference-time optimization procedure to generate events involving similar concepts or to identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, and temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting, and the resulting learned concepts can be reused across environments. Example videos of our results are available at sites.google.com/site/energyconceptmodel.
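The inference-time optimization idea in the abstract above can be sketched minimally. This is a toy illustration under strong assumptions, not the paper's model: the "concept" is a fixed prototype vector, the energy is a weighted squared distance, and the attention mask selects which entity dimensions matter.

```python
import numpy as np

# Toy sketch of inference-time optimization against a concept energy.
# Assumptions (not from the paper): concept = prototype vector w,
# energy of an event x = attention-weighted squared distance to w.

def energy(x, w, a):
    """Energy of event x under concept prototype w with attention mask a."""
    return np.sum(a * (x - w) ** 2)

def generate_event(w, a, steps=200, lr=0.1, seed=0):
    """Generate an event by gradient descent on the energy
    (inference-time optimization), starting from noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=w.shape)
    for _ in range(steps):
        grad = 2 * a * (x - w)   # d(energy)/dx
        x -= lr * grad
    return x

# A concept whose attended entity dimension is the first coordinate.
w = np.array([1.0, 1.0])
a = np.array([1.0, 0.0])
x = generate_event(w, a)
# The attended coordinate converges to the prototype; the unattended
# coordinate stays near its random initialization.
```

The same energy can be reused for the identification task by optimizing the attention mask instead of the event, which mirrors the generate/identify duality the abstract describes.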
Learning relationships from theory to design
This paper attempts to bridge the psychological and anthropological views of situated learning by focusing on the concept of a learning relationship, and by exploiting this concept in our framework for the design of learning technology. We employ Wenger's (1998) concept of communities of practice to give emphasis to social identification as a central aspect of learning, which should crucially influence our thinking about the design of learning environments. We describe learning relationships in terms of form (one‐to‐one, one‐to‐many, etc.), nature (explorative, formative, and comparative), distance (first‐ and second‐order), and context, and we describe a first attempt at an empirical approach to their identification and measurement.
Learning with a Drifting Target Concept
We study the problem of learning in the presence of a drifting target concept. Specifically, we provide bounds on the error rate at a given time, given a learner with access to a history of independent samples labeled according to a target concept that can change on each round. One of our main contributions is a refinement of the best previous results for polynomial-time algorithms for the space of linear separators under a uniform distribution. We also provide general results for an algorithm capable of adapting to a variable rate of drift of the target concept. Some of the results also describe an active learning variant of this setting, and provide bounds on the number of queries for the labels of points in the sequence sufficient to obtain the stated bounds on the error rates.
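A standard way to cope with the drift described above is to fit only to recent samples, trading variance against lag behind the drifting target. The sketch below is an assumption (not the paper's algorithm): a threshold concept on [0, 1] drifts slowly, and the learner estimates it from a sliding window of labeled samples.

```python
import numpy as np

# Toy sketch (assumption, not the paper's algorithm): track a drifting
# threshold concept by fitting only a sliding window of recent labeled
# samples, so stale samples from the pre-drift concept are discarded.

def window_threshold(xs, ys):
    """Estimate a threshold from the window: midpoint between the
    largest negative example and the smallest positive example."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    lo = max(neg) if neg else 0.0
    hi = min(pos) if pos else 1.0
    return (lo + hi) / 2

rng = np.random.default_rng(0)
theta = 0.3                      # true target threshold, drifting upward
window, labels = [], []
errors = 0
for t in range(500):
    x = rng.uniform()
    y = int(x > theta)           # label from the current target concept
    if window:
        pred = int(x > window_threshold(window, labels))
        errors += int(pred != y)
    window.append(x)
    labels.append(y)
    if len(window) > 50:         # window size trades variance vs. drift lag
        window.pop(0)
        labels.pop(0)
    theta = min(0.7, theta + 0.001)   # slow, bounded drift each round
error_rate = errors / 499
```

Errors concentrate in the narrow band the threshold has crossed since the oldest window sample, which is the lag-versus-variance trade-off that adaptive-window algorithms tune automatically.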
Characterizing the Sample Complexity of Private Learners
In 2008, Kasiviswanathan et al. defined private learning as a combination of PAC learning and differential privacy. Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may sometimes be significantly higher than that of non-private learning.
We give a combinatorial characterization of the sample size sufficient and necessary to privately learn a class of concepts. This characterization is analogous to the well-known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class.
We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C)=O(m), and that there exists a private learning algorithm with sample complexity m=O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well-studied problem of private data release.
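The generic construction for finite concept classes mentioned above can be sketched with the exponential mechanism: privately select a hypothesis weighted by its empirical error. The hypothesis class and parameters below are toy assumptions for illustration, not the paper's setting.

```python
import numpy as np

# Sketch in the spirit of the generic private learner for a finite
# concept class: the exponential mechanism selects a hypothesis with
# probability that decays exponentially in its empirical error.
# Toy assumptions: hypotheses are threshold functions on [0, 1];
# epsilon is the differential-privacy parameter.

def private_learn(samples, labels, hypotheses, epsilon, rng):
    """Exponential mechanism: score = -(number of misclassifications).
    Changing one sample shifts each score by at most 1 (sensitivity 1)."""
    scores = np.array([
        -sum(int(h(x) != y) for x, y in zip(samples, labels))
        for h in hypotheses
    ])
    # P(select h) proportional to exp(epsilon * score / 2)
    logits = epsilon * scores / 2
    probs = np.exp(logits - logits.max())   # stabilized softmax
    probs /= probs.sum()
    return hypotheses[rng.choice(len(hypotheses), p=probs)]

rng = np.random.default_rng(1)
# A finite class of 10 thresholds: 0.0, 0.1, ..., 0.9
hypotheses = [lambda x, t=t / 10: int(x > t) for t in range(10)]
xs = rng.uniform(size=200)
ys = [int(x > 0.5) for x in xs]      # true concept: threshold at 0.5
h = private_learn(xs, ys, hypotheses, epsilon=2.0, rng=rng)
```

The log|C| sample complexity of this construction is what the RepDim characterization improves on: a small probabilistic representation lets the mechanism choose among far fewer candidate hypotheses.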
