On the Learnability of Monotone Functions
A longstanding lacuna in the field of computational learning theory is the learnability of succinctly representable monotone Boolean functions, i.e., functions that preserve the given order of the input. This thesis makes significant progress towards understanding both the possibilities and the limitations of learning various classes of monotone functions by carefully considering the complexity measures used to evaluate them. We show that Boolean functions computed by polynomial-size monotone circuits are hard to learn assuming the existence of one-way functions. Having shown the hardness of learning general polynomial-size monotone circuits, we show that the class of Boolean functions computed by polynomial-size depth-3 monotone circuits is hard to learn using statistical queries. As a counterpoint, we give a statistical query learning algorithm that can learn random polynomial-size depth-2 monotone circuits (i.e., monotone DNF formulas). As a preliminary step towards a fully polynomial-time, proper learning algorithm for polynomial-size monotone decision trees, we also establish the relationship between the average depth of a monotone decision tree, its average sensitivity, and its variance. Finally, we return to monotone DNF formulas, and we show that they are teachable (a different model of learning) in the average case. We also show that non-monotone DNF formulas, juntas, and sparse GF(2) formulas are teachable in the average case.
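As a concrete illustration of the objects studied in this abstract, a monotone DNF formula can be represented as a set of terms with no negated literals; flipping any input bit from 0 to 1 can only satisfy more terms, never fewer. The following minimal Python sketch (the formula and all names are illustrative, not from the thesis) evaluates such a formula and brute-force checks monotonicity:

```python
from itertools import product

# Hypothetical monotone DNF over 4 variables: each term is a set of
# variable indices (monotone DNF contains no negated literals).
FORMULA = [{0, 1}, {2, 3}, {1, 3}]  # (x0 AND x1) OR (x2 AND x3) OR (x1 AND x3)

def eval_dnf(terms, x):
    """Evaluate a monotone DNF formula on a Boolean input tuple x."""
    return any(all(x[i] for i in term) for term in terms)

def is_monotone(f, n):
    """Check that flipping any bit 0 -> 1 never flips the output 1 -> 0."""
    for x in product([0, 1], repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = list(x)
                y[i] = 1
                if f(x) and not f(tuple(y)):
                    return False
    return True

print(is_monotone(lambda x: eval_dnf(FORMULA, x), 4))  # True
```

The same checker rejects non-monotone functions such as parity, which is one way to see why parity-like structure (e.g. GF(2) formulas) falls outside the monotone classes above.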
Recent Developments in Algorithmic Teaching
Abstract. The present paper surveys recent developments in algorithmic teaching. First, the traditional teaching dimension model is recalled. Starting from the observation that the teaching dimension model sometimes leads to counterintuitive results, recently developed approaches are presented. Here, the main emphasis is put on the following aspects derived from human teaching/learning behavior: the order in which examples are presented should matter; teaching should become harder when the memory size of the learners decreases; teaching should become easier if the learners provide feedback; and it should be possible to teach infinite concepts and/or finite and infinite concept classes. Recent developments in algorithmic teaching that achieve (some of) these aspects are presented and compared.
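The traditional teaching dimension recalled above is, for each target concept, the size of the smallest labeled sample that distinguishes it from every other concept in the class. A brute-force sketch on a toy class (the domain and concept class are illustrative, not from the survey) shows one of the counterintuitive effects the survey refers to: the intuitively simplest concept, the empty one, can be the hardest to teach:

```python
from itertools import combinations

# Toy domain of 3 instances; concepts are subsets of the domain.
DOMAIN = [0, 1, 2]
# Classic example class: the empty concept plus all singletons.
CONCEPTS = [frozenset(), frozenset({0}), frozenset({1}), frozenset({2})]

def consistent(concept, sample):
    """A concept is consistent with a labeled sample if it agrees on every example."""
    return all((x in concept) == label for x, label in sample)

def teaching_dim(target, concepts, domain):
    """Size of the smallest labeled sample ruling out every other concept."""
    for k in range(len(domain) + 1):
        for xs in combinations(domain, k):
            sample = [(x, x in target) for x in xs]
            if all(not consistent(c, sample) for c in concepts if c != target):
                return k
    return len(domain)

td = max(teaching_dim(c, CONCEPTS, DOMAIN) for c in CONCEPTS)
print(td)  # 3: the empty concept needs a negative example at every point
```

Each singleton is taught with a single positive example, yet the empty concept requires a negative label on the entire domain, which is exactly the kind of mismatch with human intuition that motivated the newer models.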
Off-line simulation inspires insight: a neurodynamics approach to efficient robot task learning
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. The work was funded by FCT - Fundação para a Ciência e Tecnologia, through the PhD Grants SFRH/BD/48529/2008 and SFRH/BD/41179/2007 and Project NETT: Neural Engineering Transformative Technologies, EU-FP7 ITN (nr. 289146) and the FCT-Research Center CMAT (PEst-OE/MAT/UI0013/2014).
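The core mechanism named in this abstract, a dynamic neural field, can be caricatured in a few lines: a one-dimensional Amari-style field with local excitation and broader lateral inhibition forms an activity peak under a transient input and then sustains that peak after the input is removed, which is the memory property the model builds on. All parameters below (field size, time step, kernel shape) are illustrative choices, not values from the paper:

```python
import math

N, DT, TAU, H = 60, 0.1, 1.0, -0.5   # field size, time step, time constant, resting level

def kernel(d):
    """Lateral interaction: narrow excitation minus broader inhibition."""
    return 1.5 * math.exp(-d * d / 8.0) - 0.5 * math.exp(-d * d / 50.0)

def step(u, stim):
    """One Euler step of the Amari field equation with a threshold output."""
    f = [1.0 if ui > 0 else 0.0 for ui in u]
    return [ui + (DT / TAU) * (-ui + H + stim[i] +
            sum(kernel(i - j) * f[j] for j in range(N)))
            for i, ui in enumerate(u)]

u = [H] * N
stim = [2.0 if abs(i - 30) < 3 else 0.0 for i in range(N)]  # transient input near site 30
for _ in range(100):
    u = step(u, stim)
for _ in range(100):            # input removed: the peak persists on its own
    u = step(u, [0.0] * N)
print(u[30] > 0, u[5] > 0)      # peak self-sustains near 30; far field stays subthreshold
```

The self-sustained peak is what lets activation-based learning hold a subtask in memory between demonstrations; the paper's slower weight-based learning during internal simulation is not modeled in this sketch.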
Average-case Complexity of Teaching Convex Polytopes via Halfspace Queries
We examine the task of locating a target region among those induced by intersections of n halfspaces in R^d. This generic task connects to fundamental machine learning problems, such as training a perceptron and learning a ϕ-separable dichotomy. We investigate the average teaching complexity of the task, i.e., the minimal number of samples (halfspace queries) required by a teacher to help a version-space learner locate a randomly selected target. As our main result, we show that the average-case teaching complexity is Θ(d), which is in sharp contrast to the worst-case teaching complexity of Θ(n). If, instead, we consider the average-case learning complexity, the bounds depend on n: Θ(n) for i.i.d. queries and Θ(d log n) for queries actively chosen by the learner. Our proof techniques are based on novel insights from computational geometry, which allow us to count the number of convex polytopes and faces in a Euclidean space depending on the arrangement of halfspaces. Our insights allow us to establish a tight bound on the average-case complexity for ϕ-separable dichotomies, which generalizes the known O(d) bound on the average number of "extreme patterns" in the classical computational geometry literature (Cover, 1965).
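Two classical counting facts sit behind the Θ(d)-versus-Θ(n) gap discussed above: n hyperplanes in general position in R^d cut space into sum_{i<=d} C(n, i) regions, and Cover (1965) showed that the number of linearly separable labelings of n points in general position in R^d is C(n, d) = 2 * sum_{i<d} C(n-1, i). A short sketch computing both (the function names and sample values are ours, for illustration):

```python
from math import comb

def regions(n, d):
    """Regions induced by n hyperplanes in general position in R^d."""
    return sum(comb(n, i) for i in range(d + 1))

def cover_dichotomies(n, d):
    """Cover (1965): linearly separable dichotomies of n points in
    general position in R^d: C(n, d) = 2 * sum_{i<d} C(n-1, i)."""
    return 2 * sum(comb(n - 1, i) for i in range(d))

print(regions(10, 3))            # 1 + 10 + 45 + 120 = 176
print(cover_dichotomies(10, 3))  # 2 * (1 + 9 + 36) = 92
```

For fixed d both quantities grow polynomially in n (like n^d), so while there are many regions, each is pinned down by few constraints on average, which is the intuition behind the Θ(d) average-case teaching bound.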