    Acquiring Word-Meaning Mappings for Natural Language Interfaces

    This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The lexicons learned by WOLFIE are compared to those acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning, whereas unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and are therefore potentially very useful in natural language applications. However, most active-learning results to date have considered only standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicon acquisition and show that it can significantly reduce the number of annotated examples required to achieve a given level of performance.
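
    The active-learning component described above repeatedly selects only the most informative unannotated sentences for human labeling. The following is a minimal sketch of such a pool-based loop, assuming hypothetical train_fn, score_fn, and annotate_fn callbacks rather than WOLFIE's actual procedures:

```python
# Illustrative sketch only: pool-based active learning for lexicon acquisition.
# train_fn, score_fn, and annotate_fn are hypothetical placeholders, not the
# procedures used by WOLFIE itself.

def select_batch(unlabeled, lexicon, score_fn, batch_size=10):
    """Pick the sentences the current lexicon handles least confidently."""
    ranked = sorted(unlabeled, key=lambda sentence: score_fn(sentence, lexicon))
    return ranked[:batch_size]                      # lowest-confidence sentences first

def active_learning_loop(labeled, unlabeled, train_fn, score_fn, annotate_fn, rounds=5):
    """Alternate between training a lexicon and annotating the most informative sentences."""
    lexicon = train_fn(labeled)                     # initial lexicon from the seed corpus
    for _ in range(rounds):
        batch = select_batch(unlabeled, lexicon, score_fn)
        labeled = labeled + [(s, annotate_fn(s)) for s in batch]   # human provides meanings
        unlabeled = [s for s in unlabeled if s not in batch]
        lexicon = train_fn(labeled)                 # retrain on the enlarged corpus
    return lexicon
```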

    A conceptual architecture for interactive educational multimedia

    Learning is more than knowledge acquisition; it often involves the active participation of the learner in a variety of knowledge- and skills-based learning and training activities. Interactive multimedia technology can support the variety of interaction channels and languages required to facilitate interactive learning and teaching. A conceptual architecture for interactive educational multimedia can support the development of such multimedia systems. Such an architecture needs to embed multimedia technology into a coherent educational context. A framework based on an integrated interaction model is needed to capture learning and training activities in an online setting from an educational perspective, to describe them in the human-computer context, and to integrate them with the mechanisms and principles of multimedia interaction.

    The future of technology enhanced active learning – a roadmap

    The notion of active learning refers to the active involvement of the learner in the learning process, capturing the ideas of learning-by-doing and the fact that active participation and knowledge construction lead to deeper and more sustained learning. Interactivity, in particular learner-content interaction, is a central aspect of technology-enhanced active learning. In this roadmap, the pedagogical background is discussed, the essential dimensions of technology-enhanced active learning systems are outlined, and the factors expected to influence these systems now and in the future are identified. A central aim is to address this promising field from a best-practices perspective, clarifying central issues and formulating an agenda for future developments in the form of a roadmap.

    A Meta-Learning Approach to One-Step Active Learning

    We consider the problem of learning when obtaining the training labels is costly, which is usually tackled in the literature using active-learning techniques. These approaches provide strategies for choosing the examples to label before or during training. These strategies are usually based on heuristics or theoretical measures and are applied directly during training rather than being learned themselves. We design a model that aims at learning active-learning strategies in a meta-learning setting. More specifically, we consider a pool-based setting, where the system observes all the examples in the dataset of a problem and has to choose the subset of examples to label in a single shot. Experiments show encouraging results.
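
    As a concrete illustration of the one-step, pool-based setting described above, the sketch below scores every example in the pool once and labels the top-scoring subset in a single shot, with no iterative retraining. The scorer here is a simple placeholder heuristic, not the paper's meta-learned model:

```python
# Minimal sketch of one-step, pool-based example selection. The scorer is a
# placeholder heuristic standing in for a meta-learned selection strategy.
import numpy as np

def one_step_selection(pool_features, scorer, budget):
    """Return the indices of the `budget` examples chosen for labeling."""
    scores = scorer(pool_features)             # one pass over the whole pool
    return np.argsort(scores)[::-1][:budget]   # highest-scoring examples first

# Usage with a trivial feature-norm heuristic as the scorer:
pool = np.random.randn(1000, 16)
chosen = one_step_selection(pool, scorer=lambda X: np.linalg.norm(X, axis=1), budget=50)
```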

    Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR) 2007

    These are the proceedings of the Workshop on Semantic Content Acquisition and Representation (SCAR), held in conjunction with NODALIDA 2007 on May 24, 2007 in Tartu, Estonia.

    Classification systems offer a microcosm of issues in conceptual processing: A commentary on Kemmerer (2016)

    This is a commentary on Kemmerer (2016), Categories of Object Concepts Across Languages and Brains: The Relevance of Nominal Classification Systems to Cognitive Neuroscience, DOI: 10.1080/23273798.2016.1198819

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
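
    As one concrete illustration of the multimodal categorization mentioned above, the following sketch forms object categories without supervision by clustering concatenated visual, haptic, and auditory feature vectors. It uses k-means purely as a stand-in for the Bayesian generative models typically used in this line of work, and all feature extractors and data are hypothetical:

```python
# Illustrative sketch only: unsupervised category formation from multimodal
# features, with k-means standing in for the generative models used in SER work.
import numpy as np
from sklearn.cluster import KMeans

def categorize(visual, haptic, auditory, n_categories=10):
    """Cluster concatenated per-object multimodal feature vectors into categories."""
    features = np.hstack([visual, haptic, auditory])   # one row per observed object
    return KMeans(n_clusters=n_categories, n_init=10).fit_predict(features)

# Usage on random stand-in features for 200 observed objects:
rng = np.random.default_rng(0)
categories = categorize(rng.normal(size=(200, 64)),   # visual features
                        rng.normal(size=(200, 16)),   # haptic features
                        rng.normal(size=(200, 32)))   # auditory features
```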

    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to the real environment. In the following, we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, which enables the system to react to specific scene contexts.
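
    A minimal sketch of the episode-based idea described above (not the authors' system): a user-confirmed label in the AR view binds a perceived object to a symbol, episodes are stored as sets of symbols, and simple co-occurrence relations are later mined from the stored episodes. All names and data are illustrative:

```python
# Minimal sketch: symbolic labels bootstrapped from user-system interactions,
# with relational information extracted afterwards from stored episodes.
from collections import Counter
from itertools import combinations

class EpisodeMemory:
    def __init__(self):
        self.episodes = []                      # each episode: a set of symbolic labels

    def record(self, observed_object_ids, user_labels):
        """Bind perceived objects to user-confirmed symbols and store the scene."""
        symbols = {user_labels.get(obj_id, "unknown") for obj_id in observed_object_ids}
        self.episodes.append(symbols)

    def relations(self, min_count=2):
        """Extract symbol pairs that repeatedly co-occur across stored episodes."""
        pairs = Counter()
        for symbols in self.episodes:
            pairs.update(combinations(sorted(symbols), 2))
        return {pair: n for pair, n in pairs.items() if n >= min_count}

# Usage: two episodes in which "cup" and "table" co-occur.
memory = EpisodeMemory()
memory.record(["obj1", "obj2"], {"obj1": "cup", "obj2": "table"})
memory.record(["obj3", "obj4"], {"obj3": "cup", "obj4": "table"})
print(memory.relations())   # {('cup', 'table'): 2}
```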