16,273 research outputs found

    Learning Social Affordance Grammar from Videos: Transferring Human Interactions to Human-Robot Interactions

    Full text link
    In this paper, we present a general framework for learning a social affordance grammar as a spatiotemporal AND-OR graph (ST-AOG) from RGB-D videos of human interactions, and transfer the grammar to humanoids to enable real-time motion inference for human-robot interaction (HRI). Based on Gibbs sampling, our weakly supervised grammar learning can automatically construct a hierarchical representation of an interaction with long-term joint sub-tasks of both agents and short-term atomic actions of individual agents. Based on a new RGB-D video dataset with rich instances of human interactions, our experiments with Baxter simulation, human evaluation, and real Baxter tests demonstrate that the model learned from limited training data successfully generates human-like behaviors in unseen scenarios and outperforms both baselines. Comment: The 2017 IEEE International Conference on Robotics and Automation (ICRA)
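    The hierarchy the abstract describes can be pictured with a toy AND-OR graph: AND nodes expand all of their children (joint sub-tasks performed together), while OR nodes choose one alternative. The sketch below is purely illustrative; the node names, the dictionary encoding, and the uniform random choice at OR nodes are assumptions, not the paper's learned grammar or its Gibbs-sampling procedure.

```python
import random

# Toy AND-OR graph: AND nodes expand all children (joint sub-tasks),
# OR nodes pick one child (alternative realisations). Leaves are atomic actions.
GRAPH = {
    "shake-hands": ("AND", ["approach", "hand-motion"]),
    "approach": ("OR", ["walk-to", "turn-to"]),
    "hand-motion": ("OR", ["extend-hand", "grasp-hand"]),
}

def sample_parse(node):
    """Recursively sample one concrete action sequence from the AND-OR graph."""
    if node not in GRAPH:                 # leaf = atomic action
        return [node]
    kind, children = GRAPH[node]
    if kind == "AND":                     # all sub-tasks, in order
        return [a for c in children for a in sample_parse(c)]
    return sample_parse(random.choice(children))  # OR: pick one alternative

print(sample_parse("shake-hands"))
```

    Repeated sampling from such a graph yields the varied-but-structured behaviors the abstract refers to; the learned grammar additionally weights the OR branches and attaches spatiotemporal attributes to the nodes.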

    Robust Grammatical Analysis for Spoken Dialogue Systems

    Full text link
    We argue that grammatical analysis is a viable alternative to concept spotting for processing spoken input in a practical spoken dialogue system. We discuss the structure of the grammar, and a model for robust parsing which combines linguistic and statistical sources of information. We discuss test results suggesting that grammatical processing allows fast and accurate processing of spoken input. Comment: Accepted for JNL

    Macro actions for structures

    Get PDF
    It is not surprising that structures underlie many of the problems that we find interesting in planning. However, the planners that we develop are not always capable of acting on them as they increase in size. For example, the errors caused through relaxations in a heuristic can grow quickly when acting on a structure. Macro actions can help to compensate for heuristic error; however, researchers have investigated finite-length macro actions, limiting the benefit when the underlying problem is an arbitrarily sized structure. In this work we design a specific set of arbitrary-length macros, providing a vocabulary for acting on structures.
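    The idea of an arbitrary-length macro can be made concrete with a toy construction domain: a single primitive action grows a structure by one unit, and the macro applies it once per element, so a planner can treat building the whole structure as one step regardless of its size. The domain (stacking named blocks) and all function names below are illustrative assumptions, not the macro set designed in the paper.

```python
def stack(state, block):
    """Primitive action: put one block on top of the current tower."""
    return state + [block]

def build_tower_macro(state, blocks):
    """Arbitrary-length macro: one primitive application per block, so the
    planner sees growing the whole structure as a single macro step."""
    for b in blocks:
        state = stack(state, b)
    return state

print(build_tower_macro([], ["a", "b", "c"]))  # ['a', 'b', 'c']
```

    Because the macro is parameterised by the list of blocks, its length adapts to the problem instance, which is exactly what fixed, finite-length macros cannot do.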

    Lexicalized semi-incremental dependency parsing

    Get PDF
    Even leaving aside concerns of cognitive plausibility, incremental parsing is appealing for applications such as speech recognition and machine translation because it could allow for incorporating syntactic features into the decoding process without blowing up the search space. Yet incremental parsing is often associated with greedy parsing decisions and an intolerable loss of accuracy. Would the use of lexicalized grammars provide a new perspective on incremental parsing? In this paper we explore incremental left-to-right dependency parsing using a lexicalized grammatical formalism that works with lexical categories (supertags) and a small set of combinatory operators. A strictly incremental parser would conduct only a single pass over the input, use no lookahead and make only local decisions at every word. We show that such a parser suffers a heavy loss of accuracy. Instead, we explore the utility of a two-pass approach that incrementally builds a dependency structure by first assigning a supertag to every input word and then selecting an incremental operator that allows assembling every supertag with the dependency structure built so far to its left. We instantiate this idea in different models that allow a trade-off between aspects of full incrementality and performance, and explore the differences between these models empirically. Our exploration shows that a semi-incremental (two-pass), linear-time parser that employs fixed and limited lookahead exhibits an appealing balance between the efficiency advantages of incrementality and the achieved accuracy. Surprisingly, taking local or global decisions matters very little for the accuracy of this linear-time parser. Such a parser fits seamlessly with the currently dominant finite-state decoders for machine translation.
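    The two-pass scheme can be sketched on a three-word toy sentence: pass one assigns every word a supertag, pass two moves left to right and attaches each word to the structure built so far. The lexicon, the tag names, and the two hard-coded attachment rules below are assumptions for illustration; a real supertagger is statistical and the operator inventory is far richer.

```python
# Pass 1: assign each word a supertag (toy lexicon; a real tagger is statistical).
LEXICON = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}

def supertag(words):
    return [LEXICON[w] for w in words]

# Pass 2: left to right, choose an operator that attaches each word
# to the dependency structure built so far to its left.
def parse(words):
    tags = supertag(words)
    heads = [None] * len(words)          # heads[i] = index of word i's head
    for i in range(1, len(words)):
        if tags[i] == "NOUN" and tags[i - 1] == "DET":
            heads[i - 1] = i             # determiner attaches to the noun
        elif tags[i] == "VERB":
            for j in range(i):
                if heads[j] is None:
                    heads[j] = i         # verb takes the earlier root as dependent
    return heads

print(parse(["the", "cat", "sleeps"]))  # [1, 2, None]
```

    Both passes are linear in sentence length, which is the efficiency property the abstract highlights for integration with finite-state decoders.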

    Perceptions of Physics Teachers in Singapore About Curriculum Sequencing

    Get PDF
    Curricular sequencing is central to instruction design and enactment. If carefully planned, the order of topics to be taught determines how the fundamentals of the discipline can be presented and introduced to learners, in a sequence that eases them into more complex ways of reasoning and thinking about the domain. These sequences, or learning progressions, often reflect experts’ conceptual schemas of the discipline and are conceived as strategic models for instruction (Duschl et al., 2011). The varied findings from current research on learning progressions underscore the complexity of teaching and learning, and imply that even ‘well-crafted’, standards-based, and authorized learning progressions need to be understood, decoded and customized by educators in the field. Teachers need to interpret and enact these standards-endorsed learning progressions in ways that are appropriate for their own students. This qualitative study sought to understand and document what physics teachers in Singapore believe are the conceptual themes that connect the concepts in Kinematics and Dynamics, the logic that underpins the transitions between topics, and the considerations as well as general strategies that teachers employ when planning a learning progression. The sample was purposive and comprised 22 teachers who taught physics at grades seven to twelve in Singapore schools. Data were obtained through in-depth, task-based interviews with the physics teachers. The study found that each teacher’s arrangement of the learning objectives was unique, implying that there is likely no standard learning progression across Singapore classrooms or across school types. The interviews yielded a total of ten pedagogical strategies and considerations (six concept-themed and four generic) that teachers may reflect on when planning a learning progression. These ten pedagogical practices offer teachers various permutations for curricular sequencing. The findings of the study suggest that teachers recognize the significance and consequence of learning progressions and conscientiously plan their presentations of the teaching unit. In teachers’ daily teaching practice, the order of topics taught depends on contextual factors and their professional beliefs, and would thus depart from a standard formal sequence.

    Avoiding Unnecessary Information Loss: Correct and Efficient Model Synchronization Based on Triple Graph Grammars

    Full text link
    Model synchronization, i.e., the task of restoring consistency between two interrelated models after a model change, is a challenging task. Triple Graph Grammars (TGGs) specify model consistency by means of rules that describe how to create consistent pairs of models. These rules can be used to automatically derive further rules, which describe how to propagate changes from one model to the other or how to change one model in such a way that propagation is guaranteed to be possible. Restricting model synchronization to these derived rules, however, may lead to unnecessary deletion and recreation of model elements during change propagation. This is inefficient and may cause unnecessary information loss: when deleted elements contain information that is not represented in the second model, this information cannot easily be recovered. Short-cut rules have recently been developed to avoid unnecessary information loss by reusing existing model elements. In this paper, we show how to automatically derive (short-cut) repair rules from short-cut rules to propagate changes such that information loss is avoided and model synchronization is accelerated. The key ingredients of our rule-based model synchronization process are these repair rules and an incremental pattern matcher that reports suitable applications of them. We prove the termination and the correctness of this synchronization process and discuss its completeness. As a proof of concept, we have implemented this synchronization process in eMoflon, a state-of-the-art model transformation tool with inherent support for bidirectionality. Our evaluation shows that repair processes based on (short-cut) repair rules considerably reduce information loss and improve performance compared to former model synchronization processes based on TGGs. Comment: 33 pages, 20 figures, 3 tables
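    The information-loss problem the abstract describes can be reduced to a minimal example: a target-model element carries an annotation that has no counterpart in the source model. Propagating a source-side rename by delete-and-recreate destroys the annotation; a short-cut-style repair reuses the element and only updates the changed attribute. The dictionary encoding and element names below are assumptions for illustration, not eMoflon's actual model representation or derived rules.

```python
# A target-model element carrying data not represented in the source model.
target = {"e1": {"name": "Foo", "annotation": "reviewed"}}

def naive_propagate_rename(target, elem, new_name):
    """Delete-and-recreate propagation: the extra annotation is lost."""
    target.pop(elem)
    target[elem] = {"name": new_name, "annotation": None}

def shortcut_repair_rename(target, elem, new_name):
    """Short-cut-style repair: reuse the element, so the annotation survives."""
    target[elem]["name"] = new_name

shortcut_repair_rename(target, "e1", "Bar")
print(target["e1"])  # {'name': 'Bar', 'annotation': 'reviewed'}
```

    Reuse is also cheaper than deletion plus recreation, which is the source of the performance improvement the evaluation reports.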

    Lexicalised Locality: Local Domains and Non-Local Dependencies in a Lexicalised Tree Adjoining Grammar

    Get PDF
    Contemporary generative grammar assumes that syntactic structure is best described in terms of sets, and that locality conditions, as well as cross-linguistic variation, are determined at the level of designated functional heads. Syntactic operations (merge, MERGE, etc.) build a structure by deriving sets from lexical atoms and recursively (and monotonically) yielding sets of sets. Additional restrictions over the format of structural descriptions limit the number of elements involved in each operation to two at each derivational step, a head and a non-head. In this paper, we will explore an alternative direction for minimalist inquiry based on previous work, e.g., Frank (2002, 2006), albeit under novel assumptions. We propose a view of syntactic structure as a specification of relations in graphs, which correspond to the extended projections of lexical heads; these are elementary trees in Tree Adjoining Grammars. We present empirical motivation for a lexicalised approach to structure building, where the units of the grammar are elementary trees. Our proposal will be based on cross-linguistic evidence; we will consider the structure of elementary trees in Spanish, English and German. We will also explore the consequences of assuming that nodes in elementary trees are addresses for the purposes of the tree composition operations, substitution and adjunction.
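    The notion of tree nodes as addresses for composition can be sketched with Gorn-style addresses: a tuple of child indices identifies a node, and substitution replaces the node at that address with another elementary tree. The tuple encoding of trees, the "NP↓" marking, and the example sentence are assumptions made for this toy sketch, not the paper's formalism.

```python
# Elementary trees as (label, children); a leaf labelled "NP↓" marks a
# substitution site. Addresses are tuples of child indices (Gorn-style).
def substitute(tree, address, subtree):
    """Return a copy of `tree` with the node at `address` replaced by `subtree`."""
    if not address:                       # empty address = this node
        return subtree
    label, children = tree
    i, rest = address[0], address[1:]
    new_children = list(children)
    new_children[i] = substitute(children[i], rest, subtree)
    return (label, new_children)

np_tree = ("NP", [("D", [("the", [])]), ("N", [("cat", [])])])
s_tree = ("S", [("NP↓", []), ("VP", [("V", [("sleeps", [])])])])

# Substitute the NP elementary tree at address (0,), the subject slot.
result = substitute(s_tree, (0,), np_tree)
```

    Adjunction would work analogously, splicing an auxiliary tree in at an internal address rather than replacing a leaf.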