
    Syntactic structure and artificial grammar learning: The learnability of embedded hierarchical structures

    Embedded hierarchical structures, such as "the rat the cat ate was brown", constitute a core generative property of natural language. Several recent studies have reported learning of hierarchical embeddings in artificial grammar learning (AGL) tasks and have described the functional specificity of Broca's area for processing such structures. In two experiments, we investigated whether alternative strategies can explain the learning success in these studies. We trained participants on hierarchical sequences and found no evidence for the learning of hierarchical embeddings in test situations identical to those from other studies in the literature. Instead, participants appeared to solve the task by exploiting surface distinctions between legal and illegal sequences, applying strategies such as counting or repetition detection. We suggest alternative interpretations for the observed activation of Broca's area, in terms of the application of calculation rules or of a differential role of working memory. We claim that the learnability of hierarchical embeddings in AGL tasks remains to be demonstrated.
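    As a minimal illustration of the counting strategy at issue (our sketch, not code or materials from the study), the AⁿBⁿ sequences typical of these AGL tasks can be accepted with a single counter, with no hierarchical parsing at all:

```python
# Minimal sketch (ours): a pure counting strategy that accepts A^n B^n
# strings without representing any nested (hierarchical) structure.

def accepts_by_counting(sequence):
    """Accept iff the sequence is a block of A's followed by an
    equal-length block of B's -- checkable with one counter."""
    count = 0
    seen_b = False
    for symbol in sequence:
        if symbol == "A":
            if seen_b:            # an A after a B breaks the block shape
                return False
            count += 1
        elif symbol == "B":
            seen_b = True
            count -= 1
            if count < 0:         # more B's than A's so far
                return False
        else:
            return False
    return seen_b and count == 0

print(accepts_by_counting("AABB"))   # True  (legal A^2 B^2)
print(accepts_by_counting("AABBB"))  # False (counts differ)
```

    A learner using such a counter would pass the standard grammaticality tests without ever representing the nested A–B dependencies, which is the alternative explanation the experiments probe.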

    An Architectural Approach to Ensuring Consistency in Hierarchical Execution

    Hierarchical task decomposition is a method used in many agent systems to organize agent knowledge. This work shows how the combination of a hierarchy and persistent assertions of knowledge can lead to difficulty in maintaining logical consistency in asserted knowledge. We explore the problematic consequences of persistent assumptions in the reasoning process and introduce novel potential solutions. We implemented one of the possible solutions, Dynamic Hierarchical Justification, and demonstrate its effectiveness with an empirical analysis.
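    A toy sketch of the failure mode (ours, not the paper's implementation; Dynamic Hierarchical Justification itself operates over whole levels of the task hierarchy): an assertion made while a subtask is active can persist after that subtask is removed, unless beliefs are retracted together with the subtask that justified them:

```python
# Toy sketch (ours): persistent assertions outliving the subtask
# that justified them, and dependency-based retraction as a fix.

class Agent:
    def __init__(self):
        self.beliefs = {}          # assertion -> justifying subtask
        self.active_task = None

    def push_task(self, task):
        self.active_task = task

    def assert_belief(self, fact):
        # remember which subtask justified this assertion
        self.beliefs[fact] = self.active_task

    def pop_task(self, retract_dependents=False):
        if retract_dependents:
            # justification-style maintenance: drop beliefs whose
            # supporting subtask is being removed
            self.beliefs = {f: t for f, t in self.beliefs.items()
                            if t != self.active_task}
        self.active_task = None

agent = Agent()
agent.push_task("approach-door")
agent.assert_belief("door-is-reachable")
agent.pop_task()            # persistent assertion: the belief survives
print(agent.beliefs)        # {'door-is-reachable': 'approach-door'}
```

    Calling pop_task(retract_dependents=True) instead retracts the dependent belief; tracking such justifications is the dependency idea that a solution like Dynamic Hierarchical Justification builds on.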

    The impact of adjacent-dependencies and staged-input on the learnability of center-embedded hierarchical structures

    A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an AⁿBⁿ grammar draw conflicting conclusions (Bahlmann and Friederici, 2006; De Vries et al., 2008). We argue that two conditions crucially affect learning AⁿBⁿ structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results may help explain how complex natural structures are learned from exemplars.
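    A sketch of what such a staged training set could look like (our construction over hypothetical word classes, not the study's materials): all 0-LoE items, i.e. single matched A–B pairs, come first, followed by progressively deeper centre-embeddings:

```python
# Sketch (ours, with hypothetical word classes): a staged A^n B^n
# training set that starts with zero-level-of-embedding (0-LoE)
# items -- single matched pairs -- before introducing nesting.

A = ["a1", "a2", "a3"]   # hypothetical category-A words
B = ["b1", "b2", "b3"]   # matching category-B words

def anbn_item(indices):
    """Centre-embedded A^n B^n item: a_i ... a_k  b_k ... b_i."""
    return [A[i] for i in indices] + [B[i] for i in reversed(indices)]

# staged input: every 0-LoE item first, then 1-LoE, then 2-LoE
stages = [
    [anbn_item([i]) for i in range(3)],                      # 0-LoE
    [anbn_item([i, j]) for i in range(3) for j in range(3)], # 1-LoE
    [anbn_item([0, 1, 2]), anbn_item([2, 0, 1])],            # 2-LoE
]
for level, stage in enumerate(stages):
    print(level, [" ".join(item) for item in stage[:3]])
```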

    XML document design via GN-DTD

    Designing a well-structured XML document is important for readability and maintainability. More importantly, it avoids data redundancies and update anomalies when maintaining a large collection of XML-based documents. In this paper, we propose a method to improve XML structural design by adopting graphical notations for Document Type Definitions (GN-DTD), which are used to describe the structure of an XML document at the schema level. Multiple levels of normal forms for GN-DTD are proposed on the basis of conceptual-model approaches and theories of normalization. The normalization rules are applied to transform a poorly designed XML document into a well-designed one based on a normalized GN-DTD, which is illustrated through examples.
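    A hypothetical example of the redundancy such normalization removes (ours, not taken from the paper): publisher details repeated inside every book element, so a single publisher update must touch many places, versus a normalized design that stores the fact once:

```python
# Sketch (ours): an update anomaly of the kind schema normalization
# removes -- the same publisher facts duplicated across elements.
import xml.etree.ElementTree as ET

denormalized = """
<catalog>
  <book><title>XML Design</title><publisher>ACM Press</publisher>
        <publisherCity>New York</publisherCity></book>
  <book><title>Schema Theory</title><publisher>ACM Press</publisher>
        <publisherCity>New York</publisherCity></book>
</catalog>"""

normalized = """
<catalog>
  <publisher id="p1"><name>ACM Press</name><city>New York</city></publisher>
  <book publisherRef="p1"><title>XML Design</title></book>
  <book publisherRef="p1"><title>Schema Theory</title></book>
</catalog>"""

root = ET.fromstring(denormalized)
cities = [b.findtext("publisherCity") for b in root.iter("book")]
print(cities)                 # ['New York', 'New York'] -- stored twice

root2 = ET.fromstring(normalized)
print(root2.find("publisher/name").text)   # stored once: 'ACM Press'
```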

    Joint Video and Text Parsing for Understanding Events and Answering Queries

    We propose a framework for parsing video and text jointly for understanding events and answering user queries. Our framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events) and causal information (causalities between events and fluents) in the video and text. The knowledge representation of our framework is based on a spatial-temporal-causal And-Or graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes and events as well as their interactions and mutual contexts, and specifies the prior probability distribution of the parse graphs. We present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs and the joint parse graph. Based on the probabilistic model, we propose a joint parsing system consisting of three modules: video parsing, text parsing and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text, respectively. The joint inference module produces a joint parse graph by performing matching, deduction and revision on the video and text parse graphs. The proposed framework has the following objectives: first, we aim at deep semantic parsing of video and text that goes beyond traditional bag-of-words approaches; second, we perform parsing and reasoning across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG representation; third, we show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the forms of who, what, when, where and why. We empirically evaluated our system by comparison against ground truth as well as accuracy of query answering, and obtained satisfactory results.
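    A minimal sketch of the And-Or graph idea underlying the S/T/C-AOG (our simplification; the node labels, the Node class and the prior function are ours, and the paper's representation additionally carries spatial, temporal and causal relations): And-nodes decompose an entity into parts, Or-nodes choose among alternatives, and the branch weights induce a prior over parse graphs:

```python
# Sketch (ours): And-Or graph nodes and the prior probability of a
# parse graph induced by Or-branch weights.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str = "terminal"              # "and" | "or" | "terminal"
    children: list = field(default_factory=list)
    weights: list = field(default_factory=list)   # Or-branch priors

def prior(node, choices):
    """Prior probability of a parse graph given Or-branch choices."""
    if node.kind == "terminal":
        return 1.0
    if node.kind == "and":
        p = 1.0
        for child in node.children:     # And-node: all parts occur
            p *= prior(child, choices)
        return p
    i = choices[node.label]             # Or-node: one chosen branch
    return node.weights[i] * prior(node.children[i], choices)

event = Node("event", "or",
             [Node("open-door", "and", [Node("agent"), Node("door")]),
              Node("pick-up", "and", [Node("agent"), Node("object")])],
             [0.6, 0.4])
print(prior(event, {"event": 0}))   # 0.6
```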

    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning.
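    The centre- versus tail-embedding contrast can be made concrete with two toy recursive rules (our sketch over hypothetical word classes, not the grammars used in the experiments):

```python
# Sketch (ours): centre- vs tail-embedded strings from two toy
# recursive context-free rules with matched a_i / b_i dependencies.
import random

A = ["a1", "a2", "a3"]
B = ["b1", "b2", "b3"]

def centre(depth):
    """S -> a_i S b_i : nested, long-distance dependencies."""
    if depth == 0:
        return []
    i = random.randrange(len(A))
    return [A[i]] + centre(depth - 1) + [B[i]]

def tail(depth):
    """S -> a_i b_i S : chained, locally resolved dependencies."""
    if depth == 0:
        return []
    i = random.randrange(len(A))
    return [A[i], B[i]] + tail(depth - 1)

random.seed(0)
print(" ".join(centre(3)))   # e.g. a2 a1 a3 b3 b1 b2
print(" ".join(tail(3)))     # e.g. a2 b2 a1 b1 a3 b3
```

    Both rules generate matched a/b pairs, but in the centre-embedded output the first a must be held in memory until the last b, whereas each tail-embedded dependency is resolved immediately.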

    Risk aggregation, dependence structure and diversification benefit

    Insurance and reinsurance live and die by the diversification benefits, or the lack thereof, in their risk portfolios. The new solvency regulations allow companies to include these benefits in their computation of risk-based capital (RBC). The question is how to evaluate them properly. To compute the total risk of a portfolio, it is important to establish the rules for aggregating the various risks that compose it, which can only be done by modelling their dependence. It is a well-known fact among traders in financial markets that "diversification works the worst when one needs it the most": in times of crisis, the dependence between risks increases. Experience has shown that very large loss events almost always affect multiple lines of business simultaneously. September 11, 2001, is an example of such an event: claims originated simultaneously from lines of business that are usually uncorrelated, such as property and life, while at the same time the company's assets were depreciated by the crisis in the stock markets. In this paper, we explore various methods of modelling dependence and their influence on diversification benefits. We show that the benefits strongly depend on the chosen method and that rank correlation grossly overestimates diversification. As a consequence, the RBC for the whole portfolio comes out smaller than it should be when tail correlation is correctly accounted for. The problem remains to calibrate the dependence for extreme events, which are rare by definition. We analyze and propose possible ways out of this dilemma and come up with reasonable estimates.
    Keywords: Risk-Based Capital, Hierarchical Copula, Dependence, Calibration
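    A Monte Carlo sketch of the effect (ours, with illustrative marginals and parameters; the paper works with hierarchical copulas calibrated to actual portfolios): the diversification benefit DB = 1 - VaR(X+Y) / (VaR(X) + VaR(Y)) typically shrinks when the same correlation is expressed through a tail-dependent Student-t copula instead of a Gaussian one:

```python
# Sketch (ours, illustrative parameters): diversification benefit under
# a Gaussian copula vs a tail-dependent Student-t copula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, rho, p = 200_000, 0.5, 0.99
cov = [[1.0, rho], [rho, 1.0]]

def losses(u):
    # identical lognormal marginals for both lines of business
    return stats.lognorm.ppf(u, s=1.0)

def div_benefit(u1, u2):
    x, y = losses(u1), losses(u2)
    var = lambda z: np.quantile(z, p)          # Value-at-Risk at level p
    return 1 - var(x + y) / (var(x) + var(y))

# Gaussian copula: no tail dependence
g = rng.multivariate_normal([0, 0], cov, size=n)
u_g = stats.norm.cdf(g)

# t copula (3 df): same correlation, but tail-dependent
w = rng.chisquare(3, size=n) / 3
t = g / np.sqrt(w)[:, None]
u_t = stats.t.cdf(t, df=3)

print(f"Gaussian copula DB: {div_benefit(u_g[:, 0], u_g[:, 1]):.3f}")
print(f"t copula DB:        {div_benefit(u_t[:, 0], u_t[:, 1]):.3f}")
```

    With tail dependence, extreme losses in the two lines coincide more often, so the VaR of the sum is relatively larger and the measured diversification benefit is smaller, which is the direction of the bias the paper attributes to rank-correlation-based aggregation.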