
    Efficient Triangle Counting in Large Graphs via Degree-based Vertex Partitioning

    The number of triangles is a computationally expensive graph statistic which is frequently used in complex network analysis (e.g., the transitivity ratio), in various random graph models (e.g., the exponential random graph model) and in important real-world applications such as spam detection, uncovering the hidden thematic structure of the Web, and link recommendation. Counting triangles in graphs with millions or billions of edges requires algorithms that run fast, use a small amount of space, provide accurate estimates of the number of triangles, and are preferably parallelizable. In this paper we present an efficient triangle counting algorithm which can be adapted to the semistreaming model. The key idea of our algorithm is to combine the sampling algorithm of Tsourakakis et al. with the Alon, Yuster and Zwick partitioning of the vertex set into a high-degree and a low-degree subset, treating each set appropriately. We obtain a running time $O\left(m + \frac{m^{3/2} \Delta \log{n}}{t \epsilon^2}\right)$ and an $\epsilon$-approximation (multiplicative error), where $n$ is the number of vertices, $m$ the number of edges and $\Delta$ the maximum number of triangles an edge is contained in. Furthermore, we show how this algorithm can be adapted to the semistreaming model with space usage $O\left(m^{1/2}\log{n} + \frac{m^{3/2} \Delta \log{n}}{t \epsilon^2}\right)$ and a constant number of passes (three) over the graph stream. We apply our methods to various networks with several millions of edges and obtain excellent results. Finally, we propose a random-projection-based method for triangle counting and provide a sufficient condition for obtaining an estimate with low variance. Comment: 12 pages; to appear in the 7th Workshop on Algorithms and Models for the Web Graph (WAW 2010).
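The edge-sparsification idea behind the sampling step can be illustrated with a minimal sketch (in the spirit of Tsourakakis et al.'s sampling; this toy is not the paper's hybrid high-degree/low-degree algorithm, and all names are illustrative):

```python
import random
from itertools import combinations

def sampled_triangle_estimate(edges, p=0.5, seed=0):
    """Estimate the triangle count of an undirected graph: keep each
    edge independently with probability p, count triangles among the
    kept edges, and rescale by 1/p^3 (a triangle survives only if all
    three of its edges are kept, which happens with probability p^3)."""
    rng = random.Random(seed)
    kept = [tuple(e) for e in edges if rng.random() < p]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Common neighbours of an edge's endpoints close a triangle;
    # each triangle is counted once per edge, i.e. three times.
    closed = sum(len(adj[u] & adj[v]) for u, v in kept)
    return closed / 3 / p**3

# With p = 1 no edge is dropped, so the estimate is exact:
# the complete graph K4 contains exactly 4 triangles.
print(sampled_triangle_estimate(combinations(range(4), 2), p=1.0))  # → 4.0
```

Dropping edges shrinks both the memory footprint and the counting work, at the price of variance, which is where the paper's degree-based treatment of high-degree vertices comes in.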

    Ontology in Coq for a Guided Message Composition

    Natural language generation is based on messages that represent meanings, and on goals that are the usual starting points for communication. How can we help people provide this conceptual input or, in other words, communicate thoughts to the computer? In order to express something, one needs to have something to express: an idea, a thought or a concept. The question is how to represent this. In 2009, Michael Zock, Paul Sabatier and Line Jakubiec-Jamet suggested building a resource composed of a linguistically motivated ontology, a dictionary and a graph generator. The ontology guides the user to choose among a set of concepts (or words) from which to build the message; the dictionary provides knowledge of how to link the chosen elements to yield a message (compositional rules); the graph generator displays the output in visual form (a message graph representing the user's input). While the goal of the ontology is to generate (or analyse) sentences and to guide message composition (what to say), the graph's function is to show, at an intermediate level, the result of the encoding process. The Illico system already proposes a way to help a user generate (or analyse) sentences and to guide their composition. Another system, the Drill Tutor, is an exercise generator whose goal is to help people become fluent in a foreign language. It helps users (who make choices from the interface in order to build their messages) to produce a sentence expressing a message, from an idea (or a concept) to its linguistic realization (a correct sentence in a foreign language). These two systems led us to consider representing the conceptual information in a symbolic language; this representation is encoded in a logic system in order to automatically check the conceptual well-formedness of messages. This logic system is the Coq system, used here only for its high-level language. Coq is based on a typed λ-calculus. It is used for analysing conceptual input interpreted as types and also for specifying general definitions representing messages. These definitions are typed and will be instantiated to type-check the conceptual well-formedness of messages.

    Self-organising Thermoregulatory Huddling in a Model of Soft Deformable Littermates

    Thermoregulatory huddling behaviours dominate the early experiences of developing rodents, and constrain the patterns of sensory and motor input that drive neural plasticity. Huddling is a complex emergent group behaviour, thought to provide an early template for the development of adult social systems, and to constrain natural selection on metabolic physiology. However, huddling behaviours are governed by simple rules of interaction between individuals, which can be described in terms of the thermodynamics of heat exchange, and can be easily controlled by manipulation of the environment temperature. Thermoregulatory huddling thus provides an opportunity to investigate the effects of early experience on brain development in a social, developmental, and evolutionary context, through controlled experimentation. This paper demonstrates that thermoregulatory huddling behaviours can self-organise in a simulation of rodent littermates modelled as soft-deformable bodies that exchange heat during contact. The paper presents a novel methodology, based on techniques in computer animation, for simulating the early sensory and motor experiences of the developing rodent.
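The heat-exchange-on-contact rule that drives such models can be sketched in a few lines (an illustrative toy with made-up constants, not the paper's soft-body simulation):

```python
import math

def heat_step(positions, temps, t_env=10.0, radius=1.0,
              k_contact=0.1, k_loss=0.02):
    """One timestep of a toy huddling thermodynamics rule: pups whose
    centres lie within `radius` exchange heat in proportion to their
    temperature difference; every pup also loses heat to a colder
    environment. (All constants here are illustrative.)"""
    new_temps = list(temps)
    for i in range(len(temps)):
        for j in range(i + 1, len(temps)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) < radius:        # bodies in contact
                flow = k_contact * (temps[j] - temps[i])
                new_temps[i] += flow               # heat flows toward the colder pup
                new_temps[j] -= flow
        new_temps[i] += k_loss * (t_env - new_temps[i])  # loss to environment
    return new_temps
```

Iterating a rule like this, pups in contact converge in temperature while isolated pups cool toward the environment temperature; that asymmetry is the pressure that makes aggregation pay off and lets huddles self-organise.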

    Conceptual dependency as the language of thought

    Roger Schank's research in AI takes seriously the ideas that understanding natural language involves mapping its expressions into an internal representation scheme and that these internal representations have a syntax appropriate for computational operations. It therefore falls within the computational approach to the study of mind. This paper discusses certain aspects of Schank's approach in order to assess its potential adequacy as a (partial) model of cognition. This version of the Language of Thought hypothesis encounters some of the same difficulties that arise for Fodor's account.

    Toward a script theory of guidance in computer-supported collaborative learning

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its four types of components of internal and external scripts (play, scene, role, and scriptlet) and seven principles, this theory addresses the question of how CSCL practices are shaped by dynamically re-configured internal collaboration scripts of the participating learners. Furthermore, it explains how internal collaboration scripts develop through participation in CSCL practices. It emphasizes the importance of active application of subject matter knowledge in CSCL practices, and it prioritizes transactive over non-transactive forms of knowledge application in order to facilitate learning. Further, the theory explains how external collaboration scripts modify CSCL practices and how they influence the development of internal collaboration scripts. The principles specify an optimal scaffolding level for external collaboration scripts and allow for the formulation of hypotheses about the fading of external collaboration scripts. Finally, the article points towards conceptual challenges and future research questions.

    Agents’ interaction in virtual storytelling

    In this paper we describe a fully implemented prototype for interactive storytelling using the Unreal engine, using a sitcom-like scenario as an example of how the dynamic interactions between agents and/or the user dramatise the emerging story. Hierarchical Task Networks (HTNs) are formalised using AND/OR graphs, which describe the many possible variations of the story at a sub-goal level, and the set of all behaviours (from a narrative perspective) of the primary actors at a terminal-action level.
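The AND/OR decomposition of a story can be sketched minimally as follows (an illustrative toy, not the prototype's planner; the node names are made up):

```python
def expand(node, graph):
    """Expand an AND/OR task graph into one concrete action sequence.
    AND nodes concatenate the plans of all children in order (a fixed
    decomposition); OR nodes are variation points, here resolved by
    simply taking the first alternative (a real planner would choose
    or backtrack based on narrative state)."""
    kind, children = graph[node]
    if kind == "action":                     # terminal: primitive actor behaviour
        return [node]
    if kind == "AND":
        return [a for c in children for a in expand(c, graph)]
    return expand(children[0], graph)        # "OR": pick one alternative

# A toy sitcom beat: the episode is a greeting followed by a conflict,
# and the conflict can be realised in two different ways.
story = {
    "episode":  ("AND", ["greet", "conflict"]),
    "greet":    ("action", []),
    "conflict": ("OR", ["argue", "sulk"]),
    "argue":    ("action", []),
    "sulk":     ("action", []),
}
print(expand("episode", story))  # → ['greet', 'argue']
```

Choosing differently at each OR node is what yields the "many possible variations of the story at a sub-goal level" that the abstract mentions.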

    Toward a Theory of the Evolution of Fair Play

    Juvenile animals of many species engage in social play, but its functional significance is not well understood. This is especially true for a type of social play called fair play (Fp). Social play often involves behavioral patterns similar to adult behaviors (e.g., fighting, mating, and predatory activities), but young animals often engage in Fp behaviors such as role-reversals and self-handicapping, which raises the evolutionary problem of why Fp exists. A long-held working hypothesis, tracing back to the 19th century, is that social play provides contexts in which social skills needed for adulthood can be learned or, at least, refined. On this hypothesis, Fp may have evolved for adults to acquire skills for behaving fairly in the sense of equitable distribution of resources or treatment of others. We investigated the evolution of Fp using an evolutionary agent-based model of populations of social agents that learn adult fair behavior (Fb) by engaging in Fp as juveniles. In our model, adults produce offspring by accumulating resources over time through foraging. Adults can either behave selfishly by keeping the resources they forage, or they can pool them, subsequently dividing the pooled resources after each round of foraging. We found that fairness as equitability was beneficial especially when resources were large but difficult to obtain, and that it led to the evolution of Fp. We conclude by discussing the implications of this model for developing more rigorous theory on the evolution of social play, and future directions for theory development by modeling the evolution of play.
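The pooling-versus-selfish foraging rule at the heart of such a model can be sketched as follows (an illustrative toy with made-up payoff values, not the authors' full evolutionary model):

```python
import random

def forage_round(strategies, rng, big=10.0, p_big=0.2, small=1.0):
    """One foraging round: each agent finds either a large, hard-to-get
    resource (with probability p_big) or a small one. 'fair' agents
    pool their haul and split it equally; 'selfish' agents keep what
    they find. (All payoff values are illustrative.)"""
    haul = [big if rng.random() < p_big else small for _ in strategies]
    fair_pool = sum(h for h, s in zip(haul, strategies) if s == "fair")
    n_fair = strategies.count("fair")
    share = fair_pool / n_fair if n_fair else 0.0
    return [share if s == "fair" else h
            for h, s in zip(haul, strategies)]

# Whatever the random draws, fair agents always end the round with equal
# payoffs -- the equitable-distribution sense of fairness in the abstract.
payoffs = forage_round(["fair", "selfish", "fair"], random.Random(42))
assert payoffs[0] == payoffs[2]
```

With rare but valuable resources (low `p_big`, high `big`), pooling smooths out the variance in each fair agent's intake, which is the regime in which the abstract reports equitability to be beneficial.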

    Firms' Main Market, Human Capital and Wages

    Recent international trade literature emphasizes two features in characterizing the current patterns of trade: efficiency heterogeneity at the firm level and quality differentiation. This paper explores human capital and wage differences across firms in that context. We build a partial equilibrium model predicting that firms selling in more-remote markets employ higher human capital and pay higher wages to employees within each education group. The channel linking these variables is firms' endogenous choice of quality. Predictions are tested using Spanish employer-employee matched data that classify firms according to four main destination markets: local, national, European Union, and rest of the world. Employees' average education is increasing in the remoteness of the firm's main output market. Market-destination wage premia are large, increasing in the remoteness of the market, and increasing in individual education. These results suggest that increasing globalization may play a significant role in raising wage inequality within and across education groups.

    Task analysis for error identification: Theory, method and validation

    This paper presents the underlying theory of Task Analysis for Error Identification. The aim is to illustrate the development of a method that has been proposed for the evaluation of prototypical designs from the perspective of predicting human error. The paper presents the method applied to representative examples. The methodology is considered in terms of the various validation studies that have been conducted, and is discussed in the light of a specific case study.