
    Exposing Multi-Relational Networks to Single-Relational Network Analysis Algorithms

    Many, if not most, network analysis algorithms have been designed specifically for single-relational networks; that is, networks in which all edges are of the same type. For example, edges may either represent "friendship," "kinship," or "collaboration," but not all of them together. In contrast, a multi-relational network is a network with a heterogeneous set of edge labels which can represent relationships of various types in a single data structure. While multi-relational networks are more expressive in terms of the variety of relationships they can capture, there is a need for a general framework for transferring the many single-relational network analysis algorithms to the multi-relational domain. It is not sufficient to execute a single-relational network analysis algorithm on a multi-relational network by simply ignoring edge labels. This article presents an algebra for mapping multi-relational networks to single-relational networks, thereby exposing them to single-relational network analysis algorithms.
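    A minimal sketch of the kind of operation such an algebra provides (not the article's formalism): selecting the edges that carry one label from a triple-based multi-relational network yields a single-relational view that ordinary network analysis algorithms can consume. The data and function names below are illustrative only.

```python
# Minimal sketch (not the article's algebra): derive a single-relational
# view of a multi-relational network by selecting one edge label, so that
# ordinary single-relational algorithms can be applied to the result.

from collections import defaultdict

# A multi-relational network as (subject, label, object) triples.
triples = [
    ("alice", "friendship", "bob"),
    ("alice", "collaboration", "carol"),
    ("bob", "kinship", "dave"),
    ("carol", "friendship", "dave"),
]

def single_relational_view(triples, label):
    """Return an adjacency map containing only edges with the given label."""
    adjacency = defaultdict(set)
    for subj, lbl, obj in triples:
        if lbl == label:
            adjacency[subj].add(obj)
    return adjacency

# The "friendship" slice can now be fed to any single-relational algorithm.
friendship = single_relational_view(triples, "friendship")
print(dict(friendship))  # {'alice': {'bob'}, 'carol': {'dave'}}
```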

    Centrality Measures in Spatial Networks of Urban Streets

    We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. Planned and self-organized cities clearly belong to two different universality classes. In particular, self-organized cities exhibit scale-free properties similar to those found in the degree distributions of non-spatial networks.
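    As an illustration of the kind of computation involved (the paper's exact set of four indices and its city datasets are not reproduced here), two standard centrality indices can be computed on a small spatial graph whose edge weights are Euclidean segment lengths, for example with networkx:

```python
# Illustration only: two common centrality indices on a small spatial graph
# whose edge weights are Euclidean street-segment lengths. The paper's exact
# four indices and its city datasets are not reproduced here.

import math
import networkx as nx

# Hypothetical intersections with planar coordinates (x, y).
positions = {"a": (0, 0), "b": (1, 0), "c": (1, 1), "d": (0, 1), "e": (2, 0.5)}

G = nx.Graph()
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "e"), ("c", "e")]:
    length = math.dist(positions[u], positions[v])  # street segment length
    G.add_edge(u, v, weight=length)

closeness = nx.closeness_centrality(G, distance="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")
print(closeness)
print(betweenness)
```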

    Given-new effects on the duration of gestures and of words in face-to-face dialogue

    The given-new contract entails that speakers must distinguish for their addressee whether references are new or already part of their dialogue. Past research had found that, in a monologue to a listener, speakers shortened repeated words. However, the notion of the given-new contract is inherently dialogic, with an addressee and the availability of co-speech gestures. Here, two face-to-face dialogue experiments tested whether gesture duration also follows the given-new contract. In Experiment 1, four experimental sequences confirmed that when speakers repeated their gestures, they shortened the duration significantly. Experiment 2 replicated the effect with spontaneous gestures in a different task. This experiment also extended earlier results with words, confirming that speakers shortened their repeated words significantly in a multimodal dialogue setting, the basic form of language use. Because words and gestures were not necessarily redundant, these results offer another instance in which gestures and words independently serve pragmatic requirements of dialogue.

    The theoretical and research basis of co-constructing meaning in dialogue

    de Shazer (1991) introduced a post-structural view of language in therapy in which the participants' social interaction determines the meaning of the words they are using. Broader theories of social construction are similar but lack details about the role of language. This article focuses on the observable details of co-constructing meaning in dialogue. Research in psycholinguistics has provided experimental evidence for how speakers and their addressees collaboratively co-construct their dialogues. We review several of the experiments that have demonstrated the influence and importance of the addressee in shaping what the speaker is saying. Building on this research, we present a moment-by-moment three-step grounding sequence in which the speaker presents information, the addressee displays understanding, and the speaker confirms this understanding. We propose that this micro-pattern and its variations are the observable process by which the participants in a dialogue negotiate and co-construct shared meanings.

    Grammar-Based Geodesics in Semantic Networks

    A geodesic is the shortest path between two vertices in a connected network. The geodesic is the kernel of various network metrics including radius, diameter, eccentricity, closeness, and betweenness. These metrics are the foundation of much network research and thus, have been studied extensively in the domain of single-relational networks (both in their directed and undirected forms). However, geodesics for single-relational networks do not translate directly to multi-relational, or semantic networks, where vertices are connected to one another by any number of edge labels. Here, a more sophisticated method for calculating a geodesic is necessary. This article presents a technique for calculating geodesics in semantic networks with a focus on semantic networks represented according to the Resource Description Framework (RDF). In this framework, a discrete "walker" utilizes an abstract path description called a grammar to determine which paths to include in its geodesic calculation. The grammar-based model forms a general framework for studying geodesic metrics in semantic networks.
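    A simplified sketch of the idea (not the article's full grammar formalism): a breadth-first walker computes a geodesic while traversing only triples whose predicate is admitted by a "grammar", here reduced to a plain set of allowed predicates. All names and data below are illustrative.

```python
# Minimal sketch (a simplification of the article's grammar-based walker):
# breadth-first search over labeled RDF-style triples in which a "grammar"
# -- here just a set of admissible predicates -- restricts which edges the
# walker may traverse when computing a geodesic (shortest path).

from collections import deque

triples = [
    ("a", "knows", "b"),
    ("b", "knows", "c"),
    ("a", "worksWith", "c"),
    ("c", "knows", "d"),
]

def grammar_geodesic(triples, source, target, allowed_predicates):
    """Length of the shortest path using only edges whose predicate is allowed."""
    neighbors = {}
    for s, p, o in triples:
        if p in allowed_predicates:
            neighbors.setdefault(s, []).append(o)
    queue, seen = deque([(source, 0)]), {source}
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in neighbors.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # unreachable under this grammar

print(grammar_geodesic(triples, "a", "d", {"knows"}))               # 3
print(grammar_geodesic(triples, "a", "d", {"knows", "worksWith"}))  # 2
```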

    Automatic Schaeffer's gestures recognition system

    Schaeffer's sign language consists of a reduced set of gestures designed to help children with autism or cognitive learning disabilities to develop adequate communication skills. Our automatic recognition system for Schaeffer's gesture language uses the information provided by an RGB-D camera to capture body motion and recognize gestures using dynamic time warping combined with k-nearest neighbors methods. The learning process is reinforced by the interaction with the proposed system, which accelerates learning itself, thus helping both children and educators. To demonstrate the validity of the system, a set of qualitative experiments with children was carried out. As a result, a system able to recognize a subset of 11 gestures of Schaeffer's sign language online was achieved. This work has been supported by the Spanish Government grant DPI2013-40534-R, with FEDER funds.
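    A toy sketch of the classification scheme the abstract names, dynamic time warping combined with a k-nearest-neighbors vote; the real system operates on RGB-D skeletal features, whereas the 1-D sequences below merely stand in for gesture trajectories.

```python
# Toy sketch of the scheme named in the abstract: dynamic time warping (DTW)
# as the distance between motion sequences, combined with a k-nearest-
# neighbors vote. Here 1-D sequences stand in for gesture trajectories.

from collections import Counter

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two numeric sequences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

def knn_classify(query, training, k=3):
    """training: list of (sequence, label); returns the majority label of the k nearest."""
    nearest = sorted(training, key=lambda item: dtw_distance(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

training = [([0, 1, 2, 3], "raise"), ([0, 1, 2, 2.5], "raise"),
            ([3, 2, 1, 0], "lower"), ([3, 2.5, 1, 0], "lower")]
print(knn_classify([0, 0.5, 2, 3], training, k=3))  # expected: "raise"
```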

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that are released for public access.

    The Social Climbing Game

    The structure of a society depends, to some extent, on the incentives of the individuals it is composed of. We study a stylized model of this interplay, which suggests that the more individuals aim at climbing the social hierarchy, the stronger that hierarchy becomes. The dependence is sharp, in the sense that a persistent hierarchical order emerges abruptly when the preference for social status exceeds a threshold. This phase transition has its origin in the fact that the presence of a well-defined hierarchy allows agents to climb it, thus reinforcing it, whereas in a "disordered" society it is harder for agents to find out whom they should connect to in order to become more central. Interestingly, the social order that emerges when agents strive harder to climb results in a state of reduced social mobility, as a consequence of ergodicity breaking, in which climbing is more difficult.
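    A toy illustration, loosely inspired by the description above and not the paper's model: agents rewire links either toward already well-connected nodes ("climbing") with probability p or at random; a larger p concentrates degree on a few nodes, a crude stand-in for a persistent hierarchy.

```python
# Toy illustration (not the paper's model): agents rewire links either toward
# already well-connected nodes ("climbing") with probability p, or at random.
# As p grows, degree concentrates on a few nodes.

import random

def simulate(n=50, m=100, p=0.9, steps=5000, seed=1):
    random.seed(seed)
    nodes = list(range(n))
    edges = set()
    while len(edges) < m:                      # random initial network
        u, v = random.sample(nodes, 2)
        edges.add((min(u, v), max(u, v)))
    for _ in range(steps):
        u, v = random.choice(list(edges))      # pick an edge to rewire
        mover = random.choice([u, v])
        if random.random() < p:                # climb: attach to a high-degree node
            degree = {x: 0 for x in nodes}
            for a, b in edges:
                degree[a] += 1
                degree[b] += 1
            target = max((x for x in nodes if x != mover), key=lambda x: degree[x])
        else:                                  # otherwise attach at random
            target = random.choice([x for x in nodes if x != mover])
        new_edge = (min(mover, target), max(mover, target))
        if new_edge not in edges:
            edges.remove((u, v))
            edges.add(new_edge)
    degree = {x: 0 for x in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return max(degree.values())                # crude hierarchy indicator

print(simulate(p=0.1), simulate(p=0.9))        # higher p -> more concentrated degree
```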

    Eye contact modulates facial mimicry in 4-month-old infants: an EMG and fNIRS study

    Mimicry, the tendency to spontaneously and unconsciously copy others' behaviour, plays an important role in social interactions. It facilitates rapport between strangers, and is flexibly modulated by social signals, such as eye contact. However, little is known about the development of this phenomenon in infancy, and it is unknown whether mimicry is modulated by social signals from early in life. Here we addressed this question by presenting 4-month-old infants with videos of models performing facial actions (e.g., mouth opening, eyebrow raising) and hand actions (e.g., hand opening and closing, finger actions) accompanied by direct or averted gaze, while we measured their facial and hand muscle responses using electromyography to obtain an index of mimicry (Experiment 1). In Experiment 2 the infants observed the same stimuli while we used functional near-infrared spectroscopy to investigate the brain regions involved in modulating mimicry by eye contact. We found that 4-month-olds only showed evidence of mimicry when they observed facial actions accompanied by direct gaze. Experiment 2 suggests that this selective facial mimicry may have been associated with activation over the posterior superior temporal sulcus. These findings provide the first demonstration of modulation of mimicry by social signals in young human infants, and suggest that mimicry plays an important role in social interactions from early in life. [Abstract copyright: Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.]