
    Diatopic variation in digital space: What Twitter can tell us about Texas English dialect areas

    The availability of large amounts of social media text offers tremendous potential for studies of diatopic variation. A case in point is the linguistic geography of Texas, which is at present insufficiently described in traditional dialectological research. This paper summarises previous work on diatopic variation in Texas English on the basis of Twitter and presents an approach that foregrounds functional interpretability over a maximally clear geographical signal. In a multi-dimensional analysis based on 45 linguistic features in over 3 million tweets from across the state, two dimensions of variation are identified that pattern in geographically meaningful ways. The first of these relates to creative uses of typography and distinguishes urban centres from the rest of the state. The second dimension encompasses characteristics of interpersonal, spoken discourse and shows an East-West geographical divide. While the linguistic features of relevance for the dimensions are not generally considered in dialectological research, their geographic patterning reflects major tendencies attested in the literature on diatopic variation in Texas.[1]

    [1] I am grateful to Alex Rosenfeld for sharing his data with me. This work was initially presented at a panel on Twitter in sociolinguistic research at NWAV 49, organised by Stef Grondelaers and Jane Stuart-Smith. I would like to thank both of them for giving me this opportunity, and the attendees of the panel, especially Lars Hinrichs and Alex Rosenfeld, for fruitful discussion. Finally, my gratitude goes to Erling Strudsholm and Anita Berit Hansen for their invitation to participate in the Coseriu Symposium and for their patience in organising this special issue.
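
    A multi-dimensional analysis of this kind is typically implemented as a factor analysis over normalised feature frequencies. The following Python sketch illustrates the general technique only; the region grid, the random placeholder data, and the varimax rotation are assumptions for illustration, not details of the paper's actual pipeline.

        # Illustrative only: Biber-style multi-dimensional analysis over
        # per-region frequencies of 45 linguistic features (placeholder data).
        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        freqs = rng.random((254, 45))   # hypothetical: 254 regions x 45 features

        z = StandardScaler().fit_transform(freqs)           # z-score each feature
        fa = FactorAnalysis(n_components=2, rotation="varimax")
        scores = fa.fit_transform(z)    # per-region scores on the two dimensions
        loadings = fa.components_       # feature loadings, read off for the
                                        # functional interpretation of each dimension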

    Modelling Digital Media Objects


    Learning semantic representations through multimodal graph neural networks

    Graduation Project (Licenciatura in Mechatronics Engineering), Instituto Tecnológico de Costa Rica, Área Académica de Ingeniería Mecatrónica, 2021.
    To provide semantic knowledge about the objects that robotic systems are going to interact with, the problem of learning semantic representations from the modalities of language and vision must be addressed. Semantic knowledge refers to conceptual information, including semantic (meaning) and lexical (word) information, which provides the basis for many of our everyday non-verbal behaviors. It is therefore necessary to develop methods that enable robots to process sentences in a real-world environment, so this project introduces a novel approach that uses Graph Convolutional Networks to learn grounded meaning representations of words. The proposed model consists of a first layer that encodes unimodal representations and a second layer that integrates these unimodal representations into one, learning a single representation from both modalities. Experimental results show that the proposed model outperforms the state of the art in semantic similarity and that it can simulate human similarity judgments. To the best of our knowledge, this approach is novel in its use of Graph Convolutional Networks to enhance the quality of word representations.
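
    As a rough illustration of the two-layer architecture described above, the sketch below uses PyTorch Geometric; the layer sizes, the shared word graph, and the concatenation-based fusion are assumptions for illustration, not the project's actual implementation.

        # Minimal sketch: per-modality GCN encoders followed by a fusing GCN layer.
        import torch
        from torch_geometric.nn import GCNConv

        class MultimodalGCN(torch.nn.Module):
            def __init__(self, txt_dim, vis_dim, hid_dim, out_dim):
                super().__init__()
                # first layer: encode each modality's unimodal representation
                self.txt_conv = GCNConv(txt_dim, hid_dim)
                self.vis_conv = GCNConv(vis_dim, hid_dim)
                # second layer: integrate the unimodal outputs into one
                self.fuse_conv = GCNConv(2 * hid_dim, out_dim)

            def forward(self, x_txt, x_vis, edge_index):
                h_txt = torch.relu(self.txt_conv(x_txt, edge_index))
                h_vis = torch.relu(self.vis_conv(x_vis, edge_index))
                h = torch.cat([h_txt, h_vis], dim=-1)  # join modalities per node
                return self.fuse_conv(h, edge_index)   # fused word representation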

    Computational explorations of semantic cognition

    Motivated by the widespread use of distributional models of semantics within the cognitive science community, we follow a computational modelling approach in order to better understand and expand the applicability of such models, as well as to test potential ways in which they can be improved and extended. We review evidence in favour of the assumption that distributional models capture important aspects of semantic cognition. We look at the models’ ability to account for behavioural data and fMRI patterns of brain activity, and investigate the structure of model-based semantic networks. We test whether introducing affective information, obtained from a neural network model designed to predict emojis from co-occurring text, can improve the performance of linguistic and linguistic-visual models of semantics in accounting for similarity/relatedness ratings. We find that adding visual and affective representations improves performance, especially for concrete and abstract words, respectively. We describe a processing model based on distributional semantics, in which activation spreads throughout a semantic network, as dictated by the patterns of semantic similarity between words. We show that the activation profile of the network, measured at various time points, can account for response times and accuracies in lexical and semantic decision tasks, as well as for concreteness/imageability and similarity/relatedness ratings. We evaluate the differences between concrete and abstract words in terms of the structure of the semantic networks derived from distributional models of semantics. We examine how the structure is related to a number of factors that have been argued to differ between concrete and abstract words, namely imageability, age of acquisition, hedonic valence, contextual diversity, and semantic diversity. We use distributional models to explore factors that might be responsible for the poor linguistic performance of children with Developmental Language Disorder. Based on the assumption that certain model parameters can be given a psychological interpretation, we start from “healthy” models and generate “lesioned” models by manipulating the parameters. This allows us to determine the importance of each factor and its effects on learning concrete vs. abstract words.
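
    The spreading-activation processing model lends itself to a compact sketch: activation starts at a cue word and diffuses along similarity-weighted edges, with the network's activation profile read off at each time step. The retention parameter and row normalisation below are assumptions for illustration, not the thesis's exact formulation.

        # Illustrative spreading activation over a semantic similarity network.
        import numpy as np

        def spread_activation(sim, seed_idx, steps=5, retention=0.5):
            """sim: (n, n) float matrix of pairwise word similarities; activation
            spreads from the seed word along similarity-weighted edges."""
            W = np.clip(sim, 0, None)                     # keep positive links only
            np.fill_diagonal(W, 0.0)
            W /= W.sum(axis=1, keepdims=True) + 1e-12     # row-normalised weights
            a = np.zeros(W.shape[0])
            a[seed_idx] = 1.0                             # activate the cue word
            profile = [a.copy()]
            for _ in range(steps):
                a = retention * a + (1 - retention) * (W.T @ a)  # retain + spread
                profile.append(a.copy())
            return profile   # activation at each time point, e.g. for RT modelling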

    Multimodal Prediction based on Graph Representations

    This paper proposes a learning model, based on rank-fusion graphs, for general applicability in multimodal prediction tasks, such as multimodal regression and image classification. Rank-fusion graphs encode information from multiple descriptors and retrieval models, and are thus able to capture underlying relationships between modalities, samples, and the collection itself. The solution is based on encoding multiple ranks for a query (or test sample), defined according to different criteria, into a graph. We then project the generated graph into an induced vector space, creating fusion vectors, targeting broader generality and efficiency. A fusion vector estimator is then built to infer whether a multimodal input object belongs to a class or not. Our method yields a fusion model that outperforms early-fusion and late-fusion alternatives. Experiments on multiple multimodal and visual datasets, with several descriptors and retrieval models, demonstrate that our learning model is highly effective for different prediction scenarios involving visual, textual, and multimodal features, yielding better effectiveness than state-of-the-art methods.
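
    The core encoding step, turning several ranked lists into a single fusion vector, can be sketched as follows; the reciprocal-rank edge weighting and the flattened-adjacency projection are simplifying assumptions for illustration, not the paper's exact formulation.

        # Illustrative rank-fusion encoding: ranked lists -> graph -> vector.
        import numpy as np

        def fusion_vector(rank_lists, n_items, top_k=10):
            """rank_lists: one ranked list of item ids per descriptor/retriever."""
            A = np.zeros((n_items, n_items))
            for ranks in rank_lists:
                top = ranks[:top_k]
                for i, u in enumerate(top):
                    for j, v in enumerate(top):
                        if u != v:
                            # items ranked high together get stronger edges
                            A[u, v] += 1.0 / ((i + 1) * (j + 1))
            return A.flatten()   # projection of the fusion graph into a vector

        # Fusion vectors from labelled queries can then train any standard
        # estimator to decide whether a test sample belongs to a class.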