
    A qualitative enquiry into OpenStreetMap making

    Based on a case study of the OpenStreetMap community, this paper provides a contextual and embodied understanding of the user-led, user-participatory and user-generated produsage phenomenon. It employs Grounded Theory, Social Worlds Theory, and qualitative methods to illuminate and explore the produsage processes of OpenStreetMap making, and how knowledge artefacts such as maps can be collectively and collaboratively produced by a community of people who are situated in different places around the world but engaged with the same repertoire of mapping practices. The empirical data illustrate that OpenStreetMap itself acts as a boundary object that enables actors from different social worlds to co-produce the Map by interacting with each other and negotiating the meanings of mapping, the mapping data and the Map itself. The discourses also show that, unlike traditional maps that black-box cartographic knowledge and offer a single dominant perspective of cities or places, OpenStreetMap is an embodied epistemic object that embraces different world views. The paper also explores how contributors build their identity as an OpenStreetMapper alongside the other identities they hold. Understanding this identity-building process helps to understand mapping as an embodied activity with emotional, cognitive and social repertoires.

    Learning joint feature adaptation for zero-shot recognition

    Zero-shot recognition (ZSR) aims to recognize target-domain data instances of unseen classes based on models learned from associated pairs of seen-class source- and target-domain data. One of the key challenges in ZSR is the relative scarcity of source-domain features (e.g. one feature vector per class), which do not fully account for the wide variability in target-domain instances. In this paper we propose a novel framework for learning data-dependent feature transforms that score the similarity between an arbitrary pair of source and target data instances, so as to account for this variability. Our approach optimizes over a parameterized family of local feature displacements that maximize source-target adaptive similarity functions. Accordingly, we formulate zero-shot learning (ZSL) using latent structural SVMs to learn these similarity functions from training data. As a demonstration, we design a specific algorithm under the proposed framework involving bilinear similarity functions and regularized least squares as penalties for feature displacement. We test our approach on several benchmark datasets for ZSR and show significant improvement over the state of the art. For instance, on the aP&Y dataset we achieve 80.89% recognition accuracy, outperforming the state of the art by 11.15%.
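    The bilinear-similarity instantiation described above admits a simple closed form for the inner optimization over feature displacements. Below is a minimal NumPy sketch, assuming a score of the form max_d s^T W (t + d) - λ‖d‖²; the variable names, toy prototypes, and the single matrix W are illustrative only and do not reproduce the paper's full latent structural SVM training.

```python
import numpy as np

def adaptive_similarity(s, t, W, lam=1.0):
    """Bilinear similarity with a regularized least-squares penalty on the
    displacement d of the target feature:
        score(s, t) = max_d  s^T W (t + d) - lam * ||d||^2
    The inner problem is a concave quadratic in d, so the optimal
    displacement has the closed form d* = (W^T s) / (2 * lam).
    """
    d_star = (W.T @ s) / (2.0 * lam)   # optimal local displacement
    return float(s @ W @ (t + d_star)), d_star

def zero_shot_predict(target_x, class_prototypes, W, lam=1.0):
    """Assign the unseen class whose source prototype scores highest."""
    scores = {c: adaptive_similarity(p, target_x, W, lam)[0]
              for c, p in class_prototypes.items()}
    return max(scores, key=scores.get)

# Toy usage with random features (illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))            # 5-dim source attributes, 8-dim target features
prototypes = {"zebra": rng.normal(size=5), "whale": rng.normal(size=5)}
x = rng.normal(size=8)
print(zero_shot_predict(x, prototypes, W))
```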

    WELL: Applying Bug Detectors to Bug Localization via Weakly Supervised Learning

    Bug localization is a key software development task in which a developer locates the portion of the source code that must be modified based on a bug report. It is labor-intensive and time-consuming due to the increasing size and complexity of modern software. Effectively automating this task can greatly reduce costs by cutting down developers' effort. Researchers have already made efforts to harness the power of deep learning (DL) to automate bug localization. However, training DL models demands a large quantity of annotated training data, and buggy-location-annotated datasets of reasonable quality and quantity are difficult to collect. This is an obstacle to the effective use of DL for bug localization. We notice that data pairs for bug detection, which provide weak buggy-or-not binary classification supervision, are much easier to obtain. Inspired by weakly supervised learning, this paper proposes WEakly supervised bug LocaLization (WELL), an approach that transforms bug detectors into bug locators. Through a CodeBERT model fine-tuned on bug detection, WELL is able to locate bugs in a weakly supervised manner based on attention. Evaluations of WELL on three datasets show performance competitive with existing strongly supervised DL solutions. WELL even outperforms current SOTA models on the tasks of variable misuse and binary operator misuse. Comment: (Preprint) Software Engineering; Deep Learning; Bug Detection & Localization
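    The attention-based read-out that this abstract describes can be illustrated with a short sketch. The snippet below is an assumption-laden illustration, not WELL's released code: it loads the base microsoft/codebert-base checkpoint (a checkpoint actually fine-tuned for bug detection would be swapped in), classifies a snippet as buggy or not, and ranks tokens by the last layer's [CLS] attention as weak localization evidence.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/codebert-base"   # swap in bug-detection fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=2, output_attentions=True)
model.eval()

def localize(code: str, top_k: int = 5):
    """Rank code tokens by the [CLS] attention mass of the last layer."""
    enc = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    buggy_prob = out.logits.softmax(-1)[0, 1].item()
    # Last-layer attention has shape (batch, heads, seq, seq):
    # average over heads, then read the row of the [CLS] token (position 0).
    cls_attn = out.attentions[-1].mean(dim=1)[0, 0]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    ranked = sorted(zip(tokens, cls_attn.tolist()),
                    key=lambda p: p[1], reverse=True)
    return buggy_prob, ranked[:top_k]

prob, suspects = localize("def add(a, b):\n    return a - b")
print(f"P(buggy)={prob:.2f}", suspects)
```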

    Simple Model Also Works: A Novel Emotion Recognition Network in Textual Conversation Based on Curriculum Learning Strategy

    Emotion Recognition in Conversation (ERC) has emerged as a research hotspot in domains such as conversational robots and question-answering systems. How to efficiently and adequately retrieve contextual emotional cues has been one of the key challenges in the ERC task. Existing efforts do not fully model the context and employ complex network structures, resulting in excessive computational resource overhead without substantial performance improvement. In this paper, we propose a novel Emotion Recognition Network based on a Curriculum Learning strategy (ERNetCL). The proposed ERNetCL primarily consists of a Temporal Encoder (TE), a Spatial Encoder (SE), and a Curriculum Learning (CL) loss. We use the TE and SE to combine the strengths of previous methods in a simple manner and efficiently capture temporal and spatial contextual information in the conversation. To simulate the way humans learn a curriculum from easy to hard, we apply the idea of CL to the ERC task to progressively optimize the network parameters of ERNetCL. At the beginning of training, we assign lower learning weights to difficult samples; as the epochs increase, the learning weights for these samples are gradually raised. Extensive experiments on four datasets show that our proposed method is effective and significantly outperforms other baseline models. Comment: 12 pages, 9 figures
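    The curriculum-weighting schedule described for the CL loss can be sketched in a few lines. This is a minimal illustration assuming per-sample difficulty scores in [0, 1] are available (e.g. from a warm-up pass); ERNetCL's exact difficulty measure and schedule may differ.

```python
import torch
import torch.nn.functional as F

def curriculum_loss(logits, labels, difficulty, epoch, max_epochs):
    """Cross-entropy in which hard samples start down-weighted and are
    gradually raised to full weight as training progresses."""
    progress = min(epoch / max_epochs, 1.0)   # ramps from 0 to 1 over training
    # Easy samples (difficulty ~ 0) always get weight ~ 1; hard samples
    # (difficulty ~ 1) start near 0 and grow with training progress.
    weights = 1.0 - difficulty * (1.0 - progress)
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).mean()

# Toy usage: 4 utterances, 6 emotion classes.
logits = torch.randn(4, 6, requires_grad=True)
labels = torch.tensor([0, 3, 5, 2])
difficulty = torch.tensor([0.1, 0.9, 0.5, 0.7])
loss = curriculum_loss(logits, labels, difficulty, epoch=2, max_epochs=20)
loss.backward()
print(loss.item())
```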