
    At the foot of the grave: challenging collective memories of violence in post-Franco Spain

    Understanding the development and meaning of collective memory is a central interest for sociologists. One aspect of this literature focuses on the processes that social movement actors use to introduce long-silenced counter-memories of violence and supplant the “official” memory. To examine this, I draw on 15 months of ethnographic observation with the Spanish Association for the Recovery of Historical Memory (ARMH) and on 200 informal and 30 formal interviews with locals and activists. This paper demonstrates that, during forensic classes given at mass grave exhumations, ARMH activists use multiple tactics (depoliticized science framing, action-oriented objects, and embodiment) to deliver a counter-memory of the Spanish Civil War and the Franco regime and to make moral and transitional justice claims. It also shows how victims’ remains and the personal objects found in the graves evoke meanings that emotionally connect those listening to the classes to the victims and to the ARMH’s counter-memory.

    A conceptual model for the development of CSCW systems

    Models and theories concerning cooperation have long been recognised as an important aid in the development of Computer Supported Cooperative Work (CSCW) systems. However, there is no consensus regarding the set of concepts and abstractions that should underlie such models and theories, and common patterns are hard to discern across them. This paper analyses a number of existing models and theories and proposes a generic conceptual framework based on their strengths and commonalities. We analyse five developments, viz., Coordination Theory, Activity Theory, the Task Manager model, Action/Interaction Theory and the Object-Oriented Activity Support model, and propose a generic model based on four key concepts common to all of them, viz., activity, actor, information and service.
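    As a rough illustration only (not taken from the paper), the four shared concepts the abstract names, activity, actor, information and service, could be sketched as a minimal data model; all class and field names below are hypothetical.

    ```python
    # Minimal sketch, assuming the four shared CSCW concepts can be expressed
    # as a simple data model. Names are illustrative, not from the paper.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Actor:
        """A participant (human or software agent) in cooperative work."""
        name: str


    @dataclass
    class Information:
        """A shared artefact or piece of data used within an activity."""
        description: str


    @dataclass
    class Service:
        """A capability (e.g. messaging, shared editing) supporting an activity."""
        name: str


    @dataclass
    class Activity:
        """A unit of cooperative work tying actors, information and services together."""
        name: str
        actors: List[Actor] = field(default_factory=list)
        information: List[Information] = field(default_factory=list)
        services: List[Service] = field(default_factory=list)


    # Example: a shared document review activity.
    review = Activity(
        name="document review",
        actors=[Actor("author"), Actor("reviewer")],
        information=[Information("draft manuscript")],
        services=[Service("shared annotation")],
    )
    ```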

    AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions

    This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels, with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations, with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) the use of movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon current state-of-the-art methods and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for new approaches to video understanding. Comment: To appear in CVPR 2018. Check dataset page https://research.google.com/ava/ for details.
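    As a hypothetical sketch (not the released AVA annotation format), a spatio-temporally localized annotation of the kind the abstract describes, a person box at a keyframe carrying possibly several atomic action labels and a person identity linking consecutive segments, might look like this; all field names and values are illustrative.

    ```python
    # Hedged sketch of one AVA-style annotation record; format is assumed,
    # not taken from the official release.
    from dataclasses import dataclass
    from typing import List, Tuple


    @dataclass
    class PersonAnnotation:
        video_id: str                                 # source movie clip (illustrative id)
        timestamp_s: float                            # time of the annotated keyframe, seconds
        box_xyxy: Tuple[float, float, float, float]   # normalized person bounding box
        actions: List[str]                            # atomic action labels; multiple per person
        person_id: int                                # links the same person across segments


    ann = PersonAnnotation(
        video_id="example_clip",
        timestamp_s=902.0,
        box_xyxy=(0.12, 0.20, 0.45, 0.95),
        actions=["stand", "talk to"],   # example labels, not necessarily verbatim AVA classes
        person_id=3,
    )
    print(ann)
    ```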

    Action Tubelet Detector for Spatio-Temporal Action Localization

    Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector), which takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework: convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from the whole sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector is explained by both more accurate scores and more precise localization. Our ACT-detector outperforms state-of-the-art methods in frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds. Comment: 9 pages.
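    As a hedged sketch of the core idea only (not the authors' implementation), the snippet below mimics the temporal stacking of per-frame convolutional features and the per-anchor-cuboid output of one score plus per-frame box regressions that together form a tubelet; all shapes, names and values are illustrative placeholders.

    ```python
    # Sketch of temporal feature stacking and tubelet output, assuming
    # illustrative shapes; real features would come from an SSD-style backbone.
    import numpy as np

    K = 6                    # number of input frames in the sequence (assumed)
    C, H, W = 256, 38, 38    # per-frame feature map size (assumed)

    # Per-frame features from a shared convolutional backbone (placeholder values).
    per_frame_feats = [np.random.randn(C, H, W).astype(np.float32) for _ in range(K)]

    # Temporal stacking: concatenate the K feature maps along the channel dimension.
    stacked = np.concatenate(per_frame_feats, axis=0)   # shape (K * C, H, W)

    # For each anchor cuboid (a fixed box repeated over the K frames), the detector
    # predicts a single classification score and K sets of box offsets.
    num_anchors = 4
    scores = np.random.rand(num_anchors, H, W)               # one score per cuboid
    box_offsets = np.random.randn(num_anchors, K, 4, H, W)   # per-frame box refinements

    # A "tubelet" at one spatial location is the K refined boxes plus that score.
    anchor, y, x = 0, 10, 15
    tubelet_boxes = box_offsets[anchor, :, :, y, x]   # shape (K, 4): one box per frame
    tubelet_score = scores[anchor, y, x]
    print(stacked.shape, tubelet_boxes.shape, float(tubelet_score))
    ```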
