240 research outputs found

    50 Salads

    Get PDF
    Activity recognition research has shifted focus from distinguishing full-body motion patterns to recognizing complex interactions between multiple entities. Manipulative gestures - characterized by interactions between hands, tools, and manipulable objects - frequently occur in food preparation, manufacturing, and assembly tasks, and have a variety of applications including situational support, automated supervision, and skill assessment. With the aim of stimulating research on recognizing manipulative gestures, we introduce the 50 Salads dataset. It captures 25 people preparing 2 mixed salads each and contains over 4 hours of annotated accelerometer and RGB-D video data. With its detailed annotations, multiple sensor types, and two sequences per participant, the 50 Salads dataset may be used for research in areas such as activity recognition, activity spotting, sequence analysis, progress tracking, sensor fusion, transfer learning, and user adaptation. The data is made available under a CC BY-NC-SA Creative Commons license: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
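
    As a rough sketch of how such a dataset might be consumed, the Python below iterates over the 50 sequences (25 participants, 2 salads each) and loads synchronized accelerometer data with segment-level activity annotations. The directory layout, file names, and annotation format are assumptions made for illustration, not the dataset's actual distribution format.

        import os
        import numpy as np

        ROOT = "50salads"  # assumed root directory name, for illustration only

        def load_sequence(base):
            """Load one annotated sequence; file names and formats are assumed."""
            # assumed: one CSV of synchronized accelerometer samples
            accel = np.loadtxt(os.path.join(base, "accelerometer.csv"), delimiter=",")
            with open(os.path.join(base, "annotations.txt")) as f:
                # assumed line format: start_frame end_frame activity_label
                labels = [line.split() for line in f]
            return accel, labels

        # 25 participants x 2 mixed salads each = 50 sequences
        for p in range(1, 26):
            for s in (1, 2):
                base = os.path.join(ROOT, f"p{p:02d}_s{s}")
                if os.path.isdir(base):  # skip sequences that are not on disk
                    accel, labels = load_sequence(base)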

    Changing users' security behaviour towards security questions: A game based learning approach

    Full text link
    Fallback authentication is used to retrieve forgotten passwords. Security questions are one of the main techniques used to conduct fallback authentication. In this paper, we propose a serious game design that uses system-generated security questions with the aim of improving the usability of fallback authentication. For this purpose, we adopted the popular picture-based "4 Pics 1 Word" mobile game, selected for its use of pictures and cues, which previous psychology research found to be crucial for memorability. The game asks users to pick the word that relates to the given pictures. We then customized the game by adding features that help maximize the following memory retrieval skills: (a) verbal cues - by providing hints with verbal descriptions; (b) spatial cues - by maintaining the same order of pictures; (c) graphical cues - by showing 4 images for each challenge; and (d) interactivity, through the engaging nature of the game.
    Comment: 6 pages, Military Communications and Information Systems Conference (MilCIS), 2017. arXiv admin note: substantial text overlap with arXiv:1707.0807
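
    As a minimal sketch of how one such challenge could be represented (all field names and the example below are hypothetical, not the authors' implementation), note how each cue maps to a field: a fixed tuple preserves picture order for the spatial cue, the four images supply the graphical cue, and the hint provides the verbal cue.

        from dataclasses import dataclass

        @dataclass
        class Challenge:
            pictures: tuple     # graphical cue: exactly 4 images per challenge;
                                # a tuple keeps their order fixed (spatial cue)
            verbal_hint: str    # verbal cue: description shown on demand
            answer: str         # system-generated word the pictures relate to

            def check(self, guess: str) -> bool:
                # normalize so minor typing differences do not fail the user
                return guess.strip().lower() == self.answer.lower()

        c = Challenge(pictures=("img1.png", "img2.png", "img3.png", "img4.png"),
                      verbal_hint="something you toss in a bowl",
                      answer="salad")
        assert c.check(" Salad ")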

    Towards Structured Analysis of Broadcast Badminton Videos

    Full text link
    Sports video data is recorded for nearly every major tournament but remains archived and inaccessible to large-scale data mining and analytics. It can only be viewed sequentially or manually tagged with higher-level labels, which is time-consuming and error-prone. In this work, we propose an end-to-end framework for automatic attribute tagging and analysis of sports videos. We use commonly available broadcast videos of matches and, unlike previous approaches, do not rely on special camera setups or additional sensors. Our focus is on badminton as the sport of interest. We propose a method to analyze a large corpus of badminton broadcast videos by segmenting the points played, tracking and recognizing the players in each point, and annotating their respective badminton strokes. We evaluate the performance on 10 Olympic matches with 20 players and achieve 95.44% point segmentation accuracy, a 97.38% player detection score (mAP@0.5), 97.98% player identification accuracy, and a stroke segmentation edit score of 80.48%. We further show that the automatically annotated videos alone enable gameplay analysis and inference through understandable metrics such as a player's reaction time, speed, and footwork around the court.
    Comment: 9 pages
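
    As a toy example of the kind of gameplay metric such annotations enable, the sketch below derives a per-frame speed profile from a player's (x, y) court-coordinate track. The frame rate, the synthetic track, and the metric definition are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        FPS = 25.0  # assumed broadcast frame rate

        def speed_profile(track_xy, fps=FPS):
            """Per-frame speed (court units/s) from an (N, 2) position track."""
            return np.linalg.norm(np.diff(track_xy, axis=0), axis=1) * fps

        # synthetic 10-second track standing in for a tracked player
        track = np.cumsum(np.random.randn(250, 2) * 0.02, axis=0)
        speeds = speed_profile(track)
        print(f"mean speed: {speeds.mean():.3f}, peak: {speeds.max():.3f}")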

    Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation

    Full text link
    Joint segmentation and classification of fine-grained actions is important for applications in human-robot interaction, video surveillance, and human skill evaluation. However, despite substantial recent progress in large-scale action classification, the performance of state-of-the-art fine-grained action recognition approaches remains low. We propose a model for action segmentation which combines low-level spatiotemporal features with a high-level segmental classifier. Our spatiotemporal CNN comprises a spatial component that uses convolutional filters to capture information about objects and their relationships, and a temporal component that uses large 1D convolutional filters to capture information about how object relationships change across time. These features are used in tandem with a semi-Markov model that models transitions from one action to another. We introduce an efficient constrained segmental inference algorithm for this model that is orders of magnitude faster than the current approach. We highlight the effectiveness of our Segmental Spatiotemporal CNN on cooking and surgical action datasets, where we observe substantially improved performance relative to recent baseline methods.
    Comment: Updated from the ECCV 2016 version. We fixed an important mathematical error and made the section on segmental inference clearer.
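
    For intuition about the segmental model, here is a compact semi-Markov (segmental) Viterbi decoder: it chooses a sequence of variable-length, single-action segments maximizing summed frame scores plus action-transition scores. This is the straightforward O(T*L*C^2) dynamic program, not the paper's faster constrained inference algorithm, and all names are illustrative.

        import numpy as np

        def segmental_viterbi(frame_scores, trans, max_len):
            """frame_scores: (T, C) per-frame class scores;
            trans: (C, C) score for moving from class c' to class c
                   (set the diagonal to -inf to forbid self-transitions);
            max_len: maximum segment duration in frames.
            Returns a length-T array of per-frame labels."""
            T, C = frame_scores.shape
            cum = np.vstack([np.zeros(C), np.cumsum(frame_scores, axis=0)])
            best = np.full((T + 1, C), -np.inf)
            back = np.zeros((T + 1, C, 2), dtype=int)  # (segment start, prev class)
            for t in range(1, T + 1):
                for d in range(1, min(max_len, t) + 1):
                    seg = cum[t] - cum[t - d]            # score of frames [t-d, t)
                    if t - d == 0:                       # first segment: no transition
                        cand, prev_c = seg, np.full(C, -1)
                    else:
                        s = best[t - d][:, None] + trans # (C_prev, C)
                        prev_c = s.argmax(axis=0)
                        cand = seg + s.max(axis=0)
                    upd = cand > best[t]
                    best[t][upd] = cand[upd]
                    back[t][upd] = np.stack([np.full(C, t - d), prev_c], axis=1)[upd]
            labels = np.empty(T, dtype=int)              # backtrace the best path
            t, c = T, int(best[T].argmax())
            while t > 0:
                start, prev_c = back[t, c]
                labels[start:t] = c
                t, c = int(start), int(prev_c)
            return labels

        # toy usage: 100 frames, 4 action classes, segments capped at 20 frames
        scores = np.random.randn(100, 4)
        A = np.zeros((4, 4)); np.fill_diagonal(A, -np.inf)
        print(segmental_viterbi(scores, A, max_len=20))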