
    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Towards Semantic Fast-Forward and Stabilized Egocentric Videos

    The emergence of low-cost personal mobile devices and wearable cameras, together with the increasing storage capacity of video-sharing websites, has driven a growing interest in first-person videos. Since most recorded videos are long-running streams of unedited content, they are tedious and unpleasant to watch. State-of-the-art fast-forward methods face the challenge of balancing the smoothness of the video against the emphasis on relevant frames for a given speed-up rate. In this work, we present a methodology capable of summarizing and stabilizing egocentric videos by extracting semantic information from the frames. This paper also describes a dataset collection with several semantically labeled videos and introduces a new smoothness evaluation metric for egocentric videos that is used to test our method.
    Comment: Accepted for publication and presented at the First International Workshop on Egocentric Perception, Interaction and Computing at the European Conference on Computer Vision (EPIC@ECCV) 201
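    The core idea of semantic fast-forwarding can be illustrated with a short sketch: each frame gets a relevance score (e.g. from a face or object detector), and the frame-skipping rate is lowered where scores are high so that relevant segments play closer to real time. The sketch below is an illustrative approximation under these assumptions, not the authors' method; the function name, the score source and the skip formula are all hypothetical.

    ```python
    from typing import List, Sequence

    def semantic_fast_forward(
        scores: Sequence[float],
        target_speedup: float = 10.0,
        min_skip: int = 1,
    ) -> List[int]:
        """Pick frame indices so that high-score (semantically relevant)
        regions are sampled densely and low-score regions are skipped.

        `scores` holds one relevance value per frame in [0, 1]; the skip
        between consecutive selected frames shrinks as the score grows.
        """
        max_skip = max(int(round(2 * target_speedup)), min_skip + 1)
        selected = []
        i = 0
        while i < len(scores):
            selected.append(i)
            # High semantic score -> small skip (close to real time);
            # low score -> skip up to roughly 2x the target speed-up rate.
            skip = max_skip - scores[i] * (max_skip - min_skip)
            i += max(min_skip, int(round(skip)))
        return selected

    # Toy score profile: the middle of the video is semantically relevant.
    scores = [0.0] * 100 + [0.9] * 50 + [0.1] * 100
    keep = semantic_fast_forward(scores, target_speedup=10.0)
    print(f"kept {len(keep)} of {len(scores)} frames")
    ```

    A real system would additionally penalize large camera displacements between selected frames to keep the output stable, which is the other half of the trade-off the abstract describes.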

    Summarizing First-Person Videos from Third Persons' Points of Views

    Video highlighting and summarization are topics of interest in computer vision that benefit a variety of applications such as viewing, searching, and storage. However, most existing studies rely on training data from third-person videos, and the resulting models do not easily generalize to highlighting first-person videos. With the goal of deriving an effective model to summarize first-person videos, we propose a novel deep neural network architecture for describing and discriminating vital spatiotemporal information across videos with different points of view. Our proposed model is realized in a semi-supervised setting, in which fully annotated third-person videos, unlabeled first-person videos, and a small number of annotated first-person videos are presented during training. In our experiments, qualitative and quantitative evaluations on both benchmarks and our collected first-person video datasets are presented.
    Comment: 16+10 pages, ECCV 201
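    One common way to set up this kind of semi-supervised cross-view training is to combine a supervised highlight loss on the annotated data with an adversarial domain term (gradient reversal) that exploits the unlabeled first-person videos. The sketch below shows that general pattern only; it is not the architecture from the paper, and the module names, feature dimension and loss weight are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass, sign-flipped gradient in the backward
        pass, so the encoder learns features the domain head cannot separate."""
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    class CrossViewScorer(nn.Module):
        """Toy scorer: per-segment feature vector -> highlight score, plus a
        domain head that tries to tell first- from third-person segments."""
        def __init__(self, feat_dim: int = 512):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
            self.score_head = nn.Linear(128, 1)
            self.domain_head = nn.Linear(128, 1)

        def forward(self, x):
            z = self.encoder(x)
            score = self.score_head(z).squeeze(-1)
            domain = self.domain_head(GradReverse.apply(z)).squeeze(-1)
            return score, domain

    def semi_supervised_loss(model, third_x, third_y, first_x, first_y, first_unlab_x):
        """(a) supervised loss on annotated third-person segments,
        (b) supervised loss on the few annotated first-person segments,
        (c) adversarial domain term over all segments, including the
            unlabelled first-person ones, encouraging view-invariant features."""
        s3, d3 = model(third_x)
        s1, d1 = model(first_x)
        _, du = model(first_unlab_x)
        supervised = (F.binary_cross_entropy_with_logits(s3, third_y)
                      + F.binary_cross_entropy_with_logits(s1, first_y))
        domain_logits = torch.cat([d3, d1, du])
        domain_labels = torch.cat([torch.ones_like(d3),      # 1 = third-person
                                   torch.zeros_like(d1),     # 0 = first-person
                                   torch.zeros_like(du)])
        domain = F.binary_cross_entropy_with_logits(domain_logits, domain_labels)
        return supervised + 0.1 * domain

    # Toy usage with random features standing in for video segment descriptors.
    model = CrossViewScorer()
    loss = semi_supervised_loss(
        model,
        torch.randn(32, 512), torch.randint(0, 2, (32,)).float(),  # third-person, labelled
        torch.randn(4, 512),  torch.randint(0, 2, (4,)).float(),   # first-person, few labels
        torch.randn(64, 512),                                      # first-person, unlabelled
    )
    loss.backward()
    ```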

    What do people want from their lifelogs?

    The practice of lifelogging potentially consists of automatically capturing and storing a digital record of every piece of information that a person (lifelogger) encounters in their daily experiences. Lifelogging has become an increasingly popular area of research in recent years. Most current lifelogging research focuses on techniques for data capture or processing. Current applications of lifelogging technology are usually driven by new technology inventions, creative ideas of researchers, or the special needs of a particular user group, e.g. individuals with memory impairment. To the best of our knowledge, little work has explored potential lifelog applications from the perspective of the desires of the general public. One difficulty in carrying out such a study is balancing the information given to subjects about lifelog technology: enough to enable them to generate realistic ideas, but not so much specific information that it limits or directs their imaginations. We report a study taking a progressive approach, in which we introduce lifelogging in three stages and collect the ideas and opinions of a volunteer group of general-public participants on techniques for lifelog capture, and on applications and functionality.

    Video summarisation: A conceptual framework and survey of the state of the art

    Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means of surveying that literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user-based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and to provide video summaries that have greater relevance to individual users.
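    The two axes of this framework (technique category and summary type) can be written down as a small data model. The sketch below is only one possible encoding; the class and field names are illustrative assumptions, not taken from the paper.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Technique(Enum):
        # How the summarisation technique sources its information.
        INTERNAL = "analyses information sourced directly from the video stream"
        EXTERNAL = "analyses information not sourced from the video stream"
        HYBRID = "combines internal and external information"

    class ContentBasis(Enum):
        # What kind of content the resulting summary is derived from.
        OBJECT = "object"
        EVENT = "event"
        PERCEPTION = "perception"
        FEATURE = "feature"

    @dataclass
    class VideoSummary:
        technique: Technique
        content_basis: ContentBasis
        interactive: bool   # interactive vs. static consumption
        personalised: bool  # personalised vs. generic

    # Example: an event-based summary built from hybrid analysis,
    # consumed as a static, personalised highlight reel.
    example = VideoSummary(Technique.HYBRID, ContentBasis.EVENT,
                           interactive=False, personalised=True)
    print(example)
    ```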

    Combining face detection and novelty to identify important events in a visual lifelog

    The SenseCam is a passively capturing wearable camera, worn around the neck, which takes an average of almost 2,000 images per day, equating to over 650,000 images per year. It is used to create a personal lifelog or visual recording of the wearer’s life and generates information which can be helpful as a human memory aid. For such a large amount of visual information to be of any use, it is accepted that it should be structured into “events”, of which there are about 8,000 in a wearer’s average year. When automatically segmenting SenseCam images into events, it is desirable to emphasise the more important events and de-emphasise mundane or routine ones. This paper introduces the concept of novelty to help determine the importance of events in a lifelog. By combining novelty with face-to-face conversation detection, our system improves on previous approaches. In our experiments we use a large set of lifelog images: a total of 288,479 images collected by 6 users over a time period of one month each.
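    The general idea of ranking events by novelty plus face evidence can be sketched in a few lines: an event is novel if it looks unlike anything seen before it, and the presence of faces is used as a proxy for social interaction. This is a minimal illustration under those assumptions, not the paper's system; the feature representation, the cosine-distance novelty measure and the blending weight are all hypothetical.

    ```python
    import numpy as np

    def event_importance(event_feats: np.ndarray,
                         face_counts: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
        """Score lifelog events by combining novelty with face evidence.

        event_feats : (n_events, d) visual feature vector per event, in time order
        face_counts : (n_events,) number of faces detected in each event
        Novelty of event i is its minimum cosine similarity gap to all earlier
        events, so routine, often-repeated scenes score low and unusual ones high.
        """
        norms = np.linalg.norm(event_feats, axis=1, keepdims=True) + 1e-8
        unit = event_feats / norms
        sims = unit @ unit.T
        novelty = np.ones(len(event_feats))   # first event has no history: fully novel
        for i in range(1, len(event_feats)):
            novelty[i] = 1.0 - sims[i, :i].max()
        # Normalise face evidence to [0, 1] and blend the two cues.
        faces = face_counts / (face_counts.max() + 1e-8)
        return alpha * novelty + (1 - alpha) * faces

    # Toy example: 5 events, the 4th is visually unusual and has people in it.
    feats = np.array([[1, 0], [1, 0.1], [0.9, 0], [0, 1], [1, 0]], dtype=float)
    faces = np.array([0, 0, 1, 3, 0], dtype=float)
    print(event_importance(feats, faces).round(2))
    ```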