2,084 research outputs found

    The Evolution of First Person Vision Methods: A Survey

    Full text link
    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine particular sets of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Automating the construction of scene classifiers for content-based video retrieval

    Get PDF
    This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, as an alternative to letting an image processing expert determine features a priori. We present results for experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
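
    As a rough illustration of this two-stage procedure, the sketch below classifies fixed-size patches with a first-stage classifier, builds a frequency vector of the predicted patch labels, and feeds it to a second-stage scene classifier. The patch size, scikit-learn models, and label scheme are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical two-stage scene classifier; patch size and model choices
# are illustrative assumptions, not the paper's implementation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

PATCH = 16  # assumed patch size in pixels

def extract_patches(image):
    """Split a grayscale image into non-overlapping PATCH x PATCH fragments."""
    h, w = image.shape
    return [image[y:y + PATCH, x:x + PATCH].ravel()
            for y in range(0, h - PATCH + 1, PATCH)
            for x in range(0, w - PATCH + 1, PATCH)]

# Stage 1: patch classifier, trained from user-indicated examples
# (assuming integer-coded patch labels such as 0 = sky, 1 = building, ...).
patch_clf = KNeighborsClassifier(n_neighbors=5)
# patch_clf.fit(patch_features, patch_labels)

def patch_histogram(image, n_patch_classes):
    """Frequency vector of stage-1 patch labels for one image."""
    labels = patch_clf.predict(np.array(extract_patches(image)))
    hist = np.bincount(labels, minlength=n_patch_classes).astype(float)
    return hist / hist.sum()

# Stage 2: global scene classifier (e.g. city, portrait, countryside)
# trained on the patch-frequency vectors.
scene_clf = LogisticRegression(max_iter=1000)
# scene_clf.fit(np.stack([patch_histogram(im, K) for im in imgs]), scene_labels)
```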

    Crowd Size Estimation and Detecting Social Distancing Using Raspberry PI and Opencv

    Get PDF
    During the COVID-19 pandemic, the number of people allowed to gather at public places and festivals has been restricted, and social distancing is practiced throughout the world. Managing a crowd is always a challenging task that requires some kind of monitoring technology. In this paper, we develop a device that detects people and provides a human count, and also detects people who are not maintaining social distancing. The work described here was implemented on a Raspberry Pi 3 board with OpenCV-Python. This method can effectively help manage crowds.
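
    A minimal sketch of this kind of pipeline in OpenCV-Python is shown below: the stock HOG people detector provides the head count, and pairwise centroid distances flag distancing violations. The pixel-distance threshold and camera setup are assumptions, not the paper's calibration.

```python
# Crowd count + social-distancing check; a sketch under stated assumptions.
import cv2
import numpy as np

MIN_PIXEL_DIST = 75  # assumed pixel distance standing in for ~2 m

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def analyse_frame(frame):
    # Detect people; detectMultiScale returns bounding boxes (x, y, w, h).
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    centroids = np.array([(x + w / 2, y + h / 2) for x, y, w, h in rects])
    count = len(rects)

    # Flag every pair of people closer than the threshold.
    violations = set()
    for i in range(count):
        for j in range(i + 1, count):
            if np.linalg.norm(centroids[i] - centroids[j]) < MIN_PIXEL_DIST:
                violations.update((i, j))
    return count, violations

cap = cv2.VideoCapture(0)  # Pi camera module or USB webcam
ok, frame = cap.read()
if ok:
    count, violations = analyse_frame(frame)
    print(f"people: {count}, distancing violations: {len(violations)}")
cap.release()
```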

    Human Detection and Tracking Using Hog Feature and Particle Filter

    Get PDF
    Video surveillance systems have recently attracted much attention in various fields for monitoring and ensuring security. One of their promising applications is crowd control to maintain general security in public places. However, a drawback of video surveillance systems is the continuous manual monitoring they require, especially for crime deterrence. To assist security officers in monitoring live surveillance systems, intelligent target detection and tracking techniques can send them a warning signal automatically. Towards this end, in this paper, we propose a method to detect and track a target person in a crowded area using the individual's features. In the proposed method, to realize automatic detection and tracking, we combine Histogram of Oriented Gradients (HOG) feature detection with a particle filter. The HOG feature describes the contour of the person, while the particle filter tracks the target using skin- and clothes-color features. We developed an evaluation system implementing the proposed method. The experimental results show a high detection rate and precise tracking of the specific target.
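
    The sketch below illustrates the detect-then-track idea under stated assumptions: OpenCV's default HOG people detector initializes the target, and a simple particle filter follows it by weighting particles with a color-histogram similarity (a stand-in for the paper's skin- and clothes-color features). Particle count, motion noise, and histogram settings are illustrative values, not the authors' parameters.

```python
# Illustrative detect-then-track sketch: HOG finds the person, a simple
# particle filter follows them via a color histogram of the detection.
# N_PARTICLES, NOISE and the histogram setup are assumed tuning values.
import cv2
import numpy as np

N_PARTICLES, NOISE = 200, 10.0

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def color_hist(frame, x, y, w, h):
    """Hue-saturation histogram of a region (skin/clothes color cue)."""
    roi = frame[int(y):int(y + h), int(x):int(x + w)]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def track(frames):
    """Yield an estimated (x, y) target position for each frame after the first."""
    # Detection step: take the first HOG hit in the first frame as the target.
    rects, _ = hog.detectMultiScale(frames[0], winStride=(8, 8))
    x, y, w, h = rects[0]
    target = color_hist(frames[0], x, y, w, h)
    particles = np.tile([float(x), float(y)], (N_PARTICLES, 1))

    for frame in frames[1:]:
        # Predict: random-walk motion model, kept inside the image bounds.
        particles += np.random.normal(0, NOISE, particles.shape)
        h_img, w_img = frame.shape[:2]
        particles[:, 0] = np.clip(particles[:, 0], 0, w_img - w - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, h_img - h - 1)
        # Update: weight each particle by color similarity to the target.
        weights = np.array([cv2.compareHist(
            target, color_hist(frame, px, py, w, h), cv2.HISTCMP_CORREL)
            for px, py in particles])
        weights = np.clip(weights, 1e-6, None)
        weights /= weights.sum()
        # Resample, then report the particle mean as the position estimate.
        particles = particles[np.random.choice(N_PARTICLES, N_PARTICLES, p=weights)]
        yield particles.mean(axis=0)
```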

    Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

    Full text link
    Wearable cameras stand out as one of the most promising devices for the upcoming years, and as a consequence, the demand for computer algorithms to automatically understand the videos they record is increasing quickly. Automatic understanding of these videos is not an easy task, and their mobile nature implies important challenges, such as changing light conditions and the unrestricted locations recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information regarding the light conditions and the location captured. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos. Comment: Submitted for publication
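
    A rough sketch of such a strategy, under stated assumptions, might compute a cheap global feature per frame, embed the frames with a non-linear manifold method (Isomap here), and cluster the embedding into unsupervised "contexts" that could switch between specialized hand detectors. The feature choice, manifold method, and parameters are illustrative, not necessarily the authors'.

```python
# Unsupervised context discovery from global features; all choices here
# (HSV histograms, Isomap, k-means, parameter values) are assumptions.
import cv2
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def global_feature(frame):
    """Cheap global descriptor: normalized HSV color histogram."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def learn_contexts(frames, n_contexts=4):
    """Embed frames on a low-dimensional manifold, then cluster into contexts."""
    feats = np.stack([global_feature(f) for f in frames])
    embedded = Isomap(n_neighbors=10, n_components=2).fit_transform(feats)
    return KMeans(n_clusters=n_contexts, n_init=10).fit(embedded).labels_

# A frame's context label could then select among per-context detectors:
# detector = hand_detectors[context_label]  # hypothetical switching mechanism
```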