
    Real-time quality assessment of videos from body-worn cameras

    Videos captured with body-worn cameras may be affected by distortions such as motion blur, overexposure and reduced contrast. Automated video quality assessment is therefore important prior to auto-tagging, event or object recognition, or automated editing. In this paper, we present M-BRISQUE, a spatial quality evaluator that combines, in real-time, the Michelson contrast with features from the Blind/Referenceless Image Spatial QUality Evaluator. To link the resulting quality score to human judgement, we train a Support Vector Regressor with a Radial Basis Function kernel on the Computational and Subjective Image Quality database. We show an example application of M-BRISQUE in automatic editing of multi-camera content using relative view quality, and validate its predictive performance with a subjective evaluation and two public datasets.
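    The combination described in this abstract can be sketched as follows. This is a minimal, hypothetical sketch: the 36-dimensional BRISQUE feature extractor and the CSIQ subjective scores are replaced by random placeholders, keeping only the Michelson contrast computation and the SVR-with-RBF regression step concrete.

```python
import numpy as np
from sklearn.svm import SVR

def michelson_contrast(luma: np.ndarray) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(luma.max()), float(luma.min())
    if lmax + lmin == 0.0:
        return 0.0
    return (lmax - lmin) / (lmax + lmin)

rng = np.random.default_rng(0)
# Placeholder training set: 36 "BRISQUE-like" features + contrast per frame,
# with stand-in subjective scores (the real model trains on the CSIQ database).
X = rng.normal(size=(100, 37))
y = rng.uniform(0.0, 100.0, size=100)
model = SVR(kernel="rbf").fit(X, y)

# Score one synthetic frame: append its Michelson contrast to the feature vector.
frame_luma = rng.uniform(0.0, 255.0, size=(48, 64))
features = np.append(rng.normal(size=36), michelson_contrast(frame_luma))
score = float(model.predict(features[None, :])[0])
```

    In a multi-camera editing setting, such per-frame scores could then be compared across views to pick the relatively best feed at each moment.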

    Analyzing the Use of Camera Glasses in the Wild

    Camera glasses enable people to capture point-of-view videos using a common accessory, hands-free. In this paper, we investigate how, when, and why people used one such product: Spectacles. We conducted 39 semi-structured interviews and surveys with 191 owners of Spectacles. We found that the form factor elicits sustained usage behaviors, and opens opportunities for new use-cases and types of content captured. We provide a usage typology, and highlight societal and individual factors that influence the classification of behaviors. Comment: In Proceedings of the 37th Annual ACM Conference on Human Factors in Computing Systems (CHI 2019). ACM, New York, NY, US

    Watching the Watchmen: Best Practices for Police Body Cameras

    This paper examines the research on the costs and benefits of police body cameras, arguing that the devices can, if properly deployed and regulated, provide a valuable disincentive to police abuses as well as valuable evidence for punishing abuses when they occur.

    Applying psychological science to the CCTV review process: a review of cognitive and ergonomic literature

    As CCTV cameras are used increasingly to improve security in communities, police are spending a larger proportion of their resources, including time, in processing CCTV images when investigating crimes that have occurred (Levesley & Martin, 2005; Nichols, 2001). As with all tasks, there are ways to approach this task that will facilitate performance and other approaches that will degrade performance, either by increasing errors or by unnecessarily prolonging the process. A clearer understanding of the psychological factors influencing the effectiveness of footage review will facilitate future training in best practice with respect to the review of CCTV footage. The goal of this report is to provide such understanding by reviewing research on footage review, research on related tasks that require similar skills, and experimental laboratory research about the cognitive skills underpinning the task. The report is organised to address five challenges to the effectiveness of CCTV review: the effects of the degraded nature of CCTV footage, distractions and interruptions, the length of the task, inappropriate mindset, and variability in people’s abilities and experience. Recommendations for optimising CCTV footage review include (1) doing a cognitive task analysis to increase understanding of the ways in which performance might be limited, (2) exploiting technology advances to maximise the perceptual quality of the footage, (3) training people to improve the flexibility of their mindset as they perceive and interpret the images seen, (4) monitoring performance either on an ongoing basis, by using psychophysiological measures of alertness, or periodically, by testing screeners’ ability to find evidence in footage developed for such testing, and (5) evaluating the relevance of possible selection tests to distinguish effective from ineffective screeners.

    Improving Sequential Determinantal Point Processes for Supervised Video Summarization

    It is now easier than ever to produce videos. While ubiquitous video data is a great source for information discovery and extraction, the computational challenges are unparalleled. Automatically summarizing videos has become a substantial need for browsing, searching, and indexing visual content. This paper is in the vein of supervised video summarization using the sequential determinantal point process (SeqDPP), which models diversity by a probabilistic distribution. We improve this model in two ways. In terms of learning, we propose a large-margin algorithm to address the exposure bias problem in SeqDPP. In terms of modeling, we design a new probabilistic distribution such that, when it is integrated into SeqDPP, the resulting model accepts user input about the expected length of the summary. Moreover, we also significantly extend a popular video summarization dataset by 1) more egocentric videos, 2) dense user annotations, and 3) a refined evaluation scheme. We conduct extensive experiments on this dataset (about 60 hours of videos in total) and compare our approach to several competitive baselines.
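    The diversity modeling this abstract refers to rests on determinantal point processes, which score a subset of items by the determinant of a similarity kernel, so that subsets of mutually dissimilar items get higher probability. A minimal NumPy sketch of the underlying L-ensemble probability follows; the sequential conditioning of SeqDPP and the paper's large-margin training are not shown, and the toy kernel is an assumption for illustration.

```python
import numpy as np

def dpp_prob(L: np.ndarray, subset: list) -> float:
    """L-ensemble DPP: P(Y) = det(L_Y) / det(L + I)."""
    L_Y = L[np.ix_(subset, subset)]
    return float(np.linalg.det(L_Y) / np.linalg.det(L + np.eye(len(L))))

# Toy kernel from frame feature vectors (rows): L[i, j] = <v_i, v_j>.
# Frames 0 and 1 are near-duplicates; frame 2 is visually distinct.
V = np.array([[1.00, 0.00],
              [0.98, 0.20],
              [0.00, 1.00]])
L = V @ V.T

p_diverse = dpp_prob(L, [0, 2])    # dissimilar pair
p_redundant = dpp_prob(L, [0, 1])  # near-duplicate pair
# The DPP assigns higher probability to the diverse two-frame summary,
# which is why DPPs suit summarization: redundant selections are penalized.
```

    In a supervised summarizer, the kernel entries would come from learned frame features rather than this hand-built example.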