    A Fuzzy Logic-Based System for Soccer Video Scenes Classification

    Massive global video surveillance captures data worldwide but lacks the detailed activity information needed to flag events of interest, while the human burden of monitoring video footage is untenable. Artificial intelligence (AI) can be applied to raw footage to identify and extract the required information and summarize it in linguistic form. Work on automated video summarization has usually involved text-based data such as subtitles, segmenting text and semantics, with little attention to summarization from the video footage alone. Classification problems in recorded video are often complex and uncertain owing to the dynamic nature of the video sequence: lighting conditions, background, camera angle, occlusions, indistinguishable scene features, etc. Video scene classification forms the basis of linguistic video summarization, an open research problem of major commercial importance. Soccer video presents added challenges: specific objects and events share similar features (e.g. "people" includes audiences, coaches, and players), and scenes are composed of quickly changing, dynamic frames with small inter-frame variations. There is a further difficulty in building lightweight video classification systems that work in real time on massive data volumes.

    In this thesis, we introduce a novel system based on an Interval Type-2 Fuzzy Logic Classification System (IT2FLCS) whose parameters are optimized by the Big Bang–Big Crunch (BB-BC) algorithm, enabling automatic scene classification with optimized rules in broadcast soccer match video. Type-2 fuzzy logic systems present a highly interpretable and transparent model, well suited to handling the uncertainties encountered in video footage and to converting the accumulated data into linguistic form that can be easily stored and analysed. Traditional black-box techniques, such as support vector machines (SVMs) and neural networks, do not provide models that can be easily analysed and understood by human users. BB-BC optimization is a heuristic, population-based evolutionary approach characterized by ease of implementation, fast convergence, and low computational cost. We employed BB-BC to optimize the system's fuzzy membership-function parameters and fuzzy rules. Using BB-BC we are able to balance system transparency (by generating a small rule set) with increased scene-classification accuracy. The proposed fuzzy-based system thus achieves relatively high classification accuracy with a small number of rules, increasing interpretability and permitting real-time processing.

    The Type-2 Fuzzy Logic Classification System (T2FLCS) obtained 87.57% prediction accuracy in scene classification on our test data, better than its type-1 fuzzy classification system and neural network counterparts. The BB-BC optimization decreases the size of the rule base in both the T1FLCS and the T2FLCS; with the reduced rules the T2FLCS achieved 85.716%, outperforming the T1FLCS and neural network counterparts, especially on out-of-range data, which validates the T2FLCS's capability to handle the high level of uncertainty faced. We also present a novel approach that combines the scene classification system with the dynamic time warping algorithm to detect video events in real-world processing. The proposed system can run on recorded or live video clips and output a label describing each event, providing a high-level summarization of the video to the user.
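    As an illustration of the optimization loop the thesis relies on, here is a minimal sketch of Big Bang–Big Crunch (BB-BC) search over a flat parameter vector (which could hold, e.g., concatenated membership-function parameters). The `fitness` callback, population size, and 1/k shrinkage schedule are illustrative assumptions rather than the thesis's implementation; in the actual system the fitness would score the classification accuracy of the fuzzy rule base.

```python
import numpy as np

# Minimal BB-BC sketch (assumptions: maximization, fitness-weighted
# centre of mass, search radius shrinking as 1/k). Not the thesis's code.
def bb_bc(fitness, lo, hi, pop_size=50, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    # Big Bang 0: uniform random population within the search bounds.
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best, best_fit = None, -np.inf
    for k in range(1, iters + 1):
        fit = np.array([fitness(x) for x in pop])
        i = int(np.argmax(fit))
        if fit[i] > best_fit:
            best, best_fit = pop[i].copy(), fit[i]
        # Big Crunch: collapse the population to its fitness-weighted
        # centre of mass (weights shifted to stay positive).
        w = fit - fit.min() + 1e-12
        centre = (w[:, None] * pop).sum(axis=0) / w.sum()
        # Big Bang k: re-explode around the centre with a radius that
        # shrinks with the iteration count, so the search converges.
        noise = rng.standard_normal((pop_size, dim)) * (hi - lo) / k
        pop = np.clip(centre + noise, lo, hi)
    return best, best_fit

# Toy usage: recover the maximizer of a smooth bump at (1, 2).
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
x, f = bb_bc(lambda x: -np.sum((x - [1.0, 2.0]) ** 2), lo, hi)
print(x, f)  # x should land close to [1, 2]
```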

    A Neural System for Automated CCTV Surveillance

    This paper overviews a new system, the "Owens Tracker," for automated identification of suspicious pedestrian activity in a car-park. Centralized CCTV systems relay multiple video streams to a central point for monitoring by an operator. The operator receives a continuous stream of information, mostly related to normal activity, making it difficult to maintain concentration at a sufficiently high level. While it is difficult to place quantitative boundaries on the number of scenes and the time period over which effective monitoring can be performed, Wallace and Diffley [1] give some guidance, based on empirical and anecdotal evidence, suggesting that the number of cameras monitored by an operator be no greater than 16, and that the period of effective monitoring may be as low as 30 minutes before recuperation is required. An intelligent video surveillance system should therefore act as a filter, censoring inactive scenes and scenes showing normal activity. By presenting the operator only with unusual activity, his/her attention is effectively focussed, and the ratio of cameras to operators can be increased. The Owens Tracker learns to recognize environment-specific normal behaviour and refers sequences of unusual behaviour for operator attention. The system was developed using standard low-resolution CCTV cameras operating in the car-parks of Doxford Park Industrial Estate (Sunderland, Tyne and Wear), and targets unusual pedestrian behaviour. The modus operandi of the system is to highlight excursions from a learned model of normal behaviour in the monitored scene. The system tracks objects and extracts their centroids; behaviour is defined as the trajectory traced by an object centroid, and normality as the trajectories typically encountered in the scene. The essential stages in the system are: segmentation of objects of interest; disambiguation and tracking of multiple contacts, including the handling of occlusion and noise and the successful tracking of objects that "merge" during motion; and identification of unusual trajectories. These three stages are discussed in more detail in the following sections, and the system performance is then evaluated.
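    The trajectory-novelty idea can be made concrete with a short sketch. Assuming trajectories have already been extracted as (x, y) centroid sequences, the code below resamples each trajectory to a fixed length and flags any whose mean point-wise distance to its nearest "normal" exemplar exceeds a threshold. The nearest-neighbour test and the pixel threshold are stand-in assumptions; the Owens Tracker's learned model of normal behaviour is more elaborate.

```python
import numpy as np

def resample(traj, n=32):
    """Resample an (x, y) trajectory to n points so trajectories of
    different lengths become comparable."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(0, len(traj) - 1, n)
    x = np.interp(t, np.arange(len(traj)), traj[:, 0])
    y = np.interp(t, np.arange(len(traj)), traj[:, 1])
    return np.stack([x, y], axis=1)

def is_unusual(traj, normal_trajs, threshold=25.0):
    """Flag a trajectory whose mean point-wise distance (pixels) to its
    nearest normal exemplar exceeds `threshold` (tuned per scene)."""
    q = resample(traj)
    dists = [np.mean(np.linalg.norm(q - resample(t), axis=1))
             for t in normal_trajs]
    return min(dists) > threshold

# Toy usage: a diagonal path far from two learned routes is flagged.
normal = [[(i, 10) for i in range(100)], [(10, i) for i in range(100)]]
print(is_unusual([(i, 12) for i in range(100)], normal))  # False
print(is_unusual([(i, i) for i in range(100)], normal))   # True
```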

    Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos

    Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory (LSTM) deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.
    Comment: To appear in IJC
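    A minimal sketch of dense, per-frame, multi-label action labeling with an LSTM follows, written in PyTorch: an independent sigmoid per class at every time step lets several actions be active in the same frame. This is a plain single-layer LSTM stand-in, not the paper's variant with multiple input and output connections; the 2048-d frame features and 65-class output size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseActionLabeler(nn.Module):
    """Per-frame multi-label classifier: frame features in, one logit
    per action class out at every time step (a stand-in model)."""
    def __init__(self, feat_dim=2048, hidden=512, num_classes=65):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):        # frames: (B, T, feat_dim)
        h, _ = self.lstm(frames)      # (B, T, hidden)
        return self.head(h)           # per-frame logits: (B, T, C)

model = DenseActionLabeler()
frames = torch.randn(2, 100, 2048)    # 2 clips of 100 frame features
logits = model(frames)
# Multi-label objective: binary cross-entropy per class and per frame.
targets = torch.randint(0, 2, logits.shape).float()  # dummy dense labels
loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
probs = torch.sigmoid(logits)         # dense per-frame predictions
```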

    Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics

    We address the problem of video representation learning without human-annotated labels. While previous efforts address the problem by designing novel self-supervised tasks using video data, the learned features are merely frame-based, making them inapplicable to the many video analytics tasks in which spatio-temporal features prevail. In this paper we propose a novel self-supervised approach to learning spatio-temporal features for video representation. Inspired by the success of two-stream approaches to video classification, we propose to learn visual features by regressing both motion and appearance statistics along the spatial and temporal dimensions, given only the input video data. Specifically, we extract statistical concepts (the fast-motion region and its dominant direction, spatio-temporal color diversity, dominant color, etc.) from simple patterns in both the spatial and temporal domains. Unlike prior puzzle-style tasks that can be hard even for humans to solve, the proposed tasks are consistent with inherent human visual habits and therefore easy to answer. We conduct extensive experiments with C3D to validate the effectiveness of our proposed approach. The experiments show that our approach can significantly improve the performance of C3D when applied to video classification tasks. Code is available at https://github.com/laura-wang/video_repres_mas.
    Comment: CVPR 201
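    To make the flavour of these regression targets concrete, here is a minimal sketch under simplified assumptions: "fast motion" is approximated by temporal frame differencing on a block grid, and the dominant colour by coarse histogramming, rather than the paper's exact statistical concepts. The grid size and colour bin widths are illustrative.

```python
import numpy as np

def motion_appearance_targets(clip, grid=4):
    """clip: (T, H, W, 3) uint8 video. Returns a toy pair of targets:
    the index of the fastest-moving spatial block and a dominant RGB
    colour (stand-ins for the paper's statistical labels)."""
    clip = clip.astype(np.float32)
    _, H, W, _ = clip.shape
    # Motion statistic: mean absolute frame difference per pixel.
    diff = np.abs(np.diff(clip, axis=0)).mean(axis=(0, 3))   # (H, W)
    bh, bw = H // grid, W // grid
    blocks = diff[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    fast_block = int(np.argmax(blocks.mean(axis=(1, 3))))    # grid cell id
    # Appearance statistic: dominant colour via 8 bins per channel.
    pixels = (clip.reshape(-1, 3) // 32).astype(int)
    codes = pixels[:, 0] * 64 + pixels[:, 1] * 8 + pixels[:, 2]
    dom = np.bincount(codes, minlength=512).argmax()
    rgb = np.array([dom // 64, (dom // 8) % 8, dom % 8]) * 32 + 16
    return fast_block, rgb

# Toy usage on random frames; real inputs would be decoded video.
clip = (np.random.rand(16, 64, 64, 3) * 255).astype(np.uint8)
print(motion_appearance_targets(clip))
```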