
    SAVASA project @ TRECVID 2012: interactive surveillance event detection

    In this paper we describe our participation in the interactive surveillance event detection task at TRECVid 2012. The system we developed comprised individual classifiers brought together behind a simple video search interface that enabled users to select relevant segments based on down-sampled animated GIFs. Two types of user -- `experts' and `end users' -- performed the evaluations. Due to time constraints we focussed on three events -- ObjectPut, PersonRuns and Pointing -- and two of the five available cameras (1 and 3). Results from the interactive runs, as well as a discussion of the performance of the underlying retrospective classifiers, are presented.

    SAIVT-QUT@TRECVid 2012: Interactive surveillance event detection

    In this paper, we propose an approach which attempts to solve the problem of surveillance event detection, assuming that we know the definition of the events. To facilitate the discussion, we first define two concepts: the event of interest refers to the event that the user requests the system to detect, and the background activities are any other events in the video corpus. This remains an unsolved problem for several reasons: 1) Occlusions and clustering: surveillance scenes of significant interest, at locations such as airports, railway stations and shopping centers, are often crowded, and occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust in such situations. 2) The requirement for real-time detection: the system should process the video fast enough, in both the feature extraction and the detection steps, to facilitate real-time operation. 3) Massive size of the training data set: suppose an event lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. If we want a training data set with many positive instances of the event, the video is likely to be very large (i.e. hundreds of thousands of frames or more). Handling such a large data set is a problem frequently encountered in this application. 4) Difficulty in separating the event of interest from background activities: the events of interest often co-exist with a set of background activities. Temporal ground truth is typically very ambiguous, as it does not distinguish the event of interest from a wide range of co-existing background activities; however, it is not practical to annotate the locations of the events in large amounts of video data. This problem becomes more serious in the detection of multi-agent interactions, since the location of these events often cannot be constrained to within a bounding box. 5) Challenges in determining the temporal boundaries of the events: an event can occur at any arbitrary time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and is also affected by other factors such as occlusions.
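    The training-data arithmetic in point 3 can be sanity-checked in a few lines. This is a minimal sketch: only the 25 fps frame rate and 1-minute event duration come from the abstract; the 200-instance corpus size is an invented example.

```python
# Back-of-envelope arithmetic from point 3 of the abstract: how many frames
# a single event spans, and how quickly a positive-instance corpus grows.

def frames_for_event(duration_s: float, fps: float = 25.0) -> int:
    """Number of video frames spanned by an event of the given duration."""
    return int(duration_s * fps)

one_event = frames_for_event(60)   # 60 s * 25 fps = 1500 frames
corpus = 200 * one_event           # hypothetical 200 positive instances
print(one_event, corpus)
```

    Even a modest 200 positive instances already implies 300,000 frames, which is why the abstract calls the training set "massive".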

    TRECVID 2008 - goals, tasks, data, evaluation mechanisms and metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2008 is a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last 7 years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. In 2008, 77 teams (see Table 1) from various research organizations --- 24 from Asia, 39 from Europe, 13 from North America, and 1 from Australia --- participated in one or more of five tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), pre-production video (rushes) summarization, copy detection, or surveillance event detection. The copy detection and surveillance event detection tasks were run for the first time in TRECVID. This paper presents an overview of TRECVID in 2008.

    A Bayesian Network Model for Spatio-Temporal Event Surveillance

    Event surveillance involves analyzing a region in order to detect patterns that are indicative of some event of interest. An example is the monitoring of information about emergency department visits to detect a disease outbreak. Spatial event surveillance involves analyzing spatial patterns of evidence that are indicative of the event of interest. A special case of spatial event surveillance is spatial cluster detection, which searches for subregions in which the count of an event of interest is higher than expected. Temporal event surveillance involves monitoring for emerging temporal patterns. Spatio-temporal event surveillance involves joint spatial and temporal monitoring. When the events observed are of direct interest, then analyzing counts of those events is generally the preferred approach. However, in event surveillance we often only observe events that are indirectly related to the events of interest. For example, during an influenza outbreak, we may only have information about the chief complaints of patients who visited emergency departments. In this situation, a better surveillance approach may be to model the relationships among the events of interest and those observed. I developed a high-level Bayesian network architecture that represents a class of spatial event surveillance models, which I call BayesNet-S. I also developed an architecture that represents a class of temporal event surveillance models called BayesNet-T. These Bayesian network architectures are combined into a single architecture that represents a class of spatio-temporal models called BayesNet-ST. Using these architectures, it is often possible to construct a temporal, spatial, or spatio-temporal model from an existing Bayesian network event-surveillance model that is non-spatial and non-temporal. 
My general hypothesis is that when an existing model is extended to incorporate space and time, event surveillance will be improved. PANDA-CDCA (PC) (Cooper et al., 2007) is a non-temporal, non-spatial disease outbreak detection system. I extended PC both spatially and temporally. My specific hypothesis is that each of the spatial and temporal extensions of PC will perform outbreak detection better than does PC, and that the combined use of the spatial and temporal extensions will perform better than either extension alone. The experimental results obtained in this research support this hypothesis.
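    The indirect-evidence setting described above (observing chief complaints rather than the outbreak itself) can be illustrated with a toy Bayesian update over Poisson count models. The prior and the two rates below are invented for illustration; the actual BayesNet-S/T/ST architectures are far richer than this single-node sketch.

```python
# Toy indirect-evidence surveillance: we never observe "outbreak" directly,
# only a noisy daily count of flu-like chief complaints, and use Bayes' rule
# to update belief. All rates and the prior are invented for illustration.
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of observing k events under a Poisson(lam) model."""
    return lam ** k * exp(-lam) / factorial(k)

def posterior_outbreak(count: int, prior: float = 0.01,
                       baseline_rate: float = 20.0,
                       outbreak_rate: float = 35.0) -> float:
    """P(outbreak | observed daily count of flu-like chief complaints)."""
    p_count_outbreak = poisson_pmf(count, outbreak_rate)
    p_count_normal = poisson_pmf(count, baseline_rate)
    numerator = p_count_outbreak * prior
    return numerator / (numerator + p_count_normal * (1 - prior))
```

    With these made-up numbers, a day with 20 complaints leaves the outbreak posterior near zero, while a day with 40 pushes it above 0.9 despite the low prior, which is the basic mechanism such models exploit.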

    SAVASA project @ TRECVid 2013: semantic indexing and interactive surveillance event detection

    In this paper we describe our participation in the semantic indexing (SIN) and interactive surveillance event detection (SED) tasks at TRECVid 2013 [11]. Our work was motivated by the goals of the EU SAVASA project (Standards-based Approach to Video Archive Search and Analysis), which supports search over multiple video archives. Our aims were: to assess a standard object detection methodology (SIN); to evaluate contrasting runs in automatic event detection (SED); and to deploy a distributed, cloud-based search interface for the interactive component of the SED task. We present results from the SIN task and the underlying retrospective classifiers for surveillance event detection, along with a discussion of the contrasting aims of the SAVASA user interface and the TRECVid task requirements.

    Economic analysis of mitigation strategies for FMD introduction in highly concentrated animal feeding regions

    Outbreaks of infectious animal diseases can lead to substantial losses, as evidenced by the 2003 US BSE (Bovine Spongiform Encephalopathy) event, with its consequent loss of export markets, and the 2001 UK FMD (Foot and Mouth Disease) outbreak, whose cost estimates run into the billions. In this paper we present a linked epidemiologic-economic modeling framework which is used to investigate several FMD mitigation strategies in the context of an FMD outbreak in a concentrated cattle feeding region in the US. We extend the literature by investigating the economic effectiveness of some previously unaddressed strategies, including early detection, enhanced vaccine availability, and enhanced surveillance, under various combinations of slaughter, surveillance, and vaccination. We also consider different disease introduction points: a large feedlot, a backgrounder feedlot, a large grazing herd, and a backyard herd, all in the Texas High Plains. In terms of disease mitigation strategies, we evaluate the economic effectiveness of: 1. speeding up initial detection by one week, from day 14 to day 7 after initial infection; 2. speeding up vaccine availability from one week post disease detection to the day of disease detection; 3. doubling post-event surveillance intensity. To examine the economic implications of these strategies we use a two-component stochastic framework. The first component is an epidemiologic model that simulates the spread of FMD as affected by control policies and introduction scenarios. The second component is an economics module, which calculates an estimate of cattle industry losses plus the costs of implementing disease control. The results show that early detection of the disease is the most effective mechanism for minimizing the costs of an outbreak. Under some circumstances, enhanced surveillance also proved to be an effective strategy.
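    The qualitative finding above (earlier detection minimizes losses) can be illustrated with a deliberately crude sketch, not the authors' model: exponential spread until detection, controlled decline afterwards, and a cost proportional to affected herds. The growth rate, control effect and per-herd cost are all invented.

```python
# Toy linked epidemiologic-economic calculation: disease spreads unchecked
# until detection, then control halves new infections each day; total cost
# is proportional to the cumulative number of infected herds. All parameter
# values are invented purely to illustrate why earlier detection dominates.

def outbreak_cost(detection_day: int, growth: float = 1.4,
                  cost_per_infected_herd: float = 250_000.0) -> float:
    infected = 1.0   # newly infected herds per day
    total = 1.0      # cumulative infected herds
    for _ in range(detection_day):      # undetected exponential spread
        infected *= growth
        total += infected
    while infected >= 1.0:              # post-detection controlled decline
        infected *= 0.5
        total += infected
    return total * cost_per_infected_herd

early = outbreak_cost(7)    # detection on day 7
late = outbreak_cost(14)    # detection on day 14
```

    Even in this crude form, moving detection from day 14 to day 7 cuts the cumulative caseload, and hence cost, by an order of magnitude, mirroring the direction of the paper's result.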

    Audio-Video Event Recognition System For Public Transport Security

    This paper presents an audio-video surveillance system for automatic surveillance in public transport vehicles. The system comprises six modules, including three novel ones: (i) Face Detection and Tracking, (ii) Audio Event Detection and (iii) Audio-Video Scenario Recognition. The Face Detection and Tracking module is responsible for detecting and tracking the faces of people in front of the cameras. The Audio Event Detection module detects abnormal audio events, which are precursors for detecting scenarios that have been predefined by end-users. The Audio-Video Scenario Recognition module performs high-level interpretation of the observed objects by combining audio and video events based on spatio-temporal reasoning. The performance of the system is evaluated for a series of pre-defined audio, video and audio-video events specified using an audio-video event ontology.
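    The simplest form of the audio-video fusion described above is a temporal co-occurrence check between the two event streams. This is a hypothetical sketch: the event names and the 2-second tolerance window are invented, and the paper's system uses a predefined ontology and richer spatio-temporal reasoning.

```python
# Toy audio-video scenario recognition by temporal fusion: two event streams
# are combined by checking whether an audio event and a video event overlap
# in time (within a tolerance window). Labels and the window are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    start: float  # seconds
    end: float    # seconds

def co_occurring(audio: Event, video: Event, window: float = 2.0) -> bool:
    """True if the two events overlap, or nearly overlap within `window`."""
    return (audio.start <= video.end + window and
            video.start <= audio.end + window)

shout = Event("shout", 12.0, 13.5)
run = Event("person_running", 13.0, 18.0)
if co_occurring(shout, run):
    scenario = "possible_aggression"   # high-level interpretation
```

    A real system would score many such audio-video pairs against ontology-defined scenarios rather than a single hand-written rule, but the windowed-overlap test is the basic spatio-temporal primitive.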