
    An IP-Based Live Database Approach to Surveillance Application Development

    With the proliferation of inexpensive cameras, video surveillance applications are becoming ubiquitous in many domains such as public safety and security, manufacturing, intelligent transportation systems, and healthcare. IP-based video surveillance technologies, in particular, are able to bring traditional video surveillance centers to virtually any computer at any location with an Internet connection. Today’s IP-based video surveillance systems, however, are designed for specific classes of applications. For instance, one cannot use a system designed for incident detection on highways to monitor patients in a healthcare facility. To support rapid development of video surveillance applications, we designed and implemented a new class of general purpose database management system, the live video database management system (LVDBMS). We view networked IP cameras as a special class of storage devices, and allow the user to formulate ad hoc queries expressed over live video feeds. These continuous queries are processed in real time using novel distributed computing techniques. With this environment, users are able to develop various specific web-based video surveillance systems for a variety of applications. These systems can coexist in a unified LVDBMS framework to share the expensive deployment and operating costs of the camera networks. Our contribution is the introduction of a live database approach to video surveillance software development. In this paper, we describe our prototype and present the live video data model, the query language, and the query processing technique.
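The abstract's central idea, standing (continuous) queries evaluated over live camera feeds, can be illustrated with a minimal sketch. The `Frame`, `continuous_query`, and simulated-feed names below are assumptions for illustration, not the LVDBMS query language or API described in the paper; a real system would consume network streams and run an upstream detector.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, List

@dataclass
class Frame:
    camera_id: str
    timestamp: float
    objects: List[str]  # object labels assumed to come from an upstream detector

def continuous_query(feed: Iterable[Frame],
                     predicate: Callable[[Frame], bool]) -> Iterator[Frame]:
    """Evaluate a standing predicate over a live feed, yielding each match
    as it arrives (the essence of a continuous query)."""
    for frame in feed:
        if predicate(frame):
            yield frame

# Simulated feed: two frames from a hypothetical lobby camera.
feed = [
    Frame("lobby-1", 0.0, ["person"]),
    Frame("lobby-1", 0.5, ["person", "unattended_bag"]),
]

# Ad hoc query: alert whenever an unattended bag is detected.
alerts = list(continuous_query(feed, lambda f: "unattended_bag" in f.objects))
```

Because the query is an ordinary predicate over frames, many such queries from different applications can run side by side over the same shared camera feed, which is the cost-sharing point the abstract makes.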

    Knowledge Extraction in Video Through the Interaction Analysis of Activities

    Video data is massive and contains complex interactions between moving objects. Extracting knowledge from this type of information creates a demand for video analytics systems that uncover statistical relationships between activities and learn the correspondence between content and labels. These remain open research problems, and their complexity grows when multiple actors perform activities simultaneously, when videos contain noise, and when streaming scenarios are considered. The techniques introduced in this dissertation provide a basis for analyzing video. The primary contributions of this research are new algorithms for the efficient search of activities in video, scene understanding based on interactions between activities, and the prediction of labels for new scenes.

    Uncertainty-aware video visual analytics of tracked moving objects

    Vast amounts of video data render manual video analysis impractical, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in an iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and support fuzzy hypothesis formulation for interacting with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini challenge of the IEEE Symposium on Visual Analytics Science and Technology 2009.
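The interactive filtering on trajectory features that the abstract describes can be sketched as follows. The feature (path length) and the function names are illustrative assumptions; the paper's system computes richer features and lets the user adjust thresholds interactively rather than in code.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def trajectory_length(points: List[Point]) -> float:
    """Total path length of a tracked object's trajectory, one simple
    trajectory feature a user might filter on."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def filter_trajectories(trajectories: List[List[Point]],
                        min_length: float = 0.0,
                        max_length: float = float("inf")) -> List[List[Point]]:
    """Keep trajectories whose path length lies in [min_length, max_length],
    e.g. to discard short tracks caused by tracker jitter."""
    return [t for t in trajectories
            if min_length <= trajectory_length(t) <= max_length]

tracks = [
    [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)],  # length 2.0: a genuinely moving object
    [(5.0, 5.0), (5.1, 5.0)],              # length 0.1: likely detection noise
]
moving = filter_trajectories(tracks, min_length=1.0)
```

In the visual-analytics loop, such filters would be defined through the interface, the surviving trajectories visualized (e.g. in the VPG), and the thresholds refined as relevance feedback.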

    Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases

    In this paper, we introduce a novel framework for automatic semantic video annotation. Because this framework detects possible events occurring in video clips, it can form the annotation base of a video search engine. To achieve this purpose, the system has to be able to operate on uncontrolled wide-domain videos, so all layers have to be based on generic features. The framework aims to bridge the "semantic gap", the difference between low-level visual features and human perception, by finding videos with similar visual events, then analyzing their free-text annotations to find common ground and deciding on the best description for the new video using commonsense knowledgebases. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. Results from these experiments show promising integration between those two layers in finding expressive annotations for the input video. These results were evaluated based on retrieval performance.
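The second stage, deriving a description from the free-text annotations of visually similar videos, can be caricatured with a simple frequency vote. This is a crude stand-in for the commonsense-knowledgebase reasoning the paper actually uses; the function, the stopword list, and the example annotations are all illustrative assumptions.

```python
from collections import Counter
from typing import Iterable, List

STOPWORDS = frozenset({"a", "an", "the", "in", "on", "of"})  # assumed toy list

def annotate_from_neighbors(neighbor_annotations: Iterable[str],
                            top_k: int = 2) -> List[str]:
    """Pick the most frequent content words across the free-text
    annotations of the retrieved similar videos."""
    words = Counter(
        w for text in neighbor_annotations
        for w in text.lower().split()
        if w not in STOPWORDS
    )
    return [w for w, _ in words.most_common(top_k)]

# Hypothetical annotations of the nearest-neighbor clips.
neighbors = [
    "a goal scored in the match",
    "goal celebration in the stadium",
    "the striker scores a goal",
]
labels = annotate_from_neighbors(neighbors)
```

A commonsense knowledgebase would go further than raw counting, e.g. recognizing that "scored" and "scores" describe the same event, which is where the paper's approach departs from this sketch.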