
    Attentive monitoring of multiple video streams driven by a Bayesian foraging strategy

    In this paper we consider the problem of deploying attention to subsets of video streams in order to collate the information most relevant to a given task. We formalize this monitoring problem as a foraging problem and propose a probabilistic framework that models the observer's attentive behavior as that of a forager. Moment to moment, the forager focuses its attention on the most informative stream/camera, detects interesting objects or activities, or switches to a more profitable stream. The proposed approach is well suited to multi-stream video summarization and can also serve as a preliminary step for more sophisticated video surveillance tasks, e.g. activity and behavior analysis. Experimental results on the UCR Videoweb Activities Dataset, a publicly available dataset, illustrate the utility of the proposed technique. Comment: Accepted to IEEE Transactions on Image Processing.
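    A minimal sketch of the stay-or-switch policy the abstract describes, assuming per-stream detections can be modelled as Poisson counts with a conjugate Gamma prior; the class name StreamForager, the patch-leaving rule, and all parameters are illustrative assumptions, not the authors' actual model.

        import numpy as np

        class StreamForager:
            """Toy forager: attend one stream at a time, keep a posterior over each
            stream's detection rate, and switch when the attended stream is no
            longer 'profitable' (a simple patch-leaving rule)."""

            def __init__(self, n_streams, prior_rate=1.0):
                # Gamma posterior over each stream's Poisson detection rate
                self.alpha = np.full(n_streams, prior_rate)
                self.beta = np.ones(n_streams)
                self.current = 0

            def step(self, detections):
                # Bayesian update for the stream currently being attended
                self.alpha[self.current] += detections
                self.beta[self.current] += 1.0
                # Posterior mean detection rate ("expected gain") per stream
                gains = self.alpha / self.beta
                # Leave the patch when it drops below average profitability
                if gains[self.current] < gains.mean():
                    self.current = int(np.argmax(gains))
                return self.current

        # Example: four simulated streams with different true activity rates
        rng = np.random.default_rng(0)
        true_rates = np.array([0.2, 1.5, 0.4, 0.8])
        forager = StreamForager(n_streams=4)
        for t in range(200):
            detections = rng.poisson(true_rates[forager.current])
            attended = forager.step(detections)

    Under such a rule the forager settles on the most active stream while still sampling others whenever the attended one goes quiet, mirroring the stay-or-switch behavior described above.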

    Towards an optimal design for ecosystem-level ocean observatories

    Four operational factors, together with high development cost, currently limit the use of ocean observatories in ecological and fisheries applications: 1) limited spatial coverage; 2) limited integration of multiple types of technologies; 3) limitations in the experimental design for in situ studies; and 4) potential unpredicted bias in monitoring outcomes due to the infrastructure’s presence and functioning footprint. To address these limitations, we propose a novel concept of a standardized “ecosystem observatory module” structure composed of a central node and three tethered satellite pods together with permanent mobile platforms. The module would be designed with a rigid spatial configuration to optimize overlap among multiple observation technologies each providing 360° coverage around the module, including permanent stereo-video cameras, acoustic imaging sonar cameras, horizontal multi-beam echosounders and a passive acoustic array. The incorporation of multiple integrated observation technologies would enable unprecedented quantification of macrofaunal composition, abundance and density surrounding the module, as well as the ability to track the movements of individual fishes and macroinvertebrates. Such a standardized modular design would allow for the hierarchical spatial connection of observatory modules into local module clusters and larger geographic module networks, providing synoptic data within and across linked ecosystems suitable for fisheries- and ecosystem-level monitoring on multiple scales.

    Algorithms for trajectory integration in multiple views

    This thesis addresses the problem of deriving a coherent and accurate localization of moving objects from partial visual information when data are generated by cameras placed at different viewing angles with respect to the scene. The framework is built around applications of scene monitoring with multiple cameras. Firstly, we demonstrate how a geometry-based solution exploits the relationships between corresponding feature points across views and improves accuracy in object location. Then, we improve the estimation of object locations with geometric transformations that account for lens distortion. Additionally, we study the integration of the partial visual information generated by each individual sensor and its combination into a single frame of observation through object association and data fusion. Our approach is fully image-based, relies only on 2D constructs, and does not require any complex computation in 3D space. We exploit the continuity and coherence of objects' motion when crossing cameras' fields of view, and we work under the assumptions of a planar ground plane and a wide baseline (i.e. cameras' viewpoints are far apart). The main contributions are: i) the development of a framework for distributed visual sensing that accounts for inaccuracies in the geometry of multiple views; ii) the reduction of trajectory mapping errors using a statistical homography estimation; iii) the integration of a polynomial method for correcting inaccuracies caused by the cameras' lens distortion; iv) a global trajectory reconstruction algorithm that associates and integrates fragments of trajectories generated by each camera.
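    A minimal sketch of the mapping step underlying contributions ii) and iii), assuming known intrinsics, distortion coefficients, and ground-plane point correspondences between two overlapping views; the function names, the 3-pixel RANSAC threshold, and the OpenCV-based recipe are illustrative assumptions, not the thesis implementation.

        import cv2
        import numpy as np

        def undistort_points(pts, K, dist):
            # Correct lens distortion, keeping results in pixel coordinates (P=K)
            pts = np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2)
            return cv2.undistortPoints(pts, K, dist, P=K).reshape(-1, 2)

        def map_trajectory(traj_a, corr_a, corr_b, K_a, dist_a):
            """Map a trajectory observed in camera A onto camera B's image plane.

            traj_a        : (T, 2) track points on the ground plane, view A (pixels)
            corr_a/corr_b : (N, 2) corresponding ground-plane points in views A and B
            """
            corr_a = np.asarray(corr_a, dtype=np.float32)
            corr_b = np.asarray(corr_b, dtype=np.float32)
            # Robust (RANSAC) homography between the two views of the ground plane
            H, inliers = cv2.findHomography(corr_a, corr_b, cv2.RANSAC, 3.0)
            # Remove lens distortion before applying the planar mapping
            und = undistort_points(traj_a, K_a, dist_a).reshape(-1, 1, 2)
            return cv2.perspectiveTransform(und, H).reshape(-1, 2)

    Trajectory fragments mapped into a common view in this way can then be associated and fused into a single global track, which is the role of contribution iv).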