11 research outputs found

    An Extended Event Reasoning Framework for Decision Support under Uncertainty

    Abstract. To provide timely reactions to a large volume of surveillance data, uncertainty-enabled event reasoning frameworks for CCTV- and sensor-based intelligent surveillance systems have been integrated to model and infer events of interest. However, most existing works do not consider decision making under uncertainty, which is important for surveillance operators. In this paper, we extend an event reasoning framework for decision support, which enables the framework to predict, rank, and raise alarms for threats from multiple heterogeneous sources.
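    As a rough illustration only (not the paper's actual framework), the sketch below fuses per-source event probabilities with a noisy-OR rule, ranks the resulting threats, and flags those above an alarm threshold. All identifiers, sources, and numbers are hypothetical assumptions.

```python
# Minimal sketch (not the paper's implementation): fusing uncertain event
# reports from heterogeneous sources with a noisy-OR rule, then ranking
# threats and alarming above a threshold. All names and values are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EventReport:
    threat_id: str      # hypothetical threat identifier
    source: str         # e.g. "cctv_03", "pir_sensor_12"
    probability: float  # source's belief that the threat event occurred

def fuse_noisy_or(reports):
    """Combine per-source probabilities for each threat with noisy-OR."""
    p_none = {}
    for r in reports:
        prev = p_none.get(r.threat_id, 1.0)        # P(no threat so far)
        p_none[r.threat_id] = prev * (1.0 - r.probability)
    return {tid: 1.0 - p for tid, p in p_none.items()}

def rank_and_alarm(reports, alarm_threshold=0.8):
    """Return threats sorted by fused probability; flag those above threshold."""
    fused = fuse_noisy_or(reports)
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    return [(tid, p, p >= alarm_threshold) for tid, p in ranked]

if __name__ == "__main__":
    reports = [
        EventReport("intrusion_gate_A", "cctv_03", 0.7),
        EventReport("intrusion_gate_A", "pir_sensor_12", 0.6),
        EventReport("loitering_lobby", "cctv_07", 0.4),
    ]
    for tid, p, alarm in rank_and_alarm(reports):
        print(f"{tid}: p={p:.2f} alarm={alarm}")
```

    Noisy-OR is only one common choice for combining independent uncertain reports; the paper's framework may rely on a different uncertainty formalism.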

    Context-Based Scene Recognition Using Bayesian Networks with Scale-Invariant Feature Transform

    Abstract. Scene understanding is an important problem in intelligent robotics. Since visual information is uncertain for several reasons, a method that is robust to this uncertainty is needed. A Bayesian probabilistic approach is robust in managing uncertainty and powerful for modeling high-level context such as the relationship between places and objects. In this paper, we propose a context-based Bayesian method with SIFT for scene understanding. First, image pre-processing extracts features from the visual input, and object-existence information is extracted with SIFT, which is rotation and scale invariant. This information is provided to Bayesian networks for robust inference in scene understanding. Experiments in complex real environments show that the proposed method is useful.
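    To make the inference step concrete, here is a minimal sketch, assuming object-existence evidence (for example from SIFT keypoint matching) arrives as binary detections and feeds a naive-Bayes style place posterior. The places, objects, priors, and likelihoods are hypothetical, and the authors' Bayesian network models richer place-object relationships than this simplification.

```python
# Minimal sketch (an assumption, not the paper's system): naive-Bayes place
# inference from binary object-existence evidence, as would be produced by
# SIFT-based object matching. Priors and likelihoods are illustrative.
PLACES = ["office", "corridor", "kitchen"]
PRIOR = {"office": 1/3, "corridor": 1/3, "kitchen": 1/3}

# P(object present | place); objects and values are hypothetical
LIKELIHOOD = {
    "monitor": {"office": 0.9, "corridor": 0.05, "kitchen": 0.1},
    "kettle":  {"office": 0.1, "corridor": 0.02, "kitchen": 0.8},
    "poster":  {"office": 0.3, "corridor": 0.6,  "kitchen": 0.2},
}

def place_posterior(detections):
    """detections: dict object -> bool (object found by SIFT matching)."""
    scores = dict(PRIOR)
    for obj, present in detections.items():
        for place in PLACES:
            p = LIKELIHOOD[obj][place]
            scores[place] *= p if present else (1.0 - p)
    total = sum(scores.values())
    return {place: s / total for place, s in scores.items()}

if __name__ == "__main__":
    # Suppose SIFT matching found a monitor and a poster but no kettle
    print(place_posterior({"monitor": True, "kettle": False, "poster": True}))
```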

    Categorizing Perceptions of Indoor Rooms Using 3D Features

    Swadzba A, Wachsmuth S. Categorizing Perceptions of Indoor Rooms Using 3D Features. In: Structural, Syntactic, and Statistical Pattern Recognition: joint IAPR international workshop, SSPR & SPR 2008, Orlando, USA, December 4-6, 2008; proceedings. Lecture Notes in Computer Science; 5342. Springer; 2008: 744-754.

    Abstract. In this paper, we propose a holistic classification scheme for different room types, like office or meeting room, based on 3D features. Such a categorization of scenes provides a rich source of information about potential objects, object locations, and activities typically found in them. Scene categorization is a challenging task. While outdoor scenes can be sufficiently characterized by color and texture features, indoor scenes consist of human-made structures that vary in terms of color and texture across different individual rooms of the same category. Nevertheless, humans tend to have an immediate impression in which room type they are. We suggest that such a decision could be based on the coarse spatial layout of a scene. Therefore, we present a system that categorizes different room types based on 3D sensor data extracted by a Time-of-Flight (ToF) camera. We extract planar structures combining region growing and RANSAC approaches. Then, feature vectors are defined on statistics over the relative sizes of the planar patches, the angles between pairs of (close) patches, and the ratios between sizes of pairs of patches to train classifiers. Experiments in a mobile robot scenario study the performance in classifying a room based on a single percept.
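    The feature-construction step can be sketched as follows, under the assumption that planar patches have already been extracted (for example by region growing plus RANSAC) and are represented by an area, a unit normal, and a centroid. The closeness threshold and histogram bins are illustrative; the resulting vector would be passed to a classifier.

```python
# Minimal sketch (hypothetical, not the authors' code): building a feature
# vector from planar patches extracted from a ToF point cloud. Each patch is
# assumed to carry an area, a unit normal, and a centroid; thresholds and
# histogram bins are illustrative.
import numpy as np

def room_features(areas, normals, centroids, close_dist=1.0, n_bins=6):
    areas = np.asarray(areas, dtype=float)
    normals = np.asarray(normals, dtype=float)
    centroids = np.asarray(centroids, dtype=float)

    rel_sizes = areas / areas.sum()                      # relative patch sizes
    angles, ratios = [], []
    for i in range(len(areas)):
        for j in range(i + 1, len(areas)):
            if np.linalg.norm(centroids[i] - centroids[j]) <= close_dist:
                cos = abs(np.dot(normals[i], normals[j]))
                angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
            ratios.append(min(areas[i], areas[j]) / max(areas[i], areas[j]))

    # Histogram statistics form the feature vector fed to a classifier
    f_size = np.histogram(rel_sizes, bins=n_bins, range=(0, 1))[0]
    f_angle = np.histogram(angles, bins=n_bins, range=(0, 90))[0]
    f_ratio = np.histogram(ratios, bins=n_bins, range=(0, 1))[0]
    return np.concatenate([f_size, f_angle, f_ratio]).astype(float)
```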

    A recognition network model-based approach to dynamic image understanding

    Abstract. In this paper, we present definitions for a dynamic knowledge-based image understanding system. From a sequence of grey level images, the system produces a flow of image interpretations. We use a semantic network to represent the knowledge embodied in the system. Dynamic representation is achieved by a hypotheses network. This network is a graph in which nodes represent information and arcs represent relations. A control strategy performs a continuous update of this network. The originality of our work lies in the control strategy: it includes a structure tracking phase, which uses the representation structure obtained from previous images to reduce the computational complexity of the understanding processes. We demonstrate that in our case the computational complexity, which is exponential if we only use a purely data-driven bottom-up scheme, is polynomial when using the hypotheses tracking mechanism. That is, the gain in computation time is a major argument for dynamic understanding. The proposed system is implemented; experimental results of road mark detection and tracking are given.
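    The hypotheses-tracking idea can be illustrated with a small sketch (hypothetical, not the system described above): hypotheses carried over from the previous image are verified first, and the expensive bottom-up generation runs only on observations that tracking could not explain, which is where the complexity reduction comes from. The matching and scoring rules below are placeholder assumptions.

```python
# Minimal sketch (hypothetical) of hypotheses tracking: verify hypotheses
# from the previous frame first, then run bottom-up hypothesis generation
# only on the observations that remain unexplained.

def match(hypothesis, observation):
    """Hypothetical test that an observation supports a hypothesis."""
    return hypothesis["label"] == observation["label"]

def generate_hypotheses(observations):
    """Expensive bottom-up step, run only on unexplained observations."""
    return [{"label": o["label"], "confidence": 0.5} for o in observations]

def update_network(hypotheses, observations):
    tracked, unexplained = [], list(observations)
    for h in hypotheses:                       # structure tracking phase
        supported = [o for o in unexplained if match(h, o)]
        if supported:
            h = dict(h, confidence=min(1.0, h["confidence"] + 0.1))
            tracked.append(h)
            unexplained = [o for o in unexplained if not match(h, o)]
    # bottom-up generation restricted to what tracking could not explain
    return tracked + generate_hypotheses(unexplained)

if __name__ == "__main__":
    hyps = [{"label": "road_mark", "confidence": 0.6}]
    frame = [{"label": "road_mark"}, {"label": "vehicle"}]
    print(update_network(hyps, frame))
```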