27 research outputs found

    Understanding High-Level Semantics by Modeling Traffic Patterns

    In this paper, we are interested in understanding the semantics of outdoor scenes in the context of autonomous driving. Towards this goal, we propose a generative model of 3D urban scenes which is able to reason not only about the geometry and objects present in the scene, but also about the high-level semantics in the form of traffic patterns. We found that a small number of patterns is sufficient to model the vast majority of traffic scenes and show how these patterns can be learned. As evidenced by our experiments, this high-level reasoning significantly improves the overall scene estimation as well as the vehicle-to-lane association when compared to state-of-the-art approaches [10].

    Figure 1. Inference failure when ignoring high-order dependencies: in [10], high-order dependencies between objects are ignored, leading to physically implausible inference results with colliding vehicles (left). We propose to explicitly account for traffic patterns (right, correct situation marked in red), thereby substantially improving scene layout and activity estimation results.
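The core idea above, choosing among a small set of traffic patterns to constrain vehicle-to-lane association, can be sketched as follows. Everything here is an illustrative assumption: the pattern set, the lane geometry (encoded only as headings), and the scoring rule are hypothetical stand-ins, not the paper's actual generative model.

```python
import math

# Hypothetical pattern library: pattern name -> allowed lane headings
# (radians). Real traffic patterns would carry full lane geometry.
PATTERNS = {
    "straight": [0.0, math.pi],
    "crossing": [0.0, math.pi, math.pi / 2, -math.pi / 2],
}

def angle_diff(a, b):
    """Smallest absolute difference between two angles."""
    d = (a - b + math.pi) % (2 * math.pi) - math.pi
    return abs(d)

def pattern_score(pattern, vehicle_headings):
    """Negative sum of each vehicle's deviation from its best-matching
    lane direction; higher is better."""
    lanes = PATTERNS[pattern]
    return -sum(min(angle_diff(h, lane) for lane in lanes)
                for h in vehicle_headings)

def best_pattern(vehicle_headings):
    """Pick the pattern that best explains the observed headings."""
    return max(PATTERNS, key=lambda p: pattern_score(p, vehicle_headings))

# Two roughly opposing vehicles plus one crossing vehicle: the crossing
# pattern explains all three, so it wins.
vehicles = [0.05, 3.10, 1.60]
print(best_pattern(vehicles))  # → crossing
```

The point of the sketch is that a handful of discrete pattern hypotheses act as a high-order prior: a vehicle heading that is implausible under every lane of a pattern penalizes that pattern as a whole.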

    Object Sub-Categorization and Common Framework Method using Iterative AdaBoost for Rapid Detection of Multiple Objects

    Object detection and tracking in real time has numerous applications and benefits in various fields such as surveillance, crime detection, etc. The task of gaining useful information from real-time road scenes is called Traffic Scene Perception (TSP). TSP consists of three subtasks: detecting objects of interest, recognizing the detected objects, and tracking the moving objects. While the results obtained are of value for object recognition and tracking in general, detecting a particular object of interest is of higher value in any real-time scenario. Prevalent systems develop a separate detector for each of the above-mentioned subtasks, each relying on different features. This is time-consuming and involves multiple redundant operations. Hence, this paper proposes a common framework using an enhanced AdaBoost algorithm that examines all dense features only once, thereby increasing detection speed substantially. An object sub-categorization strategy is further proposed to capture the intra-class variance of objects and boost generalisation performance. We use three detection applications to demonstrate the efficiency of the proposed framework: traffic sign detection, car detection, and bike detection. On numerous benchmark data sets, the proposed framework delivers performance competitive with state-of-the-art techniques.
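The shared-feature idea in the abstract, computing dense features once per image window and letting every boosted detector score the same feature vector, can be sketched roughly as below. The feature extractor, the decision stumps, and the per-detector weights are all made-up placeholders, not the paper's trained models.

```python
def extract_dense_features(window):
    """Stand-in for dense channel features, computed ONCE per window.
    Here: mean intensity and intensity range of a toy pixel list."""
    return [sum(window) / len(window), max(window) - min(window)]

def decision_stump(feat_idx, threshold, polarity):
    """Weak learner: thresholds a single feature dimension."""
    def h(x):
        return polarity if x[feat_idx] > threshold else -polarity
    return h

class BoostedDetector:
    """AdaBoost-style strong classifier: sign of the weighted vote
    of its weak learners."""
    def __init__(self, weak_learners, alphas):
        self.weak = weak_learners
        self.alphas = alphas

    def score(self, features):
        return sum(a * h(features) for a, h in zip(self.alphas, self.weak))

    def detect(self, features):
        return self.score(features) > 0

# Three detectors share one feature pass; stumps and weights are
# illustrative, not learned.
detectors = {
    "sign": BoostedDetector([decision_stump(0, 0.5, 1)], [1.0]),
    "car":  BoostedDetector([decision_stump(1, 0.3, 1)], [1.0]),
    "bike": BoostedDetector([decision_stump(0, 0.9, -1)], [1.0]),
}

window = [0.2, 0.8, 0.6]                 # toy pixel intensities
feats = extract_dense_features(window)   # dense features computed once
hits = {name: d.detect(feats) for name, d in detectors.items()}
```

The speed-up claimed in the abstract comes from exactly this structure: the expensive step (`extract_dense_features`) runs once, while each additional detector adds only cheap threshold tests on the cached features.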

    Towards Scene Understanding with Detailed 3D Object Representations

    Current approaches to semantic image and scene understanding typically employ rather simple object representations such as 2D or 3D bounding boxes. While such coarse models are robust and allow for reliable object detection, they discard much of the information about objects' 3D shape and pose, and thus do not lend themselves well to higher-level reasoning. Here, we propose to base scene understanding on a high-resolution object representation. An object class - in our case cars - is modeled as a deformable 3D wireframe, which enables fine-grained modeling at the level of individual vertices and faces. We augment that model to explicitly include vertex-level occlusion, and embed all instances in a common coordinate frame, in order to infer and exploit object-object interactions. Specifically, from a single view we jointly estimate the shapes and poses of multiple objects in a common 3D frame. A ground plane in that frame is estimated by consensus among different objects, which significantly stabilizes monocular 3D pose estimation. The fine-grained model, in conjunction with the explicit 3D scene model, further allows one to infer part-level occlusions between the modeled objects, as well as occlusions by other, unmodeled scene elements. To demonstrate the benefits of such detailed object class models in the context of scene understanding, we systematically evaluate our approach on the challenging KITTI street scene dataset. The experiments show that the model's ability to utilize image evidence at the level of individual parts improves monocular 3D pose estimation w.r.t. both location and (continuous) viewpoint.

    Comment: International Journal of Computer Vision (appeared online on 4 November 2014). Online version: http://link.springer.com/article/10.1007/s11263-014-0780-
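The "ground plane by consensus" idea can be illustrated with a minimal sketch: each detected object proposes a ground height beneath it, and a robust statistic fuses the proposals so that a single bad detection cannot drag the plane away. This is a simplification under assumed inputs; the paper's estimator operates on full 3D poses, not just scalar heights.

```python
import statistics

def consensus_ground_height(object_bottom_heights):
    """Fuse per-object ground-height hypotheses (metres, camera frame)
    into one estimate. The median tolerates outliers from spurious or
    badly localized detections."""
    return statistics.median(object_bottom_heights)

# Four cars agree the ground is near -1.6 m below the camera; one
# misdetection proposes an implausible -0.3 m and is voted down.
heights = [-1.61, -1.58, -1.60, -1.62, -0.30]
print(consensus_ground_height(heights))  # → -1.6
```

A mean would be pulled noticeably toward the outlier here, which is why a consensus (robust) estimate is the natural choice when individual monocular pose estimates are noisy.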