
    Prediction of Object Position based on Probabilistic Qualitative Spatial Relations

    Due to recent and extensive advances in robotics and artificial intelligence, intelligent systems can be found, with increasing frequency, in many areas of daily life. Such complex systems are present in settings ranging from industry and surgery to space robotics. However, as demands on robotic systems increase, sophisticated algorithms are required in areas such as perception, navigation, and manipulation. Although some algorithms for these purposes exist, open questions and challenges remain. Robots have primarily been used in the manufacturing industry, which they have revolutionized with their precision and speed, but there is a growing trend towards service and personal robotics applications. The latter in particular must interact with humans naturally and effectively manage environments such as offices and homes. In contrast to systems used in an industrial context, personal robots do not act in a predefined and fixed environment. Rather, these intelligent systems need an intrinsic comprehension of human environments to be able to support people in their daily life and manage common tasks such as preparing a breakfast table or cleaning a room. Crucially, these new robot systems require an entirely new level of capability to act in dynamic human environments. This thesis addresses how qualitative spatial relations can be used to find an object's most probable location and thus guide the search for a sought object. Because current approaches focus mainly on crisp, two-dimensional relations, which are not directly suitable for three-dimensional real-world applications, a formalism for a new type of spatial relation is proposed in this work. This theoretical approach is then applied to real-world data to evaluate its applicability for robotics purposes. The resulting validation demonstrates that the developed method performs well and can be used to enhance the search for objects.
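    To make the idea concrete, the sketch below ranks candidate object positions by how well they satisfy a single qualitative relation to a known landmark. The angular scoring function, the relation direction, and all coordinates are illustrative assumptions for this example, not the formalism developed in the thesis.

        import numpy as np

        def relation_probability(candidate, landmark, direction, kappa=4.0):
            """Score for P(relation | candidate): high when the landmark->candidate
            vector is aligned with the relation's canonical direction."""
            v = np.asarray(candidate, float) - np.asarray(landmark, float)
            v = v / (np.linalg.norm(v) + 1e-9)
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            # von Mises-style score: 1.0 when perfectly aligned, small when opposite
            return float(np.exp(kappa * (v @ d - 1.0)))

        def most_probable_position(candidates, landmark, direction):
            scores = [relation_probability(c, landmark, direction) for c in candidates]
            best = int(np.argmax(scores))
            return candidates[best], scores[best]

        # Example: "the mug is left of the monitor", in table coordinates (x, y, z)
        monitor = (1.0, 0.5, 0.8)                  # landmark position
        left_axis = (0.0, 1.0, 0.0)                # assumed canonical 'left' direction
        candidates = [(1.0, 1.2, 0.8), (1.0, -0.8, 0.8), (1.6, 0.5, 0.8)]
        print(most_probable_position(candidates, monitor, left_axis))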

    On Quantifying Qualitative Geospatial Data: A Probabilistic Approach

    Living in the era of data deluge, we have witnessed a web content explosion, largely due to the massive availability of User-Generated Content (UGC). In this work, we specifically consider the problem of geospatial information extraction and representation, where one can exploit diverse sources of information (such as image, audio, and text data), going beyond traditional volunteered geographic information. Our ambition is to include available narrative information in an effort to better explain geospatial relationships: with spatial reasoning being a basic form of human cognition, narratives expressing such experiences typically contain qualitative spatial data, i.e., spatial objects and spatial relationships. To this end, we formulate a quantitative approach for the representation of qualitative spatial relations extracted from UGC in the form of texts. The proposed method quantifies such relations based on multiple text observations. Such observations provide distance and orientation features, which are utilized by a greedy Expectation-Maximization-based (EM) algorithm to infer a probability distribution over predefined spatial relationships; the latter represent the quantified relationships under user-defined probabilistic assumptions. We evaluate the applicability and quality of the proposed approach using real UGC data originating from an actual travel blog text corpus. To verify the quality of the result, we generate grid-based maps visualizing the spatial extent of the various relations.
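    As a rough illustration of the quantification step, the sketch below runs a small EM loop that infers a distribution over a handful of predefined relations from (distance, bearing) observations. The relation templates, the Gaussian-style likelihoods, and the fixed spreads are assumptions made for this example, not the paper's actual model.

        import numpy as np

        # (distance in km, bearing in degrees) observations extracted from text
        obs = np.array([[1.0, 85.0], [1.4, 95.0], [0.3, 10.0], [0.5, 170.0]])

        # Predefined relation templates: (mean distance, mean bearing or None)
        relations = {"near": (0.4, None), "east of": (1.2, 90.0), "south of": (1.2, 180.0)}

        def component_likelihood(x, template, sd_dist=0.5, sd_ang=30.0):
            dist, ang = x
            mu_d, mu_a = template
            like = np.exp(-0.5 * ((dist - mu_d) / sd_dist) ** 2)
            if mu_a is not None:
                diff = (ang - mu_a + 180.0) % 360.0 - 180.0   # wrapped angle difference
                like *= np.exp(-0.5 * (diff / sd_ang) ** 2)
            return like

        names = list(relations)
        weights = np.full(len(names), 1.0 / len(names))       # uniform prior over relations
        for _ in range(50):
            lik = np.array([[component_likelihood(x, relations[r]) for r in names] for x in obs])
            resp = weights * lik                              # E-step: responsibilities
            resp /= resp.sum(axis=1, keepdims=True)
            weights = resp.mean(axis=0)                       # M-step: mixture weights
        print(dict(zip(names, np.round(weights, 3))))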

    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in a bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at farther distances, due to the resolution of the camera, which limits applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g. road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach.
    Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 201
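    For context, classical IPM can be implemented as a single homography warp between four road-plane points and their pixel locations; this is the baseline that the paper's adversarial approach improves upon. The sketch below uses OpenCV with placeholder correspondences and an assumed input file name, so the numbers are illustrative only.

        import cv2
        import numpy as np

        img = cv2.imread("front_camera.png")                  # assumed input frame

        # Pixel coordinates of four points on the road plane (e.g. lane-marking corners)
        src = np.float32([[550, 450], [730, 450], [1100, 700], [180, 700]])
        # Where those points should land in the top-down output image
        dst = np.float32([[300, 0], [500, 0], [500, 800], [300, 800]])

        H = cv2.getPerspectiveTransform(src, dst)             # 3x3 homography
        bev = cv2.warpPerspective(img, H, (800, 800))         # bird's-eye-view image
        cv2.imwrite("bev.png", bev)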

    Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking

    Current multi-person localisation and tracking systems rely heavily on appearance models for target re-identification, and almost no approaches employ a complete deep learning solution for both objectives. We present a novel, complete deep learning framework for multi-person localisation and tracking. In this context we first introduce a lightweight sequential Generative Adversarial Network architecture for person localisation, which overcomes issues related to occlusions and noisy detections typically found in a multi-person environment. In the proposed tracking framework we build upon recent advances in pedestrian trajectory prediction and propose a novel data association scheme based on predicted trajectories. This removes the need for computationally expensive person re-identification systems based on appearance features and generates human-like trajectories with minimal fragmentation. The proposed method is evaluated on multiple public benchmarks, including both static and dynamic cameras, and achieves outstanding performance, especially among other recently proposed deep neural network based approaches.
    Comment: To appear in IEEE Winter Conference on Applications of Computer Vision (WACV), 201
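    The data association idea can be sketched without any appearance features: predict each track's next position, then match detections to predictions with a minimum-cost assignment. The constant-velocity predictor, the gating threshold, and the toy coordinates below are assumptions standing in for the paper's learned trajectory predictor.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Each track keeps its last two positions (x, y); detections are from the current frame
        tracks = {1: [(10.0, 5.0), (11.0, 5.5)], 2: [(40.0, 20.0), (39.0, 19.0)]}
        detections = np.array([[12.1, 6.1], [38.0, 18.2], [70.0, 70.0]])

        # Constant-velocity prediction of each track's next position
        # (stand-in for the learned trajectory predictor)
        pred = np.array([2 * np.array(p[-1]) - np.array(p[-2]) for p in tracks.values()])

        cost = np.linalg.norm(pred[:, None, :] - detections[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)              # Hungarian matching

        GATE = 5.0                                            # reject implausible matches
        track_ids = list(tracks)
        for r, c in zip(rows, cols):
            if cost[r, c] < GATE:
                print(f"track {track_ids[r]} <- detection {c} (dist {cost[r, c]:.2f})")
            else:
                print(f"track {track_ids[r]}: unmatched; detection {c} may start a new track")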