13 research outputs found

    Multi-resolution image analysis for vehicle detection

    Proceeding of: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7-9, 2005. Computer Vision can provide a great deal of assistance to Intelligent Vehicles. In this paper an Advanced Driver Assistance System for Vehicle Detection is presented. A geometric model of the vehicle is defined whose energy function includes information about the shape and symmetry of the vehicle and the shadow it casts. A genetic algorithm finds the optimum parameter values. Because the algorithm receives information from a road detection module, some geometric restrictions can be applied. A multi-resolution approach is used to speed up the algorithm and work in real time. Examples on real images are shown to validate the algorithm.
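    The coarse-to-fine search described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the energy function here is a toy quadratic distance to a known box, standing in for the actual shape/symmetry/shadow energy, and all parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(params, scale):
    """Toy stand-in for the paper's energy function. In the real system
    it scores shape, symmetry, and shadow evidence in the image; here we
    simply measure distance of (x, y, w, h) to a known target box."""
    target = np.array([40.0, 60.0, 30.0, 20.0]) / scale
    return float(np.sum((params - target) ** 2))

def genetic_search(scale, pop_size=40, generations=30, init=None):
    """Minimise the energy with a simple real-valued genetic algorithm:
    keep the best half of the population, mutate it to produce children."""
    dim = 4
    if init is None:
        pop = rng.uniform(0, 100 / scale, (pop_size, dim))
    else:  # seed the population around a previous (coarser) solution
        pop = init + rng.normal(0, 2.0 / scale, (pop_size, dim))
    for _ in range(generations):
        fitness = np.array([energy(p, scale) for p in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]          # selection
        children = elite + rng.normal(0, 1.0 / scale, elite.shape)  # mutation
        pop = np.vstack([elite, children])
    fitness = np.array([energy(p, scale) for p in pop])
    return pop[np.argmin(fitness)]

# Multi-resolution: search a downscaled image first, then refine the
# result at full resolution, which is what makes real-time rates feasible.
coarse = genetic_search(scale=4)
fine = genetic_search(scale=1, init=coarse * 4)
```

    The coarse pass explores a small parameter space cheaply; the fine pass only has to refine locally, which is the essential speed-up of the multi-resolution approach.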

    Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework

    From social dining in households to product assembly in manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as the interacting partner based on the evolving environmental context and thereby exploit such information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots are jointly assembling objects in a shared workspace, we describe a bioinspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of the robot’s body and its peripersonal space. The internal models (of each robot’s body and peripersonal space) are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequence of potential actions of two industrial robots in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots are assembling industrial fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In a spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at what time instances so as to successfully complete as many assemblies as possible. The existing spatial setting fundamentally necessitates planning collision-free trajectories and avoiding potential collisions between the robots. 
Furthermore, an interesting scenario where the assembly goal is not realizable by either of the robots individually but only realizable if they meaningfully cooperate is used to demonstrate the interplay between perception, simulation of multiple internal models and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typically engineered solution to evaluate its performance in the assembly task. The framework provides a computational outlook to the emerging results from neurosciences related to the learning and use of body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture in a complex planning task specifically, the internal model based framework is domain-agnostic, facilitating portability to several other tasks and platforms.
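    The reward-based selection of which object to act on, gated by a feasibility check, can be illustrated with a deliberately simplified sketch. This is not the paper's neural architecture: the reachability test stands in for the body-schema anticipation, conflicts are resolved by fixed robot order rather than learned dynamics, and all names and coordinates are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    pos: tuple          # (x, y) position in the shared workspace
    reward: float       # value toward completing an assembly

def reachable(robot_base, item, arm_span=1.0):
    """Feasibility check (stand-in for body-schema-based anticipation):
    an object is actionable only if it lies within the arm's reach."""
    dx = item.pos[0] - robot_base[0]
    dy = item.pos[1] - robot_base[1]
    return (dx * dx + dy * dy) ** 0.5 <= arm_span

def plan_step(robot_bases, items):
    """Each robot claims its highest-reward reachable object; a claimed
    object is removed from the other robot's candidates, so the two arms
    never target the same object in the same step."""
    choices, claimed = {}, set()
    for rid, base in sorted(robot_bases.items()):
        candidates = [o for o in items
                      if o.name not in claimed and reachable(base, o)]
        if candidates:
            best = max(candidates, key=lambda o: o.reward)
            choices[rid] = best.name
            claimed.add(best.name)
    return choices

bases = {"robot_a": (0.0, 0.0), "robot_b": (2.0, 0.0)}
scene = [Item("fuse_1", (0.5, 0.2), 1.0),
         Item("fuse_stand", (1.8, 0.1), 2.0),
         Item("fuse_2", (2.4, 0.3), 1.0)]
print(plan_step(bases, scene))  # robot_a takes fuse_1; robot_b the stand
```

    Objects reachable only by one robot naturally fall to that robot, which hints at how cooperation emerges when a goal is not realizable by either robot alone.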

    Inferring a spatial road representation from the behavior of real world traffic participants

    Casapietra E, Weisswange TH, Goerick C, Kummert F. Inferring a spatial road representation from the behavior of real world traffic participants. In: 2016 IEEE Intelligent Vehicles Symposium (IV). Institute of Electrical and Electronics Engineers (IEEE); 2016.

    Modeling peripersonal action space for virtual humans using touch and proprioception

    Nguyen N, Wachsmuth I. Modeling peripersonal action space for virtual humans using touch and proprioception. In: Ruttkay Z, Kipp M, Nijholt A, Vilhjalmsson HH, eds. Proceedings of the 9th Conference on Intelligent Virtual Agents (IVA 2009). Berlin: Springer; 2009: 63-75. We propose a computational model for building a tactile body schema for a virtual human. The learned body structure of the agent can enable it to acquire a perception of the space surrounding its body, namely its peripersonal space. The model uses tactile and proprioceptive information and relies on an algorithm which was originally applied with visual and proprioceptive sensor data. In order to feed the model, we present work on obtaining the necessary sensory data only from touch sensors and the motor system. Based on this, we explain the learning process for a tactile body schema. As there is not only a technical motivation for devising such a model but also an application of peripersonal action space, an interaction example with a conversational agent is described.
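    The idea of deriving peripersonal space from paired proprioceptive and tactile samples can be sketched as below. This is only an illustrative sketch, not the paper's algorithm: a planar two-joint arm with assumed link lengths stands in for the virtual human's body, random "motor babbling" stands in for the exploration process, and the membership test is a crude nearest-neighbour check over explored touch locations.

```python
import numpy as np

rng = np.random.default_rng(1)
L_UPPER, L_FORE = 0.3, 0.25   # assumed link lengths of a planar 2-joint arm (m)

def fingertip(q):
    """Forward model of the arm: where the touch sensor on the hand
    ends up for joint angles q = (shoulder, elbow) -- the proprioceptive
    side of each tactile-proprioceptive sample."""
    x = L_UPPER * np.cos(q[0]) + L_FORE * np.cos(q[0] + q[1])
    y = L_UPPER * np.sin(q[0]) + L_FORE * np.sin(q[0] + q[1])
    return np.array([x, y])

# "Motor babbling": random joint configurations paired with the touch
# location they produce stand in for the exploration data.
samples = np.array([fingertip(rng.uniform(-np.pi, np.pi, 2))
                    for _ in range(2000)])

def in_peripersonal_space(point, margin=0.05):
    """A point belongs to peripersonal space if some explored touch
    sample landed close to it (nearest-neighbour membership test)."""
    return bool(np.min(np.linalg.norm(samples - point, axis=1)) < margin)
```

    Points inside the arm's reachable annulus are accepted while distant points are rejected, capturing the core idea that peripersonal space is learned from the body's own sensorimotor experience rather than given a priori.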