
    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences. Comment: Accepted to the International Conference on Computational Photography 2017
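
    To make the direct, rotation-only formulation above concrete, the following is a minimal sketch of aligning a batch of events against a panoramic event map with three degrees of freedom. It is an illustrative reading of the idea, not the authors' implementation: the equirectangular map layout, the inverse intrinsics K_inv, and the greedy axis-wise search are all assumptions.

    import numpy as np

    def so3_exp(w):
        """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
        theta = np.linalg.norm(w)
        if theta < 1e-12:
            return np.eye(3)
        k = w / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def events_to_panorama(events_px, K_inv, R, pano_w, pano_h):
        """Back-project homogeneous pixel events (N, 3) through rotation R onto
        equirectangular panorama coordinates (assumed map layout)."""
        rays = (R @ (K_inv @ events_px.T)).T
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)
        lon = np.arctan2(rays[:, 0], rays[:, 2])            # [-pi, pi]
        lat = np.arcsin(np.clip(rays[:, 1], -1.0, 1.0))     # [-pi/2, pi/2]
        u = (lon / (2.0 * np.pi) + 0.5) * (pano_w - 1)
        v = (lat / np.pi + 0.5) * (pano_h - 1)
        return u.astype(int), v.astype(int)

    def track_batch(events_px, K_inv, R, event_map, step=1e-3):
        """Greedy 3-DoF update: keep the small axis rotation that makes the
        event batch land on the most active cells of the panoramic event map."""
        pano_h, pano_w = event_map.shape

        def score(Rc):
            u, v = events_to_panorama(events_px, K_inv, Rc, pano_w, pano_h)
            return event_map[v, u].sum()

        best_R, best_s = R, score(R)
        for axis in range(3):
            for sign in (-1.0, 1.0):
                w = np.zeros(3)
                w[axis] = sign * step
                Rc = R @ so3_exp(w)
                s = score(Rc)
                if s > best_s:
                    best_R, best_s = Rc, s
        return best_R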

    3D modeling of indoor environments by a mobile platform with a laser scanner and panoramic camera

    One major challenge of 3DTV is content acquisition. Here, we present a method to acquire a realistic, visually convincing 3D model of indoor environments based on a mobile platform that is equipped with a laser range scanner and a panoramic camera. The data of the 2D laser scans are used to solve the simultaneous localization and mapping problem and to extract walls. Textures for walls and floor are built from the images of a calibrated panoramic camera. Multiresolution blending is used to hide seams in the generated textures. The scene is further enriched by 3D geometry calculated from a graph cut stereo technique. We present experimental results from a moderately large real environment.
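
    Multiresolution blending here refers to the classic Burt-Adelson Laplacian-pyramid scheme: each texture contributes its low frequencies over a wide transition zone and its high frequencies over a narrow one, which hides seams. The sketch below is a generic version of that technique; the OpenCV-based pyramid and function name are illustrative assumptions, not the paper's code.

    import cv2
    import numpy as np

    def multiresolution_blend(img_a, img_b, mask, levels=5):
        """Blend two aligned images: combine their Laplacian pyramids using a
        Gaussian pyramid of a single-channel float mask in [0, 1]
        (1.0 where img_a should win), then collapse the result."""
        ga = [img_a.astype(np.float32)]
        gb = [img_b.astype(np.float32)]
        gm = [mask.astype(np.float32)]
        for _ in range(levels):
            ga.append(cv2.pyrDown(ga[-1]))
            gb.append(cv2.pyrDown(gb[-1]))
            gm.append(cv2.pyrDown(gm[-1]))
        blended = None
        for lv in range(levels, -1, -1):
            if lv == levels:                   # coarsest band: plain Gaussian level
                la, lb = ga[lv], gb[lv]
            else:                              # band-pass (Laplacian) levels
                size = (ga[lv].shape[1], ga[lv].shape[0])
                la = ga[lv] - cv2.pyrUp(ga[lv + 1], dstsize=size)
                lb = gb[lv] - cv2.pyrUp(gb[lv + 1], dstsize=size)
            m = gm[lv][..., None] if la.ndim == 3 else gm[lv]
            band = m * la + (1.0 - m) * lb
            if blended is None:
                blended = band
            else:                              # upsample running result, add band
                size = (band.shape[1], band.shape[0])
                blended = cv2.pyrUp(blended, dstsize=size) + band
        return np.clip(blended, 0, 255).astype(np.uint8)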

    3D modeling of indoor environments by a mobile robot with a laser scanner and panoramic camera

    We present a method to acquire a realistic, visually convincing 3D model of indoor office environments based on a mobile robot that is equipped with a laser range scanner and a panoramic camera. The data of the 2D laser scans are used to solve the SLAM problem and to extract walls. Textures for walls and floor are built from the images of a calibrated panoramic camera. Multi-resolution blending is used to hide seams in the generated textures.

    Autonomous robot systems and competitions: proceedings of the 12th International Conference

    This is the 2012 edition of the scientific meeting of the Portuguese Robotics Open (ROBOTICA’ 2012). It aims to disseminate scientific contributions and to promote discussion of theories, methods and experiences in areas of relevance to Autonomous Robotics and Robotic Competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interaction of Eindhoven University of Technology, Netherlands. The conference is kindly sponsored by the IEEE Portugal Section / IEEE RAS Chapter and by SPR - Sociedade Portuguesa de Robótica.

    System Integration of a Tour Guide Robot

    In today's world, people visit many attractive places, and on such occasions it is of great value to be accompanied by a tour guide who can explain the cultural and historical importance of each place. Due to advancements in technology, smartphones today can help a person navigate to any place in the world and can themselves act as tour guides by explaining the significance of a place. However, a person looking at a phone might not watch his or her step and might collide with other moving people or objects. With a phone as tour guide, the person is also alone and misses a sense of contact with other travelers; therefore a human guide is desirable for providing tours to a group of visitors. Human tour guides, however, face tiredness, distraction, and the effects of repetitive tasks while providing tour services. Robots eliminate these problems and can provide tours consistently until their batteries drain. This work introduces a tour-guide robot that can be used on such occasions. Tour-guide robots can navigate autonomously in a known map of a given place and at the same time interact with people. The environment is equipped with artificial landmarks, each of which provides information about a specific region. An animated avatar is displayed on the screen, and IBM Watson provides voice recognition and text-to-speech services for human-robot interaction.

    3D modeling of indoor environments for a robotic security guard

    Autonomous mobile robots will play a major role in future security and surveillance tasks for large-scale environments such as shopping malls, airports, hospitals and museums. Robotic security guards will autonomously survey such environments, unless a remote human operator takes over control. In this context a 3D model can convey much more useful information than the typical 2D maps used in many robotic applications today, both for visualisation of information and as a human-machine interface for remote control. This paper addresses the challenge of building such a model of a large environment (50 m × 60 m) using data from the robot’s own sensors: a 2D laser scanner and a panoramic camera. The data are processed in a pipeline that comprises automatic, semi-automatic and manual stages. The user can interact with the reconstruction process where necessary to ensure robustness and completeness of the model. A hybrid representation, tailored to the application, has been chosen: floors and walls are represented efficiently by textured planes. Non-planar structures like stairs and tables, which are represented by point clouds, can be added if desired. Our methods to extract these structures include: simultaneous localization and mapping in 2D and wall extraction based on laser scanner range data, building textures from multiple omni-directional images using multi-resolution blending, and calculation of 3D geometry by a graph cut stereo technique. Various renderings illustrate the usability of the model for visualising the security guard’s position and environment.
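
    To make the wall-extraction step more tangible, here is a generic sequential-RANSAC line fitter over 2D laser points: fit the dominant line, remove its inliers, and repeat. It is a sketch of a standard technique under assumed thresholds, not the paper's exact method.

    import numpy as np

    def fit_line(p, q):
        """Return (unit normal n, offset c) of the line n . x = c through p and q."""
        d = q - p
        n = np.array([-d[1], d[0]])
        n /= np.linalg.norm(n)
        return n, n @ p

    def extract_walls(points, iters=200, tol=0.03, min_inliers=50, max_walls=8, seed=0):
        """points: (N, 2) laser endpoints in metres; tol is the inlier distance."""
        rng = np.random.default_rng(seed)
        pts = points.copy()
        walls = []
        while len(pts) >= min_inliers and len(walls) < max_walls:
            best = None
            for _ in range(iters):
                i, j = rng.choice(len(pts), size=2, replace=False)
                if np.allclose(pts[i], pts[j]):
                    continue
                n, c = fit_line(pts[i], pts[j])
                inliers = np.abs(pts @ n - c) < tol
                if best is None or inliers.sum() > best[2].sum():
                    best = (n, c, inliers)
            if best is None or best[2].sum() < min_inliers:
                break
            walls.append((best[0], best[1]))
            pts = pts[~best[2]]                # remove wall points, look for the next
        return walls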

    Extrinsic calibration of a set of range cameras in 5 seconds without pattern

    The integration of several range cameras in a mobile platform is useful for applications in mobile robotics and autonomous vehicles that require a large field of view. This situation is increasingly interesting with the advent of low-cost range cameras like those developed by PrimeSense. Calibrating such a combination of sensors for any geometric configuration is a problem that has recently been solved through visual odometry (VO) and SLAM. However, this kind of solution is laborious to apply, requiring robust SLAM or VO in controlled environments. In this paper we propose a new, uncomplicated technique for extrinsic calibration of range cameras that relies on finding and matching planes. The method that we present serves to calibrate two or more range cameras in an arbitrary configuration, requiring only the observation of one plane from different viewpoints. The conditions to solve the problem are studied, and several practical examples are presented covering different geometric configurations, including an omnidirectional RGB-D sensor composed of 8 range cameras. The quality of this calibration is evaluated with several experiments that demonstrate an improvement of accuracy over design parameters, while providing a versatile solution that is extremely fast and easy to apply.
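
    A minimal sketch of the plane-matching idea follows. Under the convention that a plane is (n, d) with n . x = d and the frames relate by x_A = R x_B + t, matched planes satisfy n_A = R n_B and d_A = d_B + n_A . t, so R follows from a Kabsch/Procrustes fit of the normals and t from linear least squares over the distances. This closed form is an illustrative reconstruction requiring at least three planes with non-parallel normals, not the paper's exact solver.

    import numpy as np

    def calibrate_from_planes(planes_a, planes_b):
        """planes_*: matched lists of (unit normal (3,), signed distance) pairs."""
        Na = np.array([n for n, _ in planes_a])   # (K, 3) normals in frame A
        Nb = np.array([n for n, _ in planes_b])
        da = np.array([d for _, d in planes_a])
        db = np.array([d for _, d in planes_b])
        # Rotation: R = argmin sum ||n_a - R n_b||^2 (Kabsch via SVD)
        U, _, Vt = np.linalg.svd(Na.T @ Nb)
        S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflection
        R = U @ S @ Vt
        # Translation: each plane pair gives the linear constraint n_a . t = d_a - d_b
        t, *_ = np.linalg.lstsq(Na, da - db, rcond=None)
        return R, t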

    Development and evaluation of vision processing algorithms in multi-robotic systems.

    The trend in swarm robotics research is shifting towards the design of more complicated systems in which the robots have the ability to form a robotic organism. In such systems, a single robot has very limited memory and processing resources, but the complete system is rich in these resources. Vision sensors provide rich awareness of the surroundings, and vision algorithms require intensive processing; vision processing tasks are therefore the best candidate for distributed processing in such systems. To perform distributed vision processing, a number of scenarios are considered in the swarm and robotic organism forms. In the swarm form, the robots use a low-bandwidth wireless communication medium, so only simple visual features can be exchanged between robots. This is addressed in a swarm-mode scenario where novel distance-vector features are exchanged within a swarm of robots to generate a precise environmental map, which facilitates robot navigation in the environment. If features require encoding with high-density information, sharing such features over the wireless channel with limited bandwidth is not possible, so methods were devised that process such features onboard and then share the processing outcome, performing vision processing in a distributed fashion. This is shown in another swarm-mode scenario in which a number of optimisation stages are followed and novel image pre-processing techniques are developed that enable the robots to perform onboard object recognition and then share the processing outcome, in terms of object identity and its distance from the robot, to localise the objects. In the robotic organism, a reliable communication medium facilitates distributed vision processing, and this is presented in two scenarios. In the first scenario, the robotic organism detects objects in the environment in a distributed fashion, but to obtain detailed awareness of its surroundings, the organism needs to learn these objects. This leads to a second scenario, which presents a modular approach to object classification and recognition. This approach provides a mechanism to learn newly detected objects and also ensures a faster response in object recognition. Using the modular approach, it is also demonstrated that the collective use of 4 distributed processing resources in a robotic organism can provide 5 times the performance of an individual robot module. The overall performance was comparable to that of an individual, less flexible robot (e.g., Pioneer-3AT) with significantly higher processing capability.
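
    As a small illustration of the second swarm-mode scenario, the sketch below localises an object from the shared (identity, distance) outcomes by trilaterating against known robot poses. The planar setup, function name, and linearisation are assumptions for illustration, not the thesis code.

    import numpy as np

    def trilaterate(robot_xy, ranges):
        """robot_xy: (K, 2) known robot positions; ranges: (K,) measured distances
        to one object. Solves ||p - r_i||^2 = d_i^2 by subtracting the first
        equation from the rest, which is linear in p; needs K >= 3 non-collinear
        robots."""
        r0, d0 = robot_xy[0], ranges[0]
        A = 2.0 * (robot_xy[1:] - r0)
        b = (d0**2 - ranges[1:]**2
             + np.sum(robot_xy[1:]**2, axis=1) - np.sum(r0**2))
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p

    # Example: three swarm members report ranges to the same object id;
    # the measurements are consistent with an object at (4, 3).
    robots = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
    dists = np.array([5.0, 3.0, 4.0])
    print(trilaterate(robots, dists))          # -> approximately [4. 3.]
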
    • …