
    Low-cost interactive active monocular range finder

    This paper describes a low-cost interactive active monocular range finder and illustrates the effect of introducing interactivity into the range-acquisition process. The range finder consists of only one camera and a laser pointer to which three LEDs are attached. As a user scans the laser along the surfaces of objects, the camera captures the image of the spots (one from the laser, the others from the LEDs), and triangulation is carried out using the camera's viewing direction and the optical axis of the laser. User interaction allows the range finder to acquire range data whose sampling rate varies across the object depending on the underlying surface structure. Moreover, separating objects from the background and/or finding parts within an object can be achieved using the operator's knowledge of the objects.
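    As a rough sketch of the triangulation step described above, the laser spot can be located as the midpoint of the shortest segment between the camera's viewing ray and the laser's optical axis. The function name and the least-squares formulation here are illustrative, not taken from the paper:

```python
import numpy as np

def triangulate(cam_dir, laser_origin, laser_dir):
    """Locate the laser spot as the point closest to both rays.

    cam_dir: viewing ray from the camera (placed at the origin) toward the spot.
    laser_origin, laser_dir: the laser pointer's position and optical axis,
    which the paper recovers from the three attached LED markers.
    """
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    laser_dir = laser_dir / np.linalg.norm(laser_dir)
    # Solve s*cam_dir - t*laser_dir = laser_origin in the least-squares sense,
    # i.e. find the parameters of the closest points on the two rays.
    A = np.column_stack([cam_dir, -laser_dir])
    s, t = np.linalg.lstsq(A, laser_origin, rcond=None)[0]
    p_cam = s * cam_dir                       # closest point on the camera ray
    p_laser = laser_origin + t * laser_dir    # closest point on the laser axis
    return (p_cam + p_laser) / 2.0

# Example: camera at the origin looking along +z, laser 1 m to the side,
# both aimed at a surface point at depth 5 m.
spot = triangulate(np.array([0.0, 0.0, 1.0]),
                   np.array([1.0, 0.0, 0.0]),
                   np.array([-1.0, 0.0, 5.0]))
```

    Taking the midpoint of the shortest segment makes the estimate tolerant of the small calibration and detection errors that keep real rays from intersecting exactly.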

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data-fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
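    The sequential fusion step can be sketched as two measurement updates applied one after the other. For brevity this sketch uses a linear Kalman update in place of the paper's Unscented Kalman Filter, and the state layout, noise covariances, and measurement values are invented for illustration:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One Kalman measurement update (linear stand-in for the paper's UKF)."""
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # corrected state
    P = (np.eye(len(x)) - K @ H) @ P   # corrected covariance
    return x, P

# Hypothetical setup: the state is the person's (x, y) position and both
# detectors are assumed to observe that position directly.
H = np.eye(2)
x, P = np.zeros(2), np.eye(2) * 4.0    # vague prior on the person's position
# Sequential fusion: laser leg detection first, then camera face detection.
x, P = kf_update(x, P, np.array([2.1, 0.9]), H, np.eye(2) * 0.1)  # legs (LRF, low noise)
x, P = kf_update(x, P, np.array([1.9, 1.1]), H, np.eye(2) * 0.5)  # face (camera, higher noise)
```

    The fused estimate ends up closer to the lower-noise leg measurement, which is the point of sequential fusion: each sensor is weighted by its own measurement covariance.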

    Design and modeling of a stair climber smart mobile robot (MSRox)


    3D-TV Production from Conventional Cameras for Sports Broadcast

    3DTV production of live sports events presents a challenging problem, involving the conflicting requirements of maintaining broadcast stereo picture quality with the practical problems of developing robust systems for cost-effective deployment. In this paper we propose an alternative approach to stereo production for sports events that uses the conventional monocular broadcast cameras for 3D reconstruction of the event and subsequent stereo rendering. This approach has the potential advantage over stereo camera rigs of recovering full scene depth, allowing the inter-ocular distance and convergence to be adapted to the requirements of the target display and enabling stereo coverage from both existing and ‘virtual’ camera positions without additional cameras. A prototype system is presented with results of sports TV production trials for the rendering of stereo and free-viewpoint video sequences of soccer and rugby.
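    The display-dependent adaptation described above rests on the standard relation disparity = focal length × inter-ocular distance / depth. The following sketch, with made-up focal length, depth, and inter-ocular values, shows how one recovered depth map can yield different disparity maps for different target displays:

```python
import numpy as np

def stereo_disparity(depth, focal_px, interocular):
    """Per-pixel horizontal disparity (in pixels) for a virtual stereo pair.

    Because full scene depth is recovered from the monocular cameras, the
    inter-ocular distance can be chosen per target display rather than being
    fixed by a physical stereo rig.
    """
    return focal_px * interocular / depth

# Illustrative depth map in metres from the virtual camera.
depth = np.array([[10.0, 20.0],
                  [40.0, 80.0]])
d_cinema = stereo_disparity(depth, focal_px=1800.0, interocular=0.065)  # adult eye spacing
d_mobile = stereo_disparity(depth, focal_px=1800.0, interocular=0.030)  # reduced for small screens
```

    Halving the inter-ocular distance halves every disparity, which is how the same reconstruction can serve both a cinema screen and a handheld display.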

    MonoSLAM: Real-time single camera SLAM


    Autonomous robot systems and competitions: proceedings of the 12th International Conference

    These are the proceedings of the 2012 edition of the scientific meeting of the Portuguese Robotics Open (ROBOTICA'2012). The meeting aims to disseminate scientific contributions and to promote discussion of theories, methods, and experiences in areas relevant to autonomous robotics and robotic competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interaction of Eindhoven University of Technology, Netherlands. The conference is kindly sponsored by the IEEE Portugal Section / IEEE RAS Chapter and by SPR (Sociedade Portuguesa de Robótica).

    GUI3DXBot: An interactive software tool for a tour-guide mobile robot

    Nowadays, mobile robots are beginning to appear in public places, and to perform their tasks properly they must interact with humans. This paper presents the development of GUI3DXBot, a software tool for a tour-guide mobile robot, focusing on the software modules needed to guide users through an office building. In this context, GUI3DXBot is a client-server application: the server side runs on the robot, and the client side runs on a 10-inch Android tablet. The server side is in charge of perception, localization and mapping, and path planning. The client side implements the human-robot interface, which lets users request or cancel a tour-guide service, shows the robot's location on the map, handles user interaction, and supports tele-operating the robot in an emergency. The contributions of this paper are twofold: it proposes a software module design for guiding users in an office building, and the whole robot system was fully integrated and tested. GUI3DXBot was validated through software integration tests and field tests. The field tests were carried out over a two-week period, together with a user survey. The survey results show that users found GUI3DXBot friendly and intuitive, goal selection very easy, and the interactive messages very easy to understand; 90% of users found the robot icon on the map useful, users found the path drawn on the map useful, 90% found the local-global map view useful, and the guidance experience was rated very satisfactory (70%) or satisfactory (30%).

    Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

    3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be able to be controlled by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK.
    Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its corresponding texture map. The whole modeling process takes only a few seconds, and the resulting human model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system, in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
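    The "straw" idea behind ShortStraw-style corner finding can be sketched as follows. This is a simplified illustration, not the IStraw algorithm from the dissertation, and the window size and threshold are invented values:

```python
import numpy as np

def find_corners(points, window=3, thresh=0.95):
    """Corner candidates on a closed contour, in the spirit of ShortStraw.

    points: (N, 2) array of roughly equally spaced contour points. A 'straw'
    at index i is the chord from points[i - window] to points[i + window];
    the straw is short where the contour turns sharply, so corners are local
    minima of straw length that fall below thresh * median straw length.
    """
    n = len(points)
    straws = np.array([
        np.linalg.norm(points[(i + window) % n] - points[(i - window) % n])
        for i in range(n)
    ])
    cutoff = thresh * np.median(straws)
    return [i for i in range(n)
            if straws[i] < cutoff
            and straws[i] <= straws[(i - 1) % n]
            and straws[i] <= straws[(i + 1) % n]]

# Demo: a unit square sampled at 10 equally spaced points per side;
# its four geometric corners sit at indices 0, 10, 20 and 30.
side = np.linspace(0.0, 1.0, 10, endpoint=False)
square = np.concatenate([
    np.column_stack([side, np.zeros(10)]),        # bottom edge, left to right
    np.column_stack([np.ones(10), side]),         # right edge, bottom to top
    np.column_stack([1.0 - side, np.ones(10)]),   # top edge, right to left
    np.column_stack([np.zeros(10), 1.0 - side]),  # left edge, top to bottom
])
corners = find_corners(square)
```

    The contour must be resampled to equal spacing first; otherwise straw length reflects sampling density rather than curvature, which is exactly the preprocessing step ShortStraw prescribes.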