3,960 research outputs found

    An intelligent real time 3D vision system for robotic welding tasks

    MARWIN is a top-level robot control system designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models, which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified simply by confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
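
    The triangulation at the core of structured-light 3D reconstruction can be sketched in a few lines. This is the generic textbook relation z = f·b/d, not MARWIN's actual formulation; the focal length, baseline, and shift values below are made up for illustration.

```python
import numpy as np

def structured_light_depth(pattern_shift_px, baseline_m, focal_length_px):
    """Triangulate depth from the observed shift of a projected pattern.

    Generic relation z = f * b / d: the farther the surface, the smaller
    the shift d of the stripe relative to a reference plane. Illustrative
    only; the paper derives its own formulation.
    """
    shift = np.asarray(pattern_shift_px, dtype=float)
    return focal_length_px * baseline_m / shift

# made-up setup: 800 px focal length, 0.2 m projector-camera baseline
depths = structured_light_depth([10.0, 20.0, 40.0], 0.2, 800.0)
print(depths)  # roughly [16.  8.  4.] metres
```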

    Intelligent composite layup by the application of low cost tracking and projection technologies

    Hand layup is still the dominant forming process for creating the widest range of complex-geometry and mixed-material composite parts. However, the process is still poorly understood and informed, limiting productivity. This paper seeks to address this issue by proposing a novel, low-cost system enabling a laminator to be guided in real time, based on a predetermined instruction set, thus improving the standardisation of produced components. Within this paper the current methodologies are critiqued and future trends are predicted, prior to introducing the required inputs and outputs and developing the implemented system. As a demonstrator, a U-shaped component typical of the complex geometry found in many difficult-to-manufacture composite parts was chosen, and its drapeability assessed using a kinematic drape simulation tool. An experienced laminator's knowledge base was then used to divide the tool into a finite number of features, with layup conducted by projecting and sequentially highlighting target features while tracking the laminator's hand movements across the ply. The system has been implemented with affordable hardware and demonstrates tangible benefits in comparison to currently employed laser-based systems. It has shown remarkable success to date, with rapid Technology Readiness Level advancement. This is a major stepping stone towards augmenting manual labour, with further benefits including more appropriate automation.

    Motionless active depth from defocus system using smart optics for camera autofocus applications

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a coherent, conditioned optical radiation pattern onto the scene; the pattern maintains its sharpness over multiple axial distances, allowing an increased DFD working-distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited to camera scenarios where mechanical motion of lenses to achieve autofocus is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern digital cameras.
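
    The blur-to-depth inversion behind DFD can be sketched with the standard thin-lens model. This is the textbook relation only, not the paper's ECVFL-based calibration, and all parameter values below are assumed for illustration.

```python
def depth_from_defocus(blur_radius_m, focal_length_m, aperture_m, sensor_dist_m):
    """Invert the thin-lens blur model for the object distance u.

    Blur-circle radius: sigma = (D * s / 2) * (1/f - 1/u - 1/s),
    so 1/u = 1/f - 1/s - 2*sigma / (D * s). Textbook model only;
    the paper's motionless ECVFL imager uses its own calibration.
    """
    inv_u = (1.0 / focal_length_m - 1.0 / sensor_dist_m
             - 2.0 * blur_radius_m / (aperture_m * sensor_dist_m))
    return 1.0 / inv_u

# zero blur means the object sits exactly at the focused distance
print(depth_from_defocus(0.0, 0.05, 0.02, 0.055))  # ~0.55 m (in focus)
```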

    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project, Fly4SmartCity, analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, developed in the User's Flying Organizer (UFO) project, which aims to build a flying robot able to project information into the environment by exploiting concepts of Spatial Augmented Reality.

    Laser Pointer Tracking in Projector-Augmented Architectural Environments

    We present a system that applies a custom-built pan-tilt-zoom camera for laser-pointer tracking in arbitrary real environments. Once placed in a building environment, it carries out a fully automatic self-registration, registration of projectors, and sampling of surface parameters such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using interplaying fisheye context and controllable detail cameras. The captured surface information can be used for masking out areas that are critical to laser-pointer tracking, and for guiding geometric and radiometric image correction techniques that enable projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser-pointer tracking for interaction, projector-based AR, and video see-through AR for visualization with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.
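
    At its simplest, finding a laser spot in a camera frame reduces to locating the centroid of saturated pixels. The sketch below is a minimal stand-in for that detection step, assuming a grayscale frame and a fixed brightness threshold; the paper's tracker is far more elaborate (pan-tilt-zoom control, surface masking).

```python
import numpy as np

def laser_spot_centroid(gray, threshold=250):
    """Centroid of near-saturated pixels: a minimal, assumed stand-in
    for the laser-spot detection step of a tracker like this one."""
    ys, xs = np.nonzero(gray >= threshold)
    if xs.size == 0:
        return None  # no spot visible in this frame
    return float(xs.mean()), float(ys.mean())

# synthetic 64x64 frame with a bright 2x2 spot at x=10..11, y=20..21
frame = np.zeros((64, 64), dtype=np.uint8)
frame[20:22, 10:12] = 255
print(laser_spot_centroid(frame))  # (10.5, 20.5)
```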

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adapted to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Agile wavefront splitting interferometry and imaging using a digital micromirror device

    Since 1997, we have proposed and demonstrated the use of the Texas Instruments (TI) Digital Micromirror Device (DMD) for various non-display applications, including optical switching and imaging. In 2009, we proposed the use of the DMD to realize wavefront splitting interferometers as well as a variety of imagers. Specifically, we proposed agile, electronically programmable wavefront splitting interferometer designs using a Spatial Light Modulator (SLM), such as (a) a transmissive SLM, (b) a DMD SLM, and (c) a beamsplitter with a DMD SLM. The SLMs operate with on/off (digital-state) pixels, much like a black-and-white optical window controlling the passage or reflection of incident light. SLM pixel locations can be spatially and temporally modulated to create custom wavefronts for near-common-path optical interference at optical detectors such as a CCD/CMOS sensor, a Focal Plane Array (FPA) sensor, or a point photodetector. This paper describes the proposed DMD-based wavefront splitting interferometer and imager designs and their relevant experimental results.
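
    A wavefront-splitting interferometer behaves like a programmable Young's double aperture: two groups of "on" micromirrors play the role of the two slits. The sketch below applies the standard fringe-spacing relation Δx = λL/d; the wavelength, sensor distance, and pixel-group separation are assumed values, not the paper's experimental parameters.

```python
def fringe_spacing_m(wavelength_m, screen_dist_m, aperture_sep_m):
    """Young's double-aperture fringe spacing: dx = lambda * L / d."""
    return wavelength_m * screen_dist_m / aperture_sep_m

# assumed setup: 633 nm HeNe laser, sensor 0.5 m away, DMD pixel
# groups separated by 137 um (e.g. ten pixels of a 13.7 um pitch)
dx = fringe_spacing_m(633e-9, 0.5, 137e-6)
print(f"{dx * 1e3:.2f} mm")  # 2.31 mm between bright fringes
```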

    IMPROVE: collaborative design review in mobile mixed reality

    In this paper we introduce an innovative application designed to make collaborative design review in the architectural and automotive domains more effective. For this purpose we present a system architecture which combines a variety of visualization displays, such as high-resolution multi-tile displays, TabletPCs, and head-mounted displays, with innovative 2D and 3D interaction paradigms to better support collaborative mobile mixed reality design reviews. Our research and development is motivated by two use scenarios: automotive and architectural design review involving real users from Page\Park architects and FIAT Elasis. Our activities are supported by the EU IST project IMPROVE, aimed at developing advanced display techniques and fostering activities in the areas of optical see-through HMD development using unique OLED technology, marker-less optical tracking, mixed reality rendering, image calibration for large tiled displays, collaborative tablet-based and projection-wall-oriented interaction, and stereoscopic video streaming for mobile users. The paper gives an overview of the hardware and software developments within IMPROVE and concludes with results from first user tests.

    Fast Obstacle Distance Estimation using Laser Line Imaging Technique for Smart Wheelchair

    This paper presents an approach to obstacle distance estimation for a smart wheelchair. A smart wheelchair was equipped with a camera and a laser line projector. The camera was used to capture images of the environment in order to sense the pathway condition. The laser line was used in combination with the camera to recognize an obstacle in the pathway, based on the shape of the laser line image at a certain angle. A blob detection method was then applied to the laser line image to separate and recognize the pattern of the detected obstacles. The laser line projector and camera, mounted in a fixed position, ensured a fixed relation between the blob gap and the obstacle-to-wheelchair distance. A simple linear regression over 16 calibration measurements was used to represent this relation as the estimated obstacle distance. As a result, the average error between the estimated and the actual distance was 1.25 cm over 7 test experiments. The experimental results therefore show that the proposed method was able to estimate the distance between the wheelchair and the obstacle.
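
    The calibration step described above, mapping the measured blob gap to obstacle distance with a simple linear regression, can be sketched as follows. The gap/distance pairs are synthetic stand-ins, not the paper's 16 measured calibration points.

```python
import numpy as np

def fit_gap_to_distance(blob_gaps_px, distances_cm):
    """Least-squares line distance = a * gap + b, mirroring the paper's
    16-point calibration (the data used here is synthetic)."""
    a, b = np.polyfit(blob_gaps_px, distances_cm, 1)
    return a, b

# synthetic calibration: the blob gap shrinks as the obstacle recedes
gaps_px = [120.0, 100.0, 80.0, 60.0]
dist_cm = [40.0, 60.0, 80.0, 100.0]
a, b = fit_gap_to_distance(gaps_px, dist_cm)
print(round(a * 90.0 + b, 2))  # a 90 px gap maps to 70.0 cm here
```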