
    Algorithms, Protocols & Systems for Remote Observation Using Networked Robotic Cameras

    Emerging advances in robotic cameras, long-range wireless networking, and distributed sensors make feasible a new class of hybrid teleoperated/autonomous robotic remote "observatories" that allow groups of people, via the Internet, to observe, record, and index detailed activity occurring at a remote site. Equipped with a robotic pan-tilt actuation mechanism and a high-zoom lens, the camera can cover a large region with very high spatial resolution and allows for observation at a distance. A high-resolution motion panorama is the most natural data representation. We develop algorithms and protocols for high-resolution motion panoramas. We discover and prove a projection invariance and achieve real-time image alignment. We propose a minimum-variance incremental frame alignment algorithm that minimizes the accumulation of alignment error in incremental image alignment and ensures the quality of the panorama video over the long run. We propose a Frame Graph based panorama documentation algorithm to manage the large-scale data involved in online panorama video documentation. We propose an on-demand high-resolution panorama video-streaming system that allows on-demand sharing of a high-resolution motion panorama and efficiently handles multiple concurrent spatio-temporal user requests. In conclusion, our work on high-resolution motion panoramas significantly improves the efficiency and accuracy of image alignment, panorama video quality, data organization, and data storage and retrieval in remote observation using networked robotic cameras.
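The abstract's concern with error accumulation in incremental alignment can be illustrated with a minimal sketch: each frame is registered against its predecessor and the pairwise offsets are chained, so registration error compounds with every link (the drift the minimum-variance method is designed to suppress). This is not the thesis's algorithm; it assumes pure-translation motion and uses standard phase correlation, with the function names (`estimate_offset`, `align_incremental`) being hypothetical.

```python
import numpy as np

def estimate_offset(ref, cur):
    # Phase correlation: estimate the circular 2-D shift of `cur` relative to `ref`.
    F = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap peak coordinates to centred offsets in (-h/2, h/2] x (-w/2, w/2].
    return (dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2

def align_incremental(frames):
    # Chain pairwise offsets to place every frame in the panorama's
    # reference coordinates. Each link adds its own estimation error,
    # so drift grows over long sequences -- the failure mode a
    # minimum-variance alignment scheme is meant to bound.
    pos = [(0, 0)]
    for prev, cur in zip(frames, frames[1:]):
        dy, dx = estimate_offset(prev, cur)
        pos.append((pos[-1][0] + dy, pos[-1][1] + dx))
    return pos
```

In practice one would periodically re-register against an anchor frame (or optimize over the whole frame graph) rather than trusting the chain alone.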

    Computer-Based Stereoscopic Parts Recognition for Robotic Applications

    Most robotic handling and assembly operations rely on sensors such as range and touch sensors. In certain circumstances, such as in the presence of ionizing radiation where most customary sensors degrade over time due to radiation exposure, these sensors won't function properly. Using two or more cameras (stereo vision) located outside the target zone and analyzing their images to identify the location and dimensions of parts within the robot workspace is an alternative to such sensors. Object recognition is affected by lighting conditions, which often cause the gray-scale or red, green, and blue values to have a relatively small dynamic range. With this small dynamic range, edge detection algorithms fail to detect the proper edges and therefore cause improper image segmentation. To tackle this problem, a transformation on the (r,g,b) values of the pixels is introduced and applied prior to the edge detection and segmentation process. A stereoscopic computer vision system with multiple cameras is then used to compute the distance of the object from the origin of a global Euclidean coordinate system with high resolution. As an application of computer vision, a classifier for testing remote solar panels for cleanliness, and performing cleaning when necessary, is introduced. A classification algorithm consisting of the classification vector, the metric used, the training of the classifier, the testing of the classifier, and the classifier itself is put into play for everyday use. A smart cleaning robot is being designed based on this system to perform the cleaning autonomously when necessary. Another application of computer vision is inspecting the degree of air pollution. A real-time classification algorithm that uses a quantization algorithm based on prior calibration is applied to evaluate air quality. The intelligent system based on this algorithm classifies the air on a numeric scale from 1 to 10, which is then transformed into a qualitative scale.
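The abstract does not specify the exact (r,g,b) transformation; a common choice for expanding a narrow dynamic range before edge detection is a per-channel min-max contrast stretch, sketched below alongside the standard rectified-stereo depth relation Z = fB/d that such a multi-camera system would use. Both function names are hypothetical.

```python
import numpy as np

def stretch_rgb(img):
    # Per-channel min-max contrast stretch: map each channel's narrow
    # dynamic range onto the full [0, 255] interval so that edge
    # detectors see stronger gradients.
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    for c in range(img.shape[2]):
        ch = img[..., c]
        lo, hi = ch.min(), ch.max()
        if hi > lo:
            out[..., c] = (ch - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Rectified pinhole stereo pair: depth Z = f * B / d, where f is the
    # focal length in pixels, B the camera baseline, d the disparity.
    return focal_px * baseline_m / disparity_px
```

With a 1000 px focal length and a 10 cm baseline, a 50 px disparity corresponds to a 2 m depth; distances in the global coordinate system then follow from the camera's known pose.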

    Mobile robot teleoperation through eye-gaze (telegaze)

    In most teleoperation applications the human operator is required to monitor the status of the robot as well as issue control commands for the whole duration of the operation. With a vision-based feedback system, monitoring the robot requires the operator to look at a continuous stream of images displayed on an interaction screen. The eyes of the operator are therefore fully engaged in monitoring and the hands in controlling. Since the eyes of the operator are engaged in monitoring anyway, inputs from their gaze can be used to aid in controlling. This frees the hands of the operator, either partially or fully, from controlling, so they can be used to perform other necessary tasks. However, the challenge lies in distinguishing gaze inputs intended as control from those that are mere monitoring. In mobile robot teleoperation, controlling mainly consists of issuing locomotion commands to drive the robot, while monitoring is looking where the robot goes and watching for obstacles on the route. Interestingly, there exists a strong correlation between humans' gazing behaviours and their moving intentions. This correlation is exploited in this thesis to investigate novel means for mobile robot teleoperation through eye-gaze, named TeleGaze for short.
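The control/monitoring ambiguity described above can be sketched as a screen-region mapping plus a dwell-time filter: border regions of the feedback screen issue locomotion commands, the centre issues none, and a command fires only after the gaze lingers. This is an illustrative assumption, not the TeleGaze interface itself; the region names, command strings, and `DwellFilter` class are all hypothetical.

```python
def gaze_to_command(x, y, width, height, margin=0.2):
    # Map a gaze point on the feedback screen to a locomotion command.
    # The central region issues no command: looking there is monitoring.
    if x < width * margin:
        return "turn_left"
    if x > width * (1 - margin):
        return "turn_right"
    if y < height * margin:
        return "forward"
    if y > height * (1 - margin):
        return "reverse"
    return None

class DwellFilter:
    """Issue a command only after the gaze dwells in the same region for
    `dwell_s` seconds, separating deliberate control from casual glances."""
    def __init__(self, dwell_s=0.5):
        self.dwell_s = dwell_s
        self.region = None
        self.since = 0.0

    def update(self, command, t):
        # `t` is the gaze sample's timestamp in seconds.
        if command != self.region:
            self.region, self.since = command, t
            return None
        if command is not None and t - self.since >= self.dwell_s:
            return command
        return None
```

A real gaze interface would likely combine such spatial rules with the gaze/intention correlation the thesis studies, rather than rely on fixed screen regions alone.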

    Autonomous Science For Future Planetary Exploration Operations

    Get PDF

    Attention-controlled acquisition of a qualitative scene model for mobile robots

    Haasch A. Attention-controlled acquisition of a qualitative scene model for mobile robots. Bielefeld (Germany): Bielefeld University; 2007.
    Robots that support humans in dangerous environments, e.g., in manufacturing facilities, have been established for decades. Now a new generation of service robots is the focus of current research and about to be introduced. These intelligent service robots are intended to support humans in everyday life. To achieve a comfortable human-robot interaction with non-expert users, it is therefore imperative for the acceptance of such robots to provide interaction interfaces that we humans are accustomed to from human-human communication. Consequently, intuitive modalities like gestures or spontaneous speech are needed to teach the robot previously unknown objects and locations. The robot can then be entrusted with tasks like fetch-and-carry orders even without extensive training of the user. In this context, this dissertation introduces the multimodal Object Attention System, which offers a flexible integration of common interaction modalities in combination with state-of-the-art image and speech processing techniques from other research projects. To prove the feasibility of the approach, the presented Object Attention System has been successfully integrated into different robotic hardware, in particular the mobile robot BIRON and the anthropomorphic robot BARTHOC of the Applied Computer Science Group at Bielefeld University. Concluding, the aim of this work, to acquire a qualitative Scene Model with a modular component offering object attention mechanisms, has been successfully achieved, as demonstrated on numerous occasions such as reviews for the EU Integrated Project COGNIRON and demos.