9 research outputs found

    Simulating underwater depth environment condition using lighting system design

    The major obstacle faced by underwater imaging systems is the extreme loss of color and contrast at any significant depth, which degrades image quality. Such studies can be carried out more easily by developing a prototype that imitates the underwater environment. To develop the prototype, a suitable lighting system is used to imitate the underwater environment at different depths, and the colors chosen for the imitator must be appropriate for underwater lighting. With a suitable combination of lighting system and color options, the prototype can produce images comparable with the actual environment. A water tank is used as the imitation medium, enclosed by a red curtain to exclude unwanted light sources. An underwater flood light provides the lighting and creates the scenery of the lit underwater environment; its brightness is adjusted by varying the input voltage. An underwater camera and a recordable receiver display are used to capture and record images of the imitated underwater scene. Finally, since the real underwater environment is noisy, an automatic pump is used to create ambient noise. The results show that an appropriate combination of color and brightness for each depth can reproduce hue and saturation close to those of the actual environment.
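    The depth-dependent color loss such a prototype imitates is commonly modeled with Beer-Lambert attenuation. Below is a minimal sketch of that model; the per-channel coefficients are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of depth-dependent color attenuation (Beer-Lambert model).
# The coefficients below are assumed for illustration: red attenuates fastest
# in water, blue slowest.
import numpy as np

ATTENUATION = {"r": 0.45, "g": 0.12, "b": 0.05}  # assumed coefficients, 1/m

def simulated_rgb(surface_rgb, depth_m):
    """Scale an RGB color as if viewed through depth_m meters of water."""
    factors = np.exp([-ATTENUATION[c] * depth_m for c in ("r", "g", "b")])
    return np.asarray(surface_rgb, dtype=float) * factors

# Example: a white light source imitated at 5 m depth.
print(simulated_rgb([255, 255, 255], 5.0))  # red channel drops most
```

    Such per-channel factors could guide the choice of lamp color and input voltage for each simulated depth.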

    A COMPARISON BETWEEN ACTIVE AND PASSIVE TECHNIQUES FOR UNDERWATER 3D APPLICATIONS


    ASSESSMENT OF CHROMATIC ABERRATIONS FOR GOPRO 3 CAMERAS IN UNDERWATER ENVIRONMENTS

    With underwater photogrammetric mapping becoming more prominent due to the lower costs of waterproof cameras and underwater platforms, the aim of this research is to investigate chromatic aberration in underwater environments. In in-air applications, chromatic aberration is known to systematically influence observations by up to a few pixels. In order to achieve pixel-level positioning accuracy, this systematic influence needs further investigation. However, while chromatic aberration studies have been performed for in-air environments, there is a lack of research quantifying its influence in underwater environments. Using images captured in a water tank with three different GoPro cameras in five datasets, we investigate possible chromatic aberration by running two different adjustments on the extracted red (R), green (G), and blue (B) bands. The first adjustment calculates the interior orientation parameters for each set of images independently in a free network adjustment. The second solves for all interior orientation parameters (for the R, G, and B channels) in a combined adjustment per camera, constraining the point observations in object space. We were able to quantify significant chromatic aberrations in our evaluation, with the largest aberrations observed for the red band, followed by green and blue.
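    As a rough way to visualize the effect the paper quantifies, the sketch below locates the same chessboard corners independently in each color band of one image and reports red/blue corner displacement relative to green. This is a crude stand-in for the paper's per-band network adjustments; the image path and pattern size are assumptions.

```python
# Sketch: estimate lateral chromatic aberration by detecting the same
# chessboard corners in each color band and comparing their pixel positions.
import cv2
import numpy as np

img = cv2.imread("tank_chessboard.jpg")   # assumed test image
b, g, r = cv2.split(img)                  # OpenCV stores channels as BGR

pattern = (9, 6)                          # assumed inner-corner grid size
corners = {}
for name, band in (("r", r), ("g", g), ("b", b)):
    found, pts = cv2.findChessboardCorners(band, pattern)
    assert found, f"pattern not found in {name} band"
    corners[name] = pts.reshape(-1, 2)

# Mean pixel offset of red and blue corners relative to green: a rough
# proxy for chromatic aberration magnitude across the field of view.
for name in ("r", "b"):
    shift = np.linalg.norm(corners[name] - corners["g"], axis=1)
    print(f"{name} vs g: mean {shift.mean():.2f} px, max {shift.max():.2f} px")
```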

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence.

    We first explore teleoperation using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD), which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, making the system minimally intrusive and accommodating the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance and similar domains, where teleoperation is compromised by the keyhole effect that results from a limited field of view. The technical contribution of the proposed HRD system is its multi-system calibration, which involves a motion sensor, projector, cameras, and a robotic arm; given the purpose of the system, calibration accuracy must be kept within millimeter level.

    Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica with commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light scanning 3D sensing system that is accurate to within 1 millimeter while remaining robust to global illumination and surface reflection; extensive user studies demonstrate the performance of the proposed algorithm. To compensate for the lack of synchronization between the local and remote stations caused by latency in data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a group of linear equations with a smoothing coefficient ranging from 0 to 1, and the predictive control algorithm can be further formulated by optimizing a cost function (a minimal sketch appears after this abstract).

    We then explore the telepresence aspect. Many hardware designs have been developed to place a camera optically directly behind the screen, the purpose being two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
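    As a rough illustration of the 1-step-ahead predictive control idea, the sketch below extrapolates the operator's next command from the two most recent ones using a smoothing coefficient alpha in [0, 1]. The exact linear equation group and cost function are the author's; this simple velocity extrapolation is an assumption for illustration only.

```python
# Hedged sketch of 1-step-ahead prediction for latency compensation.
# The blend below (alpha in [0, 1]) is an illustrative stand-in for the
# abstract's linear equation group, not the author's exact formulation.
import numpy as np

def predict_next(x_prev, x_curr, alpha=0.7):
    """Predict the next commanded pose one step ahead.

    alpha -> 1: trust the current velocity (aggressive extrapolation);
    alpha -> 0: hold the current command (conservative).
    """
    velocity = np.asarray(x_curr) - np.asarray(x_prev)
    return np.asarray(x_curr) + alpha * velocity

# Example: 2D end-effector positions from two consecutive control ticks.
print(predict_next([0.10, 0.20], [0.12, 0.23]))  # -> [0.134, 0.251]
```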

    Computational strategies for understanding underwater optical image datasets

    Thesis: Ph.D. in Mechanical and Oceanographic Engineering, Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2013. Cataloged from the PDF version of thesis. Includes bibliographical references (pages 117-135). By Jeffrey W. Kaeli.

    A fundamental problem in autonomous underwater robotics is the high latency between the capture of image data and the time at which operators are able to gain a visual understanding of the survey environment. Typical missions can generate imagery at rates hundreds of times greater than highly compressed images can be transmitted acoustically, delaying that understanding until after the vehicle has been recovered and the data analyzed. While automated classification algorithms can lessen the burden on human annotators after a mission, most are too computationally expensive or lack the robustness to run in situ on a vehicle. Fast algorithms designed for mission-time performance could lessen the latency of understanding by producing low-bandwidth semantic maps of the survey area that can then be telemetered back to operators during a mission. This thesis presents a lightweight framework for processing imagery in real time aboard a robotic vehicle. We begin with a review of pre-processing techniques for correcting illumination and attenuation artifacts in underwater images, presenting our own approach based on multi-sensor fusion and a strong physical model. Next, we construct a novel image pyramid structure that can reduce the complexity necessary to compute features across multiple scales by an order of magnitude, and recommend features which are fast to compute and invariant to underwater artifacts. Finally, we implement our framework on real underwater datasets and demonstrate how it can be used to select summary images for the purpose of creating low-bandwidth semantic maps capable of being transmitted acoustically.
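    The illumination and attenuation correction the thesis reviews is typically built on a physical image-formation model. The sketch below shows one generic instance of that idea (flat-fielding plus per-channel exponential attenuation gain); it is not the thesis's multi-sensor fusion method, and the coefficients are assumptions.

```python
# Generic sketch of physics-based illumination/attenuation correction for a
# single underwater image, illustrating the model I = J * exp(-c * d) * L(x),
# where L is a slowly varying illumination field and c an attenuation
# coefficient. NOT the thesis's actual multi-sensor algorithm.
import cv2
import numpy as np

def correct(img_bgr, path_length_m, coeffs=(0.05, 0.12, 0.45)):
    """coeffs: assumed (B, G, R) attenuation coefficients in 1/m."""
    img = img_bgr.astype(np.float32) + 1.0             # avoid divide-by-zero
    # Estimate the illumination pattern as a heavily blurred copy, then
    # normalize it out (flat-fielding).
    illum = cv2.GaussianBlur(img, (0, 0), sigmaX=51)
    flat = img / illum
    # Undo per-channel exponential attenuation along the water path.
    gains = np.exp(np.asarray(coeffs, np.float32) * path_length_m)
    out = flat * gains
    return np.clip(out / out.max() * 255.0, 0, 255).astype(np.uint8)
```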

    On controlling light transport in poor visibility environments

    Poor visibility conditions due to murky water, bad weather, dust, and smoke severely impede the performance of vision systems. Passive methods have been used to restore scene contrast under moderate visibility by digital post-processing. However, these methods are ineffective when the quality of the acquired images is poor to begin with. In this work, we design active lighting and sensing systems that control light transport before image formation and hence obtain higher-quality data. First, we present a technique of polarized light striping, based on combining polarization imaging and structured light striping, and show that it outperforms several existing illumination and sensing methodologies. Second, we present a numerical approach for computing the optimal relative sensor-source position that yields the best quality image; our analysis accounts for the limits imposed by sensor noise.
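    A common way polarization imaging suppresses backscatter is by differencing two exposures taken through orthogonal polarizer orientations (in the style of Schechner's underwater recovery). The sketch below illustrates only that generic idea, not the paper's full polarized-light-striping system; filenames and the assumed degree of polarization p are illustrative.

```python
# Sketch of polarization-difference backscatter removal. A generic recovery,
# assumed for illustration; p is the assumed degree of polarization of the
# backscatter, not a value from the paper.
import cv2
import numpy as np

def remove_backscatter(i_max, i_min, p=0.8):
    """i_max/i_min: float images taken at best/worst polarizer states."""
    total = i_max + i_min                    # unpolarized-equivalent image
    backscatter = (i_max - i_min) / p        # estimate of veiling light
    direct = np.clip(total - backscatter, 0, None)
    return direct / max(direct.max(), 1e-6)  # normalized direct transmission

# Usage (assumed filenames): two exposures with the polarizer rotated 90 deg.
i_max = cv2.imread("pol_max.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
i_min = cv2.imread("pol_min.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
out = (remove_backscatter(i_max, i_min) * 255).astype(np.uint8)
cv2.imwrite("descattered.png", out)
```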