
    Titan imagery with Keck adaptive optics during and after probe entry

    We present adaptive optics data from the Keck telescope, taken while the Huygens probe descended through Titan's atmosphere and on the days following touchdown. No probe entry signal was detected. Our observations span a solar phase angle range from 0.05° up to 0.8°, with the Sun in the west. Contrary to expectations, the east side of Titan's stratosphere was usually brightest. Compiling images obtained with Keck and Gemini over the past few years reveals that the east-west asymmetry can be explained by a combination of the solar phase angle effect and an enhancement in the haze density on Titan's morning hemisphere. While stratospheric haze was prominent over the northern hemisphere, tropospheric haze dominated the south, from the south pole up to latitudes of ∼45°S. At 2.1 μm this haze forms a polar cap, while at 1.22 μm it appears in the form of a collar at 60°S. A few small clouds were usually present near the south pole, at altitudes of 30–40 km. Our narrowband J, H, and K images of Titan's surface compare extremely well with those obtained by Cassini ISS, down to the small-scale features. The surface contrast between dark and bright areas may be larger at 2 μm than at 1.6 and 1.3 μm, which would imply that the dark areas may be covered by a coarser-grained frost than the bright regions and/or that there is additional 2 μm absorption there.

    Mapping and Deep Analysis of Image Dehazing: Coherent Taxonomy, Datasets, Open Challenges, Motivations, and Recommendations

    Our study aims to review and analyze the most relevant studies in the image dehazing field. Many aspects have been deemed necessary to provide a broad understanding of the various studies examined through surveying the existing literature. These aspects are as follows: the datasets used in the literature, the challenges other researchers have faced, their motivations, and recommendations for diminishing the obstacles in the reported literature. A systematic protocol is employed to search all relevant articles on image dehazing, with variations in keywords, in addition to searching for evaluation and benchmark studies. The search is conducted on three online databases, namely IEEE Xplore, Web of Science (WOS), and ScienceDirect (SD), covering 2008 to 2021. These indices are selected because they provide sufficient coverage. After defining the inclusion and exclusion criteria, we include 152 articles in the final set. A total of 55 of the 152 articles focus on studies that conducted image dehazing, and 13 of the 152 are review papers based on scenarios and general overviews. Finally, most of the included articles (84/152) center on the development of image dehazing algorithms for real-time scenarios. Image dehazing removes unwanted visual effects and is often considered an image enhancement technique; it requires a fully automated algorithm that works in real-time outdoor applications, a reliable evaluation method, and datasets based on different weather conditions. Many relevant studies have been conducted to meet these critical requirements. We also conducted an experimental comparison of various image dehazing algorithms using objective image quality assessment. In conclusion, unlike other review papers, our study distinctly reflects different observations on image dehazing areas. We believe that the results of this study can serve as a useful guideline for practitioners who are looking for a comprehensive view of image dehazing.
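    The comparison step described above relies on standard full-reference image quality metrics. The following is a minimal sketch (not code from the review itself) of scoring a dehazed output against its haze-free reference with PSNR and SSIM; the file paths and algorithm names are placeholders.

```python
# Hedged sketch: objective IQA comparison of dehazing outputs using PSNR and SSIM.
# File paths and the dictionary of algorithm outputs below are placeholders.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_dehazed(dehazed_path: str, reference_path: str) -> tuple[float, float]:
    dehazed = cv2.cvtColor(cv2.imread(dehazed_path), cv2.COLOR_BGR2RGB)
    reference = cv2.cvtColor(cv2.imread(reference_path), cv2.COLOR_BGR2RGB)
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
    ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=255)
    return psnr, ssim

# Example: rank several (hypothetical) algorithm outputs on one image pair.
for name, path in {"DCP": "dcp_out.png", "AOD-Net": "aod_out.png"}.items():
    print(name, score_dehazed(path, "ground_truth.png"))
```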

    Adaptive Deep Learning Detection Model for Multi-Foggy Images

    Fog has different features and effects in every environment. Detecting whether fog is present in an image is a challenge, and identifying the type of fog substantially aids image defogging. Foggy scenes can be categorized in different ways, such as by fog density level or by fog type. Machine learning techniques have made a significant contribution to the detection of foggy scenes. However, most of the existing detection models are based on traditional machine learning, and only a few studies have adopted deep learning models. Furthermore, most of the existing machine learning detection models are based on fog density-level scenes; to the best of our knowledge, no detection model based on multi-fog-type scenes has been presented yet. Therefore, the main goal of our study is to propose an adaptive deep learning model for the detection of multiple fog types in images. Moreover, due to the lack of a publicly available dataset for inhomogeneous, homogeneous, dark, and sky foggy scenes, a dataset for multi-fog scenes is presented in this study (https://github.com/Karrar-H-Abdulkareem/Multi-Fog-Dataset). Experiments were conducted in three stages. First, in the data collection phase, eight resources were used to obtain the multi-fog scene dataset. Second, a classification experiment was conducted based on the ResNet-50 deep learning model to obtain detection results. Third, in the evaluation phase, the performance of the ResNet-50 detection model was compared against three different models. Experimental results show that the proposed model delivers stable classification performance for different foggy images, with a 96% score for each of the Classification Accuracy Rate (CAR), Recall, Precision, and F1-Score, which has specific theoretical and practical significance. The proposed model is suitable as a pre-processing step and might be considered in different real-time applications.
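    As a rough illustration of the classification stage, the sketch below fine-tunes a torchvision ResNet-50 on the four fog-scene classes named above (inhomogeneous, homogeneous, dark, sky). It assumes an ImageFolder-style layout of the dataset and illustrative hyperparameters; it is not the authors' published code.

```python
# Hedged sketch: ResNet-50 fine-tuned for four multi-fog scene classes.
# The dataset directory layout and the hyperparameters are assumptions.
# Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # inhomogeneous, homogeneous, dark, sky foggy scenes

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes the multi-fog dataset is arranged as one ImageFolder directory per class.
train_set = datasets.ImageFolder("multi_fog_dataset/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; real training runs several
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```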

    Fast single image defogging with robust sky detection

    Haze is a source of unreliability for computer vision applications in outdoor scenarios, and it is usually caused by atmospheric conditions. The Dark Channel Prior (DCP) has shown remarkable results in image defogging, but it has three main limitations: 1) high time consumption, 2) artifact generation, and 3) sky-region over-saturation. Therefore, current work has focused on improving processing time without losing restoration quality or introducing image artifacts during defogging. Hence, in this research, a novel methodology based on depth approximations through the DCP, local Shannon entropy, and a Fast Guided Filter is proposed for reducing artifacts and improving image recovery in sky regions with low computation time. The performance of the proposed method is assessed using more than 500 images from three datasets: the Hybrid Subjective Testing Set from Realistic Single Image Dehazing (HSTS-RESIDE), the Synthetic Objective Testing Set from RESIDE (SOTS-RESIDE), and HazeRD. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods in the reviewed literature, which is validated qualitatively and quantitatively through the Peak Signal-to-Noise Ratio (PSNR), the Naturalness Image Quality Evaluator (NIQE), and the Structural SIMilarity (SSIM) index on retrieved images, considering different visual ranges under distinct illumination and contrast conditions. Analyzing images with various resolutions, the proposed method shows the lowest processing time under similar software and hardware conditions. This work was supported in part by the Centro de Investigaciones en Óptica (CIO) and the Consejo Nacional de Ciencia y Tecnología (CONACYT), and in part by the Barcelona Supercomputing Center.
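    For context, the sketch below shows the basic DCP pipeline this method builds on: dark channel, atmospheric light, coarse transmission, edge-preserving refinement, and scene-radiance recovery via the haze imaging model I = J·t + A·(1 - t). The paper's entropy-based sky handling is not reproduced, the standard guided filter stands in for the Fast Guided Filter, and all parameter values are illustrative.

```python
# Hedged sketch of a generic Dark Channel Prior defogger (not the paper's method).
# Requires opencv-contrib-python for cv2.ximgproc.guidedFilter.
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)  # per-pixel min over channels, then local min

def defog(rgb: np.ndarray, omega: float = 0.95, t0: float = 0.1) -> np.ndarray:
    """rgb: uint8 RGB image; returns a float haze-free estimate in [0, 1]."""
    img = rgb.astype(np.float32) / 255.0
    dark = dark_channel(img)
    # Atmospheric light A: mean color of the brightest 0.1% of dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Coarse transmission from the dark channel of the A-normalized image.
    t = 1.0 - omega * dark_channel(img / A)
    # Edge-preserving refinement (guided filter as a stand-in for the Fast Guided Filter).
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    t = cv2.ximgproc.guidedFilter(gray, t.astype(np.float32), radius=40, eps=1e-3)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the haze imaging model I = J*t + A*(1 - t).
    return np.clip((img - A) / t + A, 0.0, 1.0)
```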

    Development of cloud removal and land cover Change extraction algorithms for remotely-sensed Landsat imagery

    Land cover change monitoring requires the analysis of remotely-sensed data. In the tropics this is difficult because of persistent cloud cover and limited data availability. This research focuses on the elimination of cloud cover as an important step towards addressing the issue of change detection. The result is clearer images, although some persistent cloud remains. This remaining cloud and cloud adjacency effects diminish the quality of the image product and affect change detection quality.

    Real-time image dehazing by superpixels segmentation and guidance filter

    Haze and fog have a great influence on image quality, and dehazing and defogging are applied to eliminate this. For this purpose, an effective and automatic dehazing method is proposed. To dehaze a hazy image, two important parameters must be estimated: the atmospheric light and the transmission map. For atmospheric light estimation, the superpixels segmentation method is used to segment the input image. The intensities within each superpixel are then summed and compared across superpixels to extract the most intense superpixel. Extracting the most intense superpixel from the outdoor hazy image automatically selects the hazy region (atmospheric light). Thus, the individual channel intensities of the extracted superpixel are taken as the atmospheric light for the proposed algorithm. Secondly, on the basis of the measured atmospheric light, an initial transmission map is estimated. The transmission map is further refined through a rolling guidance filter that preserves much of the image information, such as textures, structures, and edges, in the final dehazed output. Finally, the haze-free image is produced by integrating the atmospheric light and the refined transmission with the haze imaging model. Through detailed experimentation on several publicly available datasets, we show that the proposed model achieves higher accuracy and can restore higher-quality dehazed images than state-of-the-art models. The proposed model could be deployed in real-time applications such as real-time image processing, remote sensing, underwater image enhancement, video-guided transportation, outdoor surveillance, and auto-driver-backed systems.
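    A minimal sketch of the atmospheric-light step described above, assuming SLIC superpixels from scikit-image: the image is segmented, the per-superpixel intensity sums are compared, and the channel-wise mean of the most intense superpixel is taken as the atmospheric light. This is not the authors' implementation, and the transmission estimation and rolling-guidance refinement are omitted.

```python
# Hedged sketch: superpixel-based atmospheric light estimation (SLIC is assumed;
# the paper's exact segmentation settings are not reproduced).
import numpy as np
from skimage.segmentation import slic

def atmospheric_light_from_superpixels(img: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """img: float RGB image in [0, 1]; returns the per-channel atmospheric light A."""
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    intensity = img.sum(axis=2)                       # per-pixel brightness
    sums = np.bincount(labels.ravel(), weights=intensity.ravel())
    brightest = np.argmax(sums)                       # most intense superpixel ~ haze/sky region
    return img[labels == brightest].mean(axis=0)      # channel-wise A estimate

# A then feeds the haze imaging model I = J*t + A*(1 - t); in the paper the
# transmission t is refined with a rolling guidance filter before recovery.
```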

    Information extraction techniques for multispectral scanner data

    The study examined the applicability of recognition-processing procedures, programmed on multispectral scanner data from particular areas and measurement conditions, to data from different areas viewed under different measurement conditions. The reflective spectral region of approximately 0.3 to 3.0 micrometers is considered. A potential application of such techniques is in conducting area surveys. Work in three general areas is reported: (1) the nature of sources of systematic variation in multispectral scanner radiation signals; (2) an investigation of various techniques for overcoming systematic variations in scanner data; and (3) the use of decision rules based upon empirical distributions of scanner signals rather than upon the usually assumed multivariate normal (Gaussian) signal distributions.
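    As an illustration of the third item, the sketch below contrasts the usual multivariate-normal assumption with a decision rule built from empirical per-band histograms of training signals. The band-independence simplification, bin count, and signal range are assumptions made for brevity, not details taken from the report.

```python
# Hedged sketch: maximum-likelihood pixel classification using empirical per-band
# histograms instead of an assumed multivariate Gaussian signal distribution.
import numpy as np

N_BINS = 32  # histogram resolution per spectral band (illustrative)

def fit_empirical(training: dict) -> dict:
    """training maps class id -> (n_pixels, n_bands) array of scanner signals in [0, 1]."""
    models = {}
    for cls, samples in training.items():
        models[cls] = [np.histogram(samples[:, b], bins=N_BINS, range=(0.0, 1.0), density=True)
                       for b in range(samples.shape[1])]
    return models

def classify(pixel: np.ndarray, models: dict) -> int:
    """Assign the class whose product of empirical per-band likelihoods is largest."""
    best_cls, best_ll = None, -np.inf
    for cls, band_hists in models.items():
        ll = 0.0
        for b, (hist, edges) in enumerate(band_hists):
            i = np.clip(np.searchsorted(edges, pixel[b]) - 1, 0, N_BINS - 1)
            ll += np.log(hist[i] + 1e-9)  # guard against empty bins
        if ll > best_ll:
            best_cls, best_ll = cls, ll
    return best_cls
```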

    Computer vision applied to underwater robotics
