
    Uniform Distorted Scene Reduction on Distribution of Colour Cast Correction

    A scene occluded by a uniform distribution of particles can suffer severe image-quality degradation. State-of-the-art pre-processing methods enhance visibility by applying local and global filters to the image scene. Even with correct estimation of the air light and the transmission map, those methods unfortunately produce artefacts and halo effects because the windows of the global and local filters are not correlated. Moreover, previous approaches may abruptly eliminate primary scene structure such as texture and colour. Therefore, this study aims not only to improve scene image quality via a recovery method, but also to overcome image content issues such as artefacts and halo effects, and to reduce light disturbance in the scene image. We introduce a visibility enhancement method based on a joint ambience distribution that corrects the colour cast in the image. Furthermore, the method balances the atmospheric light in correspondence with the depth map. Consequently, our method preserves the structural texture information of the image by estimating the lighting while simultaneously maintaining the range of colours. The method is tested on images from the Benchmarking Single Image Dehazing study by assessing the clear edge ratio, gradient, range of saturated pixels, and structural similarity index. The scene image restoration assessment results show that our proposed method outperformed the Tan, Tarel, and He methods, gaining the highest scores on the structural similarity index and the colourfulness measurement. Our proposed method also achieved an acceptable gradient ratio and percentage of saturated pixels. The proposed approach enhances visibility in the images without affecting their structure.
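Two of the evaluation criteria named above, the percentage of saturated pixels and the gradient ratio between the restored and hazy images, are simple to compute. The sketch below is a minimal illustration of those two metrics with hypothetical function names and a toy 8x8 ramp image; it is not the paper's actual evaluation code:

```python
import numpy as np

def saturated_pixel_percentage(img, lo=0, hi=255):
    """Percentage of pixels clipped to pure black or pure white."""
    n_sat = np.count_nonzero((img <= lo) | (img >= hi))
    return 100.0 * n_sat / img.size

def mean_gradient_ratio(restored, hazy, eps=1e-6):
    """Ratio of mean gradient magnitudes (restored vs. hazy);
    values > 1 indicate stronger edges after restoration."""
    def mean_grad(x):
        gy, gx = np.gradient(x.astype(np.float64))
        return np.mean(np.hypot(gx, gy))
    return mean_grad(restored) / (mean_grad(hazy) + eps)

# Toy example: a low-contrast hazy ramp vs. a contrast-stretched restoration.
hazy = np.tile(np.linspace(100, 160, 8), (8, 1))
restored = np.tile(np.linspace(0, 255, 8), (8, 1))
print(round(saturated_pixel_percentage(restored), 1))  # 25.0
print(round(mean_gradient_ratio(restored, hazy), 2))   # 4.25
```

Stretching the ramp to the full [0, 255] range strengthens the gradients (ratio above 1) but also clips a quarter of the pixels, illustrating why the paper reports both metrics together.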

    3D-PL: Domain Adaptive Depth Estimation with 3D-aware Pseudo-Labeling

    For monocular depth estimation, acquiring ground truth for real data is not easy, so domain adaptation methods are commonly adopted using supervised synthetic data. However, this may still incur a large domain gap due to the lack of supervision from the real data. In this paper, we develop a domain adaptation framework that generates reliable pseudo ground truths of depth from real data to provide direct supervision. Specifically, we propose two mechanisms for pseudo-labeling: 1) 2D-based pseudo-labels obtained by measuring the consistency of depth predictions when images have the same content but different styles; 2) 3D-aware pseudo-labels obtained via a point cloud completion network that learns to complete the depth values in 3D space, thus providing more structural information about a scene to refine and generate more reliable pseudo-labels. In experiments, we show that our pseudo-labeling methods improve depth estimation in various settings, including the use of stereo pairs during training. Furthermore, the proposed method performs favorably against several state-of-the-art unsupervised domain adaptation approaches on real-world datasets.
    Comment: Accepted in ECCV 2022. Project page: https://ccc870206.github.io/3D-PL
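The first mechanism, 2D-based pseudo-labels from style-consistency, can be sketched numerically: given depth predictions on two stylized copies of the same image, keep only the pixels where the predictions agree. The function name `consistency_pseudo_labels` and the relative-difference threshold `tau` below are hypothetical choices for illustration, not details from the paper:

```python
import numpy as np

def consistency_pseudo_labels(depth_a, depth_b, tau=0.05):
    """Keep a pixel's depth only where predictions on two
    style-transferred copies of the same image agree within a
    relative tolerance `tau`; disagreeing pixels get NaN."""
    rel_diff = np.abs(depth_a - depth_b) / np.maximum(depth_a, depth_b)
    mask = rel_diff < tau
    pseudo = np.where(mask, 0.5 * (depth_a + depth_b), np.nan)
    return pseudo, mask

# Toy predictions: agreement on the left half, disagreement on the right.
d1 = np.array([[1.00, 1.00, 5.0, 5.0]])
d2 = np.array([[1.02, 0.99, 9.0, 2.0]])
pseudo, mask = consistency_pseudo_labels(d1, d2)
print(mask)  # [[ True  True False False]]
```

The resulting sparse pseudo ground truth supervises only confident pixels; in the paper, the 3D-aware branch then densifies and refines such labels via point cloud completion.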

    Temporal Interpolation via Motion Field Prediction

    Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high-contrast 4D MR imaging during free breathing and provides in-vivo observations for treatment planning and guidance. Navigator slices are vital for retrospective stacking of the 2D data slices in this method; however, they also prolong the acquisition sessions. Temporal interpolation of navigator slices can be used to reduce the number of navigator acquisitions without degrading specificity in stacking. In this work, we propose a convolutional neural network (CNN) based method for temporal interpolation via motion field prediction. The proposed formulation incorporates the prior knowledge that a motion field underlies the changes in image intensities over time. Previous approaches that interpolate directly in intensity space are prone to producing blurry images or even removing structures from the images. Our method avoids such problems and faithfully preserves the information in the image. Further, an important advantage of our formulation is that it provides an unsupervised estimation of bi-directional motion fields. We show that these motion fields can be used to halve the number of registrations required during 4D reconstruction, substantially reducing the reconstruction time.
    Comment: Submitted to the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands
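The core idea of interpolating in motion space rather than intensity space can be sketched as follows: once a motion field between two navigator slices is available, an intermediate slice is synthesized by warping one slice halfway along that field. The helper names and the nearest-neighbour warp below are simplifications for illustration (the paper's CNN predicts the field and would use a differentiable bilinear warp), not the authors' implementation:

```python
import numpy as np

def warp_with_field(img, flow_y, flow_x):
    """Backward-warp `img` with a dense motion field, using
    nearest-neighbour sampling clamped at the border."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow_x).astype(int), 0, w - 1)
    return img[src_y, src_x]

def interpolate_midpoint(frame0, flow_y, flow_x):
    """Temporal interpolation sketch: synthesize the intermediate
    slice by warping frame0 halfway along the motion field."""
    return warp_with_field(frame0, 0.5 * flow_y, 0.5 * flow_x)

# Toy example: a bright pixel that moves 2 columns right between frames.
frame0 = np.zeros((4, 4))
frame0[1, 0] = 1.0
flow_x = np.full((4, 4), -2.0)  # backward flow: sample from the left
flow_y = np.zeros((4, 4))
mid = interpolate_midpoint(frame0, flow_y, flow_x)
```

In this toy case the bright pixel lands in column 1 of the interpolated slice, halfway between its positions in the two frames, whereas averaging intensities directly would instead produce two half-intensity ghosts, which is exactly the blurring problem the abstract describes.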