Learning to Dehaze from Realistic Scene with A Fast Physics-based Dehazing Network
Dehazing has long been a popular topic in computer vision, and a real-time
dehazing method with reliable performance is highly desired for applications
such as autonomous driving. Recent learning-based methods require datasets
containing pairs of hazy images and clean ground-truth references, but it is
generally impossible to capture accurate ground truth in real scenes. Many
existing works circumvent this difficulty by generating hazy images from
common RGB-D datasets, rendering haze from depth using the haze imaging model.
However, there is still a gap between these synthetic datasets and real hazy
images, as large datasets with high-quality depth are mostly indoor and
outdoor depth maps are imprecise. In this paper, we complement the existing
datasets with a new, large, and diverse dehazing dataset containing real
outdoor scenes from High-Definition (HD) 3D movies. We select a large number of
high-quality frames of real outdoor scenes and render haze on them using depth
from stereo. Our dataset is more realistic than existing ones and we
demonstrate that using this dataset greatly improves the dehazing performance
on real scenes. In addition to the dataset, we also propose a lightweight,
reliable dehazing network inspired by the physics model. Our approach
outperforms other methods by a large margin, establishing a new state of the
art. Moreover, the lightweight design of the network enables our method to run
in real time, much faster than other baseline methods.
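The haze imaging model used to render haze from depth can be sketched directly. Below is a minimal NumPy example, with illustrative `beta` and `airlight` values (both assumptions, not values from the paper):

```python
import numpy as np

def render_haze(clean, depth, beta=1.0, airlight=0.8):
    """Render a synthetic hazy image from a clean image and its depth map
    using the standard haze imaging model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),   t(x) = exp(-beta * d(x))
    clean:    HxWx3 float array in [0, 1]  (clean scene radiance J)
    depth:    HxW float array, scene depth d in arbitrary units
    beta:     scattering coefficient (illustrative value)
    airlight: global atmospheric light A (illustrative scalar)
    """
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, HxWx1
    return clean * t + airlight * (1.0 - t)
```

At zero depth the output equals the clean image, and distant pixels converge to the atmospheric light, which is the behavior the model predicts for dense haze.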
GTAV-NightRain: Photometric Realistic Large-scale Dataset for Night-time Rain Streak Removal
Rain is transparent: it reflects and refracts light in the scene toward the
camera. In outdoor vision, rain, and especially rain streaks, degrades
visibility and therefore needs to be removed. Existing rain streak removal
datasets account for density, scale, direction, and intensity, but
transparency is not fully taken into account. This problem is particularly
serious in night scenes, where the appearance of rain largely depends on its
interaction with scene illumination and changes drastically across different
positions within the image. This is problematic because an unrealistic dataset
causes serious domain bias. In this paper, we propose the GTAV-NightRain
dataset, a large-scale synthetic night-time rain streak removal dataset.
Unlike existing datasets, using a 3D computer graphics platform (namely GTA V)
allows us to model the three-dimensional interaction between rain and
illumination, which ensures photometric realness. The current release of the
dataset contains 12,860 HD rainy images and 1,286 corresponding HD ground-truth
images in diversified night scenes. A systematic benchmark and analysis are
provided along with the dataset to inspire further research.
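The transparency problem described above can be illustrated with a toy compositing sketch. This is not the dataset's actual rendering pipeline (GTA V renders rain in-engine); the `alpha` and `gain` values and both functions are illustrative assumptions contrasting an additive streak model with a transparency-aware one:

```python
import numpy as np

def additive_rain(scene, streak_mask, intensity=0.3):
    # Common simplification: streaks added as a constant bright layer,
    # independent of scene illumination (transparency ignored).
    return np.clip(scene + intensity * streak_mask, 0.0, 1.0)

def transparent_rain(scene, streak_mask, alpha=0.5, gain=1.2):
    # Illustrative transparency model: each streak pixel blends refracted
    # background light (scaled by an assumed gain) with the scene, so
    # streak brightness follows the local illumination.
    refracted = np.clip(gain * scene, 0.0, 1.0)
    a = alpha * streak_mask
    return (1.0 - a) * scene + a * refracted
```

In a dark region the additive model produces uniformly bright streaks, while the transparency-aware model keeps streaks nearly invisible, matching the observation that night-time rain appearance depends on scene illumination.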
Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion
Monocular depth estimation has experienced significant progress on
terrestrial images in recent years, largely due to deep learning advancements.
However, it remains inadequate for underwater scenes, primarily because of data
scarcity. Given the inherent challenges of light attenuation and backscattering
in water, acquiring clear underwater images or precise depth information is
notably difficult and costly. Consequently, learning-based approaches often
rely on synthetic data or turn to unsupervised or self-supervised methods to
mitigate this lack of data. Nonetheless, the performance of these methods is
often constrained by the domain gap and looser constraints. In this paper, we
propose a novel pipeline for generating photorealistic underwater images using
accurate terrestrial depth data. This approach facilitates the training of
supervised models for underwater depth estimation, effectively reducing the
performance disparity between terrestrial and underwater environments. Contrary
to prior synthetic datasets that merely apply style transfer to terrestrial
images without altering the scene content, our approach uniquely creates
vibrant, non-existent underwater scenes by leveraging terrestrial depth data
through the innovative Stable Diffusion model. Specifically, we introduce a
unique Depth2Underwater ControlNet, trained on specially prepared {Underwater,
Depth, Text} data triplets, for this generation task. Our newly developed
dataset enables terrestrial depth estimation models to achieve considerable
improvements, both quantitatively and qualitatively, on unseen underwater
images, surpassing their terrestrial pre-trained counterparts. Moreover, the
enhanced depth accuracy for underwater scenes also aids underwater image
restoration techniques that rely on depth maps, further demonstrating our
dataset's utility. The dataset will be available at
https://github.com/zkawfanx/Atlantis.
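The attenuation and backscattering effects named above are what make underwater imagery differ so sharply from terrestrial imagery. A minimal sketch of the classical simplified underwater image formation model (not the paper's diffusion-based pipeline; the per-channel coefficients are assumed, illustrative values) shows how depth drives both effects:

```python
import numpy as np

# Illustrative per-channel attenuation coefficients (1/m); red light is
# absorbed fastest in water, producing the familiar blue-green cast.
BETA = np.array([0.60, 0.15, 0.08])          # R, G, B (assumed values)
BACKSCATTER = np.array([0.05, 0.25, 0.35])   # veiling-light color (assumed)

def underwater_from_depth(clean, depth):
    """Simplified underwater image formation from depth:
       I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    clean: HxWx3 float array in [0, 1], depth: HxW float array (meters)."""
    t = np.exp(-BETA[None, None, :] * depth[..., None])  # per-channel transmission
    return clean * t + BACKSCATTER * (1.0 - t)
```

With increasing depth the red channel decays faster than blue and all channels converge to the backscatter color, which is why accurate depth is so valuable for synthesizing (or inverting) underwater appearance.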
Haze visibility enhancement: A Survey and quantitative benchmarking
This paper provides a comprehensive survey of methods dealing with visibility enhancement of images taken in hazy or foggy scenes. The survey begins by discussing the optical models of atmospheric scattering media and image formation. This is followed by a survey of existing methods, which are categorized into: multiple-image methods, polarizing filter-based methods, methods with known depth, and single-image methods. We also provide a benchmark of a number of well-known single-image methods, based on a recent dataset provided by Fattal (2014) and our newly generated scattering media dataset that contains ground truth images for quantitative evaluation. To our knowledge, this is the first benchmark using numerical metrics to evaluate dehazing techniques. This benchmark allows us to objectively compare the results of existing methods and to better identify the strengths and limitations of each method. This study is supported by an Nvidia GPU Grant and a Canadian NSERC Discovery grant. R. T. Tan's work in this research is supported by the National Research Foundation, Prime Minister's Office, Singapore, under its International Research Centre in Singapore Funding Initiative.
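A quantitative benchmark with ground truth typically relies on full-reference metrics. As one common example (the survey does not specify its exact metric set), PSNR between a dehazed result and the clean ground truth can be computed as:

```python
import numpy as np

def psnr(result, ground_truth, peak=1.0):
    """Peak signal-to-noise ratio in dB: a standard numerical metric for
    comparing a dehazed result against clean ground truth.
    Both inputs are float arrays on the same scale; peak is the maximum
    possible pixel value (1.0 for images normalized to [0, 1])."""
    mse = np.mean((result - ground_truth) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; identical images give infinite PSNR, and a uniform error of 0.1 on a [0, 1] scale gives exactly 20 dB.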
Comparison of Cumulative Live Birth Rates Between GnRH-A and PPOS in Low-Prognosis Patients According to POSEIDON Criteria: A Cohort Study
Objective: To compare the cumulative live birth rate (CLBR) of a gonadotropin-releasing hormone (GnRH) antagonist regimen and a progestin-primed ovarian stimulation (PPOS) regimen in low-prognosis patients according to POSEIDON criteria.
Design: Single-center, retrospective, observational study.
Setting: Henan Provincial People's Hospital, Zhengzhou, China.
Patients: Women aged ≤40 years, with a body mass index <25 kg/m2, who underwent in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) and met POSEIDON low-prognosis criteria.
Intervention: GnRH antagonist or PPOS regimen with IVF or ICSI.
Main Outcome Measure: CLBR per oocyte retrieval cycle.
Results: Per oocyte retrieval cycle, CLBR was significantly higher with the GnRH antagonist regimen than with PPOS (35.3% vs 25.2%; P<0.001). In multivariable logistic regression analysis, CLBR per oocyte retrieval cycle was significantly lower with PPOS than with the GnRH antagonist regimen both before (OR 0.62 [95% confidence interval (CI): 0.46, 0.82]; P=0.009) and after (OR 0.66 [95% CI: 0.47, 0.93]; P=0.0172) adjustment for age, body mass index, infertility type, infertility duration, baseline follicle-stimulating hormone, anti-Müllerian hormone (AMH), antral follicle count (AFC), and insemination method. CLBR was numerically higher with the GnRH antagonist regimen than with PPOS across all POSEIDON groups, and was significantly higher in patients aged ≥35 years with poor ovarian reserve (AFC <5, AMH <1.2 ng/mL) (unadjusted, P=0.0108; adjusted, P=0.0243).
Conclusion: In this single-center, retrospective cohort study, patients had a higher CLBR with a GnRH antagonist regimen than with a PPOS regimen, regardless of other attributes.
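The unadjusted odds ratio reported above can be checked against the reported CLBRs themselves. A short sketch of the calculation (the confidence interval cannot be reproduced here, since the group sample sizes are not stated in the abstract):

```python
def odds(p):
    """Convert an event probability to odds."""
    return p / (1.0 - p)

def odds_ratio(p_treatment, p_reference):
    """Ratio of the treatment group's event odds to the reference group's."""
    return odds(p_treatment) / odds(p_reference)

# Reported CLBRs: 25.2% with PPOS, 35.3% with the GnRH antagonist regimen.
or_ppos_vs_gnrh = odds_ratio(0.252, 0.353)  # approximately 0.62
```

This reproduces the unadjusted OR of 0.62 for PPOS versus the GnRH antagonist regimen, confirming that the reported point estimate is consistent with the two rates.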