
    Restoring Vision in Hazy Weather with Hierarchical Contrastive Learning

    Image restoration under hazy weather conditions, known as single image dehazing, is of significant interest for various computer vision applications. In recent years, deep learning-based methods have achieved notable success. However, existing image dehazing methods typically neglect the hierarchy of features in the neural network and fail to fully exploit their relationships. To this end, we propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD), based on feature fusion and contrastive learning strategies. HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL). Specifically, the core design in the HDN is a hierarchical interaction module, which utilizes multi-scale activation to revise the feature responses hierarchically. To cooperate with the training of HDN, we propose HCL, which performs contrastive learning on hierarchically paired exemplars, facilitating haze removal. Extensive experiments on the public RESIDE, HazeRD, and DENSE-HAZE datasets demonstrate that HCD quantitatively outperforms state-of-the-art methods in terms of PSNR and SSIM and achieves better visual quality. (Comment: 30 pages, 10 figures)
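    The abstract does not give the exact form of HCL, but the core idea of contrastive learning on paired exemplars (pull restored features toward the clean positive, push them from the hazy negative, averaged over hierarchy levels) can be sketched in plain Python with toy feature vectors. All names and the distance-ratio form below are illustrative assumptions, not the paper's definition:

```python
import math

def l2(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def hierarchical_contrastive_loss(restored, clean, hazy):
    """Toy hierarchical contrastive loss (illustrative, not the paper's HCL).

    Each argument is a list of per-level feature vectors. At every level the
    restored features are pulled toward the clean (positive) exemplar and
    pushed away from the hazy (negative) exemplar; the per-level distance
    ratios are averaged to form the final loss.
    """
    eps = 1e-8
    total = 0.0
    for r, c, h in zip(restored, clean, hazy):
        pos = l2(r, c)          # distance to the positive (clean) exemplar
        neg = l2(r, h)          # distance to the negative (hazy) exemplar
        total += pos / (neg + eps)
    return total / len(restored)

# Restored features close to clean and far from hazy give a small loss.
good = hierarchical_contrastive_loss(
    restored=[[1.0, 1.0], [2.0, 2.0]],
    clean=[[1.1, 1.0], [2.0, 2.1]],
    hazy=[[5.0, 5.0], [9.0, 9.0]],
)
bad = hierarchical_contrastive_loss(
    restored=[[5.0, 5.0], [9.0, 9.0]],
    clean=[[1.1, 1.0], [2.0, 2.1]],
    hazy=[[5.1, 5.0], [9.0, 9.1]],
)
print(good < bad)  # True
```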

    Polarization-entangled quantum frequency comb

    Integrated micro-resonators facilitate the realization of quantum frequency combs (QFCs), which provide a large number of discrete frequency modes with broadband spectral range and narrow linewidth. However, all previous demonstrations have focused on the generation of energy-time or time-bin entangled photons from a QFC. Realizing a polarization-entangled quantum frequency comb, an important resource for fundamental studies of quantum mechanics and for quantum information applications, remains challenging. Here, we demonstrate, for the first time, a broadband polarization-entangled quantum frequency comb by combining an integrated silicon nitride micro-resonator with a Sagnac interferometer. With a free spectral range of about 99 GHz and a narrow linewidth of about 190 MHz, our source provides 22 polarization-entangled photon pairs with frequencies covering the whole telecom C-band. The entanglement fidelities for all 22 pairs are above 81%, including 17 pairs with fidelities higher than 90%. Our demonstration paves the way for employing the polarization-entangled quantum frequency comb in quantum networks using CMOS technology as well as standard dense wavelength division multiplexing technology. (Comment: 11 pages, 9 figures)
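    The quoted 190 MHz linewidth implies a loaded quality factor on the order of one million. A quick back-of-the-envelope check, assuming a resonance near 1550 nm at the center of the C-band (the exact resonance wavelength is not stated in the abstract):

```python
C = 299_792_458.0          # speed of light, m/s

wavelength = 1550e-9       # assumed C-band resonance wavelength, m
linewidth = 190e6          # resonance linewidth from the abstract, Hz

f_res = C / wavelength     # resonance frequency, ~193 THz
q_loaded = f_res / linewidth   # loaded Q = f / Δf

print(f"resonance frequency: {f_res / 1e12:.1f} THz")
print(f"loaded Q: {q_loaded:.2e}")   # on the order of 1e6
```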

    Selecting a semantic similarity measure for concepts in two different CAD model data ontologies

    The approach based on semantic similarity measures is one of the most popular ways to implement semantic mapping between two different CAD model data ontologies. The central problem in this approach is how to measure the semantic similarities of concepts between two different ontologies. A number of measures focusing on this problem have been presented in recent years, and each works well between its specific ontologies. However, it is unclear how accurate the semantic similarities computed by these measures are, and none of the existing work shows how to select a measure with high similarity calculation accuracy. To compensate for these deficiencies, this paper proposes a method for selecting a semantic similarity measure with high similarity calculation accuracy for concepts in two different CAD model data ontologies. In this method, the similarity calculation accuracy of each candidate measure is quantified using the Pearson correlation coefficient or the residual sum of squares, and the most accurate measure is selected by comparing the Pearson correlation coefficients or the residual sums of squares of all candidate measures. The paper also reports an implementation of the proposed method, provides an example to show how the method works, and evaluates the method through theoretical and experimental comparisons. The evaluation results suggest that the measure selected by the proposed method correlates well with human judgment and has high similarity calculation accuracy.
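    The selection criterion described above can be sketched directly: score each candidate measure's similarities against a human-rated reference using the Pearson correlation coefficient (higher is better) or the residual sum of squares (lower is better), then keep the best-scoring measure. The measure names and scores below are hypothetical, not taken from the paper:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rss(xs, ys):
    # residual sum of squares between measured and reference similarities
    return sum((x - y) ** 2 for x, y in zip(xs, ys))

def select_measure(candidates, reference):
    """Pick the candidate whose scores best match the reference (human-rated)
    similarities: highest Pearson r, ties broken by lowest RSS."""
    return max(candidates,
               key=lambda name: (pearson(candidates[name], reference),
                                 -rss(candidates[name], reference)))

# Hypothetical similarity scores from two candidate measures on five
# concept pairs, plus human-rated reference similarities.
reference = [0.9, 0.7, 0.5, 0.3, 0.1]
candidates = {
    "measure_a": [0.88, 0.72, 0.48, 0.33, 0.12],   # tracks the reference
    "measure_b": [0.50, 0.52, 0.49, 0.51, 0.48],   # nearly constant
}
print(select_measure(candidates, reference))  # measure_a
```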

    Efficient Raman lasing and Raman-Kerr interaction in an integrated silicon carbide platform

    Implementing stimulated Raman scattering in a low-loss microresonator could lead to Raman lasing. Here, we report the demonstration of an efficient Raman laser with >50% power efficiency in an integrated silicon carbide platform for the first time. By fine-tuning the free spectral range (FSR) of 43-μm-radius silicon carbide microresonators, the Stokes resonance corresponding to the dominant Raman shift of 777 cm⁻¹ (23.3 THz) is aligned to the center of the Raman gain spectrum, resulting in a low power threshold of 2.5 mW. The peak Raman gain coefficient is estimated to be (0.75 ± 0.15) cm/GW in the 1550 nm band, with an approximate full width at half maximum of (120 ± 30) GHz. In addition, the microresonator is designed to exhibit normal dispersion at the pump wavelength near 1550 nm while possessing anomalous dispersion at the first Stokes near 1760 nm. At high enough input powers, a Kerr microcomb is generated by the Stokes signal acting as the secondary pump, which then mixes with the pump laser through four-wave mixing to attain a wider spectral coverage. Furthermore, cascaded Raman lasing and the occurrence of multiple Raman shifts, including the 204 cm⁻¹ (6.1 THz) and 266 cm⁻¹ (8.0 THz) transitions, are also observed. Finally, we show that the Stokes Raman process could also help broaden the spectrum in a Kerr microcomb which has anomalous dispersion at the pump wavelength. Our example of a 100-GHz-FSR microcomb has a wavelength span from 1200 nm to 1900 nm with 300 mW on-chip power.
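    The ~1760 nm first-Stokes wavelength follows from the 777 cm⁻¹ shift by simple wavenumber subtraction, 1/λ_s = 1/λ_p − Δν̃. A minimal check, assuming a pump at exactly 1550 nm:

```python
def stokes_wavelength_nm(pump_nm, shift_cm1):
    """First-Stokes wavelength for a given pump wavelength and Raman shift.

    Wavenumbers subtract directly: 1/λ_s = 1/λ_p − Δν̃.
    """
    pump_cm1 = 1e7 / pump_nm          # pump wavenumber in cm⁻¹ (λ in nm)
    return 1e7 / (pump_cm1 - shift_cm1)

# Dominant 777 cm⁻¹ Raman shift of silicon carbide, pumped near 1550 nm
print(round(stokes_wavelength_nm(1550.0, 777.0), 1))  # ≈ 1762 nm, i.e. "near 1760 nm"
```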

    GridFormer: Residual Dense Transformer with Grid Structure for Image Restoration in Adverse Weather Conditions

    Image restoration in adverse weather conditions is a difficult task in computer vision. In this paper, we propose a novel transformer-based framework called GridFormer, which serves as a backbone for image restoration under adverse weather conditions. GridFormer is designed in a grid structure using a residual dense transformer block, and it introduces two core designs. First, it uses an enhanced attention mechanism in the transformer layer. The mechanism combines a sampler and compact self-attention stage to improve efficiency with a local enhancement stage to strengthen local information. Second, we introduce a residual dense transformer block (RDTB) as the final GridFormer layer. This design further improves the network's ability to learn effective features from both preceding and current local features. The GridFormer framework achieves state-of-the-art results on five diverse image restoration tasks in adverse weather conditions, including image deraining, dehazing, deraining & dehazing, desnowing, and multi-weather restoration. The source code and pre-trained models will be released. (Comment: 17 pages, 12 figures)

    Deep Video Restoration for Under-Display Camera

    Images or videos captured by an Under-Display Camera (UDC) suffer from severe degradation, such as saturation degeneration and color shift. While restoration for UDC has been a critical task, existing works on UDC restoration focus only on images; UDC video restoration (UDC-VR) has not been explored in the community. In this work, we first propose a GAN-based generation pipeline to simulate the realistic UDC degradation process. With the pipeline, we build the first large-scale UDC video restoration dataset, called PexelsUDC, which includes two subsets named PexelsUDC-T and PexelsUDC-P corresponding to different displays for UDC. Using the proposed dataset, we conduct extensive benchmark studies on existing video restoration methods and observe their limitations on the UDC-VR task. To this end, we propose a novel transformer-based baseline method that adaptively enhances degraded videos. The key components of the method are a spatial branch with local-aware transformers, a temporal branch embedded with temporal transformers, and a spatial-temporal fusion module. These components drive the model to fully exploit spatial and temporal information for UDC-VR. Extensive experiments show that our method achieves state-of-the-art performance on PexelsUDC. The benchmark and the baseline method, which will be made public, are expected to promote the progress of UDC-VR in the community.

    A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal

    Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, a domain-specific image restoration problem in the low-level computer vision area. Early face restoration methods mainly used statistical priors and degradation models, which are difficult to apply to the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works study deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore the techniques developed for FR, covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration. (Comment: 21 pages, 19 figures)

    Torsional fretting and torsional sliding wear behaviors of CuNiAl against 42CrMo4 under dry condition

    Many wear failures are caused by a combination of fretting wear and sliding wear. In this study, the torsional fretting and torsional sliding wear properties of CuNiAl against 42CrMo4 were comparatively investigated under dry conditions using a flat-on-flat contact tester. Experimental results showed that the sliding friction coefficients declined more dramatically than the fretting friction coefficients as the normal load increased. The fretting wear rate was lower than the sliding wear rate, which was partly due to the solid lubrication effect of the wear debris and strain hardening of the worn surfaces. The dominant wear mechanisms for the fretting tests were oxidation, cracking, and delamination, while those for the sliding tests were abrasion combined with plastic deformation.
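    Wear-rate comparisons like the one above are commonly expressed as a specific (Archard-style) wear rate, volume loss per unit load per unit sliding distance, k = V / (F · s). A minimal sketch with purely hypothetical numbers (the paper's actual loads and volume losses are not given in the abstract):

```python
def specific_wear_rate(volume_loss_mm3, normal_load_n, sliding_distance_m):
    """Specific (Archard-style) wear rate k = V / (F · s), in mm³/(N·m)."""
    return volume_loss_mm3 / (normal_load_n * sliding_distance_m)

# Hypothetical volume losses for fretting vs. sliding under the same
# 100 N load and 50 m accumulated sliding distance (illustrative only).
k_fretting = specific_wear_rate(0.8, 100.0, 50.0)
k_sliding = specific_wear_rate(2.4, 100.0, 50.0)
print(k_fretting < k_sliding)  # True: fretting wears less, as the study reports
```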