High-quality reduced graphene oxide-nanocrystalline platinum hybrid materials prepared by simultaneous co-reduction of graphene oxide and chloroplatinic acid
Reduced graphene oxide-nanocrystalline platinum (RGO-Pt) hybrid materials were synthesized by simultaneous co-reduction of graphene oxide (GO) and chloroplatinic acid with sodium citrate in water at 80°C, at pH 7 and 10. The resultant RGO-Pt hybrid materials were characterized using transmission electron microscopy (TEM), powder X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Fourier-transform infrared spectroscopy, and thermogravimetric analysis. Platinum (Pt) nanoparticles were anchored randomly onto the reduced GO (RGO) sheets, with mean diameters of 1.76 nm (pH 7) and 1.93 nm (pH 10). The significant Pt diffraction peaks and the decreased intensity of the (002) peak in the XRD patterns of the RGO-Pt hybrid materials confirmed that the Pt nanoparticles were anchored onto the RGO sheets and intercalated into the stacked RGO layers at both pH values. The Pt loadings of the hybrid materials were determined by XPS analysis as 36.83% (pH 7) and 49.18% (pH 10) by mass. With the assistance of oleylamine, the resultant RGO-Pt hybrid materials were soluble in nonpolar organic solvents, and the dispersion remained stable for several months.
Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches
Deep learning methods have shown remarkable performance in image denoising,
particularly when trained on large-scale paired datasets. However, acquiring
such paired datasets for real-world scenarios poses a significant challenge.
Although unsupervised approaches based on generative adversarial networks offer
a promising solution for denoising without paired datasets, they struggle to
surpass the performance limits of conventional GAN-based unsupervised
frameworks without substantially modifying the existing architecture or
increasing the computational complexity of the denoisers. To address this
problem, we propose a self-collaboration (SC) strategy for multiple denoisers.
This strategy can achieve
significant performance improvement without increasing the inference complexity
of the GAN-based denoising framework. Its basic idea is to iteratively replace
the previous less powerful denoiser in the filter-guided noise extraction
module with the current powerful denoiser. This process generates better
synthetic clean-noisy image pairs, leading to a more powerful denoiser for the
next iteration. This baseline ensures the stability and effectiveness of the
training network. The experimental results demonstrate the superiority of our
method over state-of-the-art unsupervised methods.
Comment: Accepted to ICCV 202
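The iterative replacement the abstract describes can be sketched in toy form. Everything below is an illustrative assumption, not the paper's GAN-based networks: a moving-average filter stands in for the weak initial denoiser, and a least-squares scalar shrinkage stands in for GAN training. The point is only the loop structure: extract noise with the current denoiser, synthesize clean-noisy pairs, train a stronger denoiser, and swap it into the noise-extraction module for the next round.

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_average(x, k=5):
    # Weak initial denoiser: simple box filter (illustrative stand-in)
    return np.convolve(x, np.ones(k) / k, mode="same")

def extract_noise(noisy, denoiser):
    # Filter-guided noise extraction: residual left by the current denoiser
    return noisy - denoiser(noisy)

def synthesize_pairs(clean_pool, noisy_pool, denoiser):
    # Paste extracted noise onto clean signals -> synthetic clean-noisy pairs
    return [(c, c + extract_noise(n, denoiser))
            for c, n in zip(clean_pool, noisy_pool)]

def train_denoiser(pairs):
    # Stand-in for GAN training: least-squares scalar shrinkage noisy -> clean
    num = sum(float(np.dot(n, c)) for c, n in pairs)
    den = sum(float(np.dot(n, n)) for c, n in pairs)
    k = num / den
    return lambda x, k=k: k * x

# Toy data: smooth 1-D "images" plus Gaussian noise
t = np.linspace(0, 4 * np.pi, 512)
clean_pool = [np.sin(t + rng.uniform(0, 2 * np.pi)) for _ in range(8)]
noisy_pool = [c + rng.normal(0, 0.5, c.shape) for c in clean_pool]

denoiser = moving_average
for _ in range(3):  # self-collaboration rounds
    pairs = synthesize_pairs(clean_pool, noisy_pool, denoiser)
    denoiser = train_denoiser(pairs)  # the stronger denoiser replaces the weaker one

test_clean = np.sin(t)
test_noisy = test_clean + rng.normal(0, 0.5, t.shape)
mse_noisy = float(np.mean((test_noisy - test_clean) ** 2))
mse_denoised = float(np.mean((denoiser(test_noisy) - test_clean) ** 2))
```

At inference only the final `denoiser` is used, which is why the strategy adds no inference-time complexity.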
ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation
Recently, CLIP has been applied to pixel-level zero-shot learning tasks via a
two-stage scheme. The general idea is to first generate class-agnostic region
proposals and then feed the cropped proposal regions to CLIP to utilize its
image-level zero-shot classification capability. While effective, such a scheme
requires two image encoders, one for proposal generation and one for CLIP,
leading to a complicated pipeline and high computational cost. In this work, we
pursue a simpler and more efficient one-stage solution that directly extends CLIP's
zero-shot prediction capability from image to pixel level. Our investigation
starts with a straightforward extension as our baseline that generates semantic
masks by comparing the similarity between text and patch embeddings extracted
from CLIP. However, such a paradigm could heavily overfit the seen classes and
fail to generalize to unseen classes. To handle this issue, we propose three
simple-but-effective designs and find that they largely preserve the inherent
zero-shot capacity of CLIP and improve its pixel-level generalization
ability. Incorporating those modifications leads to an efficient zero-shot
semantic segmentation system called ZegCLIP. Through extensive experiments on
three public benchmarks, ZegCLIP demonstrates superior performance,
outperforming the state-of-the-art methods by a large margin under both
"inductive" and "transductive" zero-shot settings. In addition, compared with
the two-stage method, our one-stage ZegCLIP runs about 5 times faster during
inference. We release the code at
https://github.com/ZiqinZhou66/ZegCLIP.git.
Comment: 12 pages, 8 figures
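The baseline the abstract starts from, generating semantic masks by comparing text and patch embeddings, can be sketched as follows. The random prototype vectors below are assumed stand-ins for real CLIP features (this is not CLIP's actual model or API); the sketch only shows the mechanism: L2-normalize both embedding sets and assign each patch to the class whose text embedding has the highest cosine similarity.

```python
import numpy as np

def similarity_masks(patch_emb, text_emb):
    # patch_emb: (N, D) patch embeddings; text_emb: (C, D) class text embeddings
    p = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = p @ t.T                # (N, C) cosine similarities
    return sim.argmax(axis=1)    # per-patch class index, reshapeable to (H, W)

rng = np.random.default_rng(0)
D, C, N = 16, 3, 64

# Toy stand-ins for CLIP features: each class gets a random prototype,
# and each patch is a noisy copy of its class prototype.
text_emb = rng.normal(size=(C, D))
true_labels = rng.integers(0, C, size=N)
patch_emb = text_emb[true_labels] + 0.1 * rng.normal(size=(N, D))

pred = similarity_masks(patch_emb, text_emb)
accuracy = float((pred == true_labels).mean())
```

On real CLIP features for unseen classes this naive matching is exactly where the overfitting the abstract describes appears, which motivates the paper's additional designs.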