87 research outputs found
Versatile soliton emission from a WS2 mode-locked fiber laser
Recently, few-layer tungsten disulfide (WS2), a promising 2D material, has been found to possess both saturable absorption and a large nonlinear refractive index. Here, we demonstrate versatile soliton pulses in a passively mode-locked fiber laser with a WS2-deposited microfiber. The few-layer WS2 is prepared by liquid-phase exfoliation and transferred onto a microfiber by optical deposition. We find that the WS2-deposited microfiber can operate simultaneously as a mode locker and a highly nonlinear device. In the experiment, by inserting the WS2 device into the fiber laser and properly adjusting the pump strength and polarization states, we obtain dual-wavelength solitons, noise-like pulses, and conventional solitons along with their harmonic mode locking. For the dual-wavelength soliton pulses and the noise-like pulses, a maximum output power of 14.2 mW and a pulse energy of 4.74 nJ are obtained, respectively. In addition, we achieve a maximum harmonic number of 135 for the conventional soliton, corresponding to a repetition rate of ∼497.5 MHz. Our study clearly shows that the WS2-deposited microfiber can serve as a highly nonlinear photonic device for studying a variety of nonlinear soliton phenomena.
Domain Adaptation and Image Classification via Deep Conditional Adaptation Network
Unsupervised domain adaptation aims to generalize the supervised model
trained on a source domain to an unlabeled target domain. Marginal distribution
alignment of feature spaces is widely used to reduce the domain discrepancy
between the source and target domains. However, it assumes that the source and
target domains share the same label distribution, which limits its
application scope. In this paper, we consider a more general application
scenario where the label distributions of the source and target domains are not
the same. In this scenario, marginal distribution alignment-based methods will
be vulnerable to negative transfer. To address this issue, we propose a novel
unsupervised domain adaptation method, Deep Conditional Adaptation Network
(DCAN), based on conditional distribution alignment of feature spaces. To be
specific, we reduce the domain discrepancy by minimizing the Conditional
Maximum Mean Discrepancy between the conditional distributions of deep features
on the source and target domains, and extract discriminative information from
the target domain by maximizing the mutual information between samples and the
prediction labels. In addition, DCAN can be used to address a special scenario,
partial unsupervised domain adaptation, where the target domain label set is a
subset of the source domain label set. Experiments on both unsupervised domain
adaptation and partial unsupervised domain adaptation show that DCAN achieves
superior classification performance over state-of-the-art methods. In
particular, DCAN achieves large improvements on tasks with a large difference
in label distributions (6.1% on SVHN to MNIST, 5.4% in UDA tasks on
Office-Home, and 4.5% in partial UDA tasks on Office-Home).
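The two objectives described in the abstract, aligning class-conditional feature distributions with a Conditional Maximum Mean Discrepancy and extracting target discriminative information by maximizing mutual information between samples and predicted labels, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the Gaussian kernel, the simple biased MMD estimator, the per-class averaging, and all function names are illustrative assumptions; in DCAN the target labels would be the model's predictions, not ground truth.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    # Squared Maximum Mean Discrepancy between two samples (biased estimate).
    return (gaussian_kernel(xs, xs, sigma).mean()
            + gaussian_kernel(xt, xt, sigma).mean()
            - 2 * gaussian_kernel(xs, xt, sigma).mean())

def conditional_mmd2(feats_s, labels_s, feats_t, labels_t, num_classes, sigma=1.0):
    # Average the per-class MMD between source and target features, so that the
    # conditional distributions p(feature | class) are aligned class by class.
    # On the target domain, labels_t would be pseudo labels from the classifier.
    total, used = 0.0, 0
    for c in range(num_classes):
        xs = feats_s[labels_s == c]
        xt = feats_t[labels_t == c]
        if len(xs) and len(xt):
            total += mmd2(xs, xt, sigma)
            used += 1
    return total / max(used, 1)

def prediction_mutual_information(probs):
    # MI(sample; label) estimated over a batch of softmax outputs as
    # H(mean of p) - mean of H(p): high when predictions are confident
    # per sample yet balanced across classes over the batch.
    eps = 1e-12
    marginal = probs.mean(0)
    h_marginal = -(marginal * np.log(marginal + eps)).sum()
    h_conditional = -(probs * np.log(probs + eps)).sum(1).mean()
    return h_marginal - h_conditional
```

In a training loop, one would minimize `conditional_mmd2` and maximize `prediction_mutual_information` jointly; identical source and target features give a conditional MMD of zero, and confident, class-balanced predictions give a large mutual information.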
Unsupervised Domain Adaptation via Discriminative Manifold Embedding and Alignment
Unsupervised domain adaptation is effective in leveraging rich
information from the source domain to an unlabeled target domain. Although
deep learning and adversarial strategies have made important breakthroughs in
feature adaptability, two issues remain to be explored. First,
hard-assigned pseudo labels on the target domain risk distorting the intrinsic
data structure. Second, the batch-wise training manner in deep learning limits
the description of the global structure. In this paper, a Riemannian manifold
learning framework is proposed to achieve transferability and discriminability
consistently. For the first issue, the method establishes a probabilistic
discriminant criterion on the target domain via soft labels. Further, this
criterion is extended to a global approximation scheme for the second issue;
this approximation is also memory-efficient. Manifold metric alignment is
further exploited to be compatible with the embedding space. A theoretical error bound
is derived to facilitate the alignment. Extensive experiments have been
conducted to investigate the proposal, and the results of the comparison study
manifest the superiority of the consistent manifold learning framework.
Comment: Accepted to AAAI 2020. Code available at
https://github.com/LavieLuo/DRMEA
SVCNet: Scribble-based Video Colorization Network with Temporal Aggregation
In this paper, we propose a scribble-based video colorization network with
temporal aggregation called SVCNet. It can colorize monochrome videos based on
different user-given color scribbles. It addresses three common issues in the
scribble-based video colorization area: colorization vividness, temporal
consistency, and color bleeding. To improve the colorization quality and
strengthen the temporal consistency, we adopt two sequential sub-networks in
SVCNet for precise colorization and temporal smoothing, respectively. The first
stage includes a pyramid feature encoder to incorporate color scribbles with a
grayscale frame, and a semantic feature encoder to extract semantics. The
second stage finetunes the output from the first stage by aggregating the
information of neighboring colorized frames (as short-range connections) and
the first colorized frame (as a long-range connection). To alleviate the color
bleeding artifacts, we learn video colorization and segmentation
simultaneously. Furthermore, we perform the majority of operations at a fixed
small image resolution and use a Super-resolution Module at the tail of SVCNet
to recover the original size, which allows SVCNet to handle different image
resolutions at inference. Finally, we evaluate the proposed SVCNet on the DAVIS and Videvo
benchmarks. The experimental results demonstrate that SVCNet produces both
higher-quality and more temporally consistent videos than other well-known
video colorization approaches. The code and models can be found at
https://github.com/zhaoyuzhi/SVCNet.
Comment: Accepted by IEEE Transactions on Image Processing (TIP).
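The second-stage aggregation described above, fusing the current colorized frame with neighboring frames (short-range connections) and the first colorized frame (long-range connection), can be sketched conceptually in NumPy. This is only an illustration of the fusion idea: SVCNet learns this aggregation with a network, whereas the fixed blending weights and the function name here are assumptions, not the paper's method.

```python
import numpy as np

def aggregate_temporal(current, neighbors, first, w_cur=0.6, w_nbr=0.3, w_first=0.1):
    # Blend the current colorized frame with the mean of neighboring colorized
    # frames (short-range) and the first colorized frame (long-range anchor).
    # Weights sum to 1 so a constant video stays unchanged; a learned network
    # would replace this fixed weighted average.
    short_range = np.mean(neighbors, axis=0)
    return w_cur * current + w_nbr * short_range + w_first * first
```

Intuitively, the short-range term smooths flicker between adjacent frames while the long-range term keeps the whole sequence anchored to one consistent color assignment.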
- …