Color Behavior Of BL Lacertae Object OJ 287 During Optical Outburst
This paper studies the color behavior of the BL Lac object OJ 287 during optical outbursts. Based on a revisit of the data from the OJ-94 monitoring project and an analysis of the data obtained with the 60/90 cm Schmidt Telescope of NAOC, we found a bluer-when-brighter chromatism in this object. The amplitude of variation tends to decrease with decreasing frequency. These results are consistent with the shock-in-jet model. We performed simulations and confirmed that both an amplitude difference and a time delay between variations at different wavelengths can produce the bluer-when-brighter phenomenon. Our observations confirmed that OJ 287 underwent a double-peaked outburst about 12 years after the 1996 one, which provides further evidence for the binary black hole model in this object.
Comment: 25 pages, 13 figures
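To make the simulation argument concrete, the following is a minimal sketch (not the authors' code) of how an amplitude difference plus a time delay between two bands can yield a bluer-when-brighter trend; the Gaussian flare shape, the amplitudes and the 3-day lag are arbitrary assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: a single flare modeled as a Gaussian pulse in two bands.
# The higher-frequency (bluer) band is given a larger variability amplitude and
# an earlier peak, mimicking the shock-in-jet expectation. We then check whether
# the synthetic color gets bluer when the object gets brighter.
t = np.linspace(-50.0, 50.0, 2001)            # time in days, arbitrary units

def flare(amplitude, t_peak, width=10.0, baseline=1.0):
    """Flux of one band: constant baseline plus a Gaussian flare."""
    return baseline + amplitude * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

flux_blue = flare(amplitude=1.0, t_peak=0.0)   # assumed: larger amplitude, leads
flux_red  = flare(amplitude=0.6, t_peak=3.0)   # assumed: smaller amplitude, lags 3 d

mag_blue = -2.5 * np.log10(flux_blue)
mag_red  = -2.5 * np.log10(flux_red)
color    = mag_blue - mag_red                  # smaller index = bluer

# Positive correlation between blue magnitude and color index means the object
# is bluer (smaller index) when brighter (smaller magnitude), i.e. the
# bluer-when-brighter trend.
corr = np.corrcoef(mag_blue, color)[0, 1]
print(f"brightness-color correlation: {corr:.2f}")
```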
Image Classification with CNN-based Fisher Vector Coding
Fisher vector coding methods have been demonstrated to be effective for image classification. With the help of convolutional neural networks (CNNs), several Fisher vector coding methods have shown state-of-the-art performance by adopting the activations of a single fully-connected layer as region features. These methods generally exploit a diagonal Gaussian mixture model (GMM) to describe the generative process of the region features. However, it is difficult to model the complex distribution of a high-dimensional feature space with the limited number of Gaussians obtained by unsupervised learning, and simply increasing the number of Gaussians turns out to be inefficient and computationally impractical.
To address this issue, we re-interpret a pre-trained CNN as a probabilistic discriminative model and present a CNN-based Fisher vector coding method, termed CNN-FVC. Specifically, activations of the intermediate fully-connected and output soft-max layers are exploited to implicitly derive the posteriors, mean and covariance parameters for Fisher vector coding. To further improve efficiency, we convert the pre-trained CNN to a fully convolutional one to extract the region features. Extensive experiments have been conducted on two standard scene benchmarks (i.e. SUN397 and MIT67) to evaluate the effectiveness of the proposed method. Classification accuracies of 60.7% and 82.1% are achieved on the SUN397 and MIT67 benchmarks respectively, outperforming previous state-of-the-art approaches. Furthermore, the method is complementary to GMM-FVC methods, allowing a simple fusion scheme to further improve the accuracies to 61.1% and 83.1% respectively.
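For orientation, here is a sketch of the standard GMM-style Fisher vector aggregation that such coding methods build on; in CNN-FVC the posteriors, means and covariances would be derived implicitly from the fully-connected and soft-max layers, but this sketch takes them as given inputs and applies the usual improved-FV normalizations, which are assumptions rather than details from the paper.

```python
import numpy as np

def fisher_vector(features, posteriors, means, stds):
    """Schematic Fisher-vector pooling of region features into one image vector.

    features: (N, D) region features; posteriors: (N, K) soft assignments to K
    components; means/stds: (K, D) per-component parameters (however obtained).
    """
    N, D = features.shape
    K = posteriors.shape[1]
    fv = []
    for k in range(K):
        gamma = posteriors[:, k:k + 1]                     # (N, 1) soft assignments
        diff = (features - means[k]) / stds[k]             # normalized residuals
        pi_k = max(gamma.mean(), 1e-8)                     # component weight proxy
        u_k = (gamma * diff).sum(axis=0) / (N * np.sqrt(pi_k))                  # 1st order
        v_k = (gamma * (diff ** 2 - 1.0)).sum(axis=0) / (N * np.sqrt(2 * pi_k))  # 2nd order
        fv.extend([u_k, v_k])
    fv = np.concatenate(fv)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                 # power normalization
    return fv / max(np.linalg.norm(fv), 1e-12)             # L2 normalization
```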
Domain Adaptation and Image Classification via Deep Conditional Adaptation Network
Unsupervised domain adaptation aims to generalize a supervised model trained on a source domain to an unlabeled target domain. Marginal distribution alignment of feature spaces is widely used to reduce the domain discrepancy between the source and target domains. However, it assumes that the source and target domains share the same label distribution, which limits its application scope. In this paper, we consider a more general scenario in which the label distributions of the source and target domains differ. In this scenario, marginal distribution alignment-based methods are vulnerable to negative transfer. To address this issue, we propose a novel unsupervised domain adaptation method, Deep Conditional Adaptation Network (DCAN), based on conditional distribution alignment of feature spaces. Specifically, we reduce the domain discrepancy by minimizing the Conditional Maximum Mean Discrepancy between the conditional distributions of deep features on the source and target domains, and extract discriminant information from the target domain by maximizing the mutual information between samples and their predicted labels. In addition, DCAN can be used to address a special scenario, partial unsupervised domain adaptation, in which the target domain categories are a subset of the source domain categories. Experiments on both unsupervised domain adaptation and partial unsupervised domain adaptation show that DCAN achieves superior classification performance over state-of-the-art methods. In particular, DCAN achieves large improvements on tasks with large differences in label distributions (6.1% on SVHN to MNIST, 5.4% in UDA tasks on Office-Home and 4.5% in partial UDA tasks on Office-Home).
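As a rough illustration of the two ingredients described above, the sketch below implements one plausible reading of a conditional MMD term (class-by-class matching with source labels and target pseudo-labels under an RBF kernel) plus an entropy-based mutual-information surrogate; the estimator, kernel choice and weighting are assumptions, not DCAN's exact formulation.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between two feature sets under a single RBF kernel (sketch)."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def conditional_adaptation_loss(f_s, y_s, f_t, p_t, num_classes, lam=1.0):
    """f_s/f_t: source/target features; y_s: source labels; p_t: target softmax outputs."""
    cmmd = f_s.new_zeros(())
    y_t = p_t.argmax(dim=1)                        # hard pseudo-labels for class selection
    for c in range(num_classes):
        s_c, t_c = f_s[y_s == c], f_t[y_t == c]
        if len(s_c) > 1 and len(t_c) > 1:
            cmmd = cmmd + rbf_mmd2(s_c, t_c)       # match the two class-conditionals
    # Mutual information surrogate: confident per-sample predictions (low H(Y|X))
    # with a diverse marginal prediction distribution (high H(Y)).
    h_cond = -(p_t * p_t.clamp_min(1e-8).log()).sum(dim=1).mean()
    p_mean = p_t.mean(dim=0)
    h_marg = -(p_mean * p_mean.clamp_min(1e-8).log()).sum()
    mi = h_marg - h_cond
    return cmmd / num_classes - lam * mi
```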
Unsupervised Domain Adaptation via Discriminative Manifold Propagation
Unsupervised domain adaptation is effective in leveraging rich information from a labeled source domain for an unlabeled target domain. Though deep learning and adversarial strategies have made significant breakthroughs in the adaptability of features, two issues remain to be studied further. First, hard-assigned pseudo labels on the target domain are arbitrary and error-prone, and applying them directly may destroy the intrinsic data structure. Second, batch-wise training of deep learning limits the characterization of the global structure. In this paper, a Riemannian manifold learning framework is proposed to achieve transferability and discriminability simultaneously. For the first issue, the framework establishes a probabilistic discriminant criterion on the target domain via soft labels. Based on pre-built prototypes, this criterion is extended to a global approximation scheme to address the second issue. Manifold metric alignment is adopted to be compatible with the embedding space. Theoretical error bounds for different alignment metrics are derived to provide constructive guidance. The proposed method can be used to tackle a series of variants of domain adaptation problems, including both vanilla and partial settings. Extensive experiments have been conducted to investigate the method, and a comparative study shows the superiority of the discriminative manifold learning framework.
Comment: To be published in IEEE Transactions on Pattern Analysis and Machine Intelligence
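The following is a rough sketch of a soft-label, prototype-based discriminant criterion in the spirit of the description above; the distance-based soft assignments, temperature and margin are illustrative assumptions, not the paper's actual criterion or its manifold metric alignment.

```python
import torch
import torch.nn.functional as F

def soft_discriminant_loss(f_t, prototypes, temperature=1.0, margin=1.0):
    """f_t: (N, D) target features; prototypes: (C, D) pre-built class prototypes.

    Target samples are softly assigned to prototypes, then pulled toward them in
    proportion to the soft assignment, while prototypes are pushed apart.
    """
    d = torch.cdist(f_t, prototypes)                        # (N, C) distances
    q = F.softmax(-d / temperature, dim=1)                  # soft labels from distances
    within = (q * d ** 2).sum(dim=1).mean()                 # soft within-class scatter
    pd = torch.cdist(prototypes, prototypes)                # prototype separations
    mask = ~torch.eye(len(prototypes), dtype=torch.bool, device=prototypes.device)
    between = F.relu(margin - pd[mask]).mean()              # hinge on prototype gaps
    return within + between
```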