Weakly-Supervised Neural Text Classification
Deep neural networks are gaining increasing popularity for the classic text
classification task, due to their strong expressive power and reduced need
for feature engineering. Despite this appeal, neural text classification
models suffer from a lack of training data in many real-world applications.
Although many semi-supervised and weakly-supervised text classification
models exist, they cannot be easily applied to deep neural models and,
meanwhile, support only limited types of supervision. In this paper, we
propose a weakly-supervised method that addresses the lack of training data in
neural text classification. Our method consists of two modules: (1) a
pseudo-document generator that leverages seed information to generate
pseudo-labeled documents for model pre-training, and (2) a self-training module
that bootstraps on real unlabeled data for model refinement. Our method has the
flexibility to handle different types of weak supervision and can be easily
integrated into existing deep neural models for text classification. We have
performed extensive experiments on three real-world datasets from different
domains. The results demonstrate that our proposed method achieves strong
performance without requiring excessive training data and significantly
outperforms baseline methods.

Comment: CIKM 2018 Full Paper
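The two-module pipeline described in this abstract can be sketched in miniature. Everything below is an illustrative assumption, not the paper's architecture: the seed-keyword classes, the keyword-overlap pseudo-labeler, and the bag-of-words nearest-centroid model all stand in for the deep neural classifier.

```python
# Sketch of the two-module pipeline: (1) seed keywords generate pseudo-labels
# for pre-training, (2) self-training bootstraps on the model's own predictions.
# The seed sets and the toy centroid classifier are hypothetical stand-ins.
from collections import Counter

SEEDS = {"sports": {"game", "team"}, "tech": {"software", "chip"}}

def pseudo_label(doc):
    """Module 1: label a document by seed-keyword overlap (None if ambiguous)."""
    tokens = set(doc.lower().split())
    scores = {c: len(tokens & kw) for c, kw in SEEDS.items()}
    best = max(scores, key=scores.get)
    unambiguous = scores[best] > 0 and sum(v > 0 for v in scores.values()) == 1
    return best if unambiguous else None

def train_centroids(labeled):
    """Fit one bag-of-words centroid per class from (doc, label) pairs."""
    cents = {}
    for c in SEEDS:
        bow = Counter()
        for doc, lab in labeled:
            if lab == c:
                bow.update(doc.lower().split())
        cents[c] = bow
    return cents

def predict(cents, doc):
    """Classify a document by dot product with each class centroid."""
    tokens = Counter(doc.lower().split())
    return max(SEEDS, key=lambda c: sum(tokens[w] * cents[c][w] for w in tokens))

def self_train(unlabeled, rounds=2):
    """Module 2: pre-train on pseudo-labels, then re-label all data and refit."""
    labeled = [(d, pseudo_label(d)) for d in unlabeled]
    labeled = [(d, lab) for d, lab in labeled if lab is not None]
    cents = train_centroids(labeled)
    for _ in range(rounds):
        labeled = [(d, predict(cents, d)) for d in unlabeled]  # model's own labels
        cents = train_centroids(labeled)
    return cents
```

The same bootstrapping loop would apply with a neural classifier in place of the centroid model; only `train_centroids` and `predict` change.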
Image Restoration using Total Variation Regularized Deep Image Prior
In the past decade, sparsity-driven regularization has led to significant
improvements in image reconstruction. Traditional regularizers, such as total
variation (TV), rely on analytical models of sparsity. However, the field is
increasingly moving toward trainable models inspired by deep learning. Deep
image prior (DIP) is a recent regularization framework that uses a
convolutional neural network (CNN) architecture without data-driven training.
This paper extends the DIP framework by combining it with the traditional TV
regularization. We show that the inclusion of TV leads to considerable
performance gains when tested on several traditional restoration tasks such as
image denoising and deblurring.
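The combined objective described here is, roughly, a data-fidelity term plus a weighted TV penalty on the network output. A minimal numerical sketch of that loss, assuming anisotropic TV and a squared-error fidelity term (the exact fidelity, TV variant, and weight in the paper may differ):

```python
import numpy as np

def tv(img):
    """Anisotropic total variation: sum of absolute finite differences."""
    dx = np.abs(np.diff(img, axis=1)).sum()  # horizontal gradients
    dy = np.abs(np.diff(img, axis=0)).sum()  # vertical gradients
    return dx + dy

def dip_tv_objective(output, noisy, lam=0.1):
    """Sketch of the TV-regularized DIP loss: fidelity + lam * TV(output).
    In the DIP framework, `output` would be the CNN's reconstruction."""
    return float(np.sum((output - noisy) ** 2) + lam * tv(output))
```

In the DIP setting this objective is minimized over the CNN's weights rather than over the image directly; the TV term discourages the noisy, high-frequency solutions the untrained network can otherwise fit.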
SIMBA: scalable inversion in optical tomography using deep denoising priors
Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to process large data volumes quickly. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative mini-batch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system, characterized by its forward model, and the imaging prior, characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing imaging quality.
https://arxiv.org/abs/1911.13241
First author draft
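The iteration described above interleaves a gradient step on a random mini-batch of measurements with an application of the denoising prior. A minimal sketch under simplifying assumptions: a linear forward model `A`, a squared-error fidelity, and soft-thresholding as a nonexpansive stand-in for the deep denoiser (the paper's forward model, denoiser, and step-size rules are more involved):

```python
import numpy as np

def denoise(x, tau):
    """Stand-in for the deep denoising prior: soft-thresholding (nonexpansive)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def simba(A, y, n_iter=300, batch=5, step=0.05, tau=0.01, seed=0):
    """SIMBA-style iteration: stochastic gradient step on a random mini-batch
    of measurement rows, followed by the denoiser."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)   # mini-batch of measurements
        g = A[idx].T @ (A[idx] @ x - y[idx]) / batch     # stochastic data-fit gradient
        x = denoise(x - step * g, tau)                   # plug-and-play denoising step
    return x
```

Each iteration touches only `batch` of the `m` measurement rows, which is what lets the scheme scale to large 3D datasets.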
University-Industry Technology Transfer: Empirical Findings from Chinese Industrial Firms
The knowledge and innovation generated by researchers at universities are transferred to industry through patent licensing, leading to the commercialization of academic output. To investigate the development of Chinese university-industry technology transfer, and whether this kind of collaboration may affect a firm's innovation output, we collected approximately 6400 license contracts made between more than 4000 Chinese firms and 300 Chinese universities for the period between 2009 and 2014. This is the first study of Chinese university-industry knowledge transfer to use a bipartite social network analysis (SNA) method, which emphasizes centrality estimates. By allocating a centrality score to each firm in the university-firm technology transfer network, we are able to investigate empirically how patent license transfer behavior may affect each firm's innovative output. We elucidate academic-industry knowledge flows by visualizing their patterns for different regions with the SNA tool Gephi. We find that innovation capabilities, R&D resources, and technology transfer performance all vary across China, and that patent licensing networks exhibit clear small-world phenomena. We also highlight the Bipartite Graph Reinforcement Model (BGRM) and BiRank centrality in the bipartite network. Our empirical results reveal that firms with high BGRM and BiRank centrality scores, long histories, and fewer employees have greater innovative output.
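The BiRank centrality mentioned above scores the two sides of a bipartite network jointly: a firm's score depends on the scores of the universities it licenses from, and vice versa. A minimal sketch of a BiRank-style iteration with symmetric degree normalization and damping toward uniform priors (the study's exact normalization, priors, and damping parameters are not specified in this abstract):

```python
import numpy as np

def birank(W, alpha=0.85, beta=0.85, n_iter=100):
    """BiRank-style scores on a bipartite adjacency W
    (rows = firms, cols = universities), with symmetric normalization."""
    du = np.maximum(W.sum(axis=1), 1e-12)        # firm degrees
    dp = np.maximum(W.sum(axis=0), 1e-12)        # university degrees
    S = W / np.sqrt(np.outer(du, dp))            # symmetrically normalized weights
    u0 = np.full(W.shape[0], 1.0 / W.shape[0])   # uniform firm prior
    p0 = np.full(W.shape[1], 1.0 / W.shape[1])   # uniform university prior
    u, p = u0.copy(), p0.copy()
    for _ in range(n_iter):
        u = alpha * S @ p + (1 - alpha) * u0     # firms scored via universities
        p = beta * S.T @ u + (1 - beta) * p0     # universities scored via firms
    return u, p
```

A firm holding licenses from many well-connected universities thus ends up with a higher centrality score than one with a single license.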
Attention, Please! Adversarial Defense via Attention Rectification and Preservation
This study provides a new understanding of the adversarial attack problem by
examining the correlation between adversarial attacks and visual attention
change. In particular, we observed that: (1) images with incomplete attention
regions are more vulnerable to adversarial attacks; and (2) successful
adversarial attacks lead to deviated and scattered attention maps. Accordingly,
we design an attention-based adversarial defense framework that simultaneously
rectifies the attention map for prediction and preserves the attention area
between adversarial and original images. We also discuss the problem of adding
iteratively attacked samples in the context of visual attention change.
We hope the attention-related data analysis and defense solution in this study
will shed some light on the mechanism behind adversarial attacks and
facilitate the design of future adversarial defense and attack models.
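The preservation objective described here penalizes the gap between the attention maps of an original image and its adversarial counterpart. A minimal sketch of one plausible form of that term, assuming an L1 distance between normalized attention maps (the paper's actual loss and attention mechanism are not specified in this abstract):

```python
import numpy as np

def attention_consistency(attn_clean, attn_adv):
    """Hypothetical preservation term: L1 distance between the normalized
    attention maps of a clean image and its adversarial version."""
    a = attn_clean / attn_clean.sum()
    b = attn_adv / attn_adv.sum()
    return float(np.abs(a - b).sum())
```

In training, such a term would be added to the classification loss so that an attack that scatters the attention map also incurs a large penalty, countering the observation (2) above.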
- …