StrokeGAN: Reducing Mode Collapse in Chinese Font Generation via Stroke Encoding
The generation of stylish Chinese fonts is an important problem involved in many applications. Most existing generation methods are based on deep generative models, particularly generative adversarial network (GAN) based models. However, these deep generative models may suffer from mode collapse, which significantly degrades the diversity and quality of the generated results. In this paper, we introduce a one-bit stroke encoding to capture the key mode information of Chinese characters and then incorporate it into CycleGAN, a popular deep generative model for Chinese font generation. As a result, we propose an efficient method called StrokeGAN, motivated mainly by the observation that the stroke encoding carries a substantial amount of the mode information of Chinese characters. In order to reconstruct the one-bit stroke encoding of the generated characters, we introduce a stroke-encoding reconstruction loss imposed on the discriminator. Equipped with this one-bit stroke encoding and stroke-encoding reconstruction loss, the mode collapse issue of CycleGAN can be significantly alleviated, with better preservation of strokes and greater diversity of the generated characters. The effectiveness of StrokeGAN is demonstrated by a series of generation tasks over nine datasets with different fonts. The numerical results show that StrokeGAN generally outperforms the state-of-the-art methods in terms of content and recognition accuracies as well as stroke error, and also generates more realistic characters.
Comment: 10 pages; our code and data are available at: https://github.com/JinshanZeng/StrokeGA
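The one-bit stroke encoding and the reconstruction loss imposed on the discriminator can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of basic stroke types (32 here) and all function names are assumptions.

```python
import numpy as np

# Assumed number of basic stroke types; the real count used by StrokeGAN
# may differ.
NUM_STROKE_TYPES = 32

def one_bit_stroke_encoding(stroke_ids):
    """Map the set of stroke types appearing in a character to a 0/1
    vector: entry i is 1 iff stroke type i is present (the "one-bit"
    encoding described in the abstract)."""
    enc = np.zeros(NUM_STROKE_TYPES, dtype=np.float32)
    enc[list(stroke_ids)] = 1.0
    return enc

def stroke_reconstruction_loss(pred_logits, target_enc):
    """Binary cross-entropy between the discriminator's predicted stroke
    encoding (as logits) and the true one-bit encoding; this is the extra
    loss term that pushes generated characters to keep their strokes."""
    p = 1.0 / (1.0 + np.exp(-pred_logits))  # element-wise sigmoid
    eps = 1e-7                              # numerical safety
    return float(-np.mean(target_enc * np.log(p + eps)
                          + (1.0 - target_enc) * np.log(1.0 - p + eps)))

# Usage: a character containing stroke types {0, 3, 7}.
target = one_bit_stroke_encoding({0, 3, 7})
confident = np.where(target == 1.0, 10.0, -10.0)  # near-perfect prediction
loss = stroke_reconstruction_loss(confident, target)  # close to zero
```

In training, this loss would be added to the usual CycleGAN adversarial and cycle-consistency objectives; the weighting between the terms is a hyperparameter not specified in the abstract.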
Learning Hierarchical Representations For Video Analysis Using Deep Learning
With the exponential growth of digital data, video content analysis (e.g., action and event recognition) has been drawing increasing attention from computer vision researchers. Effective modeling of objects, scenes, and motions is critical for visual understanding. Recently, there has been growing interest in bio-inspired deep learning models, which have shown impressive results in speech and object recognition. Deep learning models are formed by the composition of multiple non-linear transformations of the data, with the goal of yielding more abstract and ultimately more useful representations. The advantages of deep models are threefold: 1) They learn features directly from the raw signal, in contrast to hand-designed features. 2) The learning can be unsupervised, which is suitable for large datasets where labeling all the data is expensive and impractical. 3) They learn a hierarchy of features one level at a time, and this layerwise stacking of feature extractors often yields better representations. However, not many deep learning models have been proposed to solve problems in video analysis, especially for videos "in the wild". Most of them either deal with simple datasets or are limited to low-level local spatial-temporal feature descriptors for action recognition. Moreover, as the learning algorithms are unsupervised, the learned features preserve generative properties rather than the discriminative ones that are more favorable in classification tasks. In this context, the thesis makes two major contributions. First, we propose several formulations and extensions of deep learning methods that learn hierarchical representations for three challenging video analysis tasks: complex event recognition, object detection in videos, and measuring action similarity. The proposed methods are extensively evaluated on challenging state-of-the-art datasets.
Besides learning low-level local features, higher-level representations are further designed to be learned in the context of the applications. Data-driven concept representations and sparse representations of events are learned for complex event recognition; representations for object body parts and structures are learned for object detection in videos; and relational motion features and similarity metrics between video pairs are learned simultaneously for action verification. Second, in order to learn discriminative and compact features, we propose a new feature learning method using a deep neural network based on autoencoders. It differs from existing unsupervised feature learning methods in two ways: first, it optimizes both discriminative and generative properties of the features simultaneously, which gives our features better discriminative ability; second, our learned features are more compact, whereas unsupervised feature learning methods usually learn a redundant set of over-complete features. Extensive experiments with quantitative and qualitative results on the tasks of human detection and action verification demonstrate the superiority of our proposed models.
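The joint discriminative-generative objective described above can be sketched in a few lines. This is an illustrative toy, not the thesis's network: the single tied-weight layer, the loss weighting `lam`, and all names are assumptions.

```python
import numpy as np

def forward(x, W, V, c):
    """One-layer autoencoder whose compact code h also feeds a linear
    classifier, so reconstruction and classification share features."""
    h = np.tanh(x @ W)        # encoder -> compact code
    x_hat = h @ W.T           # tied-weight decoder (generative path)
    logits = h @ V + c        # linear classifier (discriminative path)
    return h, x_hat, logits

def joint_loss(x, y_onehot, W, V, c, lam=0.5):
    """Weighted sum of a generative term (reconstruction MSE) and a
    discriminative term (softmax cross-entropy), optimized together."""
    _, x_hat, logits = forward(x, W, V, c)
    recon = np.mean((x - x_hat) ** 2)
    z = logits - logits.max(axis=1, keepdims=True)   # stable log-softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    disc = -np.mean(np.sum(y_onehot * logp, axis=1))
    return recon + lam * disc

# Usage on random toy data (8 samples, 6 features, 3-d code, 2 classes).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 6))
y = np.eye(2)[rng.integers(0, 2, size=8)]
W = rng.normal(scale=0.1, size=(6, 3))
V = rng.normal(scale=0.1, size=(3, 2))
c = np.zeros(2)
loss = joint_loss(x, y, W, V, c)
```

Minimizing `joint_loss` over `W`, `V`, `c` (e.g. by gradient descent) is what makes the learned code both compact (bottleneck) and discriminative (classification term), in contrast to purely reconstructive unsupervised training.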
Neural Generative Models and Representation Learning for Information Retrieval
Information Retrieval (IR) concerns the structure, analysis, organization, storage, and retrieval of information. Among the different retrieval models proposed in past decades, generative retrieval models, especially those under the statistical probabilistic framework, are among the most popular techniques and have been widely applied to Information Retrieval problems. While they are known for their well-grounded theory and good empirical performance in text retrieval, their applications in IR are often limited by their complexity and limited extensibility in modeling high-dimensional information. Recently, advances in deep learning techniques have provided new opportunities for representation learning and generative models in information retrieval. In contrast to statistical models, neural models have much more flexibility because they model information and data correlation in latent spaces without explicitly relying on prior knowledge. Previous studies in pattern recognition and natural language processing have shown that semantically meaningful representations of text, images, and many other types of information can be acquired with neural models through supervised or unsupervised training. Nonetheless, the effectiveness of neural models for information retrieval remains largely unexplored. In this thesis, we study how to develop new generative models and representation learning frameworks with neural models for information retrieval.
Specifically, our contributions include three main components: (1) Theoretical analysis: we present the first theoretical analysis and adaptation of existing neural embedding models for ad-hoc retrieval tasks; (2) Design practice: based on our experience and knowledge, we show how to design an embedding-based neural generative model for practical information retrieval tasks such as personalized product search; and (3) Generic framework: we further generalize our proposed neural generative framework to complicated heterogeneous information retrieval scenarios involving text, images, knowledge entities, and their relationships. Empirical results show that the proposed neural generative framework can effectively learn information representations and construct retrieval models that outperform state-of-the-art systems in a variety of IR tasks.
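The embedding-based generative retrieval idea mentioned above can be sketched as ranking documents by the likelihood of generating the query from each document's embedding. This is an illustrative sketch only, not the thesis's exact model; the shapes, the softmax-over-vocabulary parameterization, and all names are assumptions.

```python
import numpy as np

def log_p_query_given_doc(query_term_ids, doc_vec, word_embs):
    """Each document embedding induces a softmax distribution over the
    vocabulary; score the document by log P(query terms | doc) under a
    term-independence assumption."""
    logits = word_embs @ doc_vec                 # (vocab_size,)
    z = logits - logits.max()                    # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return float(sum(log_probs[t] for t in query_term_ids))

def rank_documents(query_term_ids, doc_vecs, word_embs):
    """Return document indices sorted by generation likelihood."""
    scores = [log_p_query_given_doc(query_term_ids, d, word_embs)
              for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])

# Usage: a 4-word vocabulary with identity word embeddings, and two
# documents whose embeddings align with different vocabulary halves.
word_embs = np.eye(4)
doc_vecs = np.array([[5.0, 5.0, 0.0, 0.0],    # doc 0: about terms 0, 1
                     [0.0, 0.0, 5.0, 5.0]])   # doc 1: about terms 2, 3
ranking = rank_documents([0, 1], doc_vecs, word_embs)  # doc 0 first
```

In a trained system the document and word embeddings would be learned jointly (e.g. by maximizing this likelihood over query-document pairs), which is what lets the same framework extend to images or knowledge entities: anything that can be embedded in the shared latent space can participate in the generation probability.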