Distinguishing Computer-generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Computer-generated graphics (CGs) are images generated by computer software.
The rapid development of computer graphics technologies has made it easier to
generate photorealistic computer graphics, and these graphics are quite
difficult to distinguish from natural images (NIs) with the naked eye. In this
paper, we propose a method based on sensor pattern noise (SPN) and deep
learning to distinguish CGs from NIs. Before being fed into our convolutional
neural network (CNN)-based model, these images (both CGs and NIs) are clipped into
image patches. Furthermore, three high-pass filters (HPFs) are used to remove
low-frequency signals, which represent the image content; these filters also
reveal the residual signal and the SPN introduced by the digital camera
device. Unlike traditional methods for distinguishing CGs from NIs, the
proposed method uses a five-layer CNN to classify the
input image patches. Based on the classification results of the image patches,
we deploy a majority vote scheme to obtain the classification results for the
full-size images. The experiments demonstrate that (1) the proposed method
with three HPFs achieves better results than with only one HPF or none, and
(2) the proposed method with three HPFs achieves 100% accuracy even when the
NIs undergo JPEG compression with a quality factor of 75.
Comment: This paper has been published by Sensors. doi:10.3390/s18041296;
Sensors 2018, 18(4), 1296
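
To make the pipeline concrete, here is a minimal sketch of the patch-level scheme the abstract describes: fixed high-pass filtering, a small CNN over patches, and a majority vote across patch predictions. The kernel values, layer sizes, and function names are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

# Three example high-pass kernels (assumed; the paper's exact HPFs may differ).
HPF_KERNELS = torch.tensor([
    [[0., 0., 0.], [1., -2., 1.], [0., 0., 0.]],       # horizontal second-order
    [[0., 1., 0.], [0., -2., 0.], [0., 1., 0.]],       # vertical second-order
    [[-1., 2., -1.], [2., -4., 2.], [-1., 2., -1.]],   # Laplacian-like
]).unsqueeze(1)  # shape (3, 1, 3, 3): one output channel per filter

class PatchCNN(nn.Module):
    """Small CNN over high-pass-filtered grayscale patches (layout illustrative)."""
    def __init__(self):
        super().__init__()
        # Fixed preprocessing layer that suppresses low-frequency image content.
        self.hpf = nn.Conv2d(1, 3, kernel_size=3, padding=1, bias=False)
        self.hpf.weight.data = HPF_KERNELS
        self.hpf.weight.requires_grad = False
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # class 0 = NI, class 1 = CG

    def forward(self, x):  # x: (num_patches, 1, H, W)
        return self.classifier(self.features(self.hpf(x)).flatten(1))

def classify_full_image(model, patches):
    """Majority vote over per-patch predictions for one full-size image."""
    with torch.no_grad():
        votes = model(patches).argmax(dim=1)   # one vote per patch
    return int(torch.mode(votes).values)       # most frequent class wins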
Fine-grained Image Classification via Combining Vision and Language
Fine-grained image classification is a challenging task due to the large
intra-class variance and small inter-class variance, aiming at recognizing
hundreds of sub-categories belonging to the same basic-level category. Most
existing fine-grained image classification methods generally learn part
detection models to obtain the semantic parts for better classification
accuracy. Despite achieving promising results, these methods mainly have two
limitations: (1) not all of the parts obtained through the part detection
models are beneficial and indispensable for classification, and (2)
fine-grained image classification requires more detailed visual descriptions
than part locations or attribute annotations can provide. To
address these two limitations, this paper proposes the two-stream model
combining vision and language (CVL) for learning latent semantic
representations. The vision stream learns deep representations from the
original visual information via a deep convolutional neural network. The language
stream utilizes the natural language descriptions, which can point out the
discriminative parts or characteristics for each image, and provides a flexible
and compact way of encoding the salient visual aspects for distinguishing
sub-categories. Since the two streams are complementary, combining them
further improves classification accuracy. Compared with 12 state-of-the-art
methods on the widely used CUB-200-2011 dataset for fine-grained image
classification, the experimental results demonstrate that our CVL approach
achieves the best performance.
Comment: 9 pages, to appear in CVPR 2017
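
A minimal sketch of the two-stream idea follows: one stream encodes the image, the other encodes its natural language description, and their class predictions are fused. The encoders, dimensions, class names, and averaging fusion are assumptions for illustration; the paper's actual CVL architecture differs in detail.

import torch
import torch.nn as nn

class TwoStreamCVL(nn.Module):
    """Vision stream + language stream with late fusion (all sizes illustrative)."""
    def __init__(self, num_classes=200, vocab_size=10000, embed_dim=256):
        super().__init__()
        # Vision stream: any image encoder producing a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
        )
        # Language stream: encode the natural language description of the image.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.vision_head = nn.Linear(embed_dim, num_classes)
        self.text_head = nn.Linear(embed_dim, num_classes)

    def forward(self, image, token_ids):
        # image: (N, 3, H, W); token_ids: (N, T) word indices of the description
        v_logits = self.vision_head(self.vision(image))
        _, h = self.text_rnn(self.embed(token_ids))  # h: (1, N, embed_dim)
        t_logits = self.text_head(h.squeeze(0))
        # Late fusion: average the complementary streams' predictions.
        return (v_logits + t_logits) / 2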
Manipulating Attributes of Natural Scenes via Hallucination
In this study, we explore building a two-stage framework for enabling users
to directly manipulate high-level attributes of a natural scene. The key to our
approach is a deep generative network which can hallucinate images of a scene
as if they were taken in a different season (e.g., winter), under a different
weather condition (e.g., a cloudy day), or at a different time of day (e.g.,
at sunset). Once the
scene is hallucinated with the given attributes, the corresponding look is then
transferred to the input image while keeping the semantic details intact,
giving a photo-realistic manipulation result. As the proposed framework
hallucinates what the scene will look like, it does not require any reference
style image, as commonly required by appearance or style transfer
approaches. Moreover, it allows a given scene to be manipulated according to
a diverse set of transient attributes within a single model, eliminating the
need to train a separate network for each translation task. Our comprehensive
set of qualitative and quantitative results demonstrates the effectiveness of
our approach against the competing methods.
Comment: Accepted for publication in ACM Transactions on Graphics
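
The single-model, attribute-conditioned design can be sketched as a generator that takes a scene code together with a vector of transient attribute strengths; varying the attribute vector covers all manipulations without retraining. All names and dimensions below are illustrative assumptions, not the paper's network.

import torch
import torch.nn as nn

class AttributeConditionedGenerator(nn.Module):
    """One generator for many manipulations, conditioned on transient attributes."""
    def __init__(self, latent_dim=128, num_attributes=40):
        super().__init__()
        self.fc = nn.Linear(latent_dim + num_attributes, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z, attributes):
        # Concatenating the scene code with the desired attribute strengths lets
        # the same network hallucinate the scene under different conditions.
        x = self.fc(torch.cat([z, attributes], dim=1)).view(-1, 256, 4, 4)
        return self.decoder(x)  # (N, 3, 32, 32) hallucinated scene

Sweeping a single entry of attributes (say, a hypothetical "sunset" dimension) from 0 to 1 would then interpolate the hallucinated look, which is the kind of continuous, multi-attribute control the abstract describes.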