2D Color Code Interference Cancellation by Super Imposing Methodology
Abstract-Today, 2-D barcodes have become popular for information embedding. The main goal of a barcode system is to encode information at high spatial density while ensuring robust reading by an optical system; using multiple ink colors can further increase that density. This paper presents a high-capacity color barcode framework for mobile phone applications that exploits the spectral diversity afforded by the cyan (C), magenta (M), and yellow (Y) print colorant channels commonly used for color printing and the complementary red (R), green (G), and blue (B) channels, respectively, used for capturing color images. The system achieves a three-fold increase in data rate by encoding independent data in the C, M, and Y colorant channels and decoding it from the complementary R, G, and B channels captured via a mobile phone camera. To mitigate cross-channel interference between the print colorant and capture color channels, the system develops an interference cancellation algorithm based on a physically motivated mathematical model of the print and capture processes, and proposes a superimposing methodology to collect the model parameters required for cancellation.
Experimental results show that the framework successfully overcomes color interference, providing a low bit error rate and a high decoding rate for each colorant channel when used with a corresponding error correction scheme
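The interference-cancellation idea can be illustrated with a toy linear mixing model (an assumption for illustration; the paper's actual physically motivated model and its superimposing-based parameter estimation are not reproduced here): each captured R, G, B channel is modeled as a mix of the encoded C, M, Y colorant values, and decoding inverts the crosstalk matrix.

```python
# Toy cross-channel interference cancellation via a 3x3 crosstalk matrix.
# ASSUMPTION: the matrix values below are hypothetical, chosen only so that
# each colorant mostly affects its complementary capture channel with some
# leakage into the others.

def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def inverse_3x3(A):
    # closed-form inverse via the adjugate (Cramer's rule)
    a, b, c = A[0]
    d, e, f = A[1]
    g, h, i = A[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [ (e * i - f * h), -(b * i - c * h),  (b * f - c * e)],
        [-(d * i - f * g),  (a * i - c * g), -(a * f - c * d)],
        [ (d * h - e * g), -(a * h - b * g),  (a * e - b * d)],
    ]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

A = [[0.90, 0.10, 0.05],   # hypothetical crosstalk: rows = R, G, B capture
     [0.08, 0.85, 0.10],   # channels, columns = C, M, Y colorant channels
     [0.05, 0.10, 0.90]]

encoded = [1.0, 0.0, 1.0]              # independent data in C, M, Y
captured = mat_vec(A, encoded)         # what the camera sees, with crosstalk
recovered = mat_vec(inverse_3x3(A), captured)
print([round(v, 3) for v in recovered])  # close to the encoded values
```

In practice the matrix entries are exactly the model parameters the paper's superimposing methodology is designed to estimate.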
DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks
Despite a rapid rise in the quality of built-in smartphone cameras, their
physical limitations (small sensor size, compact lenses, and the lack of
specific hardware) prevent them from achieving the quality results of DSLR
cameras. In this work we present an end-to-end deep learning approach that
bridges this gap by translating ordinary photos into DSLR-quality images. We
propose learning the translation function using a residual convolutional neural
network that improves both color rendition and image sharpness. Since the
standard mean squared loss is not well suited for measuring perceptual image
quality, we introduce a composite perceptual error function that combines
content, color and texture losses. The first two losses are defined
analytically, while the texture loss is learned in an adversarial fashion. We
also present DPED, a large-scale dataset that consists of real photos captured
from three different phones and one high-end reflex camera. Our quantitative
and qualitative assessments reveal that the enhanced image quality is
comparable to that of DSLR-taken photos, while the methodology itself generalizes
to any type of digital camera
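The composite loss described above can be sketched as a weighted sum of three terms (the weights, the 3-tap box blur standing in for Gaussian blurring, and the plain MSE standing in for the VGG-based content loss and the learned adversarial texture loss are illustrative assumptions, not the paper's exact formulation):

```python
# Sketch of a composite perceptual loss: content + color + texture terms.
# ASSUMPTIONS: weights are illustrative; mse() stands in for a feature-space
# (content) loss; blur() stands in for Gaussian blurring in the color loss;
# texture_score stands in for the adversarially learned texture term.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def blur(img):
    # crude 3-tap box blur as a placeholder for Gaussian blurring
    out = []
    for i in range(len(img)):
        window = img[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def composite_loss(enhanced, target, texture_score,
                   w_content=1.0, w_color=0.1, w_texture=0.4):
    content = mse(enhanced, target)            # stand-in for a feature loss
    color = mse(blur(enhanced), blur(target))  # colors compared after blurring
    return w_content * content + w_color * color + w_texture * texture_score

enhanced = [0.2, 0.5, 0.9, 0.4]    # toy 1-D stand-ins for images
target = [0.25, 0.45, 0.95, 0.35]
loss = composite_loss(enhanced, target, texture_score=0.3)
print(round(loss, 4))
```

Blurring before the color comparison is what lets the color term ignore fine texture, which is judged separately by the adversarial component.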
Source identification for mobile devices, based on wavelet transforms combined with sensor imperfections
One of the most relevant applications of digital image forensics is to accurately identify the device used to take a given set of images, a problem called source identification. This paper reviews recent developments in the field and proposes combining two techniques (Sensor Imperfections and Wavelet Transforms) to improve source identification for images generated with mobile devices. Our results show that Sensor Imperfections and Wavelet Transforms can jointly serve as good forensic features for tracing the source camera of images produced by mobile phones. Furthermore, the proposed model can also determine both the brand and model of the device with high precision
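The sensor-imperfection idea can be sketched in PRNU style (an assumption for illustration: the moving-average denoiser below stands in for the wavelet-based denoising, and all data is synthetic). Each camera leaves a fixed noise pattern in its images; averaging the noise residuals of several images yields a fingerprint, and a query image is attributed by correlating its residual against that fingerprint.

```python
# Toy sensor-fingerprint source identification on synthetic 1-D "images".
# ASSUMPTIONS: a 3-tap moving average replaces the wavelet denoiser; the
# fixed sensor pattern, scenes, and shot counts are all made up.
import math, random

def denoise(img):
    out = []
    for i in range(len(img)):
        window = img[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def noise_residual(img):
    return [x - d for x, d in zip(img, denoise(img))]

def fingerprint(images):
    # average residuals from one camera: scene content averages out,
    # the fixed sensor pattern remains
    residuals = [noise_residual(im) for im in images]
    return [sum(col) / len(residuals) for col in zip(*residuals)]

def correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

random.seed(0)
N = 64
pattern = [random.uniform(-0.05, 0.05) for _ in range(N)]  # fixed sensor noise

def shot():
    # smooth synthetic scene plus this camera's fixed noise pattern
    phase = random.uniform(0, 2 * math.pi)
    return [0.5 + 0.4 * math.sin(2 * math.pi * i / N + phase) + pattern[i]
            for i in range(N)]

fp = fingerprint([shot() for _ in range(20)])
score = correlation(noise_residual(shot()), fp)
print(round(score, 3))  # close to 1 when the query matches the camera
```

An image from a different camera would carry a different pattern, so its residual would correlate near zero with this fingerprint.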
WESPE: Weakly Supervised Photo Enhancer for Digital Cameras
Low-end and compact mobile cameras demonstrate limited photo quality mainly
due to space, hardware and budget constraints. In this work, we propose a deep
learning solution that translates photos taken by cameras with limited
capabilities into DSLR-quality photos automatically. We tackle this problem by
introducing a weakly supervised photo enhancer (WESPE) - a novel image-to-image
Generative Adversarial Network-based architecture. The proposed model is
trained under weak supervision: unlike previous works, there is no need for
strong supervision in the form of a large annotated dataset of aligned
original/enhanced photo pairs. The sole requirement is two distinct datasets:
one from the source camera, and one composed of arbitrary high-quality images
that can be generally crawled from the Internet - the visual content they
exhibit may be unrelated. Hence, our solution is repeatable for any camera:
collecting the data and training can be achieved in a couple of hours. In this
work, we emphasize extensive evaluation of the obtained results. Besides
standard objective metrics and subjective user study, we train a virtual rater
in the form of a separate CNN that mimics human raters on Flickr data and use
this network to get reference scores for both original and enhanced photos. Our
experiments on the DPED, KITTI and Cityscapes datasets as well as pictures from
several generations of smartphones demonstrate that WESPE produces qualitative
results comparable to or better than those of state-of-the-art strongly supervised
methods
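The weak-supervision idea can be illustrated with a deliberately tiny toy (the one-parameter gain "generator" and mean-brightness "discriminator" below are stand-ins for WESPE's CNNs; the point is only that two unpaired datasets suffice, with no aligned before/after photo pairs):

```python
# Toy sketch of WESPE-style weak supervision. ASSUMPTIONS: the real system
# trains a CNN generator against adversarial CNN discriminators; here a
# one-parameter gain "generator" and a brightness-statistics "discriminator"
# stand in, and both datasets are synthetic and unpaired.
import random
random.seed(1)

source = [[random.uniform(0.0, 0.5) for _ in range(32)] for _ in range(50)]   # dark low-end shots
quality = [[random.uniform(0.3, 0.9) for _ in range(32)] for _ in range(50)]  # crawled high-quality set

def generator(img, gain):
    return [min(1.0, p * gain) for p in img]

def discriminator_score(img, reference_mean):
    # toy "discriminator": distance of the image's statistics from the
    # high-quality set's statistics (lower is better for the generator)
    return (sum(img) / len(img) - reference_mean) ** 2

ref_mean = sum(sum(im) / len(im) for im in quality) / len(quality)

# "Train" the generator by grid search over its single parameter; aligned
# input/output pairs are never required, only the two datasets above.
best_gain = min((g / 10 for g in range(10, 41)),
                key=lambda g: sum(discriminator_score(generator(im, g), ref_mean)
                                  for im in source))
print(best_gain)
```

Because the objective only compares statistics of the enhanced output against the high-quality set, any camera can be supported by simply collecting a fresh source dataset.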