CNN-based fast source device identification
Source identification is an important topic in image forensics, since it
allows the origin of an image to be traced back. This information is precious
for claiming intellectual property, but also for revealing the authors of
illicit materials. In this paper we address the problem of device
identification based on sensor noise and propose a fast and accurate solution
using convolutional neural networks (CNNs). Specifically, we propose a
2-channel-based CNN that learns how to compare camera fingerprint and image
noise at patch level. The proposed solution turns out to be much faster than
the conventional approach while ensuring increased accuracy. This makes the
approach particularly suitable in scenarios where large databases of images are
analyzed, as happens over social networks. In this vein, since images uploaded
to social media usually undergo at least two compression stages, we also
investigate double JPEG compressed images, consistently reporting higher
accuracy than standard approaches.
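The 2-channel CNN itself is not specified in the abstract, but the "conventional approach" it is compared against is the standard correlation test between a camera fingerprint and an image noise residual. As a minimal sketch (all arrays and coefficients below are synthetic, illustrative values, not data from the paper), the patch-level test can be written as a normalized cross-correlation:

```python
import numpy as np

def ncc(fingerprint: np.ndarray, residual: np.ndarray) -> float:
    """Normalized cross-correlation between a camera-fingerprint patch
    and a noise-residual patch (the conventional PRNU matching test)."""
    f = fingerprint - fingerprint.mean()
    r = residual - residual.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(r)
    return float((f * r).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
K = rng.standard_normal((64, 64))                          # hypothetical fingerprint patch
W_match = 0.1 * K + 0.05 * rng.standard_normal((64, 64))   # residual that contains K
W_other = rng.standard_normal((64, 64))                    # residual from another device

print(ncc(K, W_match) > ncc(K, W_other))  # → True: the matching patch correlates more
```

A learned 2-channel comparator replaces this fixed statistic with a CNN that takes the fingerprint patch and the residual patch as two input channels and outputs a match score.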
Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision
Convolutional Neural Networks (CNNs) have proved very accurate in multiple
computer vision image classification tasks that required visual inspection in
the past (e.g., object recognition, face detection, etc.). Motivated by these
astonishing results, researchers have also started using CNNs to cope with
image forensic problems (e.g., camera model identification, tampering
detection, etc.). However, in computer vision, image classification methods
typically rely on visual cues easily detectable by human eyes. Conversely,
forensic solutions rely on almost invisible traces that are often very subtle
and lie in the fine details of the image under analysis. For this reason,
training a CNN to solve a forensic task requires some special care, as common
processing operations (e.g., resampling, compression, etc.) can strongly hinder
forensic traces. In this work, we focus on the effect that JPEG has on CNN
training considering different computer vision and forensic image
classification problems. Specifically, we consider the issues that arise from
JPEG compression and from misalignment of the JPEG grid. We show that these
effects must be taken into account when generating a training dataset, in
order to properly train a forensic detector without losing generalization
capability, whereas they can almost entirely be ignored for computer vision
tasks.
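One practical consequence is in how training patches are extracted from JPEG images: cropping at random offsets produces patches whose 8x8 JPEG blocking grid is misaligned, while restricting offsets to multiples of 8 keeps it aligned. A minimal sketch of both options (with a synthetic image, hypothetical patch size and quality) could look like this:

```python
import io
import random
import numpy as np
from PIL import Image

def jpeg_patch(img: Image.Image, size: int = 64, quality: int = 90,
               align_grid: bool = False) -> np.ndarray:
    """JPEG-compress an image in memory, then crop a training patch.
    With align_grid=False the crop offset is arbitrary, so in general
    the patch's 8x8 JPEG grid is misaligned with the patch borders."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    comp = Image.open(buf)
    max_x, max_y = comp.width - size, comp.height - size
    if align_grid:
        # offsets constrained to multiples of 8: grid-aligned patches
        x = 8 * random.randrange(max_x // 8 + 1)
        y = 8 * random.randrange(max_y // 8 + 1)
    else:
        # arbitrary offsets: grid misalignment, as happens after cropping
        x = random.randrange(max_x + 1)
        y = random.randrange(max_y + 1)
    return np.asarray(comp.crop((x, y, x + size, y + size)))

random.seed(0)
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (128, 128, 3)).astype(np.uint8))
print(jpeg_patch(img, align_grid=False).shape)  # (64, 64, 3)
```

Whether a forensic training set should mix aligned and misaligned patches (and which compression qualities to cover) is exactly the kind of design choice the paper investigates.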
Incisal apical root resorption evaluation after low-friction orthodontic treatment using two-dimensional radiographic imaging and trigonometric correction
BACKGROUND:
Root resorption should be taken into consideration during every orthodontic treatment, and it can be affected by the use of different techniques, such as the application of low-friction mechanics. However, its routine assessment on orthopantomography has limitations related to distortions and changes in dental inclination.
AIM:
The aim of this investigation was to evaluate the severity of apical root resorption of maxillary and mandibular incisors after low-friction orthodontic treatment, using the combination of panoramic and lateral radiographs, and applying a trigonometric correction.
SETTINGS AND DESIGN:
A hospital-based retrospective study at the Orthodontic Department (Dental School, University of Brescia, Spedali Civili di Brescia, Brescia, Italy).
MATERIALS AND METHODS:
Ninety-three subjects (53 females and 40 males; mean age, 14 years) with mild teeth crowding were treated without extractions by the same operator using a low-friction fixed appliance following an integrated straight wire (ISW) protocol. The pre- and post-treatment tooth lengths of the maxillary and mandibular incisors were measured on panoramic radiographs. A trigonometric correction factor for the pre-treatment length was calculated based on the difference between the pre- and post-treatment incisal inclination on lateral cephalograms.
STATISTICAL ANALYSIS:
The changes in lengths were investigated using the Student's t-test for paired values (p<0.05).
RESULTS:
Maxillary central incisors showed no changes (0.3%, 0.6%); maxillary lateral incisors showed a small increase (1.4%, 1.8%), attributed to the completion of root development in younger patients; mandibular central and lateral incisors underwent slight resorption (-3.1%, -3.4%). A statistically significant difference was found for the mandibular incisors but not for the maxillary ones.
CONCLUSION:
In patients with mild crowding, and thus a low amount of root movement, low-friction orthodontic treatment can lead to slight apical root resorption, mainly involving the lower incisors. The use of a trigonometric correction in the panoramic radiograph analysis may reduce the limitations of this 2D evaluation.
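The abstract does not give the exact correction formula, but the idea of compensating an apparent length change caused by tipping can be sketched with a simple projection model. In the sketch below the cosine-ratio form, the angle convention (tilt from the vertical), and all numeric values are assumptions for illustration, not the paper's formula or data:

```python
import math

def corrected_pre_length(pre_len_mm: float,
                         pre_tilt_deg: float,
                         post_tilt_deg: float) -> float:
    """Illustrative trigonometric correction (assumed form): rescale the
    pre-treatment length so that its projection matches the post-treatment
    inclination, removing the apparent change caused by tipping alone."""
    return pre_len_mm * math.cos(math.radians(post_tilt_deg)) \
                      / math.cos(math.radians(pre_tilt_deg))

def resorption_percent(pre_len_mm: float, post_len_mm: float,
                       pre_tilt_deg: float, post_tilt_deg: float) -> float:
    """Percentage length change after correcting for inclination change."""
    corrected = corrected_pre_length(pre_len_mm, pre_tilt_deg, post_tilt_deg)
    return 100.0 * (post_len_mm - corrected) / corrected

# hypothetical lower incisor: 23.0 mm before, 22.4 mm after treatment,
# tilt from the vertical changing from 10 to 15 degrees
print(round(resorption_percent(23.0, 22.4, 10.0, 15.0), 1))  # -0.7
```

Without the correction, the same measurements would suggest a raw change of about -2.6%; the projection model attributes part of that apparent shortening to the inclination change rather than to resorption.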
DIPPAS: A Deep Image Prior PRNU Anonymization Scheme
Source device identification is an important topic in image forensics since
it allows the origin of an image to be traced back. Its forensic counterpart is
source device anonymization, that is, masking any trace in the image that
could be used to identify the source device. A typical trace exploited for
source device identification is the Photo Response Non-Uniformity (PRNU), a
noise pattern left by the device on the acquired images. In this paper, we
devise a methodology for suppressing such a trace from natural images without
significant impact on image quality. Specifically, we turn PRNU anonymization
into an optimization problem in a Deep Image Prior (DIP) framework. In a
nutshell, a Convolutional Neural Network (CNN) acts as a generator and returns
an image that is anonymized with respect to the source PRNU while still
maintaining high visual quality. Unlike widely adopted deep learning
paradigms, our proposed CNN is not trained on a set of input-target pairs of
images. Instead, it is optimized to reconstruct the PRNU-free image from the
original image under analysis itself. This makes the approach particularly
suitable in scenarios where large heterogeneous databases are analyzed, and it
avoids any problems due to lack of generalization. Through numerical examples
on publicly available datasets, we show our methodology to be effective
compared with state-of-the-art techniques.
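To make the anonymization goal concrete, here is a naive baseline that is NOT the DIP scheme of the paper: it simply projects the fingerprint component out of the image by a least-squares fit, driving the image/fingerprint correlation to zero (all arrays below are synthetic, illustrative data). The DIP approach instead lets a CNN re-synthesize the image so that quality is preserved while the PRNU trace is suppressed.

```python
import numpy as np

def subtract_prnu(image: np.ndarray, fingerprint: np.ndarray) -> np.ndarray:
    """Naive PRNU-suppression baseline (not DIPPAS): project out the
    fingerprint component from the image via a least-squares fit,
    zeroing the image/fingerprint correlation."""
    f = fingerprint - fingerprint.mean()
    x = image - image.mean()
    alpha = (x * f).sum() / (f * f).sum()  # least-squares coefficient
    return image - alpha * f

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation, used here as a leakage measure."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
K = rng.standard_normal((32, 32))       # hypothetical device fingerprint
clean = rng.standard_normal((32, 32))   # hypothetical PRNU-free content
marked = clean + 0.2 * K                # image carrying the PRNU trace
anon = subtract_prnu(marked, K)

print(abs(corr(anon, K)) < abs(corr(marked, K)))  # → True: trace suppressed
```

The weakness of such a direct subtraction, which motivates generative approaches like DIP, is that it assumes the fingerprint enters additively and it can visibly distort textured regions; the paper's optimization balances PRNU removal against visual quality instead.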
Super-Resolution of BVOC Emission Maps Via Domain Adaptation
Enhancing the resolution of Biogenic Volatile Organic Compound (BVOC)
emission maps is a critical task in remote sensing. Recently, some
Super-Resolution (SR) methods based on Deep Learning (DL) have been proposed,
leveraging data from numerical simulations for their training process. However,
when dealing with data derived from satellite observations, the reconstruction
is particularly challenging due to the scarcity of measurements to train SR
algorithms with. In our work, we aim at super-resolving low-resolution emission
maps derived from satellite observations by leveraging the information of
emission maps obtained through numerical simulations. To do this, we combine a
SR method based on DL with Domain Adaptation (DA) techniques, harmonizing the
different aggregation strategies and spatial information used in simulated and
observed domains to ensure compatibility. We investigate the effectiveness of
DA strategies at different stages by systematically varying the number of
simulated and observed emissions used, exploring the implications of data
scarcity on the adaptation strategies. To the best of our knowledge, there are
no prior investigations of DA for the enhancement of satellite-derived BVOC
maps. Our work represents a first step toward the development of robust
strategies for the reconstruction of observed BVOC emissions.
Comment: 4 pages, 4 figures, 1 table, accepted at IEEE-IGARSS 202