
    Open Set Synthetic Image Source Attribution

    AI-generated images have become increasingly realistic and have garnered significant public attention. While synthetic images are intriguing due to their realism, they also pose an important misinformation threat. To address this threat, researchers have developed multiple algorithms to detect synthetic images and identify their source generators. However, most existing source attribution techniques are designed to operate in a closed-set scenario, i.e., they can only discriminate between known image generators. By contrast, new image-generation techniques are rapidly emerging. To contend with this, there is a great need for open-set source attribution techniques that can identify when synthetic images have originated from new, unseen generators. To address this problem, we propose a new metric-learning-based approach. Our technique works by learning transferable embeddings capable of discriminating between generators, even when they are not seen during training. An image is first assigned to a candidate generator, then accepted or rejected based on its distance in the embedding space from the known generators' learned reference points. Importantly, we identify that initializing our source attribution embedding network by pretraining it on camera identification can improve our embeddings' transferability. Through a series of experiments, we demonstrate our approach's ability to attribute the source of synthetic images in open-set scenarios.
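The accept/reject step described in the abstract can be pictured as a nearest-reference-point decision in the embedding space. A minimal sketch, not the authors' implementation; the reference vectors and the distance threshold below are hypothetical stand-ins for the learned quantities:

```python
import numpy as np

def attribute_source(embedding, references, threshold):
    """Assign an embedding to the nearest known generator's reference
    point, or reject it as coming from an unknown generator when the
    distance exceeds the threshold."""
    names = list(references)
    dists = np.array([np.linalg.norm(embedding - references[n]) for n in names])
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return "unknown", float(dists[best])
    return names[best], float(dists[best])

# Hypothetical 2-D reference points for two known generators
refs = {"gen_a": np.array([1.0, 0.0]), "gen_b": np.array([0.0, 1.0])}
```

An embedding close to a reference point is attributed to that generator; one far from every reference point is flagged as originating from an unseen generator.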

    An In-Depth Study on Open-Set Camera Model Identification

    Camera model identification refers to the problem of linking a picture to the camera model used to shoot it. As this might be an enabling factor in different forensic applications to single out possible suspects (e.g., detecting the author of child abuse or terrorist propaganda material), many accurate camera model attribution methods have been developed in the literature. One of their main drawbacks, however, is the typical closed-set assumption of the problem. This means that an investigated photograph is always assigned to one camera model within a set of known ones present during investigation, i.e., training time, and the fact that the picture can come from a completely unrelated camera model during actual testing is usually ignored. Under realistic conditions, it is not possible to assume that every picture under analysis belongs to one of the available camera models. To deal with this issue, in this paper, we present the first in-depth study on the possibility of solving the camera model identification problem in open-set scenarios. Given a photograph, we aim at detecting whether it comes from one of the known camera models of interest or from an unknown one. We compare different feature extraction algorithms and classifiers specially targeting open-set recognition. We also evaluate possible open-set training protocols that can be applied along with any open-set classifier, observing that a simple one of those alternatives obtains the best results. Thorough testing on independent datasets shows that a recently proposed convolutional neural network used as a feature extractor, paired with a properly trained open-set classifier, can solve the open-set camera model attribution problem even on small-scale image patches, improving over available state-of-the-art solutions. Comment: Published in the IEEE Access journal.
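One of the simplest open-set decision rules, thresholding a closed-set classifier's confidence, can be sketched as follows. This is an illustrative sketch, not one of the paper's evaluated classifiers; the camera model labels and the rejection threshold are hypothetical:

```python
import numpy as np

def open_set_predict(logits, labels, reject_threshold=0.9):
    """Closed-set classification with an open-set rejection option:
    predict the most likely known camera model, but return 'unknown'
    when the softmax confidence falls below the threshold."""
    z = np.exp(logits - logits.max())   # numerically stable softmax
    probs = z / z.sum()
    best = int(np.argmax(probs))
    if probs[best] < reject_threshold:
        return "unknown"
    return labels[best]
```

A confidently classified patch keeps its closed-set label, while a patch the classifier is unsure about is rejected as coming from a model outside the known set.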

    CNN-based fast source device identification

    Source identification is an important topic in image forensics, since it allows tracing back the origin of an image. This represents precious information to claim intellectual property but also to reveal the authors of illicit materials. In this paper we address the problem of device identification based on sensor noise and propose a fast and accurate solution using convolutional neural networks (CNNs). Specifically, we propose a 2-channel-based CNN that learns a way of comparing camera fingerprint and image noise at patch level. The proposed solution turns out to be much faster than the conventional approach while ensuring increased accuracy. This makes the approach particularly suitable in scenarios where large databases of images are analyzed, such as over social networks. In this vein, since images uploaded on social media usually undergo at least two compression stages, we include investigations on double JPEG compressed images, always reporting higher accuracy than standard approaches.
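For context, the conventional matching statistic that the 2-channel CNN is compared against is a normalized cross-correlation between the camera fingerprint and the image noise residual. A minimal sketch with synthetic patches, not the paper's pipeline:

```python
import numpy as np

def ncc(fingerprint, noise):
    """Normalized cross-correlation between a camera fingerprint patch
    and an image noise residual patch: +1 for a perfect match, values
    near 0 for unrelated patches."""
    f = fingerprint - fingerprint.mean()
    n = noise - noise.mean()
    return float((f * n).sum() / (np.linalg.norm(f) * np.linalg.norm(n)))
```

Computing this statistic against every fingerprint in a large database is what makes the conventional approach slow at scale, which motivates the learned patch-level comparison.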

    Dynamic pore-scale reservoir-condition imaging of reaction in carbonates using synchrotron fast tomography

    Synchrotron fast tomography was used to dynamically image the dissolution of limestone in the presence of CO2-saturated brine at reservoir conditions. Underground storage permanence is a major concern for carbon capture and storage: pumping CO2 into carbonate reservoirs has the potential to dissolve geologic seals and allow the CO2 to escape, yet the dissolution processes at reservoir conditions are poorly understood. Time-resolved experiments are therefore needed to observe and predict the nature and rate of dissolution at the pore scale. Synchrotron fast tomography is a method of taking high-resolution, time-resolved images of complex pore structures much more quickly than traditional µ-CT. The Diamond Light Source pink beam was used for the imaging; 100 scans were taken at 6.1 µm resolution over a period of 2 hours. The images were segmented, and the porosity and permeability were measured using image analysis and network extraction. Porosity increased uniformly along the length of the sample; however, the rate of increase of both porosity and permeability slowed at later times.
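The porosity measurement on a segmented tomography volume reduces to a voxel count. A minimal sketch, assuming a boolean array in which True marks pore space (the actual study also extracts a pore network for permeability, which is not shown here):

```python
import numpy as np

def porosity(segmented):
    """Porosity of a segmented micro-CT volume: the fraction of voxels
    labelled as pore (True) over the total number of voxels."""
    return float(np.count_nonzero(segmented) / segmented.size)
```

Applying this to each of the time-resolved scans gives the porosity-versus-time curve whose rate of increase the study reports as slowing at later times.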

    Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation

    The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. Therefore, the PRNU can be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images, these challenges have been successfully addressed and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames, at the macroblock level, to the PRNU estimation process. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by more than a factor of five over the conventional estimation method tailored for photos.
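The weighting scheme can be pictured as a weighted variant of the standard maximum-likelihood PRNU estimator, K = Σ wᵢ Wᵢ Iᵢ / Σ wᵢ Iᵢ², where Wᵢ are noise residuals and Iᵢ frame intensities. A minimal sketch; the per-frame weights here are hypothetical stand-ins for the values the paper derives from the decoding parameters:

```python
import numpy as np

def estimate_prnu(frames, residuals, weights):
    """Weighted PRNU estimate: each frame's noise residual contributes in
    proportion to a reliability weight (hypothetically derived from the
    decoder parameters), following K = sum(w*W*I) / sum(w*I^2)."""
    num = sum(w * r * f for f, r, w in zip(frames, residuals, weights))
    den = sum(w * f * f for f, r, w in zip(frames, residuals, weights))
    return num / den
```

With all weights equal, this reduces to the conventional photo-oriented estimator; down-weighting heavily compressed macroblocks is what improves the video case.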

    Background-oriented schlieren (BOS) for scramjet inlet-isolator investigation

    The background-oriented schlieren (BOS) technique is a recently developed non-intrusive flow diagnostic method whose capabilities have yet to be fully explored. In this paper, the BOS technique is applied to investigate the general flow-field characteristics inside a generic scramjet inlet-isolator at Mach 5. The difficulty of finding the delicate balance between measurement sensitivity and image focusing over the measurement area is demonstrated, as are the differences between the direct cross-correlation (DCC) and Fast Fourier Transform (FFT) raw-data processing algorithms. As an exploratory study of BOS capability, this paper finds that BOS is simple yet robust enough to visualize the complex flow in a scramjet inlet in hypersonic flow. However, in this case its quantitative data can be strongly affected by three-dimensionality, which obscures the density values with significant errors.
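The FFT processing route mentioned above amounts to locating the cross-correlation peak between a reference background image and its distorted counterpart. A minimal sketch for integer-pixel displacements on a single interrogation window (real BOS processing adds sub-pixel fitting and windowing, omitted here):

```python
import numpy as np

def displacement_fft(ref, img):
    """Estimate the integer pixel shift of `img` relative to `ref` via
    FFT-based circular cross-correlation."""
    ref = ref - ref.mean()
    img = img - img.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window into negative displacements
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

The field of such displacement vectors over the whole background pattern is what encodes the density gradients of the flow.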

    Visualizing Interstellar's Wormhole

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie. (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie. (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres. (iv) Implementing this map, for example in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole. (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees, which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole. (vi) Using the student's implementation, exploring the wormhole's Einstein ring, and particularly the peculiar motions of star images near the ring; and exploring what it looks like to travel through a wormhole. Comment: 14 pages and 13 figures. In press at American Journal of Physics. Minor revisions; primarily insertion of a new, long reference 15 at the end of Section II.
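The embedding-diagram construction in item (ii) can be illustrated with the simplest one-parameter case. For the metric ds² = dl² + (b² + l²) dφ² (the textbook Ellis wormhole, not the movie's three-parameter metric), the circumferential radius is r(l) = sqrt(b² + l²) and the embedding height obeys dz/dl = sqrt(1 − (dr/dl)²) = b / sqrt(b² + l²), with the closed form z = b·arcsinh(l/b). A numerical sketch:

```python
import numpy as np

def embedding_profile(ell, b=1.0):
    """Embedding-diagram height z(l) for the Ellis wormhole metric
    ds^2 = dl^2 + (b^2 + l^2) dphi^2, integrated numerically from
    dz/dl = b / sqrt(b^2 + l^2) by the trapezoid rule.
    Analytic answer for comparison: z = b * arcsinh(l / b)."""
    dz = b / np.sqrt(b**2 + ell**2)
    z = np.concatenate(([0.0], np.cumsum(0.5 * (dz[1:] + dz[:-1]) * np.diff(ell))))
    return z
```

Revolving the curve (r(l), z(l)) about the z-axis produces the familiar two-funnel wormhole surface; the same integration applied to a richer r(l) handles the three-parameter geometry.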

    Investigation of adaptive optics imaging biomarkers for detecting pathological changes of the cone mosaic in patients with type 1 diabetes mellitus

    Purpose: To investigate a set of adaptive optics (AO) imaging biomarkers for assessing changes in the spatial arrangement of the cone mosaic in patients with type 1 diabetes mellitus (DM1). Methods: 16 patients with 20/20 visual acuity and a diagnosis of DM1 made 8 to 37 years earlier, together with 20 age-matched healthy volunteers, were recruited in this study. Cone density, cone spacing and Voronoi diagrams were calculated on 160x160 μm images of the cone mosaic acquired with an AO flood-illumination retinal camera at 1.5 degrees of eccentricity from the fovea along all retinal meridians. From the cone spacing measures and Voronoi diagrams, the linear dispersion index (LDi) and the heterogeneity packing index (HPi) were computed, respectively. Logistic regression analysis was conducted to discriminate DM1 patients without diabetic retinopathy from controls using the cone metrics as predictors. Results: Of the 16 DM1 patients, eight had no signs of diabetic retinopathy (noDR) and eight had mild nonproliferative diabetic retinopathy (NPDR) on fundoscopy. On average, cone density, LDi and HPi values differed significantly (P<0.05) between noDR or NPDR eyes and controls, with the differences increasing with the duration of diabetes. However, no single cone metric was sufficiently sensitive to discriminate entirely between noDR cases and controls; the complementary use of all three cone metrics in the logistic regression model achieved 100% accuracy in identifying noDR cases with respect to controls. Conclusion: The present set of AO imaging biomarkers reliably identified abnormalities in the spatial arrangement of the parafoveal cones in DM1 patients, even when no signs of diabetic retinopathy were seen on fundoscopy. Citation: Lombardo M, Parravano M, Serrao S, Ziccardi L, Giannini D, Lombardo G (2016) Investigation of Adaptive Optics Imaging Biomarkers for Detecting Pathological Changes of the Cone Mosaic in Patients with Type 1 Diabetes Mellitus. PLoS ONE 11(3): e0151380. doi:10.1371/journal.pone.0151380
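The classification step, combining several metrics into one logistic regression discriminator, can be sketched on synthetic data. This is illustrative only: the study's actual predictors are the measured cone density, LDi and HPi values, whereas the features below are randomly generated.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression fitted by gradient descent.
    X: (n, d) feature matrix (e.g., d = 3 cone metrics per eye);
    y: (n,) binary labels (e.g., 0 = control, 1 = patient)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

The point mirrored by the study's result: features that individually overlap between groups can still separate the groups well when the classifier uses them jointly.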