
    An In-Depth Study on Open-Set Camera Model Identification

    Camera model identification refers to the problem of linking a picture to the camera model used to shoot it. As this might be an enabling factor in different forensic applications to single out possible suspects (e.g., detecting the author of child abuse or terrorist propaganda material), many accurate camera model attribution methods have been developed in the literature. One of their main drawbacks, however, is the typical closed-set formulation of the problem: an investigated photograph is always assigned to one camera model within the set of known ones available during the investigation, i.e., at training time, and the possibility that the picture comes from a completely unrelated camera model at actual testing time is usually ignored. Under realistic conditions, it is not possible to assume that every picture under analysis belongs to one of the available camera models. To deal with this issue, in this paper we present the first in-depth study on the possibility of solving the camera model identification problem in open-set scenarios. Given a photograph, we aim at detecting whether it comes from one of the known camera models of interest or from an unknown one. We compare different feature extraction algorithms and classifiers specifically targeting open-set recognition. We also evaluate possible open-set training protocols that can be applied along with any open-set classifier, observing that a simple one of those alternatives obtains the best results. Thorough testing on independent datasets shows that a recently proposed convolutional neural network can be leveraged as a feature extractor, paired with a properly trained open-set classifier, to solve the open-set camera model attribution problem even on small-scale image patches, improving over state-of-the-art solutions. Comment: Published through the IEEE Access journal.
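    As a rough illustration of the open-set pipeline the abstract describes (learned features plus an open-set classifier), the following minimal Python sketch pairs a placeholder feature extractor with a one-class SVM that rejects samples far from the known camera models. The feature extractor and the synthetic data are assumptions for illustration, not the authors' actual network or dataset.

```python
# Minimal open-set attribution sketch (illustrative only).
# `extract_features` is a placeholder; the paper uses a purpose-built
# forensic CNN as the feature extractor, not shown here.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_features(patches: np.ndarray) -> np.ndarray:
    # Placeholder: flatten patches; a real system would use CNN embeddings.
    return patches.reshape(len(patches), -1)

# Synthetic training patches standing in for pictures from known models.
rng = np.random.default_rng(0)
known_patches = rng.normal(size=(200, 32, 32))

clf = OneClassSVM(nu=0.1, gamma="scale")
clf.fit(extract_features(known_patches))

# +1 -> attributed to the known set, -1 -> rejected as an unknown model.
test_patches = rng.normal(loc=3.0, size=(5, 32, 32))
print(clf.predict(extract_features(test_patches)))
```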

    Are Social Networks Watermarking Us or Are We (Unawarely) Watermarking Ourself?

    In the last decade, Social Networks (SNs) have deeply changed many aspects of society, and one of the most widespread behaviours is the sharing of pictures. However, malicious users often exploit shared pictures to create fake profiles, leading to the growth of cybercrime. With this scenario in mind, authorship attribution and verification through image watermarking techniques are becoming more and more important. In this paper, we first investigate how thirteen of the most popular SNs treat uploaded pictures, in order to identify whether the respective SNs implement any image watermarking techniques. Second, we test the robustness of several image watermarking algorithms on these thirteen SNs. Finally, we verify whether a method based on the Photo-Response Non-Uniformity (PRNU) technique, usually employed in digital forensics or image forgery detection, can be successfully used as a watermarking approach for authorship attribution and verification of pictures on SNs. The proposed method is sufficiently robust, despite the fact that pictures are often downgraded during the process of uploading to the SNs. Moreover, in comparison to conventional watermarking methods, the proposed method can successfully pass through different SNs, solving related problems such as profile linking and fake profile detection. The results of our analysis on a real dataset of 8,400 pictures show that the proposed method is more effective than other watermarking techniques and can help to address serious questions about privacy and security on SNs. Moreover, the proposed method paves the way for the definition of multi-factor online authentication mechanisms based on robust digital features.
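    To make the PRNU idea concrete, here is a minimal sketch of the standard fingerprint estimate and correlation test, assuming a simple Gaussian denoiser and synthetic images; the authors' pipeline (and production PRNU tools, which use wavelet denoising and peak-to-correlation energy) is more sophisticated.

```python
# Minimal PRNU-style fingerprint sketch (illustrative, not the paper's code).
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img: np.ndarray) -> np.ndarray:
    # Residual W = I - denoised(I); Gaussian filter stands in for a
    # proper wavelet denoiser.
    return img - gaussian_filter(img, sigma=1.0)

def estimate_fingerprint(images: list[np.ndarray]) -> np.ndarray:
    # Maximum-likelihood-style estimate: sum(W_i * I_i) / sum(I_i^2).
    num = sum(noise_residual(i) * i for i in images)
    den = sum(i * i for i in images) + 1e-8
    return num / den

def correlation(test_img: np.ndarray, fingerprint: np.ndarray) -> float:
    a = noise_residual(test_img).ravel()
    b = (fingerprint * test_img).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(1)
prnu = 0.02 * rng.normal(size=(128, 128))       # synthetic sensor pattern
def shot():                                      # smooth scene * (1 + PRNU)
    scene = gaussian_filter(rng.uniform(0.2, 0.8, (128, 128)), sigma=3)
    return scene * (1 + prnu)

K = estimate_fingerprint([shot() for _ in range(20)])
print(f"correlation with matching sensor: {correlation(shot(), K):.3f}")
```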

    A Molecule‐Based Single‐Photon Source Applied in Quantum Radiometry

    Single-photon sources (SPSs) based on quantum emitters hold promise in quantum radiometry as a metrology standard for photon fluxes at the low light level. Ideally, this requires control over the photon flux in a wide dynamic range, sub-Poissonian photon statistics, and a narrow-band emission spectrum. In this work, a monochromatic single-photon source based on an organic dye molecule is presented, whose photon flux is traceably measured to be adjustable between 144,000 and 1,320,000 photons per second at a wavelength of (785.6 ± 0.1) nm, corresponding to an optical radiant flux between 36.5 fW and 334 fW. The high purity of the single-photon stream is verified, with a second-order autocorrelation function at zero time delay below 0.1 throughout the whole range. Featuring an appropriate combination of emission properties, the molecular SPS is here applied to the calibration of a silicon Single-Photon Avalanche Detector (SPAD) against a low-noise analog silicon photodiode traceable to the primary standard for optical radiant flux (i.e., the cryogenic radiometer). Due to the narrow bandwidth of the source, corrections to the SPAD detection efficiency arising from the spectral power distribution are negligible. With this major advantage, the developed device may finally realize a low-photon-flux standard source for quantum radiometry.
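    The quoted radiant fluxes follow directly from the photon rates via the photon energy E = hc/λ; the short check below reproduces the abstract's 36.5 fW and 334 fW figures from its own photon rates and wavelength (constants from scipy).

```python
# Check: radiant flux P = photon_rate * h * c / wavelength.
from scipy.constants import h, c

wavelength = 785.6e-9  # m, from the abstract
for rate in (144_000, 1_320_000):  # photons per second, from the abstract
    power = rate * h * c / wavelength
    print(f"{rate:>9,} photons/s -> {power * 1e15:.1f} fW")
# Prints ~36.4 fW and ~333.8 fW, matching the quoted 36.5 fW and 334 fW.
```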

    EpiCollect+: linking smartphones to web applications for complex data collection projects.

    Previously, we described the development of the generic mobile phone data gathering tool EpiCollect, together with an associated web application providing two-way communication between multiple data gatherers and a project database. That software only allows data collection on the phone using a single questionnaire form tailored to the needs of the user (including a single GPS point and photo per entry), whereas many applications require a more complex structure that lets users link a series of forms in a linear or branching hierarchy, along with any number of media types accessible from smartphones and/or tablet devices (e.g., GPS, photos, videos, sound clips, and barcode scanning). A much enhanced version of EpiCollect has been developed (EpiCollect+). The individual data collection forms in EpiCollect+ provide more design complexity than the single form used in EpiCollect, and the software allows the generation of complex data collection projects through the ability to link many forms together in a linear (or branching) hierarchy. Furthermore, EpiCollect+ allows the collection of multiple media types as well as standard text fields, increased data validation, and form logic. The entire process of setting up a complex mobile phone data collection project to the specification of a user (project and form definitions) can be undertaken at the EpiCollect+ website using a simple drag-and-drop procedure, with visualisation of the gathered data using Google Maps and charts at the project website. EpiCollect+ is suitable for situations where multiple users transmit complex data by mobile phone (or other Android devices) to a single project web database, and it is already being used for a range of field projects, particularly public health projects in sub-Saharan Africa. However, many other uses can be envisaged, from education and ecology to epidemiology and citizen science.
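    To make the linked-form idea concrete, here is a small hypothetical sketch of how a branching project hierarchy might be modelled; the form names, fields, and structure are invented for illustration and do not reflect EpiCollect+'s actual project format.

```python
# Hypothetical model of a linked-form hierarchy (not EpiCollect+'s schema).
from dataclasses import dataclass, field

@dataclass
class Form:
    name: str
    fields: list[str]                                      # text, GPS, photo, ...
    children: list["Form"] = field(default_factory=list)   # branching links

# A parent "Household" form branching into per-person and per-sample forms.
project = Form("Household", ["gps_location", "photo"], [
    Form("Person", ["name", "age"], [
        Form("Sample", ["barcode", "sound_clip"]),
    ]),
])

def walk(form: Form, depth: int = 0) -> None:
    # Print the hierarchy, one level of indentation per linked form.
    print("  " * depth + f"{form.name}: {', '.join(form.fields)}")
    for child in form.children:
        walk(child, depth + 1)

walk(project)
```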

    BOMB SIGHT – Mapping WW2 Bomb Census


    Rock falls impacting railway tracks. Detection analysis through an artificial intelligence camera prototype

    During the last few years, several approaches have been proposed to improve early warning systems for managing the geological risk due to landslides where important infrastructures (such as railways, highways, pipelines, and aqueducts) are the exposed elements. In this regard, an Artificial Intelligence Camera Prototype (AiCP) for real-time monitoring has been integrated in a multisensor monitoring system devoted to rock fall detection. An abandoned limestone quarry at Acuto (central Italy) was chosen as the test site for verifying the reliability of the integrated monitoring system. A portion of jointed rock mass, with dimensions suitable for optical monitoring, was instrumented with extensometers. One meter of railway track was used as a target for fallen blocks, and a weather station was installed nearby. The main goals of the test were (i) evaluating the reliability of the AiCP and (ii) detecting rock blocks that reach the railway track with the AiCP. To this end, several experiments were carried out by throwing rock blocks onto the railway track. During these experiments, the AiCP detected the blocks and automatically transmitted an alarm signal.
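    The abstract does not detail the AiCP's detection algorithm; purely as an illustration of fixed-camera block detection over a track region of interest, the sketch below flags large foreground changes with OpenCV background subtraction. The video source, ROI, and thresholds are all invented placeholders.

```python
# Illustrative fixed-camera change detection over a track ROI (not the AiCP).
import cv2

ROI = slice(200, 280), slice(0, 640)   # hypothetical track region (rows, cols)
MIN_AREA = 500                          # px; tune for block size at this range

cap = cv2.VideoCapture("quarry_test.mp4")   # placeholder video source
back = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask restricted to the track, shadows thresholded away.
    mask = back.apply(cv2.GaussianBlur(frame, (5, 5), 0))[ROI]
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("ALARM: object detected on the track ROI")  # would notify here
cap.release()
```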

    Interoperability in IoT through the semantic profiling of objects

    The emergence of smarter and broader people-oriented IoT applications and services requires interoperability at both the data and knowledge levels. However, although some semantic IoT architectures have been proposed, achieving a high degree of interoperability requires dealing with a sea of non-integrated data scattered across vertical silos. Moreover, these architectures do not fit machine-to-machine requirements, as data annotation has no knowledge of the object interactions behind the arriving data. This paper presents a vision of how to overcome these issues. More specifically, the semantic profiling of objects through CoRE-related standards is envisaged as the key to data integration, allowing more powerful data annotation, validation, and reasoning. These are the key building blocks for the development of intelligent applications. Portuguese Science and Technology Foundation (FCT) [UID/MULTI/00631/2013].
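    Since the profiling relies on CoRE-related standards, a concrete flavour of a machine-readable object description may help: the snippet below assembles a CoRE Link Format (RFC 6690) entry of the kind a CoAP server exposes at /.well-known/core. The resource paths and attribute values are invented examples.

```python
# Building CoRE Link Format (RFC 6690) descriptions for hypothetical sensors.
def core_link(path: str, **attrs: str) -> str:
    # Produces e.g. </sensors/temp>;rt="temperature-c";if="sensor"
    parts = [f"<{path}>"] + [f'{k}="{v}"' for k, v in attrs.items()]
    return ";".join(parts)

# "if" is a Python keyword, so it is passed via an unpacked dict.
links = ",".join([
    core_link("/sensors/temp", rt="temperature-c", **{"if": "sensor"}),
    core_link("/sensors/light", rt="light-lux", **{"if": "sensor"}),
])
print(links)  # payload a CoAP server would expose at /.well-known/core
```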

    Status and Plans for the Array Control and Data Acquisition System of the Cherenkov Telescope Array

    The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the northern and the other in the southern hemisphere, containing tens of telescopes of different sizes. The CTA performance requirements and the inherent complexity associated with the operation, control, and monitoring of such a large distributed multi-telescope array lead to new challenges in the field of gamma-ray astronomy. The ACTL (array control and data acquisition) system will consist of the hardware and software necessary to control and monitor the CTA arrays, as well as to time-stamp, read out, filter, and store (at aggregated rates of a few GB/s) the scientific data. The ACTL system must be flexible enough to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with minimal personnel effort on site. One of the challenges of the system is to provide a reliable integration of the control of a large and heterogeneous set of devices. Moreover, the system is required to be able to adapt the observation schedule, on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution provides a summary of the main design choices and plans for building the ACTL system. Comment: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions at arXiv:1508.0589
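    As a toy illustration of the alert-driven rescheduling requirement (not the ACTL design), the following sketch keeps candidate observations in a priority queue and lets a transient alert such as a gamma-ray burst preempt the current targets; the target names and priority values are invented.

```python
# Toy alert-driven observation queue (illustrative; not the ACTL scheduler).
import heapq

queue = []  # (negated priority, target) so the highest priority pops first

def schedule(target: str, priority: float) -> None:
    heapq.heappush(queue, (-priority, target))

def next_observation() -> str:
    return heapq.heappop(queue)[1]

schedule("Crab Nebula survey", priority=2.0)
schedule("AGN monitoring", priority=3.0)

# A science alert arrives: inject it with top priority so a sub-array
# can be re-pointed on a timescale of tens of seconds.
schedule("GRB follow-up (alert)", priority=10.0)
print(next_observation())  # -> GRB follow-up (alert)
```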