
    Optimization of decentralized quantizers in rate constrained data fusion systems


    Multimedia Protection using Content and Embedded Fingerprints

    Improved digital connectivity has made the Internet an important medium for multimedia distribution and consumption in recent years. At the same time, this increased proliferation of multimedia has raised significant challenges in secure multimedia distribution and intellectual property protection. This dissertation examines two complementary aspects of the multimedia protection problem that utilize content fingerprints and embedded collusion-resistant fingerprints. The first aspect considered is the automated identification of multimedia using content fingerprints, which is emerging as an important tool for detecting copyright violations on user-generated content websites. A content fingerprint is a compact identifier that captures robust and distinctive properties of multimedia content, which can be used for uniquely identifying the multimedia object. In this dissertation, we describe a modular framework for theoretical modeling and analysis of content fingerprinting techniques. Based on this framework, we analyze the impact of distortions in the features on the corresponding fingerprints and also consider the problem of designing a suitable quantizer for encoding the features in order to improve the identification accuracy. The interaction between the fingerprint designer and a malicious adversary seeking to evade detection is studied under a game-theoretic framework, and optimal strategies for both parties are derived. We then focus on analyzing and understanding the matching process at the fingerprint level. Models for fingerprints with different types of correlations are developed, and the identification accuracy under each model is examined. Through this analysis we obtain useful guidelines for designing practical systems and also uncover connections to other areas of research. A complementary problem considered in this dissertation concerns tracing the users responsible for unauthorized redistribution of multimedia. Collusion-resistant fingerprints, which are signals that uniquely identify the recipient, are proactively embedded in the multimedia before redistribution and can be used for identifying the malicious users. We study the problem of designing collusion-resistant fingerprints for embedding in compressed multimedia. Our study indicates that directly adapting traditional fingerprinting techniques to this new setting of compressed multimedia results in low collusion resistance. To withstand attacks, we propose an anti-collusion dithering technique for embedding fingerprints that significantly improves the collusion resistance compared to traditional fingerprints.
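
    To make the matching stage concrete, the sketch below shows one common style of content-fingerprint pipeline: features are encoded by a one-bit quantizer (the signs of random projections), and a query is identified by Hamming distance to the stored fingerprints. The projection matrix, fingerprint length, and decision threshold are illustrative assumptions, not the dissertation's actual design.

```python
# Illustrative content-fingerprint pipeline (not the dissertation's design):
# a 1-bit quantizer keeps only the sign of each random projection, and a
# query is identified by Hamming distance to the stored fingerprints.
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.standard_normal((64, 256))   # 256-dim features -> 64-bit fingerprint

def fingerprint(features: np.ndarray) -> np.ndarray:
    """Quantize features to one bit per random projection."""
    return (PROJ @ features > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def identify(query_features, database, threshold=10):
    """Return the id of the closest stored fingerprint, or None if too far."""
    q = fingerprint(query_features)
    best_id, best_fp = min(database.items(), key=lambda kv: hamming(q, kv[1]))
    return best_id if hamming(q, best_fp) <= threshold else None

# A moderately distorted copy (feature noise) should still match its original.
feat = rng.standard_normal(256)
db = {"video_42": fingerprint(feat)}
print(identify(feat + 0.3 * rng.standard_normal(256), db))  # likely "video_42"
```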

    Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition

    Backdoor attacks against supervised machine learning methods seek to modify the training samples in such a way that, at inference time, the presence of a specific pattern (trigger) in the input data causes misclassifications to a target class chosen by the adversary. Successful backdoor attacks have been presented in particular for face recognition systems based on deep neural networks (DNNs). These attacks were evaluated for identical triggers at training and inference time. However, the vulnerability to backdoor attacks in practice crucially depends on the sensitivity of the backdoored classifier to approximate trigger inputs. To assess this, we study the response of a backdoored DNN for face recognition to trigger signals that have been transformed with typical image processing operators of varying strength. Results for different kinds of geometric and color transformations suggest that, in particular, geometric misplacements and partial occlusions of the trigger limit the effectiveness of the backdoor attacks considered. Moreover, our analysis reveals that the spatial interaction of the trigger with the subject's face affects the success of the attack. Experiments with physical triggers inserted in live acquisitions validate the response of the DNN observed when triggers are inserted digitally.
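
    The evaluation idea can be sketched as follows: paste the trigger into test images at increasing geometric misplacement and measure how often the backdoored classifier still outputs the adversary's target class. The model object, base trigger position, and labels below are placeholder assumptions, not the paper's actual setup.

```python
# Hedged sketch of trigger-sensitivity evaluation; `model.predict`, the base
# trigger position, and the target label are placeholder assumptions.
import numpy as np

def paste_trigger(image, trigger, x, y):
    """Overwrite a patch of `image` (H x W x 3 array) with `trigger` at (x, y)."""
    h, w = trigger.shape[:2]
    out = image.copy()
    out[y:y + h, x:x + w] = trigger
    return out

def attack_success_rate(model, images, trigger, target_label, dx=0, dy=0):
    """Fraction of poisoned inputs classified as the adversary's target."""
    base_x, base_y = 10, 10            # position the backdoor was trained with
    hits = 0
    for img in images:
        poisoned = paste_trigger(img, trigger, base_x + dx, base_y + dy)
        hits += int(model.predict(poisoned) == target_label)
    return hits / len(images)

# Sweeping the horizontal offset probes geometric misplacement; the abstract's
# findings suggest the success rate collapses as the trigger drifts away from
# its training-time location.
# for dx in range(0, 33, 4):
#     print(dx, attack_success_rate(model, test_images, trigger, 7, dx=dx))
```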

    Investigation of LANDSAT follow-on thematic mapper spatial, radiometric and spectral resolution

    The author has identified the following significant results. Fine-resolution M7 multispectral scanner data collected during the Corn Blight Watch Experiment in 1971 served as the basis for this study. Different locations and times of year were studied. A definite improvement of 30-40 meter spatial resolution over the present LANDSAT 1 resolution and over 50-60 meter resolution was observed, using crop area mensuration as the measure. Simulation studies carried out to extrapolate the empirical results to a range of field size distributions confirmed this effect, showing the improvement to be most pronounced for field sizes of 1-4 hectares. The radiometric sensitivity study showed significant degradation of crop classification accuracy immediately upon relaxation from the nominally specified value of 0.5% noise equivalent reflectance. This was especially the case for spectrally similar data, such as data collected early in the growing season, and when attempting crop stress detection.
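
    A back-of-envelope illustration (not the study's actual simulation) of why the gain is most pronounced for 1-4 hectare fields: modelling each field as a square whose one-pixel-wide boundary strip produces mixed pixels, the fraction of the field usable as pure pixels collapses once pixel size approaches field size.

```python
# Boundary-strip approximation: the area more than one pixel inside a square
# field supplies "pure" pixels; everything else is mixed with neighbors.
import math

def pure_fraction(field_ha: float, pixel_m: float) -> float:
    side = math.sqrt(field_ha * 10_000)   # side length of the field in meters
    inner = max(side - 2 * pixel_m, 0.0)  # a strip of width `pixel_m` is mixed
    return inner ** 2 / side ** 2

for ha in (1, 2, 4, 16):
    print(f"{ha:>3} ha:  30 m -> {pure_fraction(ha, 30):.2f}   "
          f"60 m -> {pure_fraction(ha, 60):.2f}")
# 1 ha: 0.16 vs 0.00;  4 ha: 0.49 vs 0.16;  16 ha: 0.72 vs 0.49 --
# the relative benefit of finer resolution is largest for the small fields.
```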

    DeepDyve: Dynamic Verification for Deep Neural Networks

    Deep neural networks (DNNs) have become one of the enabling technologies in many safety-critical applications, e.g., autonomous driving and medical image analysis. DNN systems, however, suffer from various kinds of threats, such as adversarial example attacks and fault injection attacks. While there are many defense methods proposed against maliciously crafted inputs, solutions against faults present in the DNN system itself (e.g., in parameters and calculations) are far less explored. In this paper, we develop a novel lightweight fault-tolerant solution for DNN-based systems, namely DeepDyve, which employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification. The key to enabling such lightweight checking is that the smaller neural network only needs to produce approximate results for the original task without sacrificing much fault coverage. We develop efficient and effective architecture and task exploration techniques to achieve an optimized risk/overhead trade-off in DeepDyve. Experimental results show that DeepDyve can reduce 90% of the risk at around 10% overhead.
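
    The check-and-re-execute control flow can be sketched as below; the model objects are placeholders, and DeepDyve's actual architecture and task-exploration techniques are not reproduced, only the dynamic-verification idea.

```python
# Minimal sketch of dynamic verification: a small checker network shadows the
# main DNN, and disagreement triggers re-execution or a fail-safe path.
# `main_model`, `checker_model`, and `fail_safe` are placeholder assumptions.
def verified_predict(main_model, checker_model, x, fail_safe):
    y_main = main_model.predict(x)      # large, accurate, but fault-prone
    y_check = checker_model.predict(x)  # much smaller approximate checker
    if y_main == y_check:
        return y_main                   # consistent: accept at low cost
    # Disagreement: either the checker's approximation error or a real fault,
    # so re-execute the main model once to rule out a transient fault.
    y_retry = main_model.predict(x)
    if y_retry == y_main:
        return y_main                   # stable result: blame the checker
    return fail_safe(x)                 # unstable result: escalate safely
```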

    Highlighting dissimilarity in medical images using hierarchical clustering based segmentation (HCS).

    Tissue abnormality in a medical image is usually related to a dissimilar part of an otherwise homogeneous image. The dissimilarity may be subtle or strong depending on the medical modality and the type of abnormal tissue. Dissimilarity within an otherwise homogeneous area of an image may not always be due to tissue abnormality; it might be due to image noise or to variability within the same tissue type. Given this situation, it is almost impossible to design and implement a generic segmentation process that will consistently give a single appropriate solution under all conditions. Hence a dissimilarity-highlighting process that yields a hierarchy of segmentation results is more useful. Such a process would benefit from high-level human interaction to select the appropriate image segmentation for a particular application, because one of the capabilities of the human vision process when visualising images is its ability to visualise them at different levels of detail.

    The purpose of this thesis is to design and implement a segmentation procedure that resembles the human vision system's ability to generate multiple solutions of varying resolutions. To this end, the main objectives for this study were: (i) to design a segmentation process that would be unsupervised and completely data driven; (ii) to design a segmentation process that would automatically and consistently generate a hierarchy of segmentation results. In order to achieve these objectives, a hierarchical clustering based segmentation (HCS) process was designed and implemented. The developed HCS process partitioned the images into their constituent regions at hierarchical levels of allowable dissimilarity between the different spatially adjacent or disjoint regions. At any particular level in the hierarchy, the segmentation process clustered together all the pixels and/or regions whose mutual dissimilarity was less than or equal to the dissimilarity allowed for that level. The clustering process was designed in such a way that the merging of the clusters did not depend on the order in which the clusters were evaluated.

    The HCS process developed was used to process images of different medical modalities, and the results obtained are summarised below: (i) it was successfully used to highlight hard-to-visualise stroke-affected areas in T2-weighted MR images, confirmed by the diffusion-weighted scans of the same areas of the brain; (ii) it was used to highlight dissimilarities in MRI, CT and ultrasound images, and the results were validated by the radiologists. The process consistently produced a hierarchy of segmentation results but did not give a diagnosis; this was left for the experts, who can make use of the results and incorporate them with their own knowledge to arrive at a diagnosis. Thus the process acts as an effective computer-aided detection (CAD) tool.

    The unique features of the designed and implemented HCS process are: (i) the segmentation process is unsupervised, completely data driven, and can be applied to any medical modality with equal success, without any prior information about the image data; (ii) the merging routines can evaluate and merge spatially adjacent and disjoint similar regions and consistently give a hierarchy of segmentation results; (iii) the designed merging process can yield crisp border delineation between the regions.
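
    A toy sketch of the HCS control flow under strong simplifications (regions reduced to intensity lists, dissimilarity reduced to a difference of means): each allowable-dissimilarity level yields one segmentation, and the globally closest qualifying pair is always merged first so the outcome does not depend on evaluation order. This is only the control flow, not the thesis's actual region model.

```python
# Toy HCS: one segmentation per dissimilarity level; adjacent and disjoint
# regions are treated alike, and merging is order-independent because the
# closest qualifying pair is always merged first.
def mean(xs):
    return sum(xs) / len(xs)

def hcs(regions, levels):
    """regions: {region_id: list of pixel intensities}; levels: ascending."""
    hierarchy = []
    for level in sorted(levels):
        while True:
            ids = list(regions)
            best = None                 # (dissimilarity, id_a, id_b)
            for i, a in enumerate(ids):
                for b in ids[i + 1:]:
                    d = abs(mean(regions[a]) - mean(regions[b]))
                    if d <= level and (best is None or d < best[0]):
                        best = (d, a, b)
            if best is None:
                break                   # no pair within this level's allowance
            _, a, b = best
            regions[a] = regions[a] + regions.pop(b)
        hierarchy.append({rid: list(px) for rid, px in regions.items()})
    return hierarchy

# Example: at level 2 only the two similar regions merge; at level 50
# everything merges into a single region.
print(hcs({1: [10, 12], 2: [11], 3: [60]}, levels=[2, 50]))
```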

    Multispectral scanner data applications evaluation. Volume 1: User applications study

    A six-month systems study of earth resources surveys from satellites was conducted and is reported. SKYLAB S-192 multispectral scanner (MSS) data were used as a baseline to aid in evaluating the characteristics of future systems using satellite MSS sensors. The study took the viewpoint that overall system (sensor and processing) characteristics and parameter values should be determined largely by user requirements for automatic information extraction performance in quasi-operational earth resources surveys, the other major factor being hardware limitations imposed by state-of-the-art technology and cost. The objective was to use actual aircraft and spacecraft MSS data to outline parametrically the trade-offs between user performance requirements and hardware performance and limitations, so as to allow subsequent evaluation of the compromises which must be made in deciding what system(s) to build.