
    A Watermark-Based in-Situ Access Control Model for Image Big Data

    When large images are used for big data analysis, they pose new challenges for protecting image privacy. For example, a geographic image may consist of several sensitive areas or layers. Once uploaded to servers, the image will be accessed by diverse subjects. Traditional access control methods regulate access privileges to a single image, and their access control policies are stored on servers, which has two shortcomings: (1) fine-grained access control is not guaranteed for areas/layers within a single image that must remain secret from different roles; and (2) access control policies stored on servers are vulnerable to multiple attacks (e.g., transferring attacks). In this paper, we propose a novel watermark-based access control model in which access control policies are associated with the objects being accessed (called an in-situ model). The proposed model integrates access control policies as watermarks within images, without relying on the availability of servers or network connections. Access control for images is maintained even when images are redistributed to further subjects. Therefore, access control policies can be delivered together with image big data. Moreover, we propose a hierarchical key-role-area model for fine-grained encryption, especially for large images such as geographic maps. Extensive analysis justifies the security and performance of the proposed model.
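    The core idea above, carrying the policy inside the image itself, can be sketched with a toy least-significant-bit watermark. This is a minimal illustration only: the paper's actual scheme (hierarchical key-role-area encryption) is more involved, and the pixel layout and policy string here are assumptions, not the authors' format.

    ```python
    def embed_policy(pixels, policy):
        """Embed `policy` (bytes) into the LSBs of `pixels` (list of ints 0-255)."""
        bits = [(byte >> i) & 1 for byte in policy for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("image too small to carry the policy watermark")
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
        return out

    def extract_policy(pixels, length):
        """Recover `length` policy bytes from the pixel LSBs."""
        data = bytearray()
        for b in range(length):
            byte = 0
            for i in range(8):
                byte |= (pixels[b * 8 + i] & 1) << i
            data.append(byte)
        return bytes(data)

    # Toy 8x8 grayscale "image"; the policy travels with the pixels, so any
    # recipient of the image also receives the access-control rule.
    image = [128] * 64
    policy = b"area1:ro"                  # hypothetical per-area policy encoding
    stego = embed_policy(image, policy)
    assert extract_policy(stego, len(policy)) == policy
    ```

    Because extraction needs no server round-trip, the policy survives redistribution of the image, which is the in-situ property the abstract emphasizes.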

    Fine-grained Categorization and Dataset Bootstrapping using Deep Metric Learning with Humans in the Loop

    Existing fine-grained visual categorization methods often suffer from three challenges: lack of training data, a large number of fine-grained categories, and high intra-class vs. low inter-class variance. In this work we propose a generic iterative framework for fine-grained categorization and dataset bootstrapping that handles all three. Using deep metric learning with humans in the loop, we learn a low-dimensional feature embedding with anchor points on manifolds for each category. These anchor points capture intra-class variance while remaining discriminative between classes. In each round, images with high confidence scores from our model are sent to humans for labeling. By comparing with exemplar images, labelers mark each candidate image as either a "true positive" or a "false positive". True positives are added to our current dataset, and false positives are regarded as "hard negatives" for our metric learning model. The model is then retrained with the expanded dataset and hard negatives for the next round. To demonstrate the effectiveness of the proposed framework, we bootstrap a fine-grained flower dataset with 620 categories from Instagram images. The proposed deep metric learning scheme is evaluated on both our dataset and the CUB-200-2011 Birds dataset. Experimental evaluations show significant performance gains from dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods.
    Comment: 10 pages, 9 figures, CVPR 201
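    One bootstrapping round as described above can be sketched as follows. All names (`bootstrap_round`, the softmax-over-distances confidence, the threshold value) are illustrative assumptions, not the paper's actual model, which uses a learned deep embedding rather than fixed 2-D points.

    ```python
    import math

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def confidence(embedding, anchors):
        """Softmax over negative distances to each category's anchor point."""
        scores = {cat: math.exp(-distance(embedding, a)) for cat, a in anchors.items()}
        total = sum(scores.values())
        return {cat: s / total for cat, s in scores.items()}

    def bootstrap_round(candidates, anchors, labeler, threshold=0.6):
        accepted, hard_negatives = [], []
        for image_id, emb in candidates:
            conf = confidence(emb, anchors)
            cat, score = max(conf.items(), key=lambda kv: kv[1])
            if score < threshold:
                continue                  # not confident enough to ask a human
            if labeler(image_id, cat):    # human marks it a "true positive"
                accepted.append((image_id, cat))
            else:                         # "false positive" becomes a hard negative
                hard_negatives.append((image_id, cat))
        return accepted, hard_negatives

    anchors = {"rose": (0.0, 0.0), "tulip": (1.0, 1.0)}
    candidates = [("img1", (0.1, 0.0)), ("img2", (0.9, 1.1))]
    oracle = lambda image_id, cat: image_id == "img1"  # stand-in for a labeler
    accepted, hard = bootstrap_round(candidates, anchors, oracle)
    ```

    After each round, the accepted images expand the dataset and the hard negatives feed the next retraining of the embedding, which is the iterative loop the abstract describes.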

    Security oriented e-infrastructures supporting neurological research and clinical trials

    The neurological and wider clinical domains stand to gain greatly from the grid vision of seamless yet secure access to distributed, heterogeneous computational resources and data sets. While a wealth of clinical data exists within local, regional and national healthcare boundaries, access to and usage of these data sets demands that fine-grained security is supported and subsequently enforced. This paper explores the security challenges of the e-health domain, focusing in particular on authorization. The context of these explorations is the MRC-funded VOTES project (Virtual Organisations for Trials and Epidemiological Studies) and the JISC-funded GLASS project (Glasgow early adoption of Shibboleth), which are developing Grid infrastructures for clinical trials with case studies in the brain trauma domain.

    Using shared-data localization to reduce the cost of inspector-execution in unified-parallel-C programs

    Programs written in the Unified Parallel C (UPC) language can access any location of the entire local and remote address space via read/write operations. However, UPC programs that contain fine-grained shared accesses can exhibit performance degradation. One solution is to use the inspector-executor technique to coalesce fine-grained shared accesses into larger remote access operations. A straightforward implementation of the inspector-executor transformation, however, results in excessive instrumentation that hinders performance.

    This paper addresses this issue and introduces several techniques that aim to reduce the generated instrumentation code: a shared-data localization transformation based on Constant-Stride Linear Memory Descriptors (CSLMADs), the inlining of data locality checks, and the use of an index vector to aggregate the data. Finally, the paper introduces a lightweight loop code motion transformation to privatize shared scalars that are propagated through the loop body.

    A performance evaluation, using up to 2048 cores of a POWER 775, explores the impact of each optimization and characterizes the overheads of UPC programs. It also shows that the presented optimizations speed up UPC programs by up to 1.8x over their hand-optimized UPC counterparts for applications with regular accesses, and by up to 6.3x for applications with irregular accesses.
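    The inspector-executor idea above can be sketched in a few lines. `RemoteArray` and its `bulk_get` are stand-ins for remote shared data, not UPC APIs: instead of issuing one fine-grained remote read per loop iteration, an inspector pass first records which remote indices the loop will touch, a single coalesced transfer fetches them, and the executor pass reads from the privatized local copy.

    ```python
    class RemoteArray:
        """Stand-in for shared data living on another node."""
        def __init__(self, data):
            self._data = data
            self.transfers = 0            # count remote operations

        def get(self, i):                 # fine-grained access: one transfer each
            self.transfers += 1
            return self._data[i]

        def bulk_get(self, indices):      # coalesced access: one transfer total
            self.transfers += 1
            return {i: self._data[i] for i in indices}

    def run_loop(remote, index_stream):
        # Inspector: collect the indices the loop body would access.
        needed = sorted(set(index_stream))
        # One coalesced transfer instead of len(index_stream) small ones.
        local = remote.bulk_get(needed)
        # Executor: the loop body now reads from the local copy.
        return sum(local[i] for i in index_stream)

    remote = RemoteArray(list(range(100)))
    total = run_loop(remote, [3, 7, 3, 42])
    assert total == 55 and remote.transfers == 1
    ```

    The instrumentation cost the paper targets is the inspector pass itself; its localization and inlining techniques shrink that bookkeeping when access patterns are regular.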