
    Privacy-preserving efficient searchable encryption

    Data storage and computation outsourcing to third-party managed data centers, in environments such as Cloud Computing, is increasingly being adopted by individuals, organizations, and governments. However, as cloud-based outsourcing models expand to society-critical data and services, the lack of effective and independent control over security and privacy conditions in such settings presents significant challenges. An interesting solution to these issues is to perform computations on encrypted data directly in the outsourcing servers. Such an approach avoids major data transfers and decryptions, increasing the performance and scalability of operations. Searching operations, an important application case as cloud-backed repositories increase in number and size, are good examples where security, efficiency, and precision are relevant requirements. Yet existing proposals for searching encrypted data are still limited from multiple perspectives, including usability, query expressiveness, and client-side performance and scalability. This thesis focuses on the design and evaluation of mechanisms for searching encrypted data with improved efficiency, scalability, and usability. Two particular concerns are addressed: on one hand, the thesis aims to support multiple media formats, especially text, images, and multimodal data (i.e., data combining multiple media formats simultaneously); on the other hand, it addresses client-side overhead and how it can be minimized to support client applications executing on both high-performance desktop devices and resource-constrained mobile devices.
From the research performed to address these issues, three core contributions were developed and are presented in the thesis: (i) CloudCryptoSearch, a middleware system for storing and searching text documents with privacy guarantees, which supports multiple modes of deployment (user device, local proxy, or computational cloud) and explores different tradeoffs between security, usability, and performance; (ii) a novel framework for efficiently searching encrypted images based on IES-CBIR, an Image Encryption Scheme with Content-Based Image Retrieval properties that we also propose and evaluate; and (iii) MIE, a Multimodal Indexable Encryption distributed middleware that allows storing, sharing, and searching encrypted multimodal data while minimizing client-side overhead and supporting both desktop and mobile devices.
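The core idea behind searchable encryption schemes like those above can be illustrated with a minimal sketch: the client derives deterministic keyword tokens under a secret key, so the server can maintain and query an inverted index without ever seeing plaintext keywords. This is a generic illustration, not the thesis's actual constructions; all names and keys are invented.

```python
import hashlib
import hmac
from collections import defaultdict

def token(key: bytes, keyword: str) -> bytes:
    # Deterministic search token: the server only ever sees HMAC digests,
    # never the plaintext keywords themselves.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

class EncryptedIndex:
    """Server-side inverted index mapping keyword tokens to document ids."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id: str, tokens):
        for t in tokens:
            self.postings[t].add(doc_id)

    def search(self, t: bytes):
        # Returns ids of documents indexed under this token.
        return self.postings.get(t, set())

key = b"client-secret-key"  # held only by the client, never uploaded
index = EncryptedIndex()
index.add("doc1", [token(key, w) for w in ["cloud", "privacy"]])
index.add("doc2", [token(key, w) for w in ["cloud", "images"]])

print(sorted(index.search(token(key, "cloud"))))  # both documents match
```

Real schemes add probabilistic encryption of the stored documents and defenses against access-pattern leakage; the sketch only shows the token/index interaction that keeps keyword search on the server side.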

    Content Based Image Retrieval (CBIR) in Remote Clinical Diagnosis and Healthcare

    Content-Based Image Retrieval (CBIR) locates, retrieves, and displays images similar to a given query image, using a set of visual features. It requires accessible data in medical archives and from medical equipment, from which meaning is inferred after processing. Retrieving previously analysed cases that are visually similar to a target image can aid clinicians. CBIR complements text-based retrieval and improves evidence-based diagnosis, administration, teaching, and research in healthcare. It facilitates visual/automatic diagnosis and decision-making in real-time remote consultation/screening, store-and-forward tests, home care assistance, and overall patient surveillance. Similarity metrics enable the comparison of visual data and improve diagnosis. Specially designed architectures can take advantage of the application scenario. CBIR use calls for file storage standardization, querying procedures, efficient image transmission, realistic databases, global availability, access simplicity, and Internet-based structures. This chapter highlights important and complex aspects required to handle visual content in healthcare. Comment: 28 pages, 6 figures, book chapter from "Encyclopedia of E-Health and Telemedicine"
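The retrieval loop described above can be sketched in a few lines: each image is reduced to a feature vector (here, a toy grayscale-intensity histogram) and the archive is ranked by histogram intersection against the query. Real medical CBIR systems use far richer descriptors; the pixel values and archive names below are invented purely for illustration.

```python
def histogram(pixels, bins=4, max_val=256):
    # Normalized intensity histogram: a crude stand-in for a real
    # image feature vector.
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    total = sum(h)
    return [c / total for c in h]

def intersection(h1, h2):
    # Histogram intersection: 1.0 = identical, 0.0 = disjoint.
    return sum(min(a, b) for a, b in zip(h1, h2))

# Tiny "archive" of pre-computed feature vectors (values invented).
archive = {
    "scan_a": histogram([10, 20, 30, 200, 210]),
    "scan_b": histogram([100, 110, 120, 130, 140]),
}

# Rank archived images by similarity to the query image.
query = histogram([15, 25, 35, 205, 215])
ranked = sorted(archive, key=lambda k: intersection(query, archive[k]),
                reverse=True)
print(ranked[0])  # the visually closest archived case comes first
```

The same loop generalizes directly: swap the histogram for any descriptor (texture, shape, learned embeddings) and the intersection for any similarity metric.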

    AN OBJECT-BASED MULTIMEDIA FORENSIC ANALYSIS TOOL

    With the enormous increase in the use and volume of photographs and videos, multimedia-based digital evidence now plays an increasingly fundamental role in criminal investigations. However, it is becoming time-consuming and costly for investigators to analyse this content manually. Within the research community, the focus on multimedia content has tended to be on highly specialised scenarios such as tattoo identification, number plate recognition, and child exploitation. An investigator’s ability to search multimedia data using keywords (an approach that already exists within forensic tools for character-based evidence) could provide a simple and effective way of identifying relevant imagery. This thesis proposes and demonstrates the value of a multi-algorithmic fusion approach to achieve the best image annotation performance. The results show that, of the existing systems, Imagga achieved the highest average recall at 53%, while the proposed multi-algorithmic system achieved 77% across the selected datasets. Subsequently, a novel Object-based Multimedia Forensic Analysis Tool (OM-FAT) architecture was proposed. OM-FAT automates the identification and extraction of annotation-based evidence from multimedia content. Besides making multimedia data searchable, the OM-FAT system enables investigators to perform various forensic analyses (search using annotations, metadata, object matching, text similarity, and geo-tracking) to help them understand the relationships between artefacts, thus reducing the time taken to perform an investigation and the investigator’s cognitive load. It enables investigators to ask higher-level, more abstract questions of the data, and then find answers to the essential questions in the investigation: what, who, why, how, when, and where.
The research includes a detailed illustration of the architectural requirements, engines, and complete design of the system workflow, which represents a full case management system. To highlight the ease of use and demonstrate the system’s ability to correlate multimedia artefacts, a prototype was developed. The prototype integrates the functionalities of the OM-FAT tool and demonstrates how the system would help digital investigators find pieces of evidence among a large number of images, starting from the acquisition stage and ending with the reporting stage, with less effort and in less time. Funding: The Higher Committee for Education Development in Iraq (HCED).
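The fusion idea behind the recall improvement reported above can be sketched simply: each annotation engine returns a set of tags for an image, and fusing the outputs (here, by union with a minimum-vote threshold) recovers tags that any single engine misses. The engine names, tags, and numbers below are invented for illustration and are not the thesis's actual datasets or engines.

```python
from collections import Counter

def fuse(tag_sets, min_votes=1):
    # Count how many engines proposed each tag, then keep tags that
    # reach the vote threshold (min_votes=1 is a plain union).
    votes = Counter(tag for tags in tag_sets for tag in set(tags))
    return {tag for tag, n in votes.items() if n >= min_votes}

def recall(predicted, ground_truth):
    # Fraction of ground-truth tags that were recovered.
    return len(predicted & ground_truth) / len(ground_truth)

truth = {"car", "street", "person", "dog"}   # hypothetical ground truth
engine_a = {"car", "street"}                 # outputs of three
engine_b = {"person", "car"}                 # hypothetical annotation
engine_c = {"dog", "tree"}                   # engines

fused = fuse([engine_a, engine_b, engine_c])
print(recall(engine_a, truth))  # single engine: 0.5
print(recall(fused, truth))     # fused: 1.0
```

Raising `min_votes` trades recall for precision, which is the tuning knob a fusion system of this kind would expose.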

    The Design of a Multimedia-Forensic Analysis Tool (M-FAT)

    Digital forensics has become a fundamental requirement for law enforcement due to the growing volume of cyber and computer-assisted crime. Whilst existing commercial tools have traditionally focused upon string-based analyses (e.g., regular expressions, keywords), less effort has been placed on the development of multimedia-based analyses. Within the research community, more attention has been given to the analysis of multimedia content, but the work tends to focus upon highly specialised scenarios such as tattoo identification, number plate recognition, suspect face recognition, and manual annotation of images. Given the ever-increasing volume of multimedia content, it is essential that a holistic Multimedia-Forensic Analysis Tool (M-FAT) is developed to extract, index, and analyse recovered images, and to provide an investigator with an environment in which to ask more abstract and cognitively challenging questions of the data. This paper proposes such a system, focusing upon a combination of object and facial recognition to provide a robust system. The system will enable investigators to perform a variety of forensic analyses that help reduce the time, effort, and cognitive load placed on the investigator to identify relevant evidence.

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps within the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements and technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    Semantic multimedia modelling & interpretation for annotation

    The emergence of multimedia-enabled devices, particularly the incorporation of cameras in mobile phones, together with rapid advances in low-cost storage, has drastically boosted the rate of multimedia data production. Given such ubiquity of digital images and videos, the research community has turned its attention to their effective utilization and management. Stored in vast multimedia corpora, digital data need to be retrieved and organized in an intelligent way that draws on the rich semantics involved. Making use of these image and video collections demands proficient image and video annotation and retrieval techniques. Recently, the multimedia research community has progressively shifted its emphasis towards the personalization of these media. The main impediment in image and video analysis is the semantic gap: the discrepancy between a user’s high-level interpretation of an image or video and its low-level computational interpretation. Content-based image and video annotation systems are particularly susceptible to the semantic gap due to their reliance on low-level visual features for describing semantically rich content. Visual similarity, however, is not semantic similarity, so this dilemma must be overcome in an alternative way. The semantic gap can be narrowed by incorporating high-level and user-generated information into the annotation. High-level descriptions of images and videos are better at capturing the semantic meaning of multimedia content, but it is not always possible to collect this information. It is commonly agreed that the problem of high-level semantic annotation of multimedia is still far from being solved. This dissertation puts forward approaches for intelligent multimedia semantic extraction for high-level annotation, and thereby aims to bridge the gap between visual features and semantics.
It proposes a framework for annotation enhancement and refinement for object/concept-annotated image and video datasets. The approach first purifies the datasets of noisy keywords and then expands the concepts lexically and through commonsense knowledge to fill the vocabulary and lexical gap, achieving high-level semantics for the corpus. The dissertation also explores a novel approach for propagating high-level semantics (HLS) through image corpora. HLS propagation takes advantage of semantic intensity (SI), the dominance factor of a concept in an image, together with annotation-based semantic similarity between images. An image is a combination of various concepts, some of which are more dominant than others, and the semantic similarity of two images is based on their SI values and on the semantic similarity between their concepts. Moreover, HLS propagation exploits clustering techniques to group similar images, so that a single effort by a human expert to assign high-level semantics to a randomly selected image can be propagated to the other images in the cluster. The investigation was carried out on the LabelMe image and LabelMe video datasets. Experiments show that the proposed approaches make a noticeable improvement towards bridging the semantic gap, and that the proposed system outperforms traditional systems.
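The dominance-weighted similarity described above can be sketched as follows: represent each image as a mapping from concept to its dominance score (akin to the semantic intensity idea), and score a pair of images by their shared dominance mass. This is an illustrative simplification; the dissertation's actual SI computation and concept-level similarity are richer, and all concept names and values below are invented.

```python
def si_similarity(img_a, img_b):
    # Sum the dominance mass shared by the two images' concepts:
    # high when the same concepts dominate both annotations.
    shared = set(img_a) & set(img_b)
    return sum(min(img_a[c], img_b[c]) for c in shared)

# Hypothetical annotated images: concept -> dominance score (sums to 1).
beach_1 = {"sea": 0.6, "sand": 0.3, "person": 0.1}
beach_2 = {"sea": 0.5, "sand": 0.4, "boat": 0.1}
city_1  = {"building": 0.7, "car": 0.2, "person": 0.1}

print(round(si_similarity(beach_1, beach_2), 2))  # 0.8
print(round(si_similarity(beach_1, city_1), 2))   # 0.1
```

A clustering step over such pairwise similarities is then what lets one expert-provided high-level label be propagated across a whole group of similar images.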

    Efficient Decision Support Systems

    This series is directed at the diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. The series is composed of three volumes: Volume 1 covers general concepts and the methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The books shape decision support strategies in the new infrastructure, assisting readers in making full use of the technology to manipulate input data and transform information into useful decisions for decision makers.

    Context Awareness and Discovery for Helping the Blind
