75 research outputs found

    Indexing Uncertain Categorical Data over Distributed Environment

    Today, a large amount of uncertain data is produced by applications for which traditional database management systems, including their indexing methods, are not suited. In this paper, we propose an inverted-index-based method for efficiently searching uncertain categorical data over distributed environments. We address two kinds of query over distributed uncertain databases: distributed probabilistic threshold queries, which return all results satisfying the query with probabilities that meet a probabilistic threshold requirement, and distributed top-k queries, which return results while optimizing tuple transfer and processing time.
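The abstract's first query type can be illustrated with a minimal sketch. The data model, attribute values, and the `threshold_query` helper below are hypothetical illustrations (the paper's actual index structure is distributed); the sketch only shows the core idea of an inverted index over uncertain categorical values answering a probabilistic threshold query:

```python
from collections import defaultdict

# Hypothetical uncertain relation: each tuple holds a probability
# distribution over candidate categorical values.
tuples = {
    1: [("red", 0.7), ("blue", 0.3)],
    2: [("red", 0.2), ("green", 0.8)],
    3: [("red", 0.9), ("blue", 0.1)],
}

# Inverted index: categorical value -> list of (tuple_id, probability).
index = defaultdict(list)
for tid, dist in tuples.items():
    for value, prob in dist:
        index[value].append((tid, prob))

def threshold_query(value, tau):
    """Return ids of tuples whose probability of holding `value` is >= tau."""
    return sorted(tid for tid, p in index[value] if p >= tau)

print(threshold_query("red", 0.5))  # -> [1, 3]
```

In a distributed setting, each site would hold such an index for its partition and ship only the qualifying (tuple id, probability) pairs, which is what makes the threshold pruning attractive.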

    Personal information privacy: what's next?

    Following recent events, user privacy has become a main focus for all technological and data-holding companies, owing to the global interest in protecting personal information. Regulations like the General Data Protection Regulation (GDPR) set firm laws and penalties around the handling and misuse of user data. These privacy rules apply regardless of the data structure, whether structured or unstructured. In this work, we summarize the available algorithms for providing privacy in structured data and analyze the popular tools that handle privacy in textual data, namely medical data. We find that although these tools provide adequate results in de-identifying medical records by removing personal identifiers (HIPAA PHI), they fall short of generalizing to non-medical fields. In addition, the metrics used to measure the performance of these privacy algorithms do not take into account the differing significance of each identifier. Finally, we propose the concept of a domain-independent, adaptable system that learns the significance of terms in a given text, in terms of person identifiability and text utility, and can then provide metrics to help find a balance between user privacy and data usability.
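The abstract's critique of uniform metrics can be made concrete with a small sketch. The identifier types and weights below are invented for illustration; the sketch shows one way a recall-style metric could weight identifiers by significance instead of counting them equally:

```python
# Hypothetical significance weights per identifier type: missing a NAME
# is treated as more damaging than missing a DATE.
weights = {"NAME": 1.0, "DATE": 0.4, "CITY": 0.6}

def weighted_recall(missed, found):
    """Recall where each identifier counts by its significance weight,
    rather than every identifier counting as 1."""
    tp = sum(weights[t] for t in found)
    fn = sum(weights[t] for t in missed)
    return tp / (tp + fn) if tp + fn else 1.0

# Missing one DATE hurts far less than the unweighted recall of 2/3.
print(weighted_recall(missed=["DATE"], found=["NAME", "CITY"]))
```

Under this weighting, a de-identifier that catches the name and city but misses a date scores about 0.8, whereas unweighted recall would report 2/3 regardless of which identifier was missed.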

    Privacy-Aware Web Service Protocol Replaceability

    ISBN: 0-7695-2924-0. Business protocols are becoming a necessary part of Web service descriptions. Many works investigate mechanisms for analyzing the compatibility and substitution (i.e., replaceability) of Web services based on their functional properties. In this paper, we focus on replaceability analysis. Whether a service can replace another depends not only on their functional properties but also on non-functional requirements (e.g., privacy policies). We propose a privacy-aware protocol replaceability approach that extends earlier work on Web service replaceability with privacy properties. We introduce a rule-based privacy model and extend business protocols, leading to what we call Private Business Protocols. Finally, we discuss a replaceability analysis of private business protocols. We mainly investigate compatibility issues, that is, whether one private business protocol can support the same set of conversations while respecting the privacy requirements.
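The notion of "supporting the same set of conversations" can be sketched with a toy check. Business protocols are commonly modeled as state machines; the encoding, state names, and messages below are hypothetical, and the sketch covers only the functional half of the analysis (a simulation check), not the paper's privacy extension:

```python
# Hypothetical encoding of a business protocol as a deterministic
# state machine: transitions[state][message] -> next state.
def can_replace(proto_a, proto_b, start_a="s0", start_b="s0"):
    """Return True if proto_b supports every conversation of proto_a,
    i.e. proto_b simulates proto_a step by step."""
    seen = set()
    stack = [(start_a, start_b)]
    while stack:
        sa, sb = stack.pop()
        if (sa, sb) in seen:
            continue
        seen.add((sa, sb))
        for msg, na in proto_a.get(sa, {}).items():
            if msg not in proto_b.get(sb, {}):
                return False  # proto_b cannot accept this message here
            stack.append((na, proto_b[sb][msg]))
    return True

login_only = {"s0": {"login": "s1"}, "s1": {"quit": "s2"}}
richer = {"s0": {"login": "s1"}, "s1": {"browse": "s1", "quit": "s2"}}
print(can_replace(login_only, richer))  # True: richer handles every conversation
print(can_replace(richer, login_only))  # False: login_only cannot handle "browse"
```

The paper's contribution is to refuse such a replacement when the candidate's privacy policy is weaker, even if this functional check succeeds.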

    Towards Big Data in Medical Imaging

    We present our vision for a big medical imaging platform to improve medical diagnosis. We aim to link multi-scale and multimodal images through open data and ontologies to discover new correlations and scientific knowledge. The platform is based on CIRRUS, a Sorbonne-Paris-Cité private cloud for research.

    PREDON Scientific Data Preservation 2014

    LPSC14037. Scientific data collected with modern sensors or dedicated detectors very often exceed the perimeter of the initial scientific design. These data are obtained more and more frequently with large material and human effort. A large class of scientific experiments is in fact unique because of its scale, with very small chances of being repeated or superseded by new experiments in the same domain: for instance, high-energy physics and astrophysics experiments involve multi-annual developments, and simply duplicating the effort to reproduce old data is not affordable. Other scientific experiments are unique by nature (earth science, medical sciences, etc.), since the collected data are time-stamped and thereby non-reproducible by new experiments or observations. In addition, scientific data collection has increased dramatically in recent years, contributing to the so-called "data deluge" and calling for common reflection in the context of "big data" investigations. The new knowledge obtained using these data should be preserved for the long term, so that access and re-use remain possible and enhance the initial investment. Data observatories, based on open-access policies and coupled with multi-disciplinary techniques for indexing and mining, may lead to truly new paradigms in science. It is therefore of utmost importance to pursue a coherent and vigorous approach to preserving scientific data in the long term. Preservation nevertheless remains a challenge due to the complexity of data structures, the fragility of custom-made software environments, and the lack of rigorous approaches to workflows and algorithms. To address this challenge, the PREDON project was initiated in France in 2012 within the MASTODONS program, a Big Data scientific challenge initiated and supported by the Interdisciplinary Mission of the National Centre for Scientific Research (CNRS).
    PREDON is a study group formed by researchers from different disciplines and institutes. Several meetings and workshops led to a rich exchange of ideas, paradigms and methods. The present document includes contributions of the participants to the PREDON Study Group, as well as invited papers, related to the scientific case, methodology and technology. This document should be read as a "fact-finding" resource pointing to a concrete and significant scientific interest in long-term research data preservation, as well as to cutting-edge methods and technologies for achieving this goal. A sustained, coherent and long-term action in the area of scientific data preservation would be highly beneficial.

    Semantic and Privacy-Aware Methods for Data Access


    A factorisation model of robotic tasks

    The implementation of programs that give a robot the ability to perform a non-repetitive task (a task not completely defined, also called unexpected) has been hindered by a complex problem: the difficulty the classical (procedural) programming method has in formulating a task whose evolution model does not follow a pre-established algorithmic design. The aim of this paper is therefore to propose an analysis approach that relies on a mechanism of factorisation of the complex task. The idea developed consists of subdividing the programming activity into two steps: a descriptive step, which allows the formulation of a complex task using a functional approach without integrating any element of the construction of an executable program, and a constructive step, which develops a program from the preceding formulation. This program expresses, more or less explicitly, the way of solving the different problems posed by the execution of the task at the level of a robot. Time is introduced as a logical form in the last step for the sequencing of actions while executing a task.

    A task memory network: an approach for composite Web services selection

    The typical activities of a Web service composition framework include: defining the structure of a composite Web service; selecting actual Web services to bind to a composite Web service; orchestrating the execution of a composite Web service; and handling runtime exceptions. A large body of research exists in the general area of Web service discovery, selection and composition. Generally, these approaches rely on the description of matching techniques (e.g., whether descriptions of services and requests are compatible). Descriptions refer to meta-data such as service capabilities and non-functional properties (e.g., quality-of-service properties). In this dissertation, we present an approach complementary to all classical Web service selection approaches. We first propose to add, on top of Web service composition frameworks, knowledge about how the activity of Web service selection was carried out in the past, and then to use this knowledge to improve the effectiveness of the activity itself. For this purpose, we introduce the concept of task memory. A task memory represents knowledge about the selected service configurations and the context in which these configurations were considered most appropriate by users. Leveraging the emerging wave of Web 2.0 innovations, which promote a new paradigm in which both providers and end-users (including non-expert users) can easily and freely share information and services over the Web, we introduce the concept of task forum. Essentially, a task forum provides a means to collect and share domain-specific task definitions and task memories among users. This allows users to reuse and customize shared definitions instead of developing task definitions from scratch. In addition, a task forum uses a publish-and-subscribe interaction model to support massive service selection recommendations.
    The techniques presented in this dissertation are implemented by reusing existing service composition techniques to support the definition and execution of composite services. A new prototype, called YouMaché, was implemented to support task sharing and service selection using task forums and task memories. The implementation of this prototype relies on data services and software agents.
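The task memory idea can be sketched in a few lines. The task names, context keys, and service names below are invented for illustration, and the overlap score stands in for whatever context-matching the dissertation actually uses; the sketch only shows how past selections keyed by context could drive a new selection:

```python
# Hypothetical task memory: past service selections, each recorded with
# the task and the context in which users found the selection appropriate.
memory = [
    {"task": "book-trip", "context": {"budget": "low"},
     "services": ["CheapAir", "HostelFinder"]},
    {"task": "book-trip", "context": {"budget": "high"},
     "services": ["LuxAir", "FiveStarHotels"]},
]

def recall(task, context):
    """Return the remembered service configuration whose recorded context
    best matches the current one (simple key/value overlap score)."""
    candidates = [m for m in memory if m["task"] == task]
    if not candidates:
        return None
    score = lambda m: sum(1 for k, v in context.items()
                          if m["context"].get(k) == v)
    return max(candidates, key=score)["services"]

print(recall("book-trip", {"budget": "low"}))  # -> ['CheapAir', 'HostelFinder']
```

A task forum would then be the shared store from which such memories are fetched, so a new user inherits configurations that worked for others in a similar context.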

    An Immune System-Inspired Approach for Composite Web Services Reuse

    Recently, several Web service composition solutions have been proposed. However, few existing solutions address the issue of service composition reuse and specialization, i.e., how applications can be built upon existing simple or composite Web services by reuse, restriction, or extension. In this paper, we introduce the concept of an abstract composite Web service, which can be specialized into particular concrete compositions and reused in the construction of larger or extended compositions. We propose an approach inspired by immune systems, which combines structural and usage information in order to find and reify stable Web service composites through an affinity maturation process.
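The combination of structural and usage information can be sketched as a scoring step. The candidate names, scores, and the linear affinity formula below are invented for illustration and are not the paper's actual maturation process; the sketch only shows the selection idea of keeping composites whose combined affinity is high enough:

```python
# Hypothetical candidate composites with a structural-fit score and a
# usage score, both assumed to be normalized to [0, 1].
candidates = [
    {"name": "SearchThenBook", "structural": 0.9, "usage": 0.8},
    {"name": "BookThenPay",    "structural": 0.7, "usage": 0.2},
    {"name": "SearchOnly",     "structural": 0.3, "usage": 0.9},
]

def mature(pool, alpha=0.6, threshold=0.6):
    """Keep ("reify") composites whose combined affinity clears the threshold."""
    affinity = lambda c: alpha * c["structural"] + (1 - alpha) * c["usage"]
    return [c["name"] for c in pool if affinity(c) >= threshold]

print(mature(candidates))  # -> ['SearchThenBook']
```

In an immune-system reading, iterating such a step while feeding back new usage observations lets stable composites emerge while weakly supported ones are discarded.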