
    Web services and workflow management for biological resources

    BACKGROUND: The completion of the Human Genome Project has resulted in large quantities of biological data that are proving difficult to manage and integrate effectively. There is a need for a system that is able to automate access to remote sites and to "understand" the information it is managing in order to link data properly. Workflow management systems combined with Web Services are promising Information and Communication Technologies (ICT) tools. Some have already been proposed and are being increasingly applied to the biomedical domain, especially as many biology-related Web Services are now becoming available. Information on biological resources and on genomic sequence mutations are two examples of very specialized datasets that are useful for specific research domains. RESULTS: The architecture of a system that is able to access and execute predefined workflows is presented in this paper. Web Services allowing access to the IARC TP53 Mutation Database and the CABRI catalogues of biological resources have been implemented and are available on-line. Example workflows that retrieve data from these Web Services have also been created and are available on-line. CONCLUSION: We present a general architecture and some building blocks for the implementation of a system that is able to remotely execute workflows of biomedical interest, and we show how this approach can effectively produce useful outputs. The further development and implementation of Web Services allowing access to an exhaustive set of biomedical databases, and the creation of effective and useful workflows, will improve the automation of in-silico analysis.
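
    As a rough illustration of the kind of Web Service access described above, the sketch below posts a document-style SOAP request and parses the XML reply in Python. The endpoint URL, operation name, and element names are hypothetical placeholders, not the actual IARC TP53 or CABRI service interfaces.

        # Hypothetical client for a mutation-retrieval Web Service; all names are assumptions.
        import requests
        import xml.etree.ElementTree as ET

        def fetch_mutations(gene, endpoint="https://example.org/ws/tp53"):
            """POST a document-style SOAP message and return the parsed XML reply."""
            envelope = (
                '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
                "<soap:Body>"
                '<getMutationsByGene xmlns="urn:example:tp53">'
                f"<geneSymbol>{gene}</geneSymbol>"
                "</getMutationsByGene></soap:Body></soap:Envelope>"
            )
            resp = requests.post(endpoint, data=envelope,
                                 headers={"Content-Type": "text/xml; charset=utf-8"},
                                 timeout=30)
            resp.raise_for_status()
            return ET.fromstring(resp.content)  # XML tree for the next workflow step

        # Example: records = fetch_mutations("TP53")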

    A SNP-centric database for the investigation of the human genome

    BACKGROUND: Single Nucleotide Polymorphisms (SNPs) are an increasingly important tool for genetic and biomedical research. Although current genomic databases contain information on several million SNPs and are growing at a very fast rate, the true value of a SNP in this context is a function of the quality of the annotations that characterize it. Retrieving and analyzing such data for a large number of SNPs often represents a major bottleneck in the design of large-scale association studies. DESCRIPTION: SNPper is a web-based application designed to facilitate the retrieval and use of human SNPs for high-throughput research purposes. It provides a rich local database generated by combining SNP data with the Human Genome sequence and with several other data sources, and offers the user a variety of querying, visualization and data export tools. In this paper we describe the structure and organization of the SNPper database, we review the available data export and visualization options, and we describe how the architecture of SNPper and its specialized data structures support high-volume SNP analysis. CONCLUSIONS: The rich annotation database and the powerful data manipulation and presentation facilities it offers make SNPper a very useful online resource for SNP research. Its success proves the great need for integrated and interoperable resources in the field of computational biology, and shows how such systems may play a critical role in supporting the large-scale computational analysis of our genome.
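
    To convey the high-volume retrieval pattern the abstract alludes to, the following sketch fetches tab-delimited annotations for a batch of rs identifiers. The export URL and parameter names are assumptions for illustration only, not SNPper's documented interface.

        # Hypothetical batch export of SNP annotations; URL and parameters are assumptions.
        import csv
        import io
        import requests

        def fetch_snp_annotations(rsids, url="https://example.org/snpper/export"):
            """Request tab-delimited annotations for a list of rs identifiers."""
            resp = requests.get(url, params={"snps": ",".join(rsids), "format": "tsv"},
                                timeout=60)
            resp.raise_for_status()
            reader = csv.DictReader(io.StringIO(resp.text), delimiter="\t")
            return list(reader)  # one dict per SNP: position, gene, alleles, ...

        # rows = fetch_snp_annotations(["rs1042522", "rs1800562"])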

    A web services choreography scenario for interoperating bioinformatics applications

    BACKGROUND: Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate their interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interfaces are not machine-friendly, 3) they use non-standard formats for data input and output, 4) they do not exploit standards to define application interfaces and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. RESULTS: To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of Medical Subject Heading (MeSH) keywords that correlates to the input and is grouped by MeSH category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, meaning that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, the Collaxa BPEL Server, and the Taverna Workbench. The Java program functions as a web services engine and interoperates with these web services using a web services choreography language (BPEL4WS). CONCLUSION: While it is relatively straightforward to implement and publish web services, the use of web services choreography engines is still in its infancy. However, industry-wide support and the push for web services standards are quickly increasing the chance of success in using web services to unify heterogeneous bioinformatics applications. Due to the immaturity of currently available web services engines, it is still most practical to implement a simple, ad-hoc XML-based workflow by hard coding the workflow as a Java application. For advanced web service users, the Collaxa BPEL engine provides a configuration and management environment that can fully handle XML-based workflows.
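
    The hard-coded workflow approach compared in the paper amounts to chaining document-style service calls so that each reply feeds the next request. A minimal sketch of that pattern is shown below (in Python rather than Java); the three endpoints and message contents are hypothetical stand-ins for the HAPI-replicating services.

        # Hard-coded three-step workflow; endpoints and payloads are illustrative assumptions.
        import requests

        def call_service(endpoint, xml_payload):
            """POST an XML document-style message and return the XML reply as text."""
            resp = requests.post(endpoint, data=xml_payload,
                                 headers={"Content-Type": "text/xml"}, timeout=60)
            resp.raise_for_status()
            return resp.text

        def run_workflow(spot_ids_xml):
            # Step 1: map microarray spot IDs to gene/article identifiers.
            mapped = call_service("https://example.org/ws/map-spots", spot_ids_xml)
            # Step 2: retrieve MeSH keywords for those identifiers.
            keywords = call_service("https://example.org/ws/mesh-keywords", mapped)
            # Step 3: group the keywords into a MeSH-category hierarchy.
            return call_service("https://example.org/ws/mesh-hierarchy", keywords)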

    CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole-genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lower the barrier to entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. https://doi.org/10.1186/1471-2105-12-35

    Analyzing epigenomic data in a large-scale context

    While large amounts of epigenomic data are publicly available, their retrieval in a form suitable for downstream analysis is a bottleneck in current research. In a typical analysis, users are required to download huge files that span the entire genome, even if they are only interested in a small subset (e.g., promoter regions) or an aggregation thereof. Moreover, complex operations on genome-level data are not always feasible on a local computer due to resource limitations. The DeepBlue Epigenomic Data Server mitigates this issue by providing a robust server that affords a powerful API for searching, filtering, transforming, aggregating, enriching, and downloading data from several epigenomic consortia. Furthermore, its main component implements scalable data storage and manipulation methods that scale with the increasing amount of epigenetic data, making it an ideal resource for researchers who seek to integrate epigenomic data into their analysis workflows. This work also presents companion tools that use the DeepBlue API to enable users who are not proficient in scripting or programming languages to analyze epigenomic data in a user-friendly way: (i) an R/Bioconductor package that integrates DeepBlue into the R analysis workflow; the extracted data are automatically converted into suitable R data structures for downstream analysis and visualization within the Bioconductor framework; (ii) a web portal that enables users to search, select, filter and download the epigenomic data available in the DeepBlue Server; this interface provides elements such as data tables, grids and data selections that help users find the required epigenomic data in a straightforward way; (iii) DIVE, a web data analysis tool that allows researchers to perform large-scale epigenomic data analysis in a programming-free environment. DIVE enables users to compare their datasets to the datasets available in the DeepBlue Server in an intuitive interface that summarizes the comparison of hundreds of datasets in a single chart; given the large amount of data available in DIVE, it also provides methods that suggest the most similar datasets for a comparative analysis. Furthermore, these tools are integrated and can share results among themselves, creating a powerful large-scale epigenomic data analysis environment. The DeepBlue Epigenomic Data Server and its ecosystem were well received by the International Human Epigenome Consortium and have already attracted much attention from the epigenomic research community, with currently 160 registered users and more than three million anonymous workflow processing requests since release.
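
    For orientation, a script-level client for the DeepBlue API might look like the sketch below, which talks to the server over XML-RPC using only the Python standard library. The endpoint URL, the list_experiments method, its argument order, and the anonymous user key are assumptions that should be checked against the official DeepBlue documentation.

        # Sketch of an XML-RPC query against an epigenome data server; names are assumptions.
        import xmlrpc.client

        URL = "https://deepblue.mpi-inf.mpg.de/xmlrpc"  # assumed endpoint
        USER_KEY = "anonymous_key"                      # assumed anonymous-access key

        def list_h3k27ac_peak_experiments():
            server = xmlrpc.client.ServerProxy(URL, allow_none=True)
            # Assumed signature: genome, data type, epigenetic mark, biosource,
            # sample, technique, project, user key.
            status, experiments = server.list_experiments(
                "hg19", "peaks", "H3K27ac", "", "", "", "", USER_KEY)
            if status != "okay":
                raise RuntimeError(experiments)
            return experiments  # list of (id, name) pairs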

    Bioconductor: open software development for computational biology and bioinformatics.

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.

    The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    BACKGROUND: Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. RESULTS: The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. CONCLUSION: The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.
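
    The generated middleware exposes a uniform query-by-example style across all classes in the UML model. The toy sketch below illustrates that idea only; it is written in Python for brevity and is not the real caCORE API, which is Java-based.

        # Conceptual query-by-example illustration; Gene and DomainService are made-up stand-ins.
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Gene:                      # stands in for a class from the UML model
            symbol: Optional[str] = None
            chromosome: Optional[str] = None

        class DomainService:
            """Toy in-memory stand-in for the generated middleware."""
            def __init__(self, records: List[Gene]):
                self._records = records

            def search(self, example: Gene) -> List[Gene]:
                # Match every attribute the caller filled in; ignore the rest.
                criteria = {k: v for k, v in vars(example).items() if v is not None}
                return [r for r in self._records
                        if all(getattr(r, k) == v for k, v in criteria.items())]

        # service.search(Gene(symbol="TP53")) returns every TP53 record, and the
        # same call shape works for any other class generated from the model.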

    Security strategies in genomic files

    There are new mechanisms to sequence and process the genomic code, thereby enabling the discovery of diagnostic tools and treatments. The file for a sequenced genome can reach hundreds of gigabytes. Thus, for further studies, we need new means to compress the information and a standardized representation to simplify the development of new tools. The ISO standardization group MPEG has used its expertise in compressing multimedia content to compress genomic information and develop its MPEG-G standard. Given the sensitivity of the data, security is a major identified requirement. This thesis proposes novel technologies that ensure the security of both the sequenced data and its metadata. We define a container-based file format to group data, metadata, and security information at the syntactical level. It includes new features, such as grouping multiple results in the same file to simplify the transport of whole studies. We use the granularity of the encoder's output to enhance security. The information is represented in units, each dedicated to a specific region of the genome, which makes it possible to provide encryption and signature features on a per-region basis. We analyze the trade-off between security and an even more fine-grained approach and show that apparently secure settings can be insecure: if the file creator may encrypt only specific elements of a unit, cross-checking the unencrypted information makes it possible to infer the encrypted content. Most of the proposals for MPEG-G coming from other research groups and companies focused on data compression and representation; however, a solution for metadata encoding was also needed. Our proposal was included in the standard: an XML-based solution, separated into a core specification and extensions. It allows the metadata schema to be adapted to the frameworks of the different genomic repositories, without importing requirements from one framework into another. To simplify the handling of the resulting metadata, we define profiles, i.e. lists of extensions that must be present in a given framework. We use XML Signature and XML Encryption for metadata security. The MPEG requirements also concern access rules. Our privacy solutions limit the range of persons with access, and we propose access rules represented with XACML to convey under which circumstances a user is granted access to a specific action among the ones specified in MPEG-G's API, e.g. filtering data by attributes. We also specify algorithms to combine multiple rules by defining default behaviors and exceptions. The standard's security mechanisms protect the information only during transport and access. Once the data is obtained, the user could publish it. In order to identify leakers, we propose an algorithm that generates unique, virtually undetectable variations. Our solution is novel in that the marking can be undone (and the utility of the data preserved) if the corresponding secret key is revealed. We also show how to combine multiple secret keys to avoid collusion. The API retained for MPEG-G considers search criteria that are not present in the indexing tables, which highlights shortcomings. Based on the proposed MPEG-G API we have developed a solution. It is based on a collaboration framework where the different users' needs and the patient's privacy settings result in a purpose-built file format that optimizes query times and provides privacy and authenticity for the patient-defined genomic regions. The encrypted output units are created and indexed to optimize query times and avoid rarely used indexing fields. Our approach resolves the shortcomings of MPEG-G's indexing strategy. We have submitted our technologies to the MPEG standardization committee. Many have been included in the final standard, via merging with other proposals (e.g. file format), discussion (e.g. security mechanisms), or direct acceptance (e.g. privacy rules).
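
    To make the per-unit protection idea concrete, the sketch below encrypts one region-specific unit with AES-GCM (via the third-party 'cryptography' package), binding the region label to the ciphertext as associated data so that selected regions can be shared independently. The unit layout here is a made-up placeholder, not the MPEG-G bitstream syntax.

        # Illustrative per-region encryption of access units; not the MPEG-G format itself.
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_unit(key, region, payload):
            """Encrypt one region-specific unit; the region label is bound as AAD."""
            nonce = os.urandom(12)
            ciphertext = AESGCM(key).encrypt(nonce, payload, region.encode())
            return {"region": region, "nonce": nonce, "data": ciphertext}

        def decrypt_unit(key, unit):
            return AESGCM(key).decrypt(unit["nonce"], unit["data"], unit["region"].encode())

        # key = AESGCM.generate_key(bit_length=256)
        # unit = encrypt_unit(key, "chr17:7668402-7687550", b"<encoded reads>")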
