
    Semantic Modeling of Analytic-based Relationships with Direct Qualification

    Successfully modeling state- and analytics-based semantic relationships of documents enhances the representation, importance, relevance, provenance, and priority of the document. These attributes are the core elements that form the machine-based knowledge representation for documents. However, modeling document relationships that can change over time can be inelegant, limited, complex, or overly burdensome for semantic technologies. In this paper, we present Direct Qualification (DQ), an approach for modeling any semantically referenced document, concept, or named graph with results from associated applied analytics. The proposed approach supplements the traditional subject-object relationship by providing a third leg to the relationship: the qualification of how and why the relationship exists. To illustrate, we show a prototype of an event-based system with a realistic use case, applying DQ to relevancy analytics from PageRank and Hyperlink-Induced Topic Search (HITS).
    Comment: Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015)
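
    As a rough illustration of the DQ pattern described above, the sketch below models a document relationship in RDF together with a qualification node that records which analytic produced the relationship's relevancy score. It uses Python with rdflib; the dq: namespace and the property names (dq:qualifies, dq:analytic, dq:score) are hypothetical stand-ins for illustration, not the paper's actual vocabulary.

```python
# Minimal sketch of Direct Qualification (DQ) in RDF, using rdflib.
# The dq: namespace and its properties are invented for illustration.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
DQ = Namespace("http://example.org/dq#")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("dq", DQ)

# Traditional subject-object relationship: one document cites another.
doc_a, doc_b = EX.docA, EX.docB
g.add((doc_a, EX.cites, doc_b))

# DQ's "third leg": a qualification node recording how and why the
# relationship matters, here the result of a PageRank relevancy analytic.
q = BNode()
g.add((q, RDF.type, DQ.Qualification))
g.add((q, DQ.qualifies, doc_a))                 # which resource is qualified
g.add((q, DQ.analytic, Literal("PageRank")))    # how the score was derived
g.add((q, DQ.score, Literal(0.83, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```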

    A Comparative Study: Change Detection and Querying Dynamic XML Documents

    Efficient management of dynamic XML documents is a complex area of research. The changes to, and size of, an XML document over its lifetime are unbounded. Change detection is an important part of version management: it identifies the differences between successive versions of a document. Document content evolves continuously, and users want to be able to query previous versions, query the changes in documents, and retrieve a particular document version efficiently. In this paper we provide a comprehensive comparative analysis of various control schemes for change detection and for querying dynamic XML documents.
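
    To make the change-detection problem concrete, here is a toy sketch that diffs two versions of an XML document by comparing leaf-element text using only the Python standard library. The document and paths are invented, and the schemes surveyed in such studies compute far richer edit scripts (inserts, deletes, moves) than this naive comparison.

```python
# Naive change detection between two versions of an XML document.
# Real schemes produce minimal edit scripts; this only reports leaf changes,
# and assumes sibling tags are unique (duplicate tags would collide on path).
import xml.etree.ElementTree as ET

v1 = ET.fromstring("<book><title>XML</title><price>30</price></book>")
v2 = ET.fromstring("<book><title>XML</title><price>35</price></book>")

def leaves(elem, path=""):
    """Map each leaf element's path to its text content."""
    path = f"{path}/{elem.tag}"
    children = list(elem)
    if not children:
        return {path: (elem.text or "").strip()}
    result = {}
    for child in children:
        result.update(leaves(child, path))
    return result

old, new = leaves(v1), leaves(v2)
for p in sorted(old.keys() | new.keys()):
    if old.get(p) != new.get(p):
        print(f"changed: {p}: {old.get(p)!r} -> {new.get(p)!r}")
# prints: changed: /book/price: '30' -> '35'
```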

    Explanation-Based Auditing

    To comply with emerging privacy laws and regulations, it has become common for applications like electronic health record systems (EHRs) to collect access logs, which record each time a user (e.g., a hospital employee) accesses a piece of sensitive data (e.g., a patient record). Using the access log, it is easy to answer simple queries (e.g., Who accessed Alice's medical record?), but this often does not provide enough information. In addition to learning who accessed their medical records, patients will likely want to understand why each access occurred. In this paper, we introduce the problem of generating explanations for individual records in an access log. The problem is motivated by user-centric auditing applications, and it also provides a novel approach to misuse detection. We develop a framework for modeling explanations which is based on a fundamental observation: for certain classes of databases, including EHRs, the reason for most data accesses can be inferred from data stored elsewhere in the database. For example, if Alice has an appointment with Dr. Dave, this information is stored in the database, and it explains why Dr. Dave looked at Alice's record. Large numbers of data accesses can be explained using general forms called explanation templates. Rather than requiring an administrator to manually specify explanation templates, we propose a set of algorithms for automatically discovering frequent templates from the database (i.e., those that explain a large number of accesses). We also propose techniques for inferring collaborative user groups, which can be used to enhance the quality of the discovered explanations. Finally, we have evaluated our proposed techniques using an access log and data from the University of Michigan Health System. Our results demonstrate that in practice we can provide explanations for over 94% of data accesses in the log.
    Comment: VLDB201
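
    The core idea of an explanation template can be sketched in a few lines: an access is explained when other data in the database connects the accessing user to the accessed patient. The snippet below is a minimal sketch under invented table and field names (an "appointments" table with "doctor" and "patient" fields); the paper's contribution is discovering such templates automatically rather than hand-coding them as done here.

```python
# Sketch of one explanation template:
#   access(user, patient) is explained by appointment(user, patient).
# All table/field names and records are invented for illustration.
access_log = [
    {"user": "dr_dave", "patient": "alice"},
    {"user": "dr_erin", "patient": "alice"},
]
appointments = [
    {"doctor": "dr_dave", "patient": "alice"},
]

def explain(access):
    """Instantiate the template against the appointments table."""
    for appt in appointments:
        if appt["doctor"] == access["user"] and appt["patient"] == access["patient"]:
            return f"{access['user']} had an appointment with {access['patient']}"
    return None

for entry in access_log:
    print(entry, "->", explain(entry) or "unexplained (flag for review)")
# dr_dave's access is explained; dr_erin's is flagged for possible misuse.
```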

    The future of social is personal: the potential of the personal data store

    This chapter argues that technical architectures that facilitate the longitudinal, decentralised, and individual-centric collection and curation of personal data will be an important, but partial, response to the pressing problem of the autonomy of the data subject, and to the asymmetry of power between the subject and large-scale service providers/data consumers. To frame the scope and role of such Personal Data Stores (PDSes), the legalistic notion of personal data is examined, and it is argued that a more inclusive, intuitive notion expresses more accurately what individuals require in order to preserve their autonomy in a data-driven world of large aggregators. Six challenges to realising the PDS vision are set out: the requirement to store data for long periods; the difficulties of managing data for individuals; the need to reconsider the regulatory basis for third-party access to data; the need to comply with international data handling standards; the need to integrate privacy-enhancing technologies; and the need to future-proof data gathering against the evolution of social norms. The open experimental PDS platform INDX is introduced and described as a means of beginning to address at least some of these six challenges.

    Uniformly Integrated Database Approach for Heterogeneous Databases

    Get PDF
    The demands for more storage, scalability, and accommodation of heterogeneous data for storing, analyzing, and retrieving information are rapidly increasing in today's data-centric domains, such as cloud computing and big data analytics. These demands cannot be handled by relational database management systems (RDBMS) alone, because their strict relational model limits scalability and adaptability. Therefore, NoSQL (Not only SQL) databases, also called non-relational databases, have been introduced to complement RDBMSs and are now widely used in software development. This raises the challenge of how to transform a relational database into a non-relational one, or how to integrate the two, to meet business requirements for storage and adaptability. This paper therefore proposes a uniformly integrated database approach that integrates data extracted separately from individual relational and NoSQL database schemas. We first map the data elements in terms of their semantic meaning and structure, with the help of ontological semantic mapping and metamodeling over the extracted data. We then reconcile the structural, semantic, and syntactic diversity of each database schema and produce an integrated database. To demonstrate the efficiency and usefulness of the proposed system, we test it on popular datasets in BSON and traditional SQL formats using MongoDB and MySQL. Compared with other proficient contemporary approaches, we achieve significantly better mapping-similarity results, while running time and retrieval time remain competitive.
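
    As a rough sketch of the schema-matching step, the snippet below aligns fields of an invented MongoDB (BSON) document with columns of an invented MySQL table using plain string similarity. This is only a stand-in for the paper's ontological semantic mapping and metamodeling, but it shows the shape of the field-to-column mapping such an approach produces.

```python
# Toy schema matching: map BSON document fields to SQL table columns by
# identifier similarity. The schemas are invented, and string similarity
# is a stand-in for the paper's ontological/semantic mapping.
from difflib import SequenceMatcher

mongo_fields = ["_id", "custName", "emailAddr", "orders"]
mysql_columns = ["id", "customer_name", "email_address", "order_count"]

def similarity(a, b):
    """Normalized edit-based similarity between two identifiers."""
    return SequenceMatcher(None, a.lower().strip("_"), b.lower()).ratio()

mapping = {}
for field in mongo_fields:
    best = max(mysql_columns, key=lambda col: similarity(field, col))
    mapping[field] = (best, round(similarity(field, best), 2))

for field, (col, score) in mapping.items():
    print(f"{field:10} -> {col:15} (similarity {score})")
```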