
    A semantic data federation engine : design, implementation & applications in educational information management

    Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 87-90). With the advent of the World Wide Web, the amount of digital information in the world has increased exponentially. The ability to organize this deluge of data, retrieve it, and combine it with other data would bring numerous benefits to organizations that rely on the analysis of this data for their operations. The Semantic Web encompasses various technologies that support better information organization and access. This thesis proposes a data federation engine that facilitates integration of data across distributed Semantic Web data sources while maintaining appropriate access policies. After discussing existing literature in the field, the design and implementation of the system, including its capabilities and limitations, are thoroughly described. Moreover, a possible application of the system at the Massachusetts Department of Education is explored in detail, including an investigation of the technical and nontechnical challenges associated with its adoption at a government agency. By using the federation engine, users would be able to exploit the expressivity of the Semantic Web by querying for disparate data at a single location without having to know how it is distributed or where it is stored. Among this research's contributions to the fledgling Semantic Web are an integrated system for executing SPARQL queries and an optimizer that facilitates efficient querying by exploiting statistical information about the data sources. By Mathew Sam Cherian. S.M.; S.M. in Technology and Policy.
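
    As a rough, hedged illustration of the kind of federated querying such an engine enables (not the thesis's actual implementation), the Python sketch below uses the SPARQLWrapper library to send a single SPARQL 1.1 query whose SERVICE clauses draw on two data sources; all endpoint URLs, prefixes, and predicates are hypothetical placeholders.

        # Minimal sketch of federated SPARQL querying; endpoints and vocabulary are invented.
        from SPARQLWrapper import SPARQLWrapper, JSON

        query = """
        PREFIX ex: <http://example.org/schema#>
        SELECT ?student ?name ?score WHERE {
            SERVICE <http://districts.example.org/sparql> {     # enrollment data source
                ?student ex:enrolledIn ?school ;
                         ex:name ?name .
            }
            SERVICE <http://assessments.example.org/sparql> {   # assessment data source
                ?student ex:testScore ?score .
            }
        }
        """

        endpoint = SPARQLWrapper("http://federator.example.org/sparql")  # hypothetical federation endpoint
        endpoint.setQuery(query)
        endpoint.setReturnFormat(JSON)
        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["name"]["value"], row["score"]["value"])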

    Dynamic Privacy Management In Services Based Interactions

    Technology advancements have enabled the distribution and sharing of users' personal data across several data sources. Each data source is potentially managed by a different organization, which may expose its data as a Web service. Using such Web services, the dynamic composition of atomic data items, coupled with the context in which the data is accessed, may disclose sensitive data in ways that do not comply with the user's preferences at the time of data collection. Thus, applying uniform access policies to such data can lead to privacy problems. Some fairly recent research has focused on providing solutions for dynamic privacy management. This thesis advances these techniques and fills some gaps in existing work, in particular by dynamically incorporating user access context into privacy policy decisions and their enforcement.
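
    As a purely schematic sketch of the idea of factoring access context into a privacy decision (not the thesis's actual mechanism), the snippet below evaluates a user's declared preferences against the purpose and requester role present at access time; all class names and fields are hypothetical.

        # Hypothetical sketch: a privacy decision that depends on the access context,
        # rather than on a single static, uniform policy.
        from dataclasses import dataclass

        @dataclass
        class AccessContext:
            requester_role: str   # e.g. "physician" or "marketer"
            purpose: str          # e.g. "treatment" or "advertising"

        @dataclass
        class UserPreference:
            data_item: str               # e.g. "blood_pressure"
            allowed_purposes: set[str]   # purposes the user consented to
            allowed_roles: set[str]      # roles the user consented to

        def decide(pref: UserPreference, ctx: AccessContext) -> bool:
            """Grant access only if the dynamic context matches the user's consent."""
            return ctx.purpose in pref.allowed_purposes and ctx.requester_role in pref.allowed_roles

        # The same data item is released for treatment but withheld for advertising.
        pref = UserPreference("blood_pressure", {"treatment"}, {"physician"})
        print(decide(pref, AccessContext("physician", "treatment")))   # True
        print(decide(pref, AccessContext("marketer", "advertising")))  # False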

    Challenges and Opportunities in Applying Semantics to Improve Access Control in the Field of Internet of Things

    The increasing number of IoT devices results in massive amounts of continuously generated raw data. Parts of this data are private and highly sensitive, as they reflect the owner’s behavior, obligations, habits, and preferences. In this paper, we point out that flexible and comprehensive access control policies are “a must” in the IoT domain. Semantic Web technologies can address many of the challenges that IoT access control faces today. Therefore, we analyze the current state of the art in this area and identify the challenges and opportunities for improved access control in a semantically enriched IoT environment. Applying semantics to IoT access control opens up many opportunities, such as semantic inference and reasoning, easier data sharing and data trading, new approaches to authentication, security policies based on natural language, and enhanced interoperability through a common ontology.
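
    To make the idea concrete, here is a small, purely illustrative sketch (not taken from the paper) that encodes a device-access rule as RDF triples with rdflib and checks a request with a SPARQL ASK query; the mini-ontology terms are invented for the example.

        # Illustrative only: an invented mini-ontology for semantically enriched IoT access control.
        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/iot#")
        g = Graph()
        g.bind("ex", EX)

        # Policy facts: Alice owns the thermostat, and ownership grants permission to read sensor data.
        g.add((EX.alice, EX.ownerOf, EX.thermostat))
        g.add((EX.ownerOf, EX.grantsPermission, EX.ReadSensorData))

        # Does Alice hold some relation to the thermostat that grants ReadSensorData?
        ask = """
        PREFIX ex: <http://example.org/iot#>
        ASK {
            ex:alice ?relation ex:thermostat .
            ?relation ex:grantsPermission ex:ReadSensorData .
        }
        """
        print(g.query(ask).askAnswer)  # True: access can be granted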

    ARL White Paper on Wikidata: Opportunities and Recommendations

    In this Association of Research Libraries white paper, a task force of expert Wikidata users recommends a variety of ways for librarians to use the open knowledge base to advance global discovery of their collections, faculty, and institutions. Beyond the task force, many library professionals from within and outside the Wikimedia community contributed to the white paper in draft form, offering a productive mix of enthusiasm and skepticism that improved the final product. ARL convened the task force and wrote this white paper to inform its membership about GLAM (galleries, libraries, archives, and museums) activity in Wikidata and to highlight opportunities for research library involvement, particularly in community-based collections, community-owned infrastructure, and collective collections.

    A semantic web service-based framework for generic personalization and user modeling

    [no abstract]

    Inspecting Java Program States with Semantic Web Technologies

    Semantic debugging, as introduced by Kamburjan et al., refers to the practice of applying Semantic Web technologies to query the run-time state of a program and combine it with external domain knowledge. This master's thesis aims to take the first step toward making the benefits of semantic debugging available for real-world application development. For this purpose, we implement a semantic debugging tool for the Java programming language, called the Semantic Java Debugger or sjdb. The sjdb tool provides an interactive, command-line-based user interface through which users can (1) run Java programs and suspend their execution at user-defined breakpoints, (2) automatically extract RDF knowledge bases with description logic semantics that describe the current state of the program, (3) optionally supplement the knowledge base with external domain knowledge formalized in OWL, and (4) run (semantic) queries on this extended knowledge base and resolve the query results back to Java objects. As part of this debugging tool, the development of an extraction mechanism for knowledge bases from the states of suspended Java programs is one of the main contributions of this thesis. For this purpose, we also devise an OWL formalization of Java run-time states to structure this extraction process and give meaning to the resulting knowledge base. Moreover, case studies are conducted to demonstrate the capabilities of sjdb, but also to identify its limitations as well as its response times and memory requirements.
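
    Purely as a hedged illustration of the style of query such a tool enables (the vocabulary below is invented and is not sjdb's actual OWL formalization), this Python/rdflib sketch builds a toy RDF description of a suspended program's heap and asks which objects reference a given instance through a field.

        # Invented vocabulary; sjdb's real formalization of Java run-time states differs.
        from rdflib import Graph, Namespace, Literal

        PRG = Namespace("http://example.org/program-state#")
        g = Graph()

        # Toy heap snapshot: two list nodes, one pointing at the other via its 'next' field.
        g.add((PRG.obj1, PRG.hasClass, Literal("LinkedList$Node")))
        g.add((PRG.obj2, PRG.hasClass, Literal("LinkedList$Node")))
        g.add((PRG.obj1, PRG.fieldNext, PRG.obj2))

        # Which objects reference obj2 through their 'next' field?
        q = """
        PREFIX prg: <http://example.org/program-state#>
        SELECT ?referrer WHERE { ?referrer prg:fieldNext prg:obj2 . }
        """
        for row in g.query(q):
            print(row.referrer)  # -> http://example.org/program-state#obj1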

    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Trust on the semantic web

    The Semantic Web is a vision to create a “web of knowledge”: an extension of the Web as we know it that will create an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used, the user must be able to trust that the machine has used trustworthy information at each step of the processing. The machine should therefore be able to automatically assess the credibility of each piece of information it gathers from the Web. A case study on advanced checks for website credibility is presented, and the site examined is found to be credible despite failing many of the checks described. A website with a backend based on RDF technologies is constructed; in the process, a better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks is gained. A second aim of constructing the website was to gather information for testing various trust metrics; however, the website did not gain widespread support, and therefore not enough data was gathered for this purpose. Techniques for presenting RDF data to users were also developed during website development, and these are discussed. Experiences in gathering RDF data are presented next: a scutter was successfully developed, and the data smushed to create a database in which uniquely identifiable objects were linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author and content produced by that author is presented. RDF/XML canonicalisation is discussed as a way to provide cryptographic checking of RDF graphs themselves, rather than simply checking at the document level. The notion of canonicalisation on the semantic, structural, and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties.
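
    As a hedged sketch of graph-level (rather than document-level) integrity checking in the spirit described above (this is not the thesis's restricted-dialect algorithm), the following Python snippet canonicalises an RDF graph with rdflib and hashes its canonical N-Triples serialization; two syntactically different RDF/XML documents encoding the same graph would then produce the same digest, which is what one would sign.

        # Sketch: hash a canonical form of the graph, not the raw RDF/XML bytes.
        import hashlib
        from rdflib import Graph
        from rdflib.compare import to_canonical_graph

        doc = """
        <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
          <rdf:Description rdf:about="http://example.org/article">
            <dc:creator>Alice</dc:creator>
          </rdf:Description>
        </rdf:RDF>
        """

        g = Graph().parse(data=doc, format="xml")
        nt = to_canonical_graph(g).serialize(format="nt")  # deterministic blank-node labels; rdflib >= 6 returns a str
        digest = hashlib.sha256(
            "\n".join(sorted(nt.splitlines())).encode()    # sorted triple lines give a stable byte stream
        ).hexdigest()
        print(digest)  # sign this digest instead of the RDF/XML document itself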