417 research outputs found

    Introduction to Library Trends 44 (2) Fall 1995: The Library and Undergraduate Education

    published or submitted for publication

    Reasoning & Querying – State of the Art

    Various query languages for Web and Semantic Web data, both for practical use and as an area of research in the scientific community, have emerged in recent years. At the same time, the broad adoption of the internet, where keyword search is used in many applications (e.g. search engines), has familiarized casual users with keyword queries as a way to retrieve information. Unlike this easy-to-use style of querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming to enable simple querying of semi-structured data, which is relevant e.g. in the context of the emerging Semantic Web. This article presents an overview of the field of keyword querying for XML and RDF.
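The contrast the abstract draws can be made concrete with a small, purely illustrative sketch (not any specific language from the survey): keyword queries match triples without the user knowing the schema, whereas a structured query would need the exact predicate name.

```python
# Toy keyword querying over RDF-style triples, stored as
# (subject, predicate, object) string tuples. A keyword query returns
# every triple whose components mention all of the keywords, so the user
# needs no knowledge of predicate names or graph structure.
triples = [
    ("ex:turing", "ex:bornIn", "London"),
    ("ex:turing", "ex:field", "Computer Science"),
    ("ex:hopper", "ex:bornIn", "New York"),
]

def keyword_query(keywords, data):
    """Return triples matching every keyword, case-insensitively."""
    return [
        t for t in data
        if all(any(k.lower() in part.lower() for part in t) for k in keywords)
    ]

# A structured query (e.g. SPARQL) would require knowing the predicate
# "ex:bornIn"; the keyword query does not.
print(keyword_query(["turing", "london"], triples))
# → [('ex:turing', 'ex:bornIn', 'London')]
```

The trade-off, as the article discusses, is precision: keyword matching cannot distinguish which structural role a term plays in the data.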

    Convention Says ...


    The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Background. 
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community.

Description. 
SADI – Semantic Automated Discovery and Integration – is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services “stack”, SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers.

Conclusions.
SADI Services are fully compliant with, and utilize only, foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user needs, and to automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behavior we have not observed in any other Semantic system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.
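The core of the design pattern described above - a service consumes instances of an input OWL class and produces the same individuals decorated with new properties, typed by an output OWL class - can be sketched in a toy form. All names here (`gene_info_service`, `ex:Gene`, `ex:AnnotatedGene`, the added property) are hypothetical illustrations, not the real SADI API.

```python
# Illustrative sketch of the SADI service pattern: the service does not
# return an unrelated new resource, it annotates the *input* individual.
# This is what lets clients chain services automatically, by matching one
# service's output class against another's input class.
def gene_info_service(individual):
    """Hypothetical service with input class ex:Gene, output class ex:AnnotatedGene."""
    annotated = dict(individual)               # keep the original URI and properties
    annotated["rdf:type"] = "ex:AnnotatedGene"  # re-type to the output class
    annotated["ex:organism"] = "Homo sapiens"   # hypothetical added property
    return annotated

gene = {"uri": "ex:BRCA1", "rdf:type": "ex:Gene"}
result = gene_info_service(gene)
# result keeps the input's URI ("ex:BRCA1"), so a workflow engine can
# treat input and output as statements about the same individual.
```

Because input and output describe the same individual, a planner only needs the OWL class definitions to decide which services can be composed, which is the automatic-chaining behavior the abstract describes.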

    Ontologies on the semantic web

    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The “Semantic Web” was touted by its developers as equally revolutionary but has not yet achieved anything like the Web’s exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT.

    A scrutable adaptive hypertext

    Fuelled by the popularity and uptake of the World Wide Web since the 1990s, many researchers and commercial vendors have focussed on Adaptive Hypermedia Systems as an effective mechanism for disseminating personalised information and services. Such systems store information about the user, such as their goals, interests and background, and use this to provide a personalised response. This technology has been applied in a number of contexts, such as education systems, e-commerce applications, and information search and retrieval systems. As an increasing number of systems collect and store personal information about their users to provide a personalised service, legislation around the world increasingly requires that users have access to view and modify their personal data. The spirit of such legislation is that the user should be able to understand how personal information about them is used. The literature has reported benefits of allowing users to access and understand data collected about them, particularly in the context of supporting learning through reflection. Although researchers have experimented with open user models, typically the personalisation is inscrutable: the user has little or no visibility into the adaptation process. When the adaptation produces unexpected results, the user may be left confused, with no mechanism for understanding why the system did what it did or how to correct it. This thesis is the next step, giving users the ability to see what has been personalised and why. In the context of personalised hypermedia, this thesis describes the first research to go beyond open, or even scrutable, user models; it makes the adaptivity and associated processes open to the user and controllable. The novelty of this work is that a user of an adaptive hypertext system might ask “How was this page personalised to me?” and is able to see just how their user model affected what they saw in the hypertext document.
With an understanding of the personalisation process and the ability to control it, the user is able to steer the personalisation to suit their changing needs, and to help improve the accuracy of the user model. Developing an interface to support the scrutinisation of an adaptive hypertext is difficult. Users may not scrutinise often, as it is a distraction from their main task. But when users need to scrutinise, perhaps to correct a system misconception, they need to easily find and access the scrutinisation tools. Ideally, the tools should not require any training, and users should be able to use them effectively without prior experience or after not having used them for a long time, since this is how users are likely to scrutinise in practice. The contributions of this thesis are: (1) SASY/ATML, a domain-independent, reusable framework for the creation and delivery of scrutable adaptive hypertext; (2) a toolkit of graphical tools that allow the user to scrutinise, or inspect and understand, what personalisation occurred and to control it; (3) an evaluation of the scrutinisation tools; and (4) a set of guidelines for supporting the scrutinisation of an adaptive hypertext, derived through the exploration of several forms of scrutinisation tools.
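The idea of scrutable adaptation - recording, for each adaptation decision, the user-model evidence behind it - can be sketched in a minimal toy. The rule names and explanation strings below are hypothetical illustrations, not the SASY/ATML framework itself.

```python
# Toy scrutable adaptation: each rule fires (or not) against the user
# model, and the system keeps a trace so it can answer the question
# "How was this page personalised to me?"
user_model = {"background": "novice", "interest": "security"}

# (content_id, condition over the model, human-readable explanation)
rules = [
    ("glossary_box", lambda m: m["background"] == "novice",
     "shown because your background is 'novice'"),
    ("advanced_section", lambda m: m["background"] == "expert",
     "shown because your background is 'expert'"),
]

def personalise(model):
    """Return (content_ids_shown, trace) where trace explains every decision."""
    shown, trace = [], []
    for content_id, condition, explanation in rules:
        if condition(model):
            shown.append(content_id)
            trace.append((content_id, explanation))
        else:
            trace.append((content_id, "hidden: condition not met"))
    return shown, trace

content, why = personalise(user_model)
```

Keeping the trace alongside the output is what makes the adaptation open rather than inscrutable: the same structure that drives the personalisation can be rendered back to the user, and editing the user model and re-running `personalise` is the "control" half of scrutability.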