821 research outputs found

    All about that - a URI profiling tool for monitoring and preserving linked data

    No full text
    All About That (AAT) is a URI profiling tool which allows users to monitor and preserve Linked Data in which they are interested. Its design is based on the principle of adapting ideas from hypermedia link integrity and applying them to the Semantic Web. As the Linked Data Web expands, it will become increasingly important to maintain links so that the data remains useful; the tool is therefore presented as a step towards providing this maintenance capability.
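    The abstract gives no implementation detail, so the following is only a minimal sketch of the underlying idea: periodically dereferencing the URIs a user wants to monitor and recording whether they still resolve. The function name, request headers and example watchlist are assumptions for illustration, not part of AAT.

    ```python
    # Minimal sketch of URI availability profiling for Linked Data links.
    # Not the AAT implementation; scheduling, history and reporting are omitted.
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError

    def probe_uri(uri, timeout=10):
        """Return a small status record for a single monitored URI."""
        headers = {"Accept": "application/rdf+xml, text/turtle;q=0.9, */*;q=0.1"}
        try:
            with urlopen(Request(uri, method="HEAD", headers=headers), timeout=timeout) as resp:
                return {"uri": uri, "alive": True, "status": resp.status,
                        "content_type": resp.headers.get("Content-Type")}
        except HTTPError as err:
            return {"uri": uri, "alive": False, "status": err.code}
        except URLError as err:
            return {"uri": uri, "alive": False, "error": str(err.reason)}

    if __name__ == "__main__":
        watchlist = ["http://dbpedia.org/resource/Hypertext"]  # URIs of interest to the user
        for record in map(probe_uri, watchlist):
            print(record)
    ```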

    Stigmergic hyperlinks' contributions to web search

    Get PDF
    Stigmergic hyperlinks are hyperlinks with a "heart beat": if used they stay healthy and online; if neglected, they fade, eventually getting replaced. Their life attribute is a relative usage measure that regular hyperlinks do not provide: PageRank-like measures have historically been well informed about the structure of webs of documents, but unaware of what users actually do with the links. This paper elaborates on how to input the users’ perspective into Google’s original, structure-centric PageRank metric. The discussion then bridges to the Deep Web, some search challenges, and how stigmergic hyperlinks could help decentralize the search experience, facilitating user-generated search solutions and supporting new related business models.
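    As a rough illustration of the "heart beat" idea (not the paper's actual model), the sketch below gives each link a life value that clicks reinforce and idle time decays; the attribute names, rates and threshold are invented for the example.

    ```python
    # Illustrative "life" attribute for a hyperlink: usage keeps it healthy,
    # neglect makes it fade until it becomes a candidate for replacement.
    from dataclasses import dataclass

    @dataclass
    class StigmergicLink:
        target: str
        life: float = 1.0           # relative usage measure in [0, 1]
        reinforcement: float = 0.2  # boost applied per recorded click
        decay: float = 0.05         # loss applied per idle time step

        def click(self):
            """A user followed the link."""
            self.life = min(1.0, self.life + self.reinforcement)

        def tick(self):
            """One idle time step passed without any use."""
            self.life = max(0.0, self.life - self.decay)

        @property
        def needs_replacement(self):
            return self.life <= 0.0

    link = StigmergicLink("http://example.org/some-page")
    link.click()                 # the link is used once...
    for _ in range(30):          # ...and then neglected for a while
        link.tick()
    print(link.life, link.needs_replacement)
    ```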

    A model-based approach to hypermedia design.

    Get PDF
    This paper introduces the MESH approach to hypermedia design, which combines established entity-relationship and object-oriented abstractions with proprietary concepts into a formal hypermedia data model. Uniform layout and link typing specifications can be attributed and inherited in a static node typing hierarchy, whereas both nodes and links can be submitted dynamically to multiple complementary classifications. In addition, the data model's support for a context-based navigation paradigm, as well as a platform-independent implementation framework, are briefly discussed.
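    As a purely illustrative sketch of one aspect of such a model (attributing layout and link-type specifications to node types and letting subtypes inherit or override them), the toy classes below stand in for a static node typing hierarchy; MESH itself is a formal data model, and all names here are assumptions.

    ```python
    # Toy node typing hierarchy: layout and permitted link types are attached
    # to node types and inherited or overridden further down the hierarchy.
    class NodeType:
        layout = "default.tmpl"
        link_types = {"reference"}

        @classmethod
        def allowed_links(cls):
            # Union of the link types declared anywhere up the type hierarchy.
            allowed = set()
            for base in cls.__mro__:
                allowed |= getattr(base, "link_types", set())
            return allowed

    class ArtistNode(NodeType):
        layout = "artist.tmpl"                     # overrides the inherited layout
        link_types = {"painted", "influenced-by"}  # adds domain-specific link types

    class PainterNode(ArtistNode):
        link_types = {"used-technique"}            # layout is inherited from ArtistNode

    print(PainterNode.layout)           # artist.tmpl
    print(PainterNode.allowed_links())  # union over the whole hierarchy
    ```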

    Automatic extraction of knowledge from web documents

    Get PDF
    A large amount of the digital information available is written as text documents in the form of web pages, reports, papers, emails, etc. Extracting the knowledge of interest from such documents, drawn from multiple sources, in a timely fashion is therefore crucial. This paper provides an update on the Artequakt system, which uses natural language tools to automatically extract knowledge about artists from multiple documents based on a predefined ontology. The ontology represents the type and form of knowledge to extract. This knowledge is then used to generate tailored biographies. The information extraction process of Artequakt is detailed and evaluated in this paper.
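    Artequakt relies on proper natural language processing; the toy below only illustrates the general shape of ontology-driven extraction, with a predefined set of relations and throwaway regular expressions standing in for the real extraction machinery. The relation names, patterns and example text are invented.

    ```python
    # Toy ontology-driven extraction: the "ontology" lists the relations of
    # interest, and naive patterns pull candidate (subject, relation, value)
    # facts out of free text.
    import re

    ONTOLOGY = {
        "born_in": r"(?P<artist>[A-Z][\w ]+?) was born in (?P<value>[A-Z][\w ]+)",
        "died_in": r"(?P<artist>[A-Z][\w ]+?) died in (?P<value>[A-Z][\w ]+)",
    }

    def extract_facts(text):
        facts = []
        for relation, pattern in ONTOLOGY.items():
            for match in re.finditer(pattern, text):
                facts.append((match.group("artist").strip(), relation,
                              match.group("value").strip()))
        return facts

    doc = "Rembrandt van Rijn was born in Leiden. Vincent van Gogh died in France."
    print(extract_facts(doc))
    ```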

    Design spaces for link and structure versioning

    Get PDF

    Ontology-based domain modelling for consistent content change management

    Get PDF
    Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and they are in a continuous process of change, content change management becomes important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for the representation of the annotation traces.
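    A minimal way to picture the annotation traces between ontology concepts and content units is a small bidirectional graph, so that a change to a concept can be traced back to the content it affects. The sketch below is a simplification for illustration only; class names and identifiers are assumptions.

    ```python
    # Minimal graph of annotation traces between ontology concepts and the
    # content units they annotate, queried when a concept changes.
    from collections import defaultdict

    class TraceGraph:
        def __init__(self):
            self._by_concept = defaultdict(set)  # concept -> annotated content units
            self._by_content = defaultdict(set)  # content unit -> annotating concepts

        def annotate(self, content_unit, concept):
            self._by_concept[concept].add(content_unit)
            self._by_content[content_unit].add(concept)

        def impacted_content(self, concept):
            """Content units to revisit when 'concept' changes."""
            return sorted(self._by_concept[concept])

    traces = TraceGraph()
    traces.annotate("invoice_template.xml", "domain:Invoice")
    traces.annotate("BillingService.java", "domain:Invoice")
    print(traces.impacted_content("domain:Invoice"))
    ```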

    A Study of Information Fragment Association in Information Management and Retrieval Applications

    Get PDF
    As we strive to identify useful information by sifting through the vast number of resources available to us, we often find that the desired information resides in a small section within a larger body of content which does not necessarily contain similar information. This can make such an Information Fragment difficult to find. A Web search engine may not give a good ranking to a page of otherwise unrelated content if it contains only a very small yet invaluable piece of relevant information, which means that our processes often fail to bring together related Information Fragments. We can easily conceive of two Information Fragments which, according to a scholar, bear a strong association with each other, yet contain no common keywords enabling them to be collocated by a keyword search. This dissertation attempts to address this issue by determining the benefits of enhancing information management and retrieval applications with the capability of establishing and storing associations between Information Fragments. It estimates the extent to which the efficiency and quality of information retrieval can be improved if users are allowed to capture the mental associations they form while reading Information Fragments and to share these associations with others using a functional registry-based design. To test these benefits, three subject groups were recruited and assigned tasks involving Information Fragments. The first two tasks compared the performance and usability of a mainstream social bookmarking tool with a tool enhanced with Information Fragment Association capabilities; the tests demonstrated that the use of Information Fragment Association offers significant advantages both in retrieval efficiency and in user satisfaction. Analysis of the results of the third task demonstrated that a mainstream Web search engine performed poorly in collocating interrelated fragments when a query designed to retrieve one of these fragments was submitted. The fourth task demonstrated that Information Fragment Association improves the precision and recall of searches performed on Information Fragment datasets. The results of this study indicate that mainstream information management and retrieval applications provide inadequate support for Information Fragment retrieval and that their enhancement with Information Fragment Association capabilities would be beneficial.
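    In its simplest form, a registry of user-created fragment associations could look like the sketch below, which is only an illustration of the concept rather than the dissertation's design; the identifiers and API are invented.

    ```python
    # Illustrative registry of user-created associations between information
    # fragments, letting fragments with no shared keywords be collocated.
    from collections import defaultdict

    class FragmentRegistry:
        def __init__(self):
            self._assoc = defaultdict(set)

        def associate(self, fragment_a, fragment_b):
            """Record a symmetric mental association between two fragments."""
            self._assoc[fragment_a].add(fragment_b)
            self._assoc[fragment_b].add(fragment_a)

        def related(self, fragment):
            return sorted(self._assoc[fragment])

    registry = FragmentRegistry()
    registry.associate("paperA.pdf#sec3.2", "lecture-notes.html#para17")
    print(registry.related("paperA.pdf#sec3.2"))
    ```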

    A browsing paradigm and application model for improved hypermedia navigation.

    Get PDF
    The advantages of hypermedia systems are often depicted in comparison to the rigid linear structure of a book. In this paper, both a hypermedia application model and a browsing method are presented that combine the best of both worlds: while holding on to the modelling richness and navigational freedom of hypertext, the use of a (partially) linear browsing strategy, as in books, greatly helps to reduce user disorientation. First we situate hypermedia within the general context of data storage and retrieval systems. We then address shortcomings of current hypermedia applications and suggest how imposing a linear path upon the data results in a new navigational paradigm that improves orientation and ease of navigation in a hypermedia environment. After that, we deploy a hyperbase model that supports this new view of browsing and describe the framework for an accessory application. As a conclusion, an overview is provided of the advantages this methodology offers, both for the application developer and for the end user.
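    To make the navigational idea concrete, the small sketch below (not the paper's framework; names are assumptions) imposes a partially linear path on hypermedia nodes, offering book-like next/previous moves while ordinary link jumps remain possible.

    ```python
    # Illustrative partially linear browsing path over hypermedia nodes.
    class LinearTour:
        def __init__(self, node_ids):
            self.path = list(node_ids)
            self.pos = 0

        def current(self):
            return self.path[self.pos]

        def next(self):
            if self.pos < len(self.path) - 1:
                self.pos += 1
            return self.current()

        def previous(self):
            if self.pos > 0:
                self.pos -= 1
            return self.current()

        def jump(self, node_id):
            """Follow an ordinary hyperlink, re-anchoring the tour if possible."""
            if node_id in self.path:
                self.pos = self.path.index(node_id)
            return node_id

    tour = LinearTour(["intro", "methods", "results", "conclusion"])
    tour.next()
    print(tour.current())                      # methods
    print(tour.jump("results"), tour.next())   # results conclusion
    ```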

    Evolving database systems : a persistent view

    Get PDF
    Submitted to POS7. This work was supported in St Andrews by EPSRC Grant GR/J67611, "Delivering the Benefits of Persistence". Orthogonal persistence ensures that information will exist for as long as it is useful, for which it must be able to evolve with the growing needs of the application systems that use it. This may involve evolution of the data, meta-data, programs and applications, as well as of the users' perception of what the information models. The need for evolution has been well recognised in the traditional (data processing) database community, and the cost of failing to evolve can be gauged by the resources being invested in interfacing with legacy systems. Zdonik has identified new classes of application, such as scientific, financial and hypermedia, that require new approaches to evolution. These applications are characterised by their need to store large amounts of data whose structure must evolve as it is discovered by the applications that use it. This requires that the data be mapped dynamically to an evolving schema. Here, we discuss the problems of evolution in these new classes of application within an orthogonally persistent environment and outline some approaches to these problems.
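    The idea of mapping data dynamically to an evolving schema can be illustrated, very loosely, with version-tagged records that are upgraded lazily when read. This is a generic sketch, not the persistent system or approach described in the paper; the field names and versions are invented.

    ```python
    # Lazy schema evolution: each record carries the schema version it was
    # written under, and upgrade steps bring it to the current version on read.
    CURRENT_VERSION = 2

    def upgrade_v1_to_v2(record):
        # v2 splits the single 'name' field into given and family names.
        given, _, family = record.pop("name").partition(" ")
        record.update({"given_name": given, "family_name": family, "_version": 2})
        return record

    UPGRADES = {1: upgrade_v1_to_v2}

    def read(record):
        while record["_version"] < CURRENT_VERSION:
            record = UPGRADES[record["_version"]](record)
        return record

    print(read({"_version": 1, "name": "Ada Lovelace"}))
    ```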