138 research outputs found

    A schema-based P2P network to enable publish-subscribe for multimedia content in open hypermedia systems

    No full text
    Open Hypermedia Systems (OHS) aim to provide efficient dissemination, adaptation and integration of hyperlinked multimedia resources. Content available in Peer-to-Peer (P2P) networks could add significant value to OHS provided that challenges for efficient discovery and prompt delivery of rich and up-to-date content are successfully addressed. This paper proposes an architecture that enables the operation of OHS over a P2P overlay network of OHS servers based on semantic annotation of (a) peer OHS servers and of (b) multimedia resources that can be obtained through the link services of the OHS. The architecture provides efficient resource discovery. Semantic query-based subscriptions over this P2P network can enable access to up-to-date content, while caching at certain peers enables prompt delivery of multimedia content. Advanced query resolution techniques are employed to match different parts of subscription queries (subqueries). These subscriptions can be shared among different interested peers, thus increasing the efficiency of multimedia content dissemination
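The subquery-sharing idea in this abstract can be sketched concretely: decompose each conjunctive subscription into parts, index peers by shared subqueries so each part is evaluated once per content announcement, and deliver only to peers whose full query is satisfied. This is a toy illustration under assumed data structures (queries as sets of attribute-value clauses), not the paper's actual protocol.

```python
# Hypothetical sketch of shared-subquery subscription matching for a P2P
# overlay. Queries and annotations are modelled as frozensets of
# (attribute, value) clauses; all names here are illustrative assumptions.

def subqueries(query):
    """Decompose a conjunctive query into its single-clause subqueries."""
    return {frozenset([clause]) for clause in query}

class SubscriptionIndex:
    """Index peers by subquery so each shared subquery is evaluated once
    per content annotation, instead of once per subscribing peer."""

    def __init__(self):
        self.by_subquery = {}   # subquery -> set of interested peers
        self.full_query = {}    # peer -> its complete subscription query

    def subscribe(self, peer, query):
        self.full_query[peer] = query
        for sq in subqueries(query):
            self.by_subquery.setdefault(sq, set()).add(peer)

    def match(self, annotation):
        # Evaluate each shared subquery once, collecting candidate peers.
        candidates = set()
        for sq, peers in self.by_subquery.items():
            if sq <= annotation:
                candidates |= peers
        # Deliver only to peers whose whole conjunctive query holds.
        return {p for p in candidates if self.full_query[p] <= annotation}
```

A peer subscribing to `{type=video}` and another to `{type=video, topic=art}` share the `type=video` subquery, which is checked once per incoming annotation.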

    Conceptual Linking: Ontology-based Open Hypermedia

    No full text
    This paper describes the attempts of the COHSE project to define and deploy a Conceptual Open Hypermedia Service. The service consists of: • an ontological reasoning service, used to represent a sophisticated conceptual model of document terms and their relationships; and • a Web-based open hypermedia link service that can offer a range of different link-providing facilities in a scalable and non-intrusive fashion. These are integrated to form a conceptual hypermedia system that enables documents to be linked via metadata describing their contents, and hence improves the consistency and breadth of linking of WWW documents both at retrieval time (as readers browse the documents) and at authoring time (as authors create the documents).
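The linking mechanism described above, terms in a document mapped to ontology concepts, with link targets found by generalising up the concept hierarchy when none exist for the concept itself, can be sketched in a few lines. The ontology, lexicon, and link base below are invented for illustration; COHSE's actual components differ.

```python
# Toy sketch of concept-based link augmentation in the spirit of COHSE.
# The tiny ontology, lexicon, and link base are illustrative assumptions.
import re

# concept -> broader concept (a fragment of an ontology)
ontology_broader = {"fresco": "painting", "painting": "artwork"}
# surface term -> concept
lexicon = {"fresco": "fresco", "frescoes": "fresco", "paintings": "painting"}
# concept -> link targets known to the link service
link_base = {"painting": ["http://example.org/painting-guide"],
             "artwork": ["http://example.org/artworks"]}

def links_for_concept(concept):
    """Collect link targets for a concept, generalising to broader
    concepts when none are stored for the concept itself."""
    while concept is not None:
        if concept in link_base:
            return link_base[concept]
        concept = ontology_broader.get(concept)
    return []

def annotate(text):
    """Return (term, concept, targets) for each lexicon term in the text."""
    results = []
    for term in re.findall(r"\w+", text.lower()):
        if term in lexicon:
            concept = lexicon[term]
            results.append((term, concept, links_for_concept(concept)))
    return results
```

For example, "frescoes" maps to the concept `fresco`, which has no stored links, so the broader concept `painting` supplies the targets.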

    Generating adaptive hypertext content from the semantic web

    Get PDF
    Accessing and extracting knowledge from online documents is crucial for the realisation of the Semantic Web and the provision of advanced knowledge services. The Artequakt project is an ongoing investigation tackling these issues to facilitate the creation of tailored biographies from information harvested from the web. In this paper we present the methods we currently use to model, consolidate and store knowledge extracted from the web so that it can be re-purposed as adaptive content. We look at how Semantic Web technology could be used within this process, and also how such techniques might be used to provide content to be published via the Semantic Web.

    Just-in-time hypermedia

    Get PDF
    Many analytical applications, especially legacy systems, create documents and display screens in response to user queries dynamically, or in "real time". These documents and displays do not exist in advance, and thus hypermedia must be generated "just in time": automatically and dynamically. This dissertation details the idea of "just-in-time" hypermedia and discusses challenges encountered in this research area. A fully detailed literature review of the research issues and related work is given. A framework for "just-in-time" hypermedia compares virtual documents with static documents, as well as dynamic with static hypermedia functionality. A conceptual "just-in-time" hypermedia architecture is proposed in terms of requirements and logical components. The "just-in-time" hypermedia engine is described in terms of architecture, functional components, information flow, and implementation details. Test results are then described and evaluated. Lastly, contributions, limitations, and future work are discussed.
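One core difficulty the abstract implies is that links authored on a virtual document must be re-attached when the document is regenerated, since the page itself is never stored. A minimal sketch, assuming content-hash keys are an acceptable way to re-identify elements (the dissertation's actual mechanism may differ):

```python
# Hypothetical sketch: re-anchoring links on dynamically generated pages.
# Anchors are keyed by a hash of the element text they attach to, so they
# can be re-located on a regenerated virtual document. Names are invented.
import hashlib

def element_key(element_text):
    """A stable identifier for an element of a virtual document."""
    return hashlib.sha256(element_text.encode("utf-8")).hexdigest()[:12]

class JITLinkStore:
    def __init__(self):
        self.links = {}  # element key -> list of link targets

    def add_link(self, element_text, target):
        self.links.setdefault(element_key(element_text), []).append(target)

    def reattach(self, regenerated_elements):
        """Map each element of a freshly generated document to the links
        previously authored on an identical element, if any."""
        return {el: self.links.get(element_key(el), [])
                for el in regenerated_elements}
```

Content hashing only re-identifies elements whose text is unchanged; elements that differ between generations get no links, which is one reason just-in-time re-anchoring is hard.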

    Artequakt: Generating tailored biographies from automatically annotated fragments from the web

    Get PDF
    The Artequakt project seeks to automatically generate narrative biographies of artists from knowledge that has been extracted from the Web and maintained in a knowledge base. An overview of the system architecture is presented here, and the three key components of that architecture are explained in detail, namely knowledge extraction, information management and biography construction. Conclusions are drawn from the initial experiences of the project and future progress is detailed.
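The pipeline of extraction, management and construction can be sketched as a toy: extracted fragments are consolidated per artist (merging duplicates from different sources), and a narrative is assembled from whichever facts are available. Everything below, the property names, the ordering, the sentence templates, is an illustrative assumption, not Artequakt's implementation.

```python
# Toy sketch of an Artequakt-style pipeline: consolidate extracted facts
# into a per-artist knowledge base, then assemble a biography from them.
# All property names and templates are illustrative assumptions.

knowledge_base = {}  # artist -> {property: set of values}

def consolidate(artist, prop, value):
    """Store an extracted fact, merging duplicates from different sources."""
    knowledge_base.setdefault(artist, {}).setdefault(prop, set()).add(value)

def biography(artist, order=("born", "movement", "famous_work")):
    """Assemble a narrative from whichever facts are available, in a
    fixed rhetorical order."""
    facts = knowledge_base.get(artist, {})
    sentences = []
    for prop in order:
        for value in sorted(facts.get(prop, ())):
            sentences.append(f"{artist} {prop.replace('_', ' ')}: {value}.")
    return " ".join(sentences)
```

Using sets for values means the same fact harvested from two pages appears once in the output, a crude stand-in for the consolidation step the abstract describes.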

    Augmenting applications with hypermedia functionality and meta-information

    Get PDF
    The Dynamic Hypermedia Engine (DHE) enhances analytical applications by adding relationships, semantics and other metadata to the application's output and user interface. DHE also provides additional hypermedia navigational, structural and annotation functionality. These features allow application developers and users to add guided tours, personal links and sharable annotations, among other features, into applications. DHE runs as middleware between the application user interface and its business logic and processes, in an n-tier architecture, supporting the extra functionality without altering the original systems by means of application wrappers. DHE automatically generates links at run-time for each of those elements having relationships and metadata. Such elements are previously identified using a Relation Navigation Analysis. On top of these links, DHE also constructs more sophisticated navigation techniques not often found on the Web. The metadata, links, navigation and annotation features supplement the application's primary functionality. This research identifies element types, or "classes", in the application displays. A mapping rule encodes each relationship found between two elements of interest at the class level. When the user selects a particular element, DHE instantiates the commands included in the rules with the actual instance selected and sends them to the appropriate destination system, which then dynamically generates the resulting virtual (i.e. not previously stored) page. DHE executes concurrently with these applications, providing automated link generation and other hypermedia functionality. DHE uses the eXtensible Markup Language (XML) and related World Wide Web Consortium (W3C) XML recommendations, such as XLink, XML Schema, and RDF, to encode the semantic information required for the operation of the extra hypermedia features, and for the transmission of messages between the engine modules and applications.
    DHE is the only approach we know of that provides automated linking and metadata services in a generic manner, based on the application semantics, without altering the applications. DHE also works with non-Web systems. The results of this work could be extended to other research areas, such as link ranking and filtering, automatic link generation as the result of a search query, metadata collection and support, virtual document management, hypermedia functionality on the Web, adaptive and collaborative hypermedia, web engineering, and the Semantic Web.
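The class-level mapping rules described above, a rule relates element classes and carries a command template that is instantiated with the concrete element the user selects, can be illustrated with a small sketch. The rule set, service names and command syntax are invented for the example; DHE's actual rule encoding is XML-based.

```python
# Illustrative sketch of DHE-style class-level mapping rules. Each rule
# names a source element class, a relationship, and a command template
# instantiated at selection time. All rule contents here are assumptions.
from string import Template

mapping_rules = [
    {"source_class": "stock_symbol",
     "relation": "latest-quote",
     "command": Template("quote-service lookup $instance")},
    {"source_class": "stock_symbol",
     "relation": "news",
     "command": Template("news-service search $instance")},
]

def links_for(element_class, instance):
    """Instantiate every rule matching the selected element's class,
    yielding (relation, command) pairs for run-time link generation."""
    return [(r["relation"], r["command"].substitute(instance=instance))
            for r in mapping_rules if r["source_class"] == element_class]
```

Selecting a `stock_symbol` element such as "ACME" yields one generated link per applicable rule, each carrying the command the destination system would execute to build the virtual page.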

    Processing Structured Hypermedia : A Matter of Style

    Get PDF
    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in approaches towards the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations

    The aristotle approach to open hypermedia

    Get PDF
    Large-scale distributed hypermedia systems comprise a generation of powerful tools to meet the demands of the new information globalization era. The most promising of such systems have characteristics that allow for easy adaptation both to an essentially unpredictable technological evolution and to the constantly evolving information needs of users. Such systems are generally known as Open Hypermedia Systems (OHS). Recently, research effort has focused on the formulation of a solid set of OHS standards (i.e., protocols, reference models and architectures) that would stem from a common understanding and thus direct future implementations.
    Keywords: Open Hypermedia Systems, Hypermedia Modeling, Distributed Information Systems

    Hypertext Semiotics in the Commercialized Internet

    Get PDF
    Hypertext theory employs the same terminology that has been studied in semiotic research for decades, e.g. sign, text, communication, code, metaphor, paradigm, syntax, and so on. Building on the results of successful applications of semiotic principles and methods to computer science, such as Computer Semiotics, Computational Semiotics and Semiotic Interface Engineering, this dissertation lays out a systematic approach for all researchers prepared to view hypertext from a semiotic perspective. By connecting existing hypertext models with results from semiotics at all sensory levels of textual, auditory, visual, tactile and olfactory perception, the author sketches prolegomena of a theory of hypertext semiotics, rather than presenting an entirely new hypertext model. An introduction to the history of hypertext, from its prehistory to its current state of development and the present developments in the commercialized World Wide Web, frames this approach, which may be regarded as a foundation for bridging media semiotics and computer semiotics. While computer semioticians know that the computer is a semiotic machine, and artificial intelligence researchers emphasize the role of semiotics in the development of the next generation of hypertext, this work draws on a broader methodological basis. Accordingly, its topics range from hypertext applications, paradigms and structures, via navigation, web design and web augmentation, to an interdisciplinary spectrum of detailed analyses, e.g. of the web browser's pointing device, the at-sign, and the so-called emoticons.
    The term "icon" is rejected as an unsuitable name for the small images familiar from graphical user interfaces and used in hypertexts, and these images are replaced by a new generation of powerful Graphic Link Markers. These results are considered in the context of the commercialization of the Internet. Besides identifying the main problems of eCommerce from the perspective of hypertext semiotics, the author addresses information goods and the current obstacles to the New Economy, such as restrictive copyright and intellectual-property legislation. These anachronistic restrictions rest on the problematic assumption that the value of information, too, is determined by scarcity. A semiotic analysis of iMarketing techniques such as banner advertising, keywords and link injection, as well as excursuses on the browser war and the Toywar, complete the dissertation.