
    HyTEXPROS : a hypermedia information retrieval system

    A hypermedia information retrieval system combines the specific capabilities of hypermedia systems with information retrieval operations, providing a new kind of information management tool. It offers end-users the possibility of navigating, browsing and searching a large collection of documents to satisfy an information need. TEXPROS is an intelligent document processing and retrieval system that supports storing, extracting, classifying, categorizing, retrieving and browsing enterprise information, which makes it a natural candidate for hypermedia information retrieval techniques. In this dissertation, we extend TEXPROS into a hypermedia information retrieval system called HyTEXPROS, with hypertext functionalities such as nodes, typed and weighted links, anchors, guided tours, network overviews, bookmarks, annotations and comments, and an external linkbase. HyTEXPROS describes the whole information base, including the metadata and the original documents, as network nodes connected by links. Through these hypertext functionalities, a user can dynamically construct an information path by browsing through pieces of the information base. With the addition of hypertext functionalities, HyTEXPROS shifts the working domain from personal document processing to a personal library domain, accompanied by citation techniques for processing original documents. A four-level conceptual architecture is presented as the system architecture of HyTEXPROS; this architecture also serves as the reference model of HyTEXPROS. A detailed description of HyTEXPROS, using first-order logic calculus, is also given, and an early version of a prototype is briefly described.
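
    The abstract names the hypertext primitives (nodes, typed and weighted links, anchors, annotations) without implementation detail. As a minimal sketch, with names invented for illustration rather than taken from HyTEXPROS, the information base can be modelled as a graph of nodes connected by typed, weighted links:

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            """A piece of the information base: metadata or an original document."""
            node_id: str
            content: str
            annotations: list[str] = field(default_factory=list)

        @dataclass
        class Link:
            """A typed, weighted link between two nodes, with an optional anchor."""
            source: str
            target: str
            link_type: str             # e.g. "citation", "structural"
            weight: float = 1.0
            anchor: str | None = None  # text span in the source node

        class InformationBase:
            """A network of nodes connected by links, browsable by navigation."""
            def __init__(self) -> None:
                self.nodes: dict[str, Node] = {}
                self.links: list[Link] = []

            def add_node(self, node: Node) -> None:
                self.nodes[node.node_id] = node

            def add_link(self, link: Link) -> None:
                self.links.append(link)

            def neighbours(self, node_id: str) -> list[Link]:
                """Outgoing links from a node, strongest first."""
                out = [l for l in self.links if l.source == node_id]
                return sorted(out, key=lambda l: l.weight, reverse=True)

    Ranking outgoing links by weight is one plausible way to order browsing choices as a user constructs an information path.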

    Processing Structured Hypermedia : A Matter of Style

    With the introduction of the World Wide Web in the early nineties, hypermedia has become the uniform interface to the wide variety of information sources available over the Internet. The full potential of the Web, however, can only be realized by building on the strengths of its underlying research fields. This book describes the areas of hypertext, multimedia, electronic publishing and the World Wide Web and points out fundamental similarities and differences in their approaches to the processing of information. It gives an overview of the dominant models and tools developed in these fields and describes the key interrelationships and mutual incompatibilities. In addition to a formal specification of a selection of these models, the book discusses the impact of the models described on the software architectures that have been developed for processing hypermedia documents. Two example hypermedia architectures are described in more detail: the DejaVu object-oriented hypermedia framework, developed at the VU, and CWI's Berlage environment for time-based hypermedia document transformations.

    Social shaping of digital publishing: exploring the interplay between culture and technology

    The processes and forms of electronic publishing have been changing since the advent of the Web. In recent years, the open access movement has been a major driver of change in scholarly communication, and change is also evident in other fields such as e-government and e-learning. Whilst many changes are driven by technological advances, an altered social reality is also pushing the boundaries of digital publishing. With 23 articles and 10 posters, Elpub 2012 focuses on the social shaping of digital publishing and explores the interplay between culture and technology. This book contains the proceedings of the conference, consisting of 11 accepted full articles and 12 articles accepted as extended abstracts. The articles are presented in groups covering the topics: digital scholarship and publishing; special archives; libraries and repositories; digital texts and readings; and future solutions and innovations. Offering an overview of the current situation and exploring future trends, this book will be of interest to all those whose work involves digital publishing.

    AH 2003 : workshop on adaptive hypermedia and adaptive web-based systems


    Hypertext Semiotics in the Commercialized Internet

    Hypertext theory employs the same terminology that has been studied in semiotic research for decades, e.g. sign, text, communication, code, metaphor, paradigm, syntax, and so on. Building on the results that have proved successful in applying semiotic principles and methods to computer science, such as Computer Semiotics, Computational Semiotics and Semiotic Interface Engineering, this dissertation lays out a systematic approach for all researchers prepared to consider hypertext from a semiotic perspective. By connecting existing hypertext models with results from semiotics at all the sensory levels of textual, auditory, visual, tactile and olfactory perception, the author sketches prolegomena to a theory of hypertext semiotics, rather than presenting an entirely new hypertext model. An introduction to the history of hypertext, from its prehistory to its current state of development and the present developments in the commercialized World Wide Web, frames this approach, which may be regarded as a foundation for bridging media semiotics and computer semiotics. While computer semioticians know that the computer is a semiotic machine, and experts in artificial intelligence research emphasize the role of semiotics in the development of the next hypertext generation, this work draws on a broader methodological basis. Accordingly, its subfields range from hypertext applications, paradigms and structures, through navigation, web design and web augmentation, to an interdisciplinary spectrum of detailed analyses, e.g. of the web browser's pointing device, the @ sign and the so-called emoticons. The term ''icon'' is rejected as an inappropriate name for the small images familiar from graphical user interfaces and employed in hypertexts, and these images are replaced by a new generation of powerful Graphic Link Markers. These results are considered in the context of the commercialization of the Internet. Besides identifying the main problems of eCommerce from the perspective of hypertext semiotics, the author addresses information goods and the current obstacles for the New Economy, such as the restrictive legal situation regarding copyright and intellectual property. These anachronistic restrictions rest on the problematic assumption that the value of information, too, is determined by scarcity. A semiotic analysis of iMarketing techniques, such as banner advertising, keywords and link injection, as well as excursuses on the browser war and the Toywar, round off the dissertation.

    On the Design of a Dual-Mode User Interface for Accessing 3D Content on the World Wide Web

    The World Wide Web, today's largest and most important online information infrastructure, does not support 3D content and, although various approaches have been proposed, there is still no clear design methodology for user interfaces that tightly integrate hypertext and interactive 3D graphics. This paper presents a novel strategy for accessing information spaces where hypertext and 3D graphics data are simultaneously available and interlinked. We introduce a dual-mode user interface with two modes between which a user can switch at any time: a "hypertext mode", driven by simple hypertext-based interactions, where a 3D scene is embedded in hypertext, and a more immersive "3D mode", which immerses the hypertextual annotations into the 3D scene. A user study is presented which characterizes the interface in terms of its efficiency and usability.
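
    As a minimal sketch of the dual-mode idea, with names invented for illustration rather than taken from the paper's actual design, the two modes can be modelled as interchangeable presentations of one shared information space, so that switching preserves context:

        from enum import Enum, auto

        class Mode(Enum):
            HYPERTEXT = auto()  # 3D scene embedded as a viewport in the document
            THREE_D = auto()    # hypertext annotations immersed in the 3D scene

        class DualModeInterface:
            """One information space, two presentations; switching keeps focus."""
            def __init__(self) -> None:
                self.mode = Mode.HYPERTEXT
                self.focused_node: str | None = None  # shared across both modes

            def toggle_mode(self) -> None:
                """Switch modes at any time without losing the current focus."""
                self.mode = (Mode.THREE_D if self.mode is Mode.HYPERTEXT
                             else Mode.HYPERTEXT)

            def render(self) -> str:
                if self.mode is Mode.HYPERTEXT:
                    return f"page with embedded 3D viewport, focus={self.focused_node}"
                return f"3D scene with immersed annotations, focus={self.focused_node}"

    Keeping the focused node in shared state is one way to make the switch seamless: whichever mode is active renders the same underlying information.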

    Modelling human teaching tactics and strategies for tutoring systems

    One of the promises of ITSs and ILEs is that they will teach and assist learning in an intelligent manner. Historically this has tended to mean concentrating on the interface, on the representation of the domain and on the representation of the student's knowledge. Systems have thus attempted to provide students with reifications both of what is to be learned and of the learning process, as well as optimally sequencing and adjusting activities, problems and feedback to best help them learn that domain. We now have embodied (and disembodied) teaching agents and computer-based peers, and the field shows a much greater interest in metacognition and in collaborative activities and the tools to support such collaboration. Nevertheless the issue of the teaching competence of ITSs and ILEs is still important, as is the more specific question of whether systems can and should mimic human teachers. Indeed, increasing interest in embodied agents has thrown the spotlight back on how such agents should behave with respect to learners. In the mid-1980s, Ohlsson and others offered critiques of ITSs and ILEs in terms of the limited range and adaptability of their teaching actions compared to the wealth of tactics and strategies employed by human expert teachers. So are we in any better position to model teaching than we were in the 1980s? Are those criticisms still as valid today as they were then? This paper reviews progress in understanding certain aspects of human expert teaching and in developing tutoring systems that implement those human teaching strategies and tactics. It concentrates particularly on how systems have dealt with student answers and with motivational issues, referring especially to work carried out at Sussex: for example, on responding effectively to the student's motivational state, on contingent and Vygotskian-inspired teaching strategies, and on the plausibility problem. The latter is concerned with whether tactics that are effectively applied by human teachers can be as effective when embodied in machine teachers.

    Supporting authoring of adaptive hypermedia

    It is well-known that students benefit from personalised attention. However, teachers are frequently unable to provide this, most often due to time constraints. An Adaptive Hypermedia (AH) system can offer a richer learning experience by giving personalised attention to students. The authoring process, however, is time-consuming and cumbersome. Our research explores the two main aspects of authoring AH, the authoring of content and of adaptive behaviour, and proposes possible solutions to overcome the hurdles towards acceptance of AH in education. Automation methods can help authors: for example, teachers could create linear lessons and our prototype can add content alternatives for adaptation. Creating adaptive behaviour is more complex. Rule-based systems, XML-based conditional inclusion, Semantic Web reasoning and reusable, portable scripting in a programming language have all been proposed (a simple rule-based example is sketched after this list). These methods all require specialised knowledge, so authoring of adaptive behaviour is difficult and teachers cannot be expected to create such strategies. We investigate three ways to address this issue:
    1. Reusability: we investigate limitations of adaptation engines that affect the authoring and reuse of adaptation strategies, and propose a metalanguage, as a supplement to the existing LAG adaptation language, showing how it can overcome such limitations.
    2. Standardisation: there are no widely accepted standards for AH. The IMS Learning Design (IMS-LD) specification has goals similar to those of Adaptive Educational Hypermedia (AEH). Investigation shows that IMS-LD is more limited in terms of adaptive behaviour, but its authoring process focuses more on learning sequences and outcomes.
    3. Visualisation: the authoring of strategies can be simplified with a visual tool. We define a reference model and a tool, the Conceptual Adaptation Model (CAM) and the GRAPPLE Authoring Tool (GAT), which allow the specification of an adaptive course in a graphical way. A key feature is the separation between content, strategy and adaptive course, which increases reusability compared to approaches that combine all factors in one model.
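
    The rule-based flavour of adaptive behaviour mentioned above can be made concrete with a small sketch. The user-model attributes, rule format and variant names below are invented for illustration; this is not LAG syntax or the GRAPPLE tools' actual model:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class UserModel:
            knowledge: dict[str, float]  # concept -> mastery level in [0, 1]
            visited: set[str]            # pages the student has already seen

        @dataclass
        class Rule:
            condition: Callable[[UserModel], bool]  # test on the user model
            variant: str                 # content alternative shown if it fires

        def select_variant(rules: list[Rule], user: UserModel, default: str) -> str:
            """Return the first matching content alternative, else the default."""
            for rule in rules:
                if rule.condition(user):
                    return rule.variant
            return default

        rules = [
            Rule(lambda u: u.knowledge.get("links", 0.0) < 0.3, "beginner-explanation"),
            Rule(lambda u: "intro" not in u.visited, "prerequisite-pointer"),
        ]
        student = UserModel(knowledge={"links": 0.1}, visited=set())
        print(select_variant(rules, student, default="full-text"))
        # -> beginner-explanation

    Keeping the rules (strategy) separate from the variants (content), as the CAM/GAT separation does at course level, is what lets one strategy be reused across courses.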

    Evaluating FAIR Digital Object and Linked Data as distributed object systems

    FAIR Digital Object (FDO) is an emerging concept, highlighted by the European Open Science Cloud (EOSC) as a potential candidate for building an ecosystem of machine-actionable research outputs. In this work we systematically evaluate FDO and its implementations as a global distributed object system, using five different conceptual frameworks that cover interoperability, middleware, the FAIR principles, EOSC requirements and the FDO guidelines themselves. We compare the FDO approach with established Linked Data practices and the existing Web architecture, and provide a brief history of the Semantic Web while discussing why these technologies may have been difficult to adopt for FDO purposes. We conclude with recommendations for both the Linked Data and FDO communities to further their adaptation and alignment.
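
    For contrast, here is a minimal sketch of the established Linked Data practice the abstract compares against: resolving an HTTP URI with content negotiation to obtain machine-actionable metadata. The URI is a placeholder and the snippet assumes the third-party requests library; it is illustrative, not code from the paper:

        import requests  # third-party HTTP client: pip install requests

        # Linked Data practice: a resource is identified by an HTTP URI, and
        # content negotiation on that URI yields machine-readable metadata.
        uri = "https://example.org/dataset/42"  # placeholder identifier

        response = requests.get(
            uri,
            headers={"Accept": "application/ld+json"},  # request JSON-LD
            allow_redirects=True,  # follow redirects from identifier to data
            timeout=10,
        )
        response.raise_for_status()
        metadata = response.json()  # machine-actionable description of the resource
        print(metadata.get("@type"), metadata.get("name"))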