
    ADDRESSING PUBLISHING ISSUES WITH HYPERMEDIA DISTRIBUTED ON THE WEB

    The content and structure of an electronically published document can be authored and processed in ways that allow for flexibility in presentation in different environments for different users. This enables authors to craft documents that are more widely presentable. Electronic publishing issues that arise from this separation of document storage from presentation include (1) respecting the intent and restrictions of the author and publisher in the document’s presentation, and (2) applying costs to individual document components and allowing the user to choose among alternatives to control the price of the document’s presentation. These costs apply not only to the individual media components displayed but also to the structure created by document authors to bring these media components together as multimedia. A collection of ISO standards, primarily SGML, HyTime and DSSSL, facilitates the representation of presentation-independent documents and the creation of environments that process them for presentation. SMIL is a W3C format under development for hypermedia documents distributed on the World Wide Web. Since SMIL is SGML-compliant, it can easily be incorporated into SGML/HyTime and DSSSL environments. This paper discusses how to address these issues in the context of presentation-independent hypermedia storage. It introduces the Berlage environment, which uses SGML, HyTime, DSSSL and SMIL to store, process, and present hypermedia data. The paper also describes how the Berlage environment can be used to enforce publisher restrictions on media content and to allow users to control the pricing of document presentations. Also explored is the ability of both SMIL and HyTime to address these issues in general, enabling SMIL and HyTime systems to consistently process documents of different document models authored in different environments.
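    To make the idea of user-selectable presentation alternatives concrete, the sketch below evaluates a SMIL 1.0 `switch` element, which picks the first child whose test attributes are satisfied. The file names and bitrate figures are invented for illustration, and this is only a sketch of the selection mechanism the abstract alludes to, not code from the Berlage environment.

```python
# A minimal sketch (not Berlage code): choosing among alternatives in a
# SMIL 1.0 <switch>, where the first acceptable child wins.
import xml.etree.ElementTree as ET

SMIL_DOC = """
<smil>
  <body>
    <switch>
      <video src="clip-high.mpg" system-bitrate="56000"/>
      <img   src="clip-still.png" system-bitrate="14400"/>
    </switch>
  </body>
</smil>
"""

def select_alternative(switch, available_bitrate):
    """Return the first child whose system-bitrate requirement is met,
    mirroring SMIL's first-match rule for <switch>."""
    for child in switch:
        required = int(child.get("system-bitrate", "0"))
        if required <= available_bitrate:
            return child
    return None

root = ET.fromstring(SMIL_DOC)
chosen = select_alternative(root.find("./body/switch"), available_bitrate=28800)
print(chosen.get("src"))  # -> clip-still.png on a 28.8 kbps link
```

    The same first-match rule could drive a pricing choice: a cheaper still image is selected when the user declines to pay for the full video rendition.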

    An Open Framework for Integrating Widely Distributed Hypermedia Resources

    The success of the WWW has served as an illustration of how hypermedia functionality can enhance access to large amounts of distributed information. However, the WWW and many other distributed hypermedia systems offer very simple forms of hypermedia functionality which are not easily applied to existing applications and data formats, and cannot easily incorporate alternative functions which would aid hypermedia navigation to and from existing documents that have not been developed with hypermedia access in mind. This paper describes the extension of the Microcosm system’s open hypermedia functionality to a distributed environment; Microcosm is designed to support hypermedia access to a wide range of source materials and applications, and to be straightforwardly extended to incorporate new forms of information access.
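    The defining idea of open hypermedia, links held in an external link service rather than embedded in documents, can be sketched in a few lines. The linkbase entries and document names below are invented for illustration; this shows the general pattern, not the actual Microcosm implementation.

```python
# Sketch of an open-hypermedia link service: links live in an external
# linkbase keyed on selections, so existing documents need no markup.
# All entries below are invented for illustration.

LINKBASE = [
    # (selection text, source document or None for a "generic" link, targets)
    ("neural network", None, ["intro-to-nn.txt"]),     # generic: fires anywhere
    ("Figure 3", "report.txt", ["figures/fig3.png"]),  # local: one document only
]

def resolve(selection, document):
    """Return link targets for a selection, consulting the linkbase
    instead of anchors embedded in the document itself."""
    targets = []
    for text, source, dests in LINKBASE:
        if text == selection and source in (None, document):
            targets.extend(dests)
    return targets

print(resolve("neural network", "thesis.txt"))  # generic link fires in any document
```

    Because the documents themselves are untouched, hypermedia access can be layered over legacy material and new applications alike, which is the point the abstract makes against embedded-link systems such as the WWW.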

    Unifying Distributed Processing and Open Hypertext through a Heterogeneous Communication Model

    A successful distributed open hypermedia system can be characterised by a scalable architecture which is inherently distributed. While the architects of distributed hypermedia systems have addressed the issues of providing and retrieving distributed resources, they have often neglected to design systems with the inherent capability to exploit the distributed processing of this information. The research presented in this paper describes the construction and use of an open hypermedia system concerned equally with both of these facets.
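    One way to picture a communication model that treats resource retrieval and distributed processing uniformly is a single message-dispatch layer through which both kinds of request travel. The message types and handlers below are invented for illustration and are not taken from the paper.

```python
# Sketch: one message model serving both hypermedia retrieval and
# distributed processing requests. Message types and handlers are invented.

HANDLERS = {}

def handles(msg_type):
    """Register a handler for one message type."""
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@handles("RETRIEVE")
def retrieve(msg):
    return {"body": f"contents of {msg['resource']}"}

@handles("PROCESS")
def process(msg):
    return {"result": msg["text"].upper()}  # stand-in for a remote computation

def dispatch(msg):
    # Retrieval and processing share one channel, so either kind of
    # handler can be relocated to a remote node without changing callers.
    return HANDLERS[msg["type"]](msg)

print(dispatch({"type": "RETRIEVE", "resource": "doc42"}))
print(dispatch({"type": "PROCESS", "text": "link me"}))
```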

    A BASILar Approach for Building Web APIs on top of SPARQL Endpoints

    The heterogeneity of methods and technologies for publishing open data is still an obstacle to developing distributed systems on the Web. On the one hand, Web APIs, the most popular approach to offering data services, implement REST principles, which focus on addressing loose coupling and interoperability issues. On the other hand, Linked Data, available through SPARQL endpoints, focuses on data integration between distributed data sources. This paper proposes BASIL, an approach to building Web APIs on top of SPARQL endpoints that combines the advantages of both Web APIs and Linked Data. Compared to similar solutions, BASIL aims at minimising the learning curve for users in order to promote its adoption. The main feature of BASIL is a simple API that does not introduce new specifications, formalisms or technologies for users from either the Web API or the Linked Data community.
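    The core of this style of approach, binding a stored SPARQL query to a simple API operation, can be sketched as follows. The endpoint URL, query, and parameter name are examples chosen for illustration (DBpedia is used because it is public); this is not the actual BASIL API, which manages stored queries on the server side and generates the routes for you.

```python
# Sketch: a parameterised SPARQL query exposed as a plain function call,
# the way a Web API route would wrap it. Endpoint and query are examples.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://dbpedia.org/sparql"  # any SPARQL endpoint works here

# A stored query template; 'name' is the slot an API caller fills in.
QUERY_TEMPLATE = """
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  dbr:%(name)s dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
} LIMIT 1
"""

def api_call(name):
    """What a route like GET /api/abstract/{name} would do: bind the
    parameter into the stored query, run it, and return plain values."""
    query = QUERY_TEMPLATE % {"name": name}
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
    return [b["abstract"]["value"] for b in results["results"]["bindings"]]

print(api_call("Hypertext"))
```

    The caller sees an ordinary parameterised API and never writes SPARQL, which is the low-learning-curve property the abstract emphasises.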

    Collaboration in the Semantic Grid: a Basis for e-Learning

    The CoAKTinG project aims to advance the state of the art in collaborative mediated spaces for the Semantic Grid. This paper presents an overview of the hypertext and knowledge-based tools which have been deployed to augment existing collaborative environments, and the ontology which is used to exchange structure, promote enhanced process tracking, and aid navigation of resources before, during, and after a collaboration. While the primary focus of the project has been supporting e-Science, this paper also explores the similarities and application of CoAKTinG technologies as part of a human-centred design approach to e-Learning.
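    To give a flavour of what exchanging collaboration structure through an ontology involves, the toy RDF graph below records a meeting, its agenda, and an issue it produced. The class and property names are invented for illustration and are not the CoAKTinG ontology; the sketch also assumes the third-party rdflib package.

```python
# Toy RDF description of a collaboration session. The vocabulary here is
# invented for illustration; it is not the actual CoAKTinG ontology.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/collab#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

meeting = EX["meeting-42"]
g.add((meeting, RDF.type, EX.Meeting))
g.add((meeting, EX.agendaItem, Literal("Review e-Learning pilot")))
g.add((meeting, EX.attendee, EX.alice))
g.add((meeting, EX.produces, EX["issue-7"]))  # captured for later navigation

print(g.serialize(format="turtle"))
```

    Once a session is captured this way, tools can navigate from an issue back to the meeting, attendees, and agenda item that produced it, which is the kind of before/during/after traceability the abstract describes.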

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks (a minimal illustration of the mapping task is sketched below). All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
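    As a concrete instance of the ontology-mapping task mentioned above, the sketch below aligns two tiny vocabularies by normalised label similarity. The class names are invented for illustration, and real mapping tools of the kind AKT developed are of course far more sophisticated than string matching.

```python
# Minimal sketch of ontology mapping: align classes from two vocabularies
# by normalised label similarity. Class names are invented for illustration.
from difflib import SequenceMatcher

ONTOLOGY_A = ["Person", "Research-Project", "TechnicalReport"]
ONTOLOGY_B = ["person", "project", "technical report", "organisation"]

def normalise(label):
    return label.replace("-", " ").replace("_", " ").lower()

def best_match(label, candidates, threshold=0.8):
    """Return the candidate whose normalised label is most similar,
    or None if nothing clears the threshold."""
    scored = [(SequenceMatcher(None, normalise(label), normalise(c)).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

for cls in ONTOLOGY_A:
    print(cls, "->", best_match(cls, ONTOLOGY_B))
```

    Even this toy version shows the failure mode the abstract anticipates: "Research-Project" finds no match above threshold, so a human or a richer structural matcher must resolve the conflict of reference.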

    Reviews

    Teaching and Learning Materials and the Internet by Ian Forsyth, London: Kogan Page, 1996. ISBN 0-7494-2059-6. 181 pages, paperback. £18.99.

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.
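    To make the deductive-reasoning component concrete, here is a minimal forward-chaining sketch that applies rules to known facts until nothing new can be concluded. The facts and rules are invented for illustration; the systems the abstract surveys would be far richer.

```python
# Minimal forward-chaining deduction: apply rules to known facts until
# no new facts appear. Facts and rules are invented for illustration.

facts = {("subsystem", "thermal"), ("reports-anomaly", "thermal")}

# Each rule: if all premises hold, conclude the consequent.
rules = [
    ({("reports-anomaly", "thermal")}, ("schedule", "inspection")),
    ({("schedule", "inspection"), ("subsystem", "thermal")},
     ("notify", "ops-team")),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # deduces: schedule inspection, then notify ops-team
```

    The loop runs to a fixed point, so conclusions can themselves trigger further rules, which is the basic control structure behind the deductive reasoning the abstract lists.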