
    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes, notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science, and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the emergence of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that standards for task (or service) specifications will emerge in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the Web.
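    The ontology-merging task described above can be illustrated with a minimal sketch. This is not AKT's actual tooling: the namespaces, person URIs, and the use of rdflib are assumptions for illustration. The point is that a naive union of two ontologies leaves one real-world entity split across two URIs until an identity mapping is applied.

    ```python
    from rdflib import Graph, URIRef, Literal, Namespace
    from rdflib.namespace import FOAF, OWL

    EX1 = Namespace("http://example.org/akt1/")  # hypothetical source A
    EX2 = Namespace("http://example.org/akt2/")  # hypothetical source B

    # Two small ontologies that describe the same person under different URIs.
    g1 = Graph()
    g1.add((EX1.jsmith, FOAF.name, Literal("John Smith")))
    g2 = Graph()
    g2.add((EX2.john_smith, FOAF.mbox, URIRef("mailto:j.smith@example.org")))

    # Naive merge: the union of the two graphs' triples. The person is still
    # referentially split across EX1.jsmith and EX2.john_smith.
    merged = g1 + g2

    # Record the identity, then "smush" aliases onto one canonical URI so
    # queries see a single node per real-world entity.
    merged.add((EX2.john_smith, OWL.sameAs, EX1.jsmith))
    same_as = {EX2.john_smith: EX1.jsmith}  # alias -> canonical
    for s, p, o in list(merged):
        s2, o2 = same_as.get(s, s), same_as.get(o, o)
        if (s2, o2) != (s, o):
            merged.remove((s, p, o))
            merged.add((s2, p, o2))

    print(merged.serialize(format="turtle"))
    ```

    In practice the alias table would itself be the output of an ontology-mapping step rather than hand-written, which is where the conflict-of-reference problem the abstract describes actually lives.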

    Designing and Implementing a Learning Object Repository: Issues of Complexity, Granularity, and User Sense-Making

    Presented at the 4th International Conference on Open Repositories, session: DSpace User Group Presentations, 2009-05-20, 03:30 PM – 05:00 PM. The Texas Center for Digital Knowledge at the University of North Texas is designing and implementing a DSpace/Manakin learning object repository (LOR) for the Texas Higher Education Coordinating Board (THECB) to store and provide access to redesigned undergraduate courses being created through the Board's Texas Course Redesign Project (TCRP). The content for the THECB LOR differs in significant ways from content stored in other well-known and evolving LORs, since it takes the form of complete or partial courses. While this content can be represented as a single learning object (i.e., a complete course as one learning object), the THECB LOR is making the complete courses available as learning objects and providing access to components of the courses' content as discrete learning objects for reuse and repurposing. A number of challenges and issues have emerged in the design, development, and implementation of the LOR, and this paper focuses on three key aspects and the solutions we are pursuing: 1) complexity of the course content and granularity; 2) submission of complex objects and metadata; and 3) user interface design to assist users in making sense of this repository and its contents.
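    To make the granularity issue concrete, here is a minimal sketch of a course represented as a hierarchy of learning objects, each retrievable on its own. The class, field names, and identifiers are hypothetical illustrations, not the THECB LOR's actual DSpace schema.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LearningObject:
        """One node in a course hierarchy: a whole course, a module, or a
        single reusable asset, each exposed as a retrievable object."""
        identifier: str   # hypothetical persistent ID within the repository
        title: str
        granularity: str  # e.g. "course", "module", "asset"
        children: List["LearningObject"] = field(default_factory=list)

        def flatten(self):
            """Yield this object and every component, so a complete course
            and each of its parts can be indexed as discrete objects."""
            yield self
            for child in self.children:
                yield from child.flatten()

    course = LearningObject("thecb:bio-101", "Redesigned Biology I", "course", [
        LearningObject("thecb:bio-101.m1", "Cell Structure", "module", [
            LearningObject("thecb:bio-101.m1.a1", "Mitosis animation", "asset"),
        ]),
    ])

    for obj in course.flatten():
        print(obj.granularity, obj.identifier, obj.title)
    ```

    Flattening the tree this way is what lets the same stored course answer both kinds of request the abstract describes: retrieval of the whole course and retrieval of any component for reuse.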

    Sensemaking on the Pragmatic Web: A Hypermedia Discourse Perspective

    The complexity of the dilemmas we face on an organizational, societal and global scale forces us into sensemaking activity. We need tools for expressing and contesting perspectives that are flexible enough for real-time use in meetings, structured enough to help manage longer-term memory, and powerful enough to filter the complexity of extended deliberation and debate on an organizational or global scale. This has been the motivation for a programme of basic and applied action research into Hypermedia Discourse, which draws on research in hypertext, information visualization, argumentation, modelling, and meeting facilitation. This paper proposes that this strand of work shares a key principle behind the Pragmatic Web concept, namely the need to take seriously diverse perspectives and the processes of meaning negotiation. Moreover, it is argued that the hypermedia discourse tools described instantiate this principle in practical tools which permit end-user control over modelling approaches in the absence of consensus.

    Learning Correlations between Linguistic Indicators and Semantic Constraints: Reuse of Context-Dependent Descriptions of Entities

    This paper presents the results of a study on the semantic constraints imposed on lexical choice by certain contextual indicators. We show how such indicators are computed and how correlations between them and the choice of a noun phrase description of a named entity can be automatically established using supervised learning. Based on this correlation, we have developed a technique for automatic lexical choice of descriptions of entities in text generation. We discuss the underlying relationship between the pragmatics of choosing an appropriate description that serves a specific purpose in the automatically generated text and the semantics of the description itself. We present our work in the framework of the more general concept of reuse of linguistic structures that are automatically extracted from large corpora. We present a formal evaluation of our approach and conclude with some thoughts on potential applications of our method. (Comment: 7 pages, uses colacl.sty and acl.bst, uses epsfig. To appear in the Proceedings of the Joint 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics, COLING-ACL'98.)
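    As a rough illustration of the learning setup, the sketch below maps contextual indicators to a chosen noun-phrase description with an off-the-shelf classifier. The features, labels, and scikit-learn pipeline are hypothetical modern stand-ins for the general idea, not the authors' 1998 implementation.

    ```python
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Each training example pairs contextual indicators observed in a corpus
    # with the noun-phrase description actually chosen for a named entity.
    # (Toy data; real training examples would be extracted from large corpora.)
    contexts = [
        {"topic": "politics", "verb": "said",      "entity_type": "person"},
        {"topic": "finance",  "verb": "announced", "entity_type": "person"},
        {"topic": "politics", "verb": "voted",     "entity_type": "person"},
    ]
    descriptions = ["the senator", "the company spokesman", "the senator"]

    # Learn the correlation, then reuse it for lexical choice in generation:
    # given a new context, predict which description fits it.
    model = make_pipeline(DictVectorizer(), MultinomialNB())
    model.fit(contexts, descriptions)
    print(model.predict([{"topic": "politics", "verb": "said",
                          "entity_type": "person"}])[0])
    ```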

    Computing word-of-mouth trust relationships in social networks from Semantic Web and Web 2.0 data sources

    Social networks can serve both as a rich source of new information and as a filter to identify the information most relevant to our specific needs. In this paper we present a methodology and algorithms that, by exploiting existing Semantic Web and Web 2.0 data sources, help individuals identify who in their social network knows what, and who is the most trustworthy source of information on that topic. Our approach improves upon previous work in a number of ways, such as incorporating topic-specific rather than global trust metrics. This is achieved by generating topic experience profiles for each network member, based on data from Revyu and del.icio.us, to indicate who knows what. Identification of the most trustworthy sources is enabled by a rich trust model of information and recommendation seeking in social networks. Reviews and ratings created on Revyu provide source data for algorithms that generate topic expertise and person-to-person affinity metrics. Combining these metrics, we are implementing a user-oriented application for searching and automated ranking of information sources within social networks.
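    A minimal sketch of the final combination step, assuming per-topic expertise and affinity scores have already been computed from Revyu and del.icio.us data: the people, scores, and the linear weighting below are invented for illustration and are not the paper's actual metric.

    ```python
    def rank_sources(expertise, affinity, weight=0.6):
        """Rank people as sources on one topic by combining a topic-specific
        expertise score with person-to-person affinity, rather than using a
        single global trust value."""
        combined = {
            person: weight * expertise.get(person, 0.0)
                    + (1 - weight) * affinity.get(person, 0.0)
            for person in set(expertise) | set(affinity)
        }
        return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical scores derived from reviews, ratings, and tagging data.
    expertise = {"alice": 0.9, "bob": 0.4, "carol": 0.7}  # knows about "hotels"
    affinity  = {"alice": 0.3, "bob": 0.8, "carol": 0.6}  # closeness to seeker

    for person, score in rank_sources(expertise, affinity):
        print(f"{person}: {score:.2f}")
    ```

    Making the weight explicit is one way to capture the trade-off the abstract implies: a close contact with modest expertise may still outrank a distant expert for word-of-mouth recommendations.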

    DIDET: Digital libraries for distributed, innovative design education and teamwork. Final project report

    The central goal of the DIDET Project was to enhance student learning opportunities by enabling students to partake in global, team-based design engineering projects, in which they directly experience different cultural contexts and access a variety of digital information sources via a range of appropriate technology. To achieve this overall goal, the project delivered on the following objectives:

    1. Teach engineering information retrieval, manipulation, and archiving skills to students studying on engineering degree programs.
    2. Measure the use of those skills in design projects in all years of an undergraduate degree program.
    3. Measure the learning performance in engineering design courses affected by the provision of access to information that would otherwise have been difficult to access.
    4. Measure student learning performance in different cultural contexts that influence the use of alternative sources of information and varying forms of Information and Communications Technology.
    5. Develop and provide workshops for staff development.
    6. Use the measurement results to annually redesign course content and the digital libraries technology.

    The overall DIDET Project approach was to develop, implement, use and evaluate a testbed to improve the teaching and learning of students partaking in global, team-based design projects. Digital libraries and virtual design studios were used to fundamentally change the way design engineering is taught at the collaborating institutions. This was done by implementing a digital library at the partner institutions to improve learning in the field of Design Engineering, and by developing a Global Team Design Project run as part of assessed classes at Strathclyde, Stanford and Olin. Evaluation was carried out on an ongoing basis and fed back into project development, both on the class teaching model and on the LauLima system developed at Strathclyde to support teaching and learning. Major findings include the requirement to overcome technological, pedagogical and cultural issues for successful e-learning implementations. A need for strong leadership has been identified, particularly to exploit the benefits of cross-discipline team working. One major project output still being developed is a DIDET Project Framework for Distributed Innovative Design, Education and Teamwork, to encapsulate all project findings and outputs. The project achieved its goal of embedding major change in the teaching of Design Engineering, and Strathclyde's new Global Design class has been both successful and popular with students.