
    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints for the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that there will be standards for task (or service) specifications in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
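
    The ontology-management issues sketched above (merging pre-existing ontologies, mapping between them, eliminating conflicts of reference) can be made concrete with a small sketch. The following is a hypothetical illustration, not AKT code: it assumes the Python rdflib library and two invented ontology files (ontology_a.ttl, ontology_b.ttl) with example URIs, and shows a naive union merge plus an equivalence assertion between two identifiers for the same concept.

    # A minimal, hypothetical sketch of ontology merging and reference
    # reconciliation using rdflib; file names and URIs are invented.
    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    merged = Graph()
    for source in ("ontology_a.ttl", "ontology_b.ttl"):  # hypothetical files
        merged.parse(source, format="turtle")             # naive union merge

    # Two vocabularies name the same concept under different URIs; assert
    # their equivalence as data. Note rdflib applies no OWL reasoning by
    # itself; the assertion is input for a downstream reasoner or broker.
    akt_person = URIRef("http://example.org/akt#Person")
    foaf_person = URIRef("http://xmlns.com/foaf/0.1/Person")
    merged.add((akt_person, OWL.equivalentClass, foaf_person))

    # A simple SPARQL query over the merged graph.
    for row in merged.query(
        "SELECT ?s WHERE { ?s a <http://xmlns.com/foaf/0.1/Person> }"
    ):
        print(row.s)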

    A qualitative enquiry into OpenStreetMap making

    Based on a case study of the OpenStreetMap community, this paper provides a contextual and embodied understanding of the user-led, user-participatory and user-generated produsage phenomenon. It employs Grounded Theory, Social Worlds Theory, and qualitative methods to illuminate and explore the produsage processes of OpenStreetMap making, and how knowledge artefacts such as maps can be collectively and collaboratively produced by a community of people who are situated in different places around the world but engaged with the same repertoire of mapping practices. The empirical data illustrate that OpenStreetMap itself acts as a boundary object that enables actors from different social worlds to co-produce the Map through interacting with each other and negotiating the meanings of mapping, the mapping data and the Map itself. The discourses also show that, unlike traditional maps that black-box cartographic knowledge and offer a single dominant perspective of cities or places, OpenStreetMap is an embodied epistemic object that embraces different world views. The paper also explores how contributors build their identities as OpenStreetMappers alongside the other identities they hold. Understanding this identity-building process helps to understand mapping as an embodied activity with emotional, cognitive and social repertoires.

    IDR : a participatory methodology for interdisciplinary design in technology enhanced learning

    One of the important themes that emerged from the CAL'07 conference was the failure of technology to bring about the expected disruptive effect on learning and teaching. We identify one of the causes as an inherent weakness in prevalent development methodologies. While the problem of designing technology for learning is irreducibly multi-dimensional, design processes often lack true interdisciplinarity. To address this problem we present IDR, a participatory methodology for interdisciplinary techno-pedagogical design, drawing on the design patterns tradition (Alexander, Silverstein & Ishikawa, 1977) and the design research paradigm (diSessa & Cobb, 2004). We discuss the iterative development and use of our methodology by a pan-European project team of educational researchers, software developers and teachers. We reflect on our experiences of the participatory nature of pattern design and discuss how, as a distributed team, we developed a set of over 120 design patterns, created using our freely available open-source web toolkit. Furthermore, we detail how our methodology is applicable to the wider community through a workshop model, which has been run and iteratively refined at five major international conferences, involving over 200 participants.
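
    The abstract does not spell out what a design pattern record contains; the sketch below is a hypothetical illustration, not the IDR toolkit's actual schema, showing one plausible structure in the Alexandrian form the authors cite (a named problem, its context, a worked solution, and links to related patterns).

    # A minimal, hypothetical record for a techno-pedagogical design
    # pattern in the Alexandrian style; illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class DesignPattern:
        name: str
        context: str           # when and where the pattern applies
        problem: str           # the recurring design tension it addresses
        solution: str          # the core of the resolution
        related: list[str] = field(default_factory=list)  # linked pattern names

    pattern = DesignPattern(
        name="Peer feedback loop",
        context="Distributed learners working on shared artefacts",
        problem="Feedback arrives too late to shape the artefact",
        solution="Schedule short, structured review rounds between drafts",
        related=["Shared artefact", "Structured critique"],
    )
    print(pattern.name)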

    "Europeanization of the core executive in the transition from circumstances of EU accession to full EU membership"

    Only recently have the direct and indirect 'European' impacts (of political outcomes at the European level) on domestic political systems started to be studied (e.g. Spanou, 1998; Bulmer and Burch, 1998 and 2001; Kassim, Peters and Wright, eds., 2000; Goetz and Hix, eds., 2001; Knill, 2001; Schneider and Aspinwall, eds., 2001; Goetz, ed., 2001; Laffan, 2001b). For the purposes of this paper, we understand Europeanization processes as the impacts of EU integration on specific countries' political institution-building and institutional adjustments, including constitutional and administrative law, as well as on how the political system is organized and operated. The paper focuses on one of the three alternative perspectives within the 'top-down' approach to studying Europeanization processes as defined by Goetz (2001), namely the linkage perspective. For recent new EU member states, the national administrative adjustments made in negotiating accession with the EU have so far, unsurprisingly, prevailed over those made under the circumstances of (very recent) full EU membership. Our comparative research, taking a dynamic view, covers three EU accession states and recent new EU member states: Estonia, Hungary and Slovenia. While taking some key common features of the selected countries into account, we investigate the countries' idiosyncrasies, including variations in the institutional adaptation of their core executives, drawing on research findings from the European project 'Organizing for Enlargement'. We present preliminary comparative research findings and tentative conclusions on the variables that may cause variations in the adaptation of national administrations to the challenges of European integration in these three (otherwise in some respects relatively similar) countries.

    Towards a generic platform for developing CSCL applications using Grid infrastructure

    The goal of this paper is to explore the possibility of using CSCL component-based software on a Grid infrastructure. Merging these technologies is an attractive but probably quite laborious enterprise if we consider not only the benefits but also the barriers that have to be overcome. This work presents an attempt in this direction by developing a generic platform of CSCL components and discussing the advantages that could be obtained by adapting it to the Grid. We then propose a means of making this adjustment possible, thanks to the high degree of genericity with which our component library is endowed by being based on the generic programming paradigm. Finally, an application of our library is proposed, both to validate the adequacy of the platform on which it is based and to indicate the possibilities gained by using it on the Grid.
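
    To make the genericity claim concrete, here is a minimal hedged sketch, not the authors' actual library: under the generic programming paradigm, a component is parameterized over the type of data it handles, so one collaboration component (here a hypothetical SharedEventLog) can be reused across different CSCL applications.

    # A minimal, hypothetical generic CSCL component: the event log is
    # parameterized over its payload type, so the same component can serve
    # chat messages, whiteboard strokes, or any other shared artefact.
    from typing import Generic, TypeVar

    T = TypeVar("T")

    class SharedEventLog(Generic[T]):
        """Append-only log of collaboration events, generic in the event type."""
        def __init__(self) -> None:
            self._events: list[T] = []

        def append(self, event: T) -> None:
            self._events.append(event)

        def replay(self) -> list[T]:
            return list(self._events)

    chat_log: SharedEventLog[str] = SharedEventLog()
    chat_log.append("alice: shall we start on section 2?")
    print(chat_log.replay())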

    Mediating boundaries between knowledge and knowing: ICT and R4D praxis

    Research for development (R4D) praxis (theory-informed practical action) can be underpinned by the use of Information and Communication Technologies (ICTs), which, it is claimed, provide opportunities for knowledge working and sharing. Such a framing implicitly or explicitly constructs a boundary around knowledge as reified, or commodified, or at least able to be stabilized for a period of time (first-order knowledge). In contrast, 'third-generation knowledge' emphasizes the social nature of learning and knowledge-making; this reframes knowledge as a negotiated social practice, thus constructing a different system boundary. This paper offers critical reflections on the use of a wiki as a data repository and mediating technical platform as part of innovating in R4D praxis. A sustainable social learning process was sought that fostered an emergent community of practice among biophysical and social researchers acting for the first time as R4D co-researchers. Over time, the technologically mediated element of the learning system was judged to have failed. This inquiry asks: how can learning system design cultivate learning opportunities and respond to learning challenges in an online environment to support R4D practice? Confining critical reflection to the online learning experience alone would ignore the wider context in which knowledge work took place; therefore the institutional setting is also considered.

    Technical alignment

    This essay discusses the importance of infrastructure and testing in helping digital preservation services demonstrate reliability, transparency, and accountability. It encourages practitioners to build a strong culture in which transparency and collaboration between technical frameworks are valued highly. It also argues for devising and applying agreed-upon metrics that will enable the systematic analysis of preservation infrastructure. The essay begins by defining technical infrastructure and testing in the digital preservation context, provides case studies that exemplify both progress and challenges for technical alignment in both areas, and concludes with suggestions for achieving greater degrees of technical alignment going forward.
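
    As one concrete instance of the kind of agreed-upon metric the essay calls for, the sketch below is a hypothetical example, not drawn from the essay itself: it computes a fixity-verification rate, the share of files in a repository whose current SHA-256 checksum still matches a stored manifest.

    # A hypothetical preservation metric: the fraction of files whose
    # SHA-256 digest still matches a stored manifest (fixity check).
    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def fixity_rate(manifest: dict[str, str]) -> float:
        """manifest maps file paths to their expected SHA-256 digests."""
        checked = [sha256(Path(p)) == digest for p, digest in manifest.items()]
        return sum(checked) / len(checked) if checked else 1.0

    # Usage: fixity_rate({"archive/report.pdf": "ab12..."}) -> 1.0 if intact.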

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored to the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data-flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, which makes it difficult to directly access the underlying computing platform, which would be important for achieving optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
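
    The data-flow network abstraction the abstract mentions can be illustrated with a minimal sketch. The code below is hypothetical Python, not Inviwo's C++ API: processors are nodes whose outputs feed downstream inputs, and evaluation order follows the network topology.

    # A minimal, hypothetical data-flow network: each processor pulls its
    # upstream results, applies its function, and passes the result on.
    from typing import Callable

    class Processor:
        def __init__(self, name: str, fn: Callable[[list], object]):
            self.name, self.fn, self.inputs = name, fn, []

        def connect(self, upstream: "Processor") -> None:
            self.inputs.append(upstream)

        def evaluate(self) -> object:
            # Pull-based evaluation: compute upstream nodes, then this one.
            return self.fn([p.evaluate() for p in self.inputs])

    source = Processor("volume_source", lambda _: [1, 2, 3, 4])
    filt = Processor("threshold", lambda ins: [v for v in ins[0] if v > 2])
    render = Processor("renderer", lambda ins: f"rendering {ins[0]}")
    filt.connect(source)
    render.connect(filt)
    print(render.evaluate())  # -> rendering [3, 4]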

    Specification and implementation of mapping rule visualization and editing : MapVOWL and the RMLEditor

    Visual tools are implemented to help users define how to generate Linked Data from raw data. This is possible thanks to mapping languages, which enable detaching mapping rules from the implementation that executes them. However, no thorough research has been conducted so far on how to visualize such mapping rules, especially when they become large and require considering multiple heterogeneous raw data sources and transformed data values. In the past, we proposed the RMLEditor, a visual graph-based user interface which allows users to easily create mapping rules for generating Linked Data from raw data. In this paper, we build on top of our existing work: we (i) specify a visual notation for the graph visualizations used to represent mapping rules, (ii) introduce an approach for manipulating rules when large visualizations emerge, and (iii) propose an approach to uniformly visualize data fractions of raw data sources, combined with an interactive interface for uniform data fraction transformations. We perform two additional comparative user studies. The first compares using the visual notation to present mapping rules with using a mapping language directly, and reveals that the visual notation is preferred. The second compares the graph-based RMLEditor with the form-based RMLx Visual Editor for creating mapping rules, and reveals that graph-based visualizations are preferred for creating mapping rules through the use of our proposed visual notation and the uniform representation of heterogeneous data sources and data values.
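
    To ground what a mapping rule does, the following minimal sketch is illustrative Python, not the RML mapping language or the RMLEditor itself: a hypothetical rule (URI template plus predicate-to-column bindings) is applied to one row of raw tabular data to emit RDF-style triples.

    # A minimal, hypothetical mapping rule: turn a row of raw CSV data
    # into RDF-style triples (illustrative only; real rules would be RML).
    import csv, io

    rule = {
        "subject": "http://example.org/person/{id}",   # URI template
        "predicates": {
            "http://xmlns.com/foaf/0.1/name": "name",  # predicate -> column
            "http://xmlns.com/foaf/0.1/mbox": "email",
        },
    }

    raw = io.StringIO("id,name,email\n7,Ada,ada@example.org\n")
    for row in csv.DictReader(raw):
        subject = rule["subject"].format(**row)
        for predicate, column in rule["predicates"].items():
            print(f"<{subject}> <{predicate}> \"{row[column]}\" .")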