    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    Using formal models to design user interfaces: a case study

    The use of formal models for user interface design can provide a number of benefits. It can help to ensure consistency across designs for multiple platforms, to prove properties such as reachability and completeness, and, perhaps most importantly, to incorporate the user interface design process into a larger, formally-based software development process. Often, descriptions of such models and examples are presented in isolation from real-world practice in order to focus on particular benefits, small focused examples or the general methodology. This paper presents a case study of developing the user interface to a new software application using a particular pair of formal models: presentation models and presentation interaction models. The aim of this study was to apply formal models in practice to the design of a UI for a new software application. We wanted to determine how easy it would be to integrate such models into our usual development process, and to find out what the benefits, and difficulties, of using such models were. We will show how we used the formal models within a user-centred design process, discuss what effect they had on this process, and explain what benefits we perceived from their use.
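One of the properties mentioned above, reachability, can be checked mechanically once a user interface is modelled as states (screens) and transitions (user actions). The sketch below is a toy illustration only; the states, actions and breadth-first search are invented for this summary and are not the paper's presentation models or presentation interaction models:

```python
from collections import deque

# A hypothetical UI model: each screen maps user actions to target screens.
transitions = {
    "login":    {"submit": "home", "help": "help"},
    "home":     {"open_settings": "settings", "logout": "login"},
    "settings": {"back": "home"},
    "help":     {"back": "login"},
    "orphan":   {"back": "home"},   # no action leads here: a design defect
}

def reachable(start, transitions):
    """Return the set of screens reachable from `start` via any action sequence."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for target in transitions.get(state, {}).values():
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Screens that no sequence of user actions can ever display.
unreachable = set(transitions) - reachable("login", transitions)
```

Here the check flags the "orphan" screen as unreachable from the login screen, which is exactly the kind of defect a formal model lets a tool catch before implementation.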

    Metadata enrichment for digital heritage: users as co-creators

    This paper espouses the concept of metadata enrichment through an expert- and user-focused approach to metadata creation and management. To this end, it is argued that the Web 2.0 paradigm enables users to be proactive metadata creators. As Shirky (2008, p. 47) argues, Web 2.0's social tools enable "action by loosely structured groups, operating without managerial direction and outside the profit motive". Lagoze (2010, p. 37) advises that "the participatory nature of Web 2.0 should not be dismissed as just a popular phenomenon [or fad]". Carletti (2016) proposes a participatory digital cultural heritage approach in which Web 2.0 techniques such as crowdsourcing can be used to enrich digital cultural objects; such participation includes "heritage crowdsourcing, community-centred projects or other forms of public participation". On the other hand, the new collaborative approaches of Web 2.0 neither negate nor replace contemporary standards-based metadata approaches. Hence, this paper proposes a mixed metadata approach in which user-created metadata augments expert-created metadata and vice versa. The metadata creation process no longer remains the sole prerogative of the metadata expert. The Web 2.0 collaborative environment now allows users to participate in both adding and re-using metadata. Expert-created (standards-based, top-down) and user-generated (socially-constructed, bottom-up) approaches to metadata are complementary rather than mutually exclusive. The two approaches are often considered dichotomous, albeit incorrectly (Gruber, 2007; Wright, 2007). This paper espouses the importance of enriching digital information objects with descriptions pertaining to the about-ness of those objects. Such richness and diversity of description, it is argued, could chiefly be achieved by involving users in the metadata creation process.

    This paper presents the importance of the paradigms of metadata enriching and metadata filtering for the cultural heritage domain. Metadata enriching states that a priori metadata, instantiated and granularly structured by metadata experts, is continually enriched through socially-constructed (post-hoc) metadata, with users pro-actively engaged in co-creating it. The principle also states that enriched metadata is contextually and semantically linked and openly accessible. Metadata filtering, in turn, states that the metadata resulting from enriching should be displayed for users in line with their needs and convenience. In both enriching and filtering, users should be considered as prosumers, resulting in what is called collective metadata intelligence.
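The mixed approach described above (expert-created records enriched by user-generated terms, then filtered at display time) can be sketched roughly as follows. This is a hypothetical illustration: the field names, provenance tags and functions are invented for this summary, not drawn from the paper:

```python
# An invented expert-created (a priori) record; each subject term carries
# a provenance tag so expert and user contributions stay distinguishable.
expert_record = {
    "title": "Votive figurine",
    "date": "c. 1200 BCE",
    "subjects": [("ritual object", "expert")],
}

def enrich(record, user_tags, user):
    """Metadata enriching: add user-contributed terms, preserving provenance."""
    enriched = dict(record)
    enriched["subjects"] = record["subjects"] + [
        (tag, f"user:{user}") for tag in user_tags
    ]
    return enriched

def filter_view(record, include_user=True):
    """Metadata filtering: tailor the displayed record to the viewer's needs,
    here simply by showing or hiding socially-constructed terms."""
    subjects = [term for term, source in record["subjects"]
                if include_user or source == "expert"]
    return {**record, "subjects": subjects}

rec = enrich(expert_record, ["clay", "household shrine"], user="anna")
```

A viewer who wants only the curated description sees just the expert terms, while the default view surfaces the user-enriched record; the expert metadata is never overwritten, only augmented.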

    Designing relational pedagogies with jam2jamXO

    This paper examines the affordances of the philosophy and practice of open source and its application in developing music education software. In particular, I will examine the parallels inherent in the 'openness' of pragmatist philosophy in education (Dewey 1916, 1989), such as group or collaborative learning, discovery learning (Bruner 1966) and learning through creative activity with computers (Papert 1980, 1994). Primarily, I am interested in 'relational pedagogies' (Ruthmann and Dillon, in press), which are in a real sense about the ethics of the transaction between student and teacher in an ecology where technology plays a more significant role. In these contexts, relational pedagogies refers to how the music teacher manages their relationships with students and evaluates the affordances of open source technology in that process. It is concerned directly with how the relationship between student and teacher is affected by the technological tools, as is the capacity for music making and learning. In particular, technologies that have agency present the opportunity for a partnership between user and technology that enhances the capacity for expressive music making, productive social interaction and learning. In this instance, technologies with agency are defined as ones that enhance the capacity to be expressive and to perform tasks with virtuosity and complexity, where the technology translates simple commands and gestures into complex outcomes. The technology enacts a partnership with the user that becomes both a cognitive and performative amplifier. Specifically, we have used this term to describe interactions with generative technologies that use procedural invention as a creative technique to produce music and visual media.

    Collaborative method to maintain business process models updated

    Business process models are often forgotten after their creation, and their representations are not usually updated. This is problematic, as processes evolve over time. This paper addresses the maintenance of business process models through the definition of a collaborative method that creates interaction contexts enabling business actors to discuss business processes and share business knowledge. The collaborative method extends the discussion of existing process representations to all stakeholders, promoting their update. It contributes to improving business process models by allowing updates based on change proposals and discussions, using a groupware tool that was developed for this purpose. Four case studies were conducted in real organizational environments. We came to the conclusion that the defined method and the developed tool can help organizations to keep a business process model updated based on the inputs and consequent discussions of the organizational actors who participate in the processes.

    The experience as a document: designing for the future of collaborative remembering in digital archives

    How does it feel when we remember together on-line? Who gets to say what is worth remembering? To understand how the user experience of participation is affecting the formation of collective memories in the context of online environments, it is first important to take into consideration how the notion of memory has been transformed under the influence of the digital revolution. I aim to contribute to the field of User Experience (UX) research by theorizing on the felt experience of users from a memory perspective, taking into consideration aspects linked to both personal and collective memories in the context of connected environments.

    Harassment and hate speech in connected conversational environments are especially targeted at women and underprivileged communities, and have become a problem for digital archives of vernacular creativity (Burgess 2007) such as YouTube, Twitter, Reddit and Wikipedia. An evaluation of the user experience of underprivileged communities in creative archives such as Wikipedia indicates the urgency of building a feminist space where women and queer folks can focus on knowledge production and learning without being harassed. The theoretical models and designs that I propose are the result of a series of prototype tests and case studies focused on cognitive tools for a mediated human memory operating inside transactive memory systems. With them, I aim to imagine the means by which feminist protocols for UX design and research can assist in the building and maintenance of the archive as a safe/brave space.

    Working with perspectives from media theory, memory theory and gender studies, and centering the user experience of participation for women, queer folks, people of colour (POC) and other vulnerable and underrepresented communities as the main focus of inquiry, my research takes an interdisciplinary approach to interrogate how online misogyny and other forms of abuse are perceived by communities placed outside the centre of hegemonic normativity, and how the user experience of online abuse is affecting the formation of collective memories in the context of online environments.

    ALT-C 2010 - Conference Proceedings


    Developing Collaborative XML Editing Systems

    In many areas the eXtensible Markup Language (XML) is becoming the standard exchange and data format. More and more applications not only support XML as an exchange format but also use it as their data model or default file format for graphic, text and database (such as spreadsheet) applications. Computer Supported Cooperative Work is an interdisciplinary field of research dealing with group work, cooperation and the information and communication technologies that support them. One part of it is real-time collaborative editing, which investigates the design of systems that allow several people to work simultaneously, in real time, on the same document without the risk of inconsistencies. Existing collaborative editing research applications specialize in one or, at best, a small number of document types, for example graphic, text or spreadsheet documents. This research investigates the development of a software framework that allows collaborative editing of any XML document type in real time, presenting a more versatile solution to the problems of real-time collaborative editing. This research contributes a new software framework model which will assist software engineers in the development of new collaborative XML editing applications. The devised framework is flexible in the sense that it is easily adaptable to different workflow requirements covering concurrency control, awareness mechanisms and optional locking of document parts. Additionally, this thesis contributes a new framework integration strategy that enables the enhancement of existing single-user editing applications with real-time collaborative editing features without changing their source code.
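One of the workflow requirements mentioned above, optional locking of document parts, can be illustrated with a minimal sketch. The class below is hypothetical, invented for this summary rather than taken from the thesis framework: it tracks locks on XML subtrees by path and refuses a lock when another user already holds an overlapping (ancestor or descendant) subtree:

```python
class SubtreeLocks:
    """Illustrative lock manager for parts of a shared XML document.

    Paths are XPath-like strings such as "/doc/sec1/para2". The simple
    prefix test below treats an ancestor or descendant lock as a conflict;
    a real implementation would compare path segments, since "/doc/sec1"
    would otherwise also conflict with "/doc/sec10".
    """

    def __init__(self):
        self._locks = {}  # path -> user currently holding the lock

    def acquire(self, path, user):
        """Grant the lock unless another user holds an overlapping subtree."""
        for locked, holder in self._locks.items():
            if holder != user and (path.startswith(locked) or locked.startswith(path)):
                return False
        self._locks[path] = user
        return True

    def release(self, path, user):
        """Release a lock, but only for the user who holds it."""
        if self._locks.get(path) == user:
            del self._locks[path]

locks = SubtreeLocks()
```

For instance, once one user locks "/doc/sec1", another user's request for "/doc/sec1/para2" is refused until the first lock is released, while edits in an unrelated subtree such as "/doc/sec2" proceed concurrently.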

    A collaborative platform for integrating and optimising Computational Fluid Dynamics analysis requests

    A Virtual Integration Platform (VIP) is described which provides support for the integration of Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) analysis tools into an environment that supports the use of these tools in a distributed, collaborative manner. The VIP evolved through previous EU research conducted within the VRShips-ROPAX 2000 (VRShips) project; the current version discussed here was developed predominantly within the VIRTUE project but also within the SAFEDOR project. The VIP is described with respect to the support it provides to designers and analysts in coordinating and optimising CFD analysis requests. Two case studies illustrate the application of the VIP within HSVA: the use of a panel code for the evaluation of geometry variations in order to improve propeller efficiency, and the use of a dedicated maritime RANS code (FreSCo) to improve the wake distribution for the VIRTUE tanker. A discussion details the background, application and results of the use of the VIP within these two case studies, how the platform was of benefit during development, and how it can benefit HSVA in the future.