
    Sensemaking on the Pragmatic Web: A Hypermedia Discourse Perspective

    The complexity of the dilemmas we face on an organizational, societal and global scale forces us into sensemaking activity. We need tools for expressing and contesting perspectives that are flexible enough for real-time use in meetings, structured enough to help manage longer-term memory, and powerful enough to filter the complexity of extended deliberation and debate on an organizational or global scale. This has been the motivation for a programme of basic and applied action research into Hypermedia Discourse, which draws on research in hypertext, information visualization, argumentation, modelling, and meeting facilitation. This paper proposes that this strand of work shares a key principle behind the Pragmatic Web concept, namely the need to take seriously diverse perspectives and the processes of meaning negotiation. Moreover, it argues that the hypermedia discourse tools described instantiate this principle in practical tools which permit end-user control over modelling approaches in the absence of consensus.

    SemLinker: automating big data integration for casual users

    A data integration approach combines data from different sources and builds a unified view for the users. Big data integration is inherently a complex task, and existing approaches are either limited in scope or rely invariably on manual input and intervention from experts or skilled users. SemLinker, an ontology-based data integration system, is part of a metadata management framework for the personal data lake (PDL), a personal store-everything architecture. PDL is aimed at casual and unskilled users, so SemLinker adopts an automated data integration workflow that minimizes the need for manual input. To support the flat architecture of a lake, SemLinker builds and maintains a schema metadata level without any physical transformation of data during integration, preserving the data in their native formats while still allowing them to be queried and analyzed. SemLinker addresses the big data integration challenges of scalability, heterogeneity, and schema evolution, and is evaluated on large, real-world datasets of substantial heterogeneity. The results confirm SemLinker's integration efficiency and robustness, especially its capability to handle data heterogeneity and schema evolution automatically.
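
    As an illustration of the schema-metadata idea described above, the following Python sketch (not SemLinker's actual API; the file names, attribute names and the "Person.name" concept are hypothetical) registers mappings from native source attributes to shared ontology concepts and answers a concept-level query without physically transforming any data; schema evolution is handled by re-registering an updated mapping.

    import csv, json

    # Hypothetical shared ontology concepts acting as a unified vocabulary.
    ONTOLOGY = {"Person.name", "Person.email"}

    class SchemaRegistry:
        """Schema metadata level: stores mappings only, never copies the data."""
        def __init__(self):
            self.sources = []  # (reader, {ontology concept: native attribute})

        def register(self, reader, mapping):
            assert set(mapping) <= ONTOLOGY, "mappings must target known concepts"
            self.sources.append((reader, mapping))

        def query(self, concept):
            # Resolve the concept per source at read time; data stay native.
            for reader, mapping in self.sources:
                attr = mapping.get(concept)
                if attr is None:
                    continue  # this source does not cover the concept
                for record in reader():
                    yield record.get(attr)

    def csv_reader(path):
        def read():
            with open(path, newline="") as f:
                yield from csv.DictReader(f)
        return read

    def json_reader(path):
        def read():
            with open(path) as f:
                yield from json.load(f)  # expects a list of objects
        return read

    registry = SchemaRegistry()
    registry.register(csv_reader("users.csv"), {"Person.name": "full_name"})
    registry.register(json_reader("accounts.json"), {"Person.name": "displayName"})
    # If accounts.json later renames displayName, only the mapping is updated;
    # the underlying files are never rewritten.
    print(list(registry.query("Person.name")))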

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction, and extends the inquiry through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities for extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report then proposes a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
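
    A minimal Python sketch of the RSS-seeded idea (the report's own pipeline is more elaborate, and the HTML and excerpt below are invented for illustration): because each RSS item carries a clean excerpt of a post, the HTML block whose text best overlaps that excerpt can be taken as the post-content region, with no hand-written, per-blog extraction rules.

    from html.parser import HTMLParser

    class BlockCollector(HTMLParser):
        """Collect the text of each block-level element (coarse DOM segmentation)."""
        BLOCKS = {"div", "article", "section", "p"}
        def __init__(self):
            super().__init__()
            self.stack, self.blocks = [], []
        def handle_starttag(self, tag, attrs):
            if tag in self.BLOCKS:
                self.stack.append([])
        def handle_endtag(self, tag):
            if tag in self.BLOCKS and self.stack:
                self.blocks.append(" ".join(self.stack.pop()))
        def handle_data(self, data):
            if self.stack and data.strip():
                self.stack[-1].append(data.strip())

    def best_block(html, rss_excerpt):
        collector = BlockCollector()
        collector.feed(html)
        words = set(rss_excerpt.lower().split())
        # The block with the highest token overlap with the RSS excerpt wins.
        return max(collector.blocks,
                   key=lambda b: len(words & set(b.lower().split())),
                   default="")

    html = ("<div>menu links sidebar</div>"
            "<article><p>Today we shipped version 2 of the crawler.</p></article>")
    print(best_block(html, "Today we shipped version 2"))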

    Ontological Approach to Domain Knowledge Representation for Information Retrieval in Multiagent Systems

    We propose using an ontological representation of buyer-interest knowledge in e-commerce. It makes searching for the most appropriate sellers via multiagent systems more efficient. An algorithm for comparing a buyer's ontology with those of e-shops (their taxonomies), together with an e-commerce multiagent system, is realised using an ontology for information retrieval in a distributed environment.
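
    The abstract does not spell out the comparison algorithm, so the following Python sketch only illustrates the general idea under assumed representations: toy taxonomies as hypothetical child-to-parent edges, with each e-shop scored by how well its taxonomy covers the buyer's interests, crediting a match on a broader category at half weight.

    def ancestors(taxonomy, concept):
        # Walk child -> parent edges up to the root.
        while concept in taxonomy:
            concept = taxonomy[concept]
            yield concept

    def match_score(interests, buyer_taxonomy, shop_taxonomy):
        shop_concepts = set(shop_taxonomy) | set(shop_taxonomy.values())
        score = 0.0
        for interest in interests:
            if interest in shop_concepts:
                score += 1.0   # exact concept match
            elif any(a in shop_concepts for a in ancestors(buyer_taxonomy, interest)):
                score += 0.5   # only a broader category matches
        return score

    buyer = {"dslr_camera": "camera", "camera": "electronics"}
    shop_a = {"camera": "electronics", "phone": "electronics"}
    shop_b = {"sofa": "furniture"}
    print(match_score(["dslr_camera"], buyer, shop_a))  # 0.5: broader match only
    print(match_score(["dslr_camera"], buyer, shop_b))  # 0.0: no match

    In a multiagent setting, a buyer agent would compute such scores against each seller agent's advertised taxonomy and contact the highest-ranked sellers.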

    OPPL-Galaxy, a Galaxy tool for enhancing ontology exploitation as part of bioinformatics workflows

    Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre-Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists' toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than manual editing. Results: We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided to demonstrate OPPL-Galaxy's capability for enriching, modifying and querying biomedical ontologies. Conclusions: Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses.
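
    OPPL itself is a dedicated OWL scripting language, so the following Python/rdflib snippet is only a hedged analogue (the namespace and file names are hypothetical): it applies one change to every class matching a pattern, the kind of bulk, repeatable edit that OPPL scripts automate and that OPPL-Galaxy exposes as a Galaxy tool.

    from rdflib import Graph, Literal, Namespace, RDFS

    EX = Namespace("http://example.org/onto#")  # hypothetical ontology namespace

    g = Graph()
    g.parse("disease.owl", format="xml")  # hypothetical input ontology

    # Annotate every transitive subclass of ex:Disease (including ex:Disease
    # itself) in a single pass, rather than editing each class by hand.
    for cls in g.transitive_subjects(RDFS.subClassOf, EX.Disease):
        g.add((cls, RDFS.comment, Literal("curated: phenotype axioms reviewed")))

    g.serialize("disease-enriched.owl", format="xml")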

    Computational case-based redesign for people with ability impairment: Rethinking, reuse and redesign learning for home modification practice

    Home modification practice for people with impairments of ability involves redesigning existing residential environments, as distinct from the creation of a new dwelling. A redesigner alters existing structures, fittings and fixtures to better meet the occupant's ability requirements. While research on case-based design reasoning and healthcare informatics is well documented, the reasoning and process of redesign, and its integration with individual human functional abilities, remain poorly understood. Capturing redesign knowledge in the form of online case documentation provides a way to integrate, and learn from, individual case-based redesign episodes in which assessment and interventions are naturally linked. A key aim of the research outlined in this thesis was to gain a better understanding of the redesign of spaces for individual human ability, with a view to computational modelling. Consequently, the foundational knowledge underpinning the model development includes design, redesign, case-based building design and human functional ability. Case-based redesign, as proposed within the thesis, is a method for capturing the redesign context, the residential environment, the modification and the transformational knowledge involved in the redesign. Computational simulation methods are traditionally field-dependent. Consequently, part of the research undertaken within this thesis involved the development of a framework for analysing cases within an online case-studies library to validate redesign for individuals, and a method of acquiring reuse information so that the redesign needs of a given population can be estimated from either their environment or their ability profile. As home modification for people with functional impairments was a novel application field, an explorative, action-based methodological approach using computational modelling was needed to underpin a case-based reasoning method. The action-based method involved a process of articulating and examining existing knowledge, suggesting new case-based computational practices, and evaluating the results. This cyclic process led to an improvement cycle that included theory, computational tool development and practical application. The rapid growth of protocols and online redesign communities using Web technologies meant that a web-based prototype capable of acquiring cases directly from home modification practitioners, online and in context, was both desirable and achievable. The first online version, in 1998-99, encoded home modification redesigns using static web pages and hyperlinks. This motivated the full-scale, more dynamic and robust HMMinfo casestudies prototype, whose action-based development is detailed within this thesis. The home modification case-studies library results from the development and integration of a novel case-based redesign model in combination with a Human-Activity-Space computational ontology. These two models are integrated into a relational database design to enable online case acquisition, browsing, case reuse and redesign learning. The application of the redesign ontology illustrates case reuse and learning, and presents some of the implementation issues and their resolution.

    Original contributions resulting from this work include: extending case-based design theory to encompass redesign and redesign models, distinguishing the importance of human ability in redesign, and the development of the Human-Activity-Space ontology. Additionally, all data models were combined and their associated inter-relationships evaluated within a prototype made available to redesign practitioners. Reflective and practitioner-based evaluation contributed an enhanced understanding of redesign case-contribution dynamics in an online environment. Feedback from redesign practitioners indicated that the need to gain informed consent from consumers of home modification and maintenance services to share cases, the additional time required to document a case online, and reticence to go public for fear of critical feedback all contributed to slower-than-expected growth of the case library, despite considerable interest in the HMMinfo casestudies website as evidenced by web usage statistics. Additionally, the redesign model described in this thesis has practical implications for all design practitioners and educators who seek to create new work by reinterpreting, reconstructing and redesigning spaces.
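
    The abstract leaves the case-based reasoning machinery abstract, so the following Python sketch is only a generic retrieval illustration under assumed representations: a stored case pairs an occupant ability profile with dwelling features and the modifications applied, and a new referral retrieves the most similar case so its modifications can be reused as a starting point.

    from dataclasses import dataclass

    @dataclass
    class Case:
        ability: dict        # e.g. {"stair_climbing": 0.2}, scores in 0..1
        environment: set     # dwelling features, e.g. {"stepped_entry"}
        modifications: list  # redesign interventions applied in this case

    def similarity(q_ability, q_env, case):
        # Ability: 1 minus the mean absolute difference over shared attributes.
        shared = set(q_ability) & set(case.ability)
        if shared:
            a = 1 - sum(abs(q_ability[k] - case.ability[k]) for k in shared) / len(shared)
        else:
            a = 0.0
        # Environment: Jaccard overlap of dwelling features.
        e = len(q_env & case.environment) / max(len(q_env | case.environment), 1)
        return 0.5 * a + 0.5 * e  # equal weighting is an assumption

    library = [
        Case({"stair_climbing": 0.2}, {"stepped_entry"}, ["install ramp"]),
        Case({"grip_strength": 0.4}, {"round_door_knobs"}, ["fit lever handles"]),
    ]
    query_ability, query_env = {"stair_climbing": 0.1}, {"stepped_entry"}
    best = max(library, key=lambda c: similarity(query_ability, query_env, c))
    print(best.modifications)  # ['install ramp']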

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
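
    Of the tasks named above, ontology mapping is concrete enough to sketch. The Python snippet below is not an AKT tool, just a common first-pass heuristic under assumed inputs (plain class-label lists): align classes from two ontologies by string similarity before any deeper structural or logical matching.

    from difflib import SequenceMatcher

    def align(labels_a, labels_b, threshold=0.8):
        """Propose (a, b, score) matches whose label similarity clears threshold."""
        matches = []
        for a in labels_a:
            b, score = max(((b, SequenceMatcher(None, a.lower(), b.lower()).ratio())
                            for b in labels_b), key=lambda x: x[1])
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
        return matches

    # Hypothetical class labels from two knowledge-service ontologies.
    print(align(["Research Paper", "Author"], ["ResearchPaper", "Writer", "Author"]))
    # -> [('Research Paper', 'ResearchPaper', 0.96), ('Author', 'Author', 1.0)]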

    Semantic Web Representation for Phytochemical Ontology Model

    Nowadays people are more health-conscious; they monitor the ingredients and nutrients in what they eat. Fruits and vegetables, which are rich in phytochemicals, are often chosen as part of a good diet. Phytochemicals are rich in nutrients and can provide health benefits to those who consume them. Previous research has ontologically modelled phytochemicals by chemical structure and colour according to groups of fruits and vegetables. However, there has been no semantic web representation of that ontology model that would make the information more shareable among users. Therefore, in this paper, we develop a semantic web representation for the phytochemical ontology model by linking the user interface to the ontology model using the JENA framework. Data from the ontology are read using SPARQL queries and the information is displayed to the front-end user. With this semantic web representation, it is hoped that the knowledge becomes more accessible and shareable among its intended users.
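
    The paper pairs JENA (a Java framework) with SPARQL; as a hedged analogue in Python, the rdflib sketch below shows the same load-query-display pattern. The namespace, file name and property names are invented from the abstract's description, not taken from the actual model.

    from rdflib import Graph

    g = Graph()
    g.parse("phytochemical.owl", format="xml")  # hypothetical ontology file

    QUERY = """
    PREFIX ph: <http://example.org/phytochemical#>
    SELECT ?name ?colour ?benefit WHERE {
        ?p a ph:Phytochemical ;
           ph:name ?name ;
           ph:hasColour ?colour ;
           ph:hasHealthBenefit ?benefit .
    }
    """

    for row in g.query(QUERY):
        # In the real system these rows would populate the user interface.
        print(f"{row.name}: colour={row.colour}, benefit={row.benefit}")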