    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author’s and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    ‘Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.’ (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided, e.g. for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough. Complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.

    User driven modelling: Visualisation and systematic interaction for end-user programming with tree-based structures

    This thesis addresses certain problems encountered by teams of engineers when modelling complex structures and processes subject to cost and other resource constraints. The cost of a structure or process may be ‘read off’ its specifying model, but the language in which the model is expressed (e.g. CAD) and the language in which resources may be modelled (e.g. spreadsheets) are not naturally compatible. This thesis demonstrates that a number of intermediate steps may be introduced which enable both meaningful translation from one conceptual view to another as well as meaningful collaboration between team members. The work adopts a diagrammatic modelling approach as a natural one in an engineering context when seeking to establish a shared understanding of problems.

    Thus, the research question to be answered in this thesis is: ‘To what extent is it possible to improve user-driven software development through interaction with diagrams and without requiring users to learn particular computer languages?’ The goal of the research is to improve collaborative software development through interaction with diagrams, thereby minimising the need for end-users to code directly. To achieve this aim, a combination of the paradigms of End-User Programming, Process and Product Modelling and Decision Support, and the Semantic Web is exploited, and a methodology of User Driven Modelling and Programming (UDM/P) is developed, implemented, and tested as a means of demonstrating the efficacy of diagrammatic modelling.

    In greater detail, the research seeks to show that diagrammatic modelling eases problems of maintenance, extensibility, ease of use, and sharing of information. The methodology presented here to achieve this involves a three-step translation from a visualised ontology, through a modelling tool, to output as interactive visualisations. An analysis of users groups them into the categories of system creator, model builder, and model user. This categorisation corresponds well with the three-step translation process, where users develop the ontology, modelling tool, and visualisations for their problem.

    This research establishes and exemplifies a novel paradigm of collaborative end-user programming by domain experts. The end-user programmers can use a visual interface where the visualisation of the software exactly matches the structure of the software itself, making translation between user and computer, and vice versa, much more direct and practical. The visualisation is based on an ontology that provides a representation of the software as a tree. The solution is based on translation from a source tree to a result tree, and visualisation of both. The result tree shows a structured representation of the model, with a full visualisation of all parts that lead to the computed result.

    In conclusion, it is claimed that this direct representation of the structure enables an understanding of the program as an ontology and model that is then visualised, resulting in a more transparent shared understanding by all users. It is further argued that our diagrammatic modelling paradigm consequently eases problems of maintenance, extensibility, ease of use, and sharing of information. The method is applicable to any problem that lends itself to representation as a tree; this is considered a limitation of the method, to be addressed in a future project.
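
    The source-tree-to-result-tree translation the abstract describes can be pictured with a small sketch. The data model and cost-aggregation rule below are assumptions for illustration, not the thesis's actual UDM/P tool:

        # Minimal sketch of a source-tree -> result-tree translation with
        # cost aggregation (hypothetical data model, for illustration only).
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            cost: float = 0.0              # leaf cost; interior nodes aggregate
            children: list = field(default_factory=list)

        def to_result_tree(node: Node) -> Node:
            """Translate a source node into a result node whose cost is the
            sum of its own cost and all descendant costs."""
            result_children = [to_result_tree(c) for c in node.children]
            total = node.cost + sum(c.cost for c in result_children)
            return Node(node.name, total, result_children)

        def render(node: Node, depth: int = 0) -> None:
            """Visualise a tree as an indented outline, one node per line."""
            print("  " * depth + f"{node.name}: {node.cost:.2f}")
            for child in node.children:
                render(child, depth + 1)

        # Usage: a two-level engineering structure with leaf costs.
        source = Node("assembly", children=[
            Node("frame", 120.0),
            Node("panel", children=[Node("sheet", 30.0), Node("bolts", 5.0)]),
        ])
        render(to_result_tree(source))  # the result tree shows computed totals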

    E-BioFlow: Different Perspectives on Scientific Workflows

    We introduce a new type of workflow design system called e-BioFlow and illustrate it by means of a simple sequence alignment workflow. E-BioFlow, intended to model advanced scientific workflows, enables the user to model a workflow from three different but strongly coupled perspectives: the control flow perspective, the data flow perspective, and the resource perspective. All three perspectives are of equal importance, but workflow designers from different domains prefer different perspectives as entry points for their design, and a single workflow designer may prefer different perspectives in different stages of workflow design. Each perspective provides its own type of information, visualisation and support for validation. Combining these three perspectives in a single application provides a new and flexible way of modelling workflows.
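
    As an illustrative sketch only (the abstract does not describe e-BioFlow's internal model), the three coupled perspectives can be pictured as separate relations over one set of tasks:

        # Illustrative sketch: one workflow, three coupled perspectives.
        # Names and structure are assumptions, not e-BioFlow's actual model.

        tasks = {"fetch", "align", "report"}

        # Control flow perspective: execution order between tasks.
        control_flow = [("fetch", "align"), ("align", "report")]

        # Data flow perspective: which output feeds which input.
        data_flow = [("fetch.sequences", "align.input"),
                     ("align.alignment", "report.input")]

        # Resource perspective: which actor or service performs each task.
        resources = {"fetch": "genbank-service",
                     "align": "alignment-service",
                     "report": "local-renderer"}

        # The perspectives are coupled: each refers to the same task names,
        # so a designer may start from any one of them and refine the others.
        for a, b in control_flow:
            assert a in tasks and b in tasks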

    BIM semantic-enrichment for built heritage representation

    In the built heritage context, BIM has shown difficulties in representing and managing the large and complex body of knowledge related to non-geometrical aspects of the heritage. Within this scope, this paper focuses on a domain-specific semantic enrichment of the BIM methodology, aimed at fulfilling the semantic representation requirements of built heritage through Semantic Web technologies. To develop this semantically enriched BIM approach, this research relies on the integration of a BIM environment with a knowledge base created through information ontologies. The result is a knowledge base system, and a prototype platform, that enhances the semantic representation capabilities of BIM applications for architectural heritage processes. It solves the issue of knowledge formalization in cultural heritage informative models, favouring a deeper comprehension and interpretation of all the building aspects. Its open structure allows future research to customize, scale and adapt the knowledge base to different typologies of artefacts and heritage activities.

    A linked data approach to sentiment and emotion analysis of twitter in the financial domain

    Sentiment analysis has recently gained popularity in the financial domain thanks to its capability to predict the stock market based on the wisdom of the crowds. Nevertheless, current sentiment indicators are still silos that cannot be combined to give better insight into the mood of different communities. In this article we propose a Linked Data approach for modelling sentiment and emotions about financial entities. We aim at integrating sentiment information from different communities or providers, complementing existing initiatives such as FIBO. The approach has been validated in the semantic annotation of tweets about several stocks in the Spanish stock market, including their sentiment information.
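
    A minimal sketch of such a Linked Data annotation, assuming the Marl sentiment vocabulary and hypothetical IRIs (the article's exact schema is not given in the abstract):

        # Hedged sketch: annotating a tweet about a stock with sentiment as
        # Linked Data, using rdflib. The Marl terms and all IRIs below are
        # assumptions; the article's actual schema may differ.
        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF, XSD

        MARL = Namespace("http://www.gsi.upm.es/ontologies/marl/ns#")

        g = Graph()
        g.bind("marl", MARL)

        tweet = URIRef("http://example.org/tweet/42")      # hypothetical IRIs
        stock = URIRef("http://example.org/stock/SAN")
        opinion = URIRef("http://example.org/opinion/42-1")

        g.add((opinion, RDF.type, MARL.Opinion))
        g.add((opinion, MARL.extractedFrom, tweet))
        g.add((opinion, MARL.describesObject, stock))
        g.add((opinion, MARL.hasPolarity, MARL.Positive))
        g.add((opinion, MARL.polarityValue, Literal(0.8, datatype=XSD.float)))

        # Serialise the annotation so it can be merged with graphs from
        # other providers, which is the integration the article argues for.
        print(g.serialize(format="turtle"))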

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in a hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, unified modelling language activity diagrams (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise: they cannot fully capture the complexities of the types of activities and the full extent of temporal constraints to an extent where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards do not offer any formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which also have limitations, e.g. the finish-start barrier. It is imperative to be able to specify temporal constraints between the starts and/or ends of processes, e.g. that the beginning of a process A precedes the start (or end) of a process B, yet these approaches fail to provide a mechanism for handling such temporal situations. If provided, a formal representation can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques.

    The above issues are addressed in this thesis by proposing a framework that provides a knowledge base for modelling patient flows accurately, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. Using this approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the critical terms of the process modelling standards based on their ontology. The set of generalised terms proposed serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model) and to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide its instantiation. This is achieved by mapping the core components of the theory to their corresponding instances.

    Additionally, a formal graphical tool termed a point graph (PG) is used to visualise the cases of the proposed axiomatic system. PG facilitates modelling and scheduling patient flows, and enables the analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics presented by the axiomatic system. A real-life case, from the King's College Hospital accident and emergency (A&E) department's trauma patient pathway, is considered to validate the framework. It is divided into three patient flows to depict the journey of a patient with significant trauma arriving at A&E, undergoing a procedure and subsequently being discharged. The department's staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of the modelling standards in modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
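
    A toy encoding of the point-interval idea, treating intervals as ordered pairs of time points with constraints between starts and ends; this is illustrative only, not the thesis's PITL axiomatisation:

        # Illustrative sketch only: a toy encoding of point-interval temporal
        # constraints (e.g. "A's start precedes B's start"), not the actual
        # PITL theory developed in the thesis.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            start: float   # time point at which the process begins
            end: float     # time point at which the process ends

            def __post_init__(self):
                if self.start > self.end:
                    raise ValueError("an interval's start must not follow its end")

        def starts_before(a: Interval, b: Interval) -> bool:
            """Constraint: the beginning of process A precedes the start of B."""
            return a.start < b.start

        def before(a: Interval, b: Interval) -> bool:
            """Allen-style 'before': A ends strictly before B starts."""
            return a.end < b.start

        # Usage: triage must start before treatment; the two may overlap,
        # which a pure finish-start dependency cannot express.
        triage = Interval(0.0, 15.0)
        treatment = Interval(10.0, 60.0)
        assert starts_before(triage, treatment)
        assert not before(triage, treatment)   # they overlap, so not 'before'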

    Annotations for Rule-Based Models

    The chapter reviews the syntax used to store machine-readable annotations and describes the mapping between rule-based modelling entities (e.g., agents and rules) and these annotations. In particular, we review an annotation framework and the associated guidelines for annotating rule-based models of molecular interactions, encoded in the commonly used Kappa and BioNetGen languages, and present prototypes that can be used to extract and query the annotations. An ontology is used to annotate the models and facilitate their description.
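
    A hedged sketch of annotation extraction is given below; the '#^' comment prefix and key-value layout are hypothetical, since the chapter's concrete annotation syntax is not reproduced in the abstract:

        # Hedged sketch: pulling structured annotations out of a rule-based
        # model file. The '#^' prefix and key-value layout are hypothetical;
        # the chapter's actual annotation syntax may differ.
        import re

        ANNOTATION = re.compile(r"^#\^\s*(?P<key>[\w:]+)\s+(?P<value>.+)$")

        def extract_annotations(model_text: str) -> list[tuple[str, str]]:
            """Return (key, value) pairs found in annotation comments."""
            pairs = []
            for line in model_text.splitlines():
                m = ANNOTATION.match(line.strip())
                if m:
                    pairs.append((m.group("key"), m.group("value")))
            return pairs

        # Usage: a fragment of a Kappa-style model with annotation comments.
        kappa_model = """
        #^ :agent A
        #^ bqbiol:is uniprot:P05067
        %agent: A(x)
        """
        print(extract_annotations(kappa_model))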

    Developing an open data portal for the ESA climate change initiative

    We introduce the rationale for, and architecture of, the European Space Agency Climate Change Initiative (CCI) Open Data Portal (http://cci.esa.int/data/). The Open Data Portal hosts a set of richly diverse datasets – 13 “Essential Climate Variables” – from the CCI programme in a consistent and harmonised form, and provides a single point of access for the (>100 TB) data for broad dissemination to an international user community. These data have been produced by a range of different institutions and vary across both scientific and spatio-temporal characteristics. This heterogeneity of the data, together with the range of services to be supported, presented significant technical challenges. An iterative development methodology was key to tackling these challenges: the system developed exploits a workflow which takes data that conforms to the CCI data specification, ingests it into a managed archive, and uses both manual and automatically generated metadata to support data discovery, browse, and delivery services. It utilises both Earth System Grid Federation (ESGF) data nodes and the Open Geospatial Consortium Catalogue Service for the Web (OGC-CSW) interface, serving data into both the ESGF and the Global Earth Observation System of Systems (GEOSS). A key part of the system is a new vocabulary server, populated with CCI-specific terms and relationships, which integrates the OGC-CSW and ESGF search services; it was developed as part of a dialogue between domain scientists and linked data specialists. These services have enabled the development of a unified user interface for graphical search and visualisation – the CCI Open Data Portal Web Presence.
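
    A minimal sketch of dataset discovery through an OGC-CSW catalogue using OWSLib; the endpoint URL and search term are hypothetical, as the portal's real CSW address is not given in the abstract:

        # Hedged sketch: querying an OGC-CSW catalogue with OWSLib to
        # discover climate datasets. Endpoint and keyword are hypothetical.
        from owslib.csw import CatalogueServiceWeb
        from owslib.fes import PropertyIsLike

        csw = CatalogueServiceWeb("https://csw.example.org/csw")  # hypothetical
        query = PropertyIsLike("csw:AnyText", "%sea surface temperature%")
        csw.getrecords2(constraints=[query], maxrecords=5)

        # Each returned record carries the discovery metadata the portal's
        # ingestion workflow generates for its search services.
        for identifier, record in csw.records.items():
            print(identifier, "-", record.title)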

    Innovating the Construction Life Cycle through BIM/GIS Integration: A Review

    The construction sector is in continuous evolution due to digitalisation and the integration of the building information modelling (BIM) approach and methods into daily activities, which impact the overall life cycle. This study investigates the topic of BIM/GIS integration with the adoption of ontologies and metamodels, providing a critical analysis of the existing literature. Ontologies and metamodels share several similarities and could be combined for potential solutions to address BIM/GIS integration for complex tasks, such as asset management, where heterogeneous sources of data are involved. The research adopts a systematic literature review (SLR), providing a formal approach to retrieving scientific papers from dedicated online databases. The results are then analysed in order to describe the state of the art and suggest future research paths, which is useful for both researchers and practitioners. From the SLR, it emerged that several studies address ontologies as a promising way to overcome the semantic barriers of BIM/GIS integration. On the other hand, metamodels (and, in general, model-driven engineering (MDE) and model-driven architecture (MDA) approaches) are rarely found in relation to the integration topic. Moreover, the joint application of ontologies and metamodels for BIM/GIS applications is an unexplored field. The novelty of this work is the proposal of the joint application of ontologies and metamodels to perform BIM/GIS integration, for the development of software and systems for asset management.