
    Principled and automated system of systems composition using an ontological architecture

    A distributed system’s functionality must continuously evolve, especially when environmental context changes. Such required evolution imposes unbearable complexity on system development. An alternative is to make systems able to self-adapt by opportunistically composing at runtime to generate systems of systems (SoSs) that offer value-added functionality. The success of such an approach calls for abstracting the heterogeneity of systems and enabling the programmatic construction of SoSs with minimal developer intervention. We propose a general ontology-based approach to describe distributed systems, seeking to achieve abstraction and enable runtime reasoning between systems. We also propose an architecture for systems that utilizes such ontologies to enable systems to discover and ‘understand’ each other, and potentially compose, all at runtime. We detail features of the ontology and the architecture through three contrasting case studies: one on controlling multiple systems in a smart home environment, another on the management of dynamic computing clusters, and a third on the autonomic connection of rescue teams. We also quantitatively evaluate the scalability and validity of our approach through experiments and simulations. Our approach enables system developers to focus on high-level SoS composition without being constrained by deployment-specific implementation details. We demonstrate the feasibility of our approach to raise the level of abstraction of SoS construction through reasoned composition at runtime. Our architecture presents a strong foundation for further work due to its generality and extensibility.
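    The kind of runtime reasoning described above can be illustrated with a minimal sketch: systems advertise provided and required capabilities as ontology concepts, and two systems may compose when every required concept is satisfied by a provided concept or one of its more specific subclasses. The concept hierarchy and function names here are hypothetical, not the ontology from the paper.

```python
# Hypothetical concept hierarchy: child concept -> parent concept.
HIERARCHY = {
    "TemperatureSensor": "Sensor",
    "Thermostat": "Actuator",
    "Sensor": "Device",
    "Actuator": "Device",
}

def ancestors(concept):
    """Return the concept together with all of its ancestors."""
    seen = {concept}
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        seen.add(concept)
    return seen

def can_compose(provided, required):
    """True when every required concept is covered by some provided
    concept (an exact match or a more specific subclass)."""
    return all(
        any(req in ancestors(p) for p in provided)
        for req in required
    )

# A smart-home controller that needs any Sensor can use a TemperatureSensor,
# but a Thermostat (an Actuator) does not satisfy that requirement.
print(can_compose({"TemperatureSensor"}, {"Sensor"}))  # True
print(can_compose({"Thermostat"}, {"Sensor"}))         # False
```

    In a full system the hierarchy would come from a shared ontology (e.g. an OWL class tree) and matching would use a reasoner rather than this hand-rolled subsumption check.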

    Models of everywhere revisited: a technological perspective

    The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process, from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of that place. At one level, this is a straightforward concept, but at another it is a rich multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything and models at all times, being constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has, as yet, not been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. The paper argues that, when first proposed, technology was a limiting factor but now, with advances in areas such as the Internet of Things, cloud computing and data analytics, many of the barriers have been alleviated. Consequently, it is timely to look again at the concept of models of everywhere in practical conditions as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.

    Intermediate CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems which encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to drop this heterogeneity barrier and achieve universal interoperability. Here we report on the activities of WP1 into developing the CONNECT architecture that will underpin this solution. In this respect, we present the following key contributions from the second year. Firstly, the intermediary CONNECT architecture that presents a more concrete view of the technologies and principles employed to enable interoperability between heterogeneous networked systems. Secondly, the design and implementation of the discovery enabler with emphasis on the approaches taken to match compatible networked systems. Thirdly, the realisation of CONNECTors that can be deployed in the environment; we provide domain specific language solutions to generate and translate between middleware protocols. Fourthly, we highlight the role of ontologies within CONNECT and demonstrate how ontologies crosscut all functionality within the CONNECT architecture.
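    The idea of a generated CONNECTor that translates between middleware protocols can be sketched, in heavily simplified form, as a mediator that rewrites a message from one protocol's field names into another's. The field names and message shapes here are invented for illustration; the project's actual translators are generated from domain-specific language descriptions rather than written by hand.

```python
# Hypothetical mapping from the source protocol's field names to the
# target protocol's field names; unmapped fields pass through unchanged.
FIELD_MAP = {"op": "action", "payload": "body"}

def mediate(message):
    """Rewrite a source-protocol message into the target protocol's shape."""
    return {FIELD_MAP.get(key, key): value for key, value in message.items()}

src = {"op": "publish", "payload": "22.5C", "topic": "home/temp"}
print(mediate(src))  # {'action': 'publish', 'body': '22.5C', 'topic': 'home/temp'}
```

    A real mediator must also bridge transport, encoding, and interaction style (e.g. request/reply versus publish/subscribe), which is why CONNECT synthesises the translator rather than relying on a static field map.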

    The design and deployment of an end-to-end IoT infrastructure for the natural environment

    Internet of Things (IoT) systems have seen recent growth in popularity for city and home environments. We report on the design, deployment, and use of an IoT infrastructure for environmental monitoring and management. Working closely with hydrologists, soil scientists, and animal behaviour scientists, we successfully deployed and utilised a system to deliver integrated information across these fields in the first such example of real-time multidimensional environmental science. We describe the design of this system; its requirements and operational effectiveness for hydrological, soil, and ethological scientists; and our experiences from building, maintaining, and using the deployment at a remote site in difficult conditions. Based on this experience, we discuss key future work for the IoT community when working in these kinds of environmental deployments.
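    The integration of information across disciplines that the abstract mentions can be pictured as aligning readings from heterogeneous sensor streams by timestamp into unified records. This is a minimal sketch with invented sensor names and units, not the deployment's actual data model.

```python
from collections import defaultdict

def integrate(streams):
    """Merge per-sensor (timestamp, value) readings into one record
    per timestamp, keyed by sensor name."""
    merged = defaultdict(dict)
    for sensor, readings in streams.items():
        for timestamp, value in readings:
            merged[timestamp][sensor] = value
    return dict(merged)

# Hypothetical streams: a river-level gauge and a soil-moisture probe
# reporting on the same one-minute schedule (timestamps in seconds).
streams = {
    "river_level_m":    [(1200, 0.82), (1260, 0.85)],
    "soil_moisture_pc": [(1200, 31.5), (1260, 30.9)],
}
records = integrate(streams)
print(records[1200])  # {'river_level_m': 0.82, 'soil_moisture_pc': 31.5}
```

    In practice sensors rarely report on exactly the same clock, so a real pipeline would bucket or interpolate timestamps before merging.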

    Revised CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems which encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to drop this heterogeneity barrier and achieve universal interoperability. Here we report on the revised CONNECT architecture, highlighting the work carried out to integrate the CONNECT enablers developed by the different partners; in particular, we present the progress of this work towards a finalised concrete architecture. In the third year this architecture has been enhanced to: i) produce concrete CONNECTors, ii) match networked systems based upon their goals and intent, and iii) use learning technologies to find the affordance of a system. We also report on the application of the CONNECT approach to streaming-based systems, further considering exploitation of CONNECT in the mobile environment.

    Final CONNECT Architecture

    Interoperability remains a fundamental challenge when connecting heterogeneous systems which encounter and spontaneously communicate with one another in pervasive computing environments. This challenge is exacerbated by the highly heterogeneous technologies employed by each of the interacting parties, i.e., in terms of hardware, operating system, middleware protocols, and application protocols. The key aim of the CONNECT project is to drop this heterogeneity barrier and achieve universal interoperability. Here we report on the revised CONNECT architecture, highlighting the work carried out to integrate the CONNECT enablers developed by the different partners; in particular, we present the progress of this work towards a finalised concrete architecture. In the third year this architecture has been enhanced to: i) produce concrete CONNECTors, ii) match networked systems based upon their goals and intent, and iii) use learning technologies to find the affordance of a system. We also report on the application of the CONNECT approach to streaming-based systems, further considering exploitation of CONNECT in the mobile environment.

    Rethinking data-driven decision support in flood risk management for a big data age

    Decision-making in flood risk management is increasingly dependent on access to data, with the availability of data increasing dramatically in recent years. We are therefore moving towards an era of big data, with the added challenges that, in this area, data sources are highly heterogeneous, at a variety of scales, and include a mix of structured and unstructured data. The key requirement is therefore one of integration and subsequent analysis of this complex web of data. This paper examines the potential of a data-driven approach to support decision-making in flood risk management, with the goal of investigating a suitable software architecture and associated set of techniques to support a more data-centric approach. The key contribution of the paper is a cloud-based data hypercube that achieves the desired level of integration of highly complex data. This hypercube builds on innovations in cloud services for data storage, semantic enrichment and querying, and also features the use of notebook technologies to support open and collaborative scenario analyses in support of decision making. The paper also highlights the success of our agile methodology in weaving together cross-disciplinary perspectives and in engaging a wide range of stakeholders in exploring possible technological futures for flood risk management.
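    The hypercube idea can be sketched as observations indexed along several dimensions (place, time, variable) that are then queried by fixing some dimensions and aggregating over the rest. The class, method names, and data below are illustrative assumptions, not the paper's API or results.

```python
class HyperCube:
    """Toy multidimensional store: each cell is (dimensions, value)."""

    def __init__(self):
        self.cells = []

    def insert(self, dims, value):
        self.cells.append((dims, value))

    def slice_mean(self, **fixed):
        """Mean of values whose dimensions match all fixed selectors;
        None when no cell matches."""
        vals = [v for d, v in self.cells
                if all(d.get(k) == want for k, want in fixed.items())]
        return sum(vals) / len(vals) if vals else None

# Hypothetical annual rainfall observations for two catchments.
cube = HyperCube()
cube.insert({"place": "Eden", "year": 2015, "var": "rainfall_mm"}, 120.0)
cube.insert({"place": "Eden", "year": 2016, "var": "rainfall_mm"}, 140.0)
cube.insert({"place": "Lune", "year": 2015, "var": "rainfall_mm"}, 90.0)

# Slice on place and variable, aggregating over years.
print(cube.slice_mean(place="Eden", var="rainfall_mm"))  # 130.0
```

    A production hypercube would back this with cloud storage and an OLAP-style query engine; the point here is only the slice-then-aggregate access pattern.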