26 research outputs found

    To boldly go: an occam-π mission to engineer emergence

    Future systems will be too complex to design and implement explicitly. Instead, we will have to learn to engineer complex behaviours indirectly: through the discovery and application of local rules of behaviour, applied to simple process components, from which desired behaviours predictably emerge through dynamic interactions between massive numbers of instances. This paper describes a process-oriented architecture for fine-grained concurrent systems that enables experiments with such indirect engineering. Examples are presented showing the differing complex behaviours that can arise from minor (non-linear) adjustments to low-level parameters, the difficulties in suppressing the emergence of unwanted (bad) behaviour, the unexpected relationships between apparently unrelated physical phenomena (revealed by their separate emergence from the same primordial process swamp), and the ability to explore and engineer completely new physics (such as force fields) through their emergence from low-level process interactions whose mechanisms can, at present, only be imagined but not built.
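
    As a rough illustration of the style of system this abstract describes (and not the authors' occam-π code), the Go sketch below runs a ring of simple processes that each apply one local, slightly non-linear rule to values received from their neighbours; the global pattern is not programmed anywhere and emerges from the interactions. The rule, its coefficients and all names are invented for the example.

    package main

    import (
        "fmt"
        "math/rand"
        "sync"
    )

    const (
        cells = 64  // processes in the ring
        steps = 200 // generations to simulate
    )

    // cell exchanges its state with its two neighbours each generation and
    // applies a small local rule; it never sees the whole ring.
    func cell(id int, state float64,
        toLeft, toRight chan<- float64,
        fromLeft, fromRight <-chan float64,
        result []float64, wg *sync.WaitGroup) {
        defer wg.Done()
        for s := 0; s < steps; s++ {
            toLeft <- state
            toRight <- state
            l, r := <-fromLeft, <-fromRight
            mean := (l + r) / 2
            // Tiny changes to these coefficients change the emergent pattern.
            state += 0.3*(mean-state) + 0.05*state*(1-state)
        }
        result[id] = state
    }

    func main() {
        // right[i] carries messages from cell i to cell i+1, left[i] from
        // cell i to cell i-1; one-slot buffers keep the ring deadlock-free.
        right := make([]chan float64, cells)
        left := make([]chan float64, cells)
        for i := range right {
            right[i] = make(chan float64, 1)
            left[i] = make(chan float64, 1)
        }
        result := make([]float64, cells)
        var wg sync.WaitGroup
        for i := 0; i < cells; i++ {
            wg.Add(1)
            go cell(i, rand.Float64(),
                left[i], right[i],
                right[(i+cells-1)%cells], left[(i+1)%cells],
                result, &wg)
        }
        wg.Wait()
        fmt.Println(result) // the final pattern arises from local rules only
    }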

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated and their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models that provides a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures that model cell behaviour, interactions and response to therapeutic interventions.
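
    To make concrete how concurrent programming can preserve cell-level detail when linking scales, the following is a minimal Go sketch only (not the framework described above; the signalling rule, thresholds and names are invented): each cell is its own process with private state, driven by signals from a toy micro-environment rather than being averaged into a continuum.

    package main

    import (
        "fmt"
        "sync"
    )

    type decision struct {
        id     int
        divide bool
    }

    // cellProcess integrates an external signal into internal pathway activity
    // and decides whether to divide, independently of every other cell.
    func cellProcess(id int, signals <-chan float64, decisions chan<- decision, wg *sync.WaitGroup) {
        defer wg.Done()
        activity := 0.0
        for s := range signals {
            // Crude stand-in for a signalling cascade: leaky integration.
            activity = 0.9*activity + s
        }
        decisions <- decision{id: id, divide: activity > 5.0}
    }

    func main() {
        const nCells = 100
        decisions := make(chan decision, nCells)
        inputs := make([]chan float64, nCells)
        var wg sync.WaitGroup

        for i := 0; i < nCells; i++ {
            inputs[i] = make(chan float64, 10)
            wg.Add(1)
            go cellProcess(i, inputs[i], decisions, &wg)
        }

        // A toy micro-environment: cells nearer index 0 receive stronger signal.
        for step := 0; step < 20; step++ {
            for i, ch := range inputs {
                ch <- 1.0 / float64(i+1)
            }
        }
        for _, ch := range inputs {
            close(ch)
        }
        wg.Wait()
        close(decisions)

        dividing := 0
        for d := range decisions {
            if d.divide {
                dividing++
            }
        }
        fmt.Printf("%d of %d cells decided to divide\n", dividing, nCells)
    }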

    Investigating communicating sequential processes for Java to support ubiquitous computing

    Ubiquitous Computing promises to enrich our everyday lives by enabling the environment to be enhanced via computational elements. These elements are designed to augment and support our lives, thus allowing us to perform our tasks and goals. The main facet of Ubiquitous Computing is that computational devices are embedded in the environment, and interact with users and with one another to provide novel and unique applications. Ubiquitous Computing requires an underlying architecture that helps to promote and control the dynamic properties and structures that the applications require. In this thesis, the Networking package of Communicating Sequential Processes for Java (JCSP) is examined to analyse its suitability as the underlying architecture for Ubiquitous Computing. The reason to use JCSP Networking as a case study is that one of the proposed models for Ubiquitous Computing, the π-Calculus, has the potential to have its abstractions implemented within JCSP Networking. This thesis examines some of the underlying properties of JCSP Networking and examines them within the context of Ubiquitous Computing. There is also an examination into the possibility of implementing the mobility constructs of the π-Calculus and similar mobility models within JCSP Networking. It has been found that some of the inherent properties of Java and JCSP Networking do cause limitations, and hence a generalisation of the architecture has been made that should provide greater suitability of the ideas behind JCSP Networking to support Ubiquitous Computing. The generalisation has resulted in the creation of a verified communication protocol that can be applied to any Communicating Process Architecture.
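
    The mobility constructs in question can be illustrated with a short sketch in Go rather than JCSP (invented process roles; not code from the thesis): a channel end is itself communicated over a channel, so the network topology is rewired at run time in the style of the π-Calculus.

    package main

    import "fmt"

    func main() {
        // transfer carries a channel end from one process to another.
        transfer := make(chan chan string)

        // Process B: acquires a channel end it was not wired to at start-up
        // and uses it directly -- the topology changes while the system runs.
        go func() {
            acquired := <-transfer
            acquired <- "hello over a mobile channel"
        }()

        // Process A (main): creates a private channel, sends the channel itself
        // over transfer (the mobility step), then communicates on it.
        private := make(chan string)
        transfer <- private
        fmt.Println(<-private)
    }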

    Ontology based knowledge formulation and an interpretation engine for intelligent devices in pervasive environments

    Ongoing device miniaturization makes it possible to manufacture very small devices; therefore more of them can be embedded in one space. Pervasive computing concepts, envisioning computers distributed in a space and hidden from users' sight, presented by Weiser in 1991, are becoming more realistic and feasible to implement. A technology supporting pervasive computing and Ambient Intelligence also needs to follow miniaturization. The Ambient Intelligence domain was mainly focused on supercomputers with large computation power and it is now moving towards smaller devices, with limited computation power, and takes inspiration from distributed systems, ad-hoc networks and emergent computing. The ability to process knowledge, understand network protocols, adapt and learn is becoming a required capability of fairly small and energy-frugal devices. This research project consists of two main parts. The first part of the project has created a context-aware generic knowledgebase interpretation engine that enables autonomous devices to pervasively manage smart spaces using Communicating Sequential Processes as the underlying design methodology. In the second part, a knowledgebase containing all the information that a device needs to cooperate, make decisions and react was designed and constructed. The interpretation engine is designed to be suitable for devices from different vendors, as it enables semantic interoperability based on the use of ontologies. The knowledge that the engine interprets is drawn from an ontology, and the model of the chosen ontology is fixed in the engine. This project has investigated, designed and built a prototype of the knowledgebase interpretation engine. Functional testing was performed using a simulation implemented in JCSP. The implementation simulates many autonomous devices running in parallel, communicating using a broadcast-based protocol, self-organizing into sub-networks and reacting to users' requests. The main goal of the project was to design and investigate the knowledge interpretation engine, determine the number of functions that the engine performs to enable hardware realisation, and investigate the knowledgebase represented using RDF triples and the chosen ontology model. This project was undertaken in collaboration with NXP Semiconductor Research Eindhoven, The Netherlands.
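
    As a rough sketch of the simulated set-up (invented names and a deliberately tiny knowledgebase; not the engine itself), the Go fragment below runs autonomous device processes in parallel, each holding its capabilities as RDF-style triples, and broadcasts a user request to all of them; devices whose knowledge matches respond.

    package main

    import (
        "fmt"
        "sync"
    )

    // triple is a minimal RDF-style (subject, predicate, object) statement.
    type triple struct{ s, p, o string }

    type request struct {
        predicate, object string
        replies           chan<- string
    }

    // device answers a broadcast request if its knowledgebase holds a matching triple.
    func device(name string, kb []triple, in <-chan request, wg *sync.WaitGroup) {
        defer wg.Done()
        for req := range in {
            for _, t := range kb {
                if t.p == req.predicate && t.o == req.object {
                    req.replies <- name
                    break
                }
            }
        }
    }

    func main() {
        kbs := map[string][]triple{
            "lamp":   {{"lamp", "hasCapability", "lighting"}},
            "heater": {{"heater", "hasCapability", "heating"}},
            "bulb":   {{"bulb", "hasCapability", "lighting"}},
        }

        var wg sync.WaitGroup
        var inboxes []chan request
        for name, kb := range kbs {
            in := make(chan request, 1)
            inboxes = append(inboxes, in)
            wg.Add(1)
            go device(name, kb, in, &wg)
        }

        // Broadcast one user request to every device process.
        replies := make(chan string, len(kbs))
        for _, in := range inboxes {
            in <- request{predicate: "hasCapability", object: "lighting", replies: replies}
            close(in)
        }
        wg.Wait()
        close(replies)
        for r := range replies {
            fmt.Println(r, "can provide lighting")
        }
    }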

    Semantic interoperability in ad-hoc computing environments

    This thesis introduces a novel approach in which multiple heterogeneous devices collaborate to provide useful applications in an ad-hoc network. This thesis proposes a smart home as a particular ubiquitous computing scenario, considering all the requirements given by the literature for success in this kind of system. To that end, we envision a horizontally integrated smart home built up from independent components that provide services. These components are described with enough syntactic, semantic and pragmatic knowledge to accomplish spontaneous collaboration. The objective of this collaboration is domestic use, that is, the provision of valuable services for home residents capable of supporting users in their daily activities. Moreover, for the system to be attractive for potential customers, it should offer high levels of trust and reliability, and not at an excessive price. To achieve this goal, this thesis proposes to study the synergies available when an ontological description of home device functionality is paired with a formal method. We propose an ad-hoc home network in which components are home devices modelled as processes and represented as semantic services by means of the Web Service Ontology (OWL-S). In addition, such services are specified, verified and implemented by means of Communicating Sequential Processes (CSP), a process algebra for describing concurrent systems. The utilisation of an ontology brings the desired levels of knowledge for a system to compose services in an ad-hoc environment. Services are composed by a goal-based system in order to satisfy user needs. Such a system is capable of understanding both service representations and user context information. Furthermore, the inclusion of a formal method contributes additional semantics to check that such compositions will be correctly implemented and executed, achieving the levels of reliability and cost reduction (costs derived from the design, development and implementation of the system) needed for a smart home to succeed.
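
    To give a flavour of goal-based composition (purely illustrative; the thesis uses OWL-S descriptions and CSP, whereas the concept names and the greedy chaining below are invented), services can be described by the ontology concepts they require and provide, and chained from the user's context towards a goal concept:

    package main

    import "fmt"

    // service is a semantic description: it turns one concept into another.
    type service struct {
        name     string
        requires string
        provides string
    }

    // compose greedily chains services from what is already known in the user's
    // context towards the goal concept, returning the plan if one exists.
    func compose(services []service, context map[string]bool, goal string) ([]string, bool) {
        var plan []string
        for changed := true; changed; {
            changed = false
            if context[goal] {
                return plan, true
            }
            for _, s := range services {
                if context[s.requires] && !context[s.provides] {
                    context[s.provides] = true
                    plan = append(plan, s.name)
                    changed = true
                }
            }
        }
        return nil, context[goal]
    }

    func main() {
        services := []service{
            {"PresenceSensor", "Person", "Presence"},
            {"LightController", "Presence", "Illumination"},
            {"BlindController", "Daylight", "Illumination"},
        }
        context := map[string]bool{"Person": true} // the resident is at home
        if plan, ok := compose(services, context, "Illumination"); ok {
            fmt.Println("composed plan:", plan)
        } else {
            fmt.Println("no composition satisfies the goal")
        }
    }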

    Dynamics and pragmatics for high performance concurrency

    This thesis is concerned with support at all levels for building highly concurrent and dynamic parallel processing systems. The CSP model of concurrency, as (largely) embodied in the occam programming language, is used due to its simplicity, expressiveness, architecture-independent nature, and potential for high performance. Additionally, occam provides guarantees regarding freedom from aliasing and race-hazard error. This thesis addresses one of the grand challenges of present-day computer science: providing a software technology that offers the dynamic flexibility and performance of mainstream object-oriented environments with the level of safety, formal analysis, modularity and lightweight concurrency offered by CSP/occam. Two approaches to this challenge are possible: do something to make the mainstream languages (e.g. Java, C++) safe, or make occam dynamic -- without compromising its existing good properties. This thesis follows the latter route. The first part of this thesis concentrates on enhancing the occam language and run-time system, on a commodity platform (IBM PC) running the freely available Linux operating system. After a brief introduction to the various components of the KRoC occam system, additions and extensions to the occam programming language and supporting run-time system are examined. These provide a greater degree of programming flexibility in occam (for example, by adding support for dynamic allocation, mobile semantics and dynamic network construction), without compromising the safety of programs which use them. Benchmarks are reported that demonstrate significant improvements in performance (for example, channel communication in tens of nanoseconds). The second part concentrates on improving the level of interaction between occam programs and the OS environment, for example by providing easy access to sockets and networking. This thesis concludes with a discussion of the work presented herein, with consideration given to parallels with object-oriented languages. Also described are details of ongoing and potential future research. The modified language grammar, details of new compiler-generated code, and miscellany are provided in the appendices.
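
    Dynamic network construction of the kind mentioned above can be illustrated outside occam with the classic concurrent sieve in Go (a stand-in sketch, not code from the thesis): new filter processes and channels are allocated and spliced into the pipeline at run time, rather than the process network being fixed in advance.

    package main

    import "fmt"

    // generate feeds the natural numbers from 2 upwards into the pipeline.
    func generate(out chan<- int) {
        for i := 2; ; i++ {
            out <- i
        }
    }

    // filter removes multiples of prime, passing the rest downstream.
    func filter(prime int, in <-chan int, out chan<- int) {
        for n := range in {
            if n%prime != 0 {
                out <- n
            }
        }
    }

    func main() {
        head := make(chan int)
        go generate(head)
        // Each prime found extends the network with a freshly allocated channel
        // and a freshly spawned filter process.
        for i := 0; i < 10; i++ {
            prime := <-head
            fmt.Println(prime)
            next := make(chan int)
            go filter(prime, head, next)
            head = next
        }
    }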

    NFComms: A synchronous communication framework for the CPU-NFP heterogeneous system

    This work explores the viability of using a Network Flow Processor (NFP), developed by Netronome, as a coprocessor for the construction of a CPU-NFP heterogeneous platform in the domain of general processing. When considering heterogeneous platforms involving architectures like the NFP, the communication framework provided is typically represented as virtual network interfaces and is thus not suitable for generic communication. To enable a CPU-NFP heterogeneous platform for use in the domain of general computing, a suitable generic communication framework is required. A feasibility study for a suitable communication medium between the two candidate architectures showed that a generic framework conforming to the mechanisms dictated by Communicating Sequential Processes is achievable. The resulting NFComms framework, which facilitates inter- and intra-architecture communication through synchronous message passing, supports up to 16 unidirectional channels and includes queuing mechanisms for transparently supporting concurrent streams exceeding the channel count. The framework has a minimum latency of between 15.5 μs and 18 μs per synchronous transaction and can sustain a peak throughput of up to 30 Gbit/s. The framework also supports a runtime for interacting with the Go programming language, allowing user-space processes to subscribe channels to the framework for interacting with processes executing on the NFP. The viability of utilising a heterogeneous CPU-NFP system in the domain of general and network computing was explored by introducing a set of problems or applications spanning general computing and network processing. These were implemented on the heterogeneous architecture and benchmarked against equivalent CPU-only and CPU/GPU solutions. The results recorded were used to form an opinion on the viability of using an NFP for general processing. It is the author's opinion that, beyond very specific use cases, the NFP-400 is not currently a viable solution as a coprocessor in the field of general computing. This does not mean that the proposed framework or the concept of a heterogeneous CPU-NFP system should be discarded, as such a system does have acceptable use in the fields of network and stream processing. Additionally, when comparing the recorded limitations to those seen during the early stages of general-purpose GPU development, it is clear that general processing on the NFP is currently in a similar state.
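
    The queuing idea (many concurrent streams sharing a fixed set of 16 channels) can be sketched in Go as follows; this is a hedged illustration of the multiplexing pattern only, not the NFComms API, and every name in it is invented.

    package main

    import (
        "fmt"
        "sync"
    )

    const hwChannels = 16 // fixed channel count exposed by the coprocessor side

    // channelPool hands out channel indices; when all are in use, callers queue.
    type channelPool struct{ free chan int }

    func newChannelPool(n int) *channelPool {
        p := &channelPool{free: make(chan int, n)}
        for i := 0; i < n; i++ {
            p.free <- i
        }
        return p
    }

    // send performs one synchronous transaction on whichever channel is free,
    // blocking (queuing) if all channels are currently occupied.
    func (p *channelPool) send(payload []byte) {
        ch := <-p.free                  // acquire a channel, or queue here
        defer func() { p.free <- ch }() // release it for the next stream
        // Stand-in for the real transfer over channel `ch`.
        fmt.Printf("channel %2d: sent %d bytes\n", ch, len(payload))
    }

    func main() {
        pool := newChannelPool(hwChannels)
        var wg sync.WaitGroup
        // 64 concurrent streams share 16 channels; the rest queue transparently.
        for s := 0; s < 64; s++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                pool.send(make([]byte, 64+id))
            }(s)
        }
        wg.Wait()
    }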