16 research outputs found

    Constructing runtime models with bigraphs to address ubiquitous computing service composition volatility

    In this thesis, we explore the appropriateness of the language abstractions provided by Bigraphs to construct a model at runtime to tackle the problem of volatility in a service composition running on a mobile device. Our contributions to knowledge are as follows: 1) We have shown that Bigraphs (Milner, 2009) are suitable for expressing models at runtime. 2) We have offered Bigraph language abstractions as an appropriate solution to some of the research problems posed by the models at runtime community (Aßmann et al., 2012). 3) We have discussed the general lessons learnt from using Bigraphs for a practical application such as a model at runtime. 4) We have discussed the general lessons learnt from our experiences of designing models at runtime. 5) We have implemented the model at runtime using the BPL Tool (ITU, 2011) and have experimentally studied the response times of our Bigraphical model. We have suggested appropriate enhancements for the tool based on our experiences. We present techniques to parameterize the reaction rules so that the matching algorithm of the BPL Tool returns a single match, giving us the ability to dynamically program the model at runtime. We also show how to query the Bigraph structure
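
    The parameterisation idea in this abstract — constraining a reaction rule so that matching identifies exactly one redex — can be illustrated with a small sketch. The following Python fragment is an illustration under stated assumptions only (it is not BPL Tool code, and the Node/apply_rule names are invented for this example): a rule is parameterised by a unique node identifier so that the match is guaranteed to be unique.

```python
# Illustrative sketch, not BPL Tool syntax: a runtime model as a tree of
# nodes with controls, and a reaction rule parameterised by a node identifier
# so that matching returns exactly one redex.

class Node:
    def __init__(self, control, node_id, children=None):
        self.control = control          # e.g. "Device", "Service"
        self.node_id = node_id          # unique identifier used as the rule parameter
        self.children = children or []

def find_matches(root, control, node_id=None):
    """Collect nodes with the given control; restrict to one identifier if supplied."""
    matches, stack = [], [root]
    while stack:
        n = stack.pop()
        if n.control == control and (node_id is None or n.node_id == node_id):
            matches.append(n)
        stack.extend(n.children)
    return matches

def apply_rule(root, control, new_control, node_id):
    """Parameterised reaction rule: rewrite the control of exactly one identified node."""
    matches = find_matches(root, control, node_id)
    assert len(matches) == 1, "parameterisation should yield a single match"
    matches[0].control = new_control

# Example: mark one specific service instance as unavailable at runtime.
model = Node("Device", "d0", [Node("Service", "s1"), Node("Service", "s2")])
apply_rule(model, "Service", "UnavailableService", node_id="s2")
```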

    Resilience Model for Teams of Autonomous Unmanned Aerial Vehicles (UAV) Executing Surveillance Missions

    Teams of low-cost Unmanned Aerial Vehicles (UAVs) have gained acceptance as an alternative for cooperatively searching and surveilling terrains. These UAVs are assembled with low-reliability components, so unit failures are possible. Losing UAVs to failures decreases the team's coverage efficiency and impacts communication, given that UAVs are also communication nodes. Such is the case of a Flying Ad Hoc Network (FANET), where the failure of a communication node may isolate segments of the network covering several nodes. The main goal of this study is to develop a resilience model that would allow us to analyze the effects of individual UAV failures on the team's performance to improve the team's resilience. The proposed solution models and simulates the UAV team using Agent-Based Modeling and Simulation. UAVs are modeled as autonomous agents, and the searched terrain as a two-dimensional M x N grid. Communication between agents permits having the exact data on the transit and occupation of all cells in real time. Such communication allows the UAV agents to estimate the best alternatives to move within the grid and know the exact number of all agents' visits to the cells. Each UAV is simulated as a hobbyist, fixed-wing airplane equipped with a generic set of actuators and a generic controller. Individual UAV failures are simulated following reliability Fault Trees. Each affected UAV is disabled and eliminated from the pool of active units. After each unit failure, the system generates a new topology. It produces a set of minimum-distance trees for each node (UAV) in the grid. The new trees thus depict the rearranged links required after a node failure or if changes occur in the topology due to node movement. The model should generate parameters such as the number and location of compromised nodes, performance before and after the failure, and the estimated time of restitution needed to model the team's resilience. The study addresses three research goals: identifying appropriate tools for modeling UAV scenarios, developing a model for assessing UAV team resilience that overcomes previous studies' limitations, and testing the model through multiple simulations. The study fills a gap in the literature as previous studies focus on system communication disruptions (i.e., node failures) without considering UAV unit reliability. This consideration becomes critical as using small, low-cost units prone to failure becomes widespread
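
    As a sketch of the topology-repair step described above (a failed UAV is removed and a minimum-distance tree is rebuilt for each surviving node), the fragment below uses a plain BFS over an adjacency map. The graph, names, and failure scenario are illustrative assumptions, not the authors' simulation code.

```python
# Minimal sketch: remove a failed UAV from the communication graph and
# recompute a minimum-hop (BFS) tree rooted at each surviving node.

from collections import deque

def bfs_tree(adj, root):
    """Return parent pointers of a shortest-path (minimum-hop) tree from root."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def remove_failed(adj, failed):
    """New topology with the failed UAV removed from the communication graph."""
    return {u: {v for v in nbrs if v != failed}
            for u, nbrs in adj.items() if u != failed}

# Example FANET with four UAVs; UAV "C" fails and the trees are recomputed.
fanet = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
after_failure = remove_failed(fanet, "C")
trees = {u: bfs_tree(after_failure, u) for u in after_failure}
# Nodes missing from a tree (here "D") are the segments isolated by the failure.
```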

    A component-based framework for certification of components in a cloud of HPC services

    HPC Shelf is a proposal of a cloud computing platform to provide component-oriented services for High Performance Computing (HPC) applications. This paper presents a Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf. Certification is aimed at providing higher confidence that components of parallel computing systems of HPC Shelf behave as expected according to one or more requirements expressed in their contracts. To this end, new abstractions are introduced, starting with certifier components. They are designed to inspect other components and verify them for different types of functional, non-functional and behavioral requirements. The certification framework is naturally based on parallel computing techniques to speed up verification tasks. NORTE-01-0145-FEDER-000037
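
    A hedged sketch of the certifier-component idea — a certifier that inspects another component and checks the requirements stated in its contract, running independent checks in parallel — might look like the Python fragment below. The class and method names are assumptions for illustration and are not the HPC Shelf API.

```python
# Illustrative only: a certifier runs every contract check of a component
# concurrently and reports which requirements hold.

from concurrent.futures import ThreadPoolExecutor

class Component:
    def __init__(self, name, contract):
        self.name = name
        self.contract = contract   # mapping: requirement name -> check function

class Certifier:
    def certify(self, component):
        """Run each contract check in parallel and report the results."""
        with ThreadPoolExecutor() as pool:
            futures = {req: pool.submit(check, component)
                       for req, check in component.contract.items()}
            return {req: fut.result() for req, fut in futures.items()}

# Example: a component with one functional and one non-functional requirement
# (placeholder checks stand in for real verification tasks).
solver = Component("solver", {
    "terminates_on_empty_input": lambda c: True,
    "max_memory_under_2GB": lambda c: True,
})
print(Certifier().certify(solver))
```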

    Capturing functional and non-functional connector

    The CONNECT Integrated Project aims to develop a novel networking infrastructure that will support composition of networked systems with on-the-fly connector synthesis. The role of this work package is to investigate the foundations and verification methods for composable connectors. In this deliverable, we set the scene for the formulation of the modelling framework by surveying existing connector modelling formalisms. We covered not only classical connector algebra formalisms, but also, where appropriate, their corresponding quantitative extensions. All formalisms have been evaluated against a set of key dimensions of interest agreed upon in the CONNECT project. Based on these investigations, we concluded that none of the modelling formalisms available at present satisfy our eight dimensions. We will use the outcome of the survey to guide the formulation of a compositional modelling formalism tailored to the specific requirements of the CONNECT project. Furthermore, we considered the range of non-functional properties that are of interest to CONNECT, and reviewed existing specification formalisms for capturing them, together with the corresponding model-checking algorithms and tool support. Consequently, we described the scientific advances concerning model-checking algorithms and tools, which are partial contributions towards future deliverables: an approach for online verification (part of D2.2), automated abstraction-refinement for probabilistic real-time systems (part of D2.2 and D2.4), and compositional probabilistic verification within PRISM, to serve as a foundation of future research on quantitative assume-guarantee compositional reasoning (part of D2.2 and D2.4)

    Bigraphical Languages and their Simulation


    A linguistic approach to concurrent, distributed, and adaptive programming across heterogeneous platforms

    Two major trends in computing hardware during the last decade have been an increase in the number of processing cores found in individual computer hardware platforms and the ubiquity of distributed, heterogeneous systems. Together, these changes can improve not only the performance of a range of applications, but also the types of applications that can be created. Despite the advances in hardware technology, advances in the programming of such systems have not kept pace. Traditional concurrent programming has always been challenging, and is only set to become more so as the level of hardware concurrency increases. The different hardware platforms which make up heterogeneous systems come with domain-specific programming models that are not designed to interact, or to take into account the different resource constraints present across different hardware devices, motivating a need for runtime reconfiguration or adaptation. This dissertation investigates the actor model of computation as an appropriate abstraction to address the issues present in programming concurrent, distributed, and adaptive applications across different scales and types of computing hardware. Given the limitations of other approaches, this dissertation describes a new actor-based programming language (Ensemble) and its runtime to address these challenges. The goal of this language is to enable non-specialist programmers to take advantage of parallel, distributed, and adaptive programming without requiring in-depth knowledge of hardware architectures or software frameworks. There is also a description of the design and implementation of the runtime system which executes Ensemble applications across a range of heterogeneous platforms. To show the suitability of the actor-based abstraction in creating applications for such systems, the language and runtime were evaluated in terms of linguistic complexity and performance. These evaluations covered programming embedded, concurrent, distributed, and adaptable applications, as well as combinations thereof. The results show that the actor model provides an objectively simple way to program such systems without sacrificing performance
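
    To make the actor abstraction concrete, here is a minimal sketch in plain Python rather than Ensemble (whose syntax is not shown in the abstract); the names are illustrative. Each actor owns a private mailbox and handles one message at a time, so no state is shared between threads.

```python
# Minimal actor sketch: asynchronous sends into a mailbox, sequential handling.

import queue
import threading
import time

class Actor:
    def __init__(self, behaviour):
        self.behaviour = behaviour          # function(actor, message)
        self.mailbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)               # asynchronous, non-blocking send

    def _loop(self):
        while True:
            self.behaviour(self, self.mailbox.get())

# Example: a logger actor and a counter actor that reports to it.
logger = Actor(lambda self, msg: print("log:", msg))

def make_counter(report_to):
    count = [0]
    def behave(self, msg):
        if msg == "inc":
            count[0] += 1
        elif msg == "report":
            report_to.send(count[0])
    return behave

counter = Actor(make_counter(logger))
for _ in range(3):
    counter.send("inc")
counter.send("report")
time.sleep(0.1)                             # let the daemon threads drain
```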

    Requirements Engineering of Context-Aware Applications

    Context-aware computing envisions a new generation of smart applications that have the ability to perpetually sense the user’s context and use these data to make adaptation decisions in response to changes in the user’s context so as to provide timely and personalized services anytime and anywhere. Unlike traditional distributed systems, where the network topology is fixed and wired, context-aware computing systems are mostly based on wireless communication due to the mobility of the network nodes; hence the network topology is not fixed but changes dynamically in an unpredictable manner as nodes join and leave the network, in addition to the fact that wireless communication is unstable. These factors make the design and development of context-aware computing systems much more challenging, as the system requirements change depending on the context of use. The Unified Modelling Language (UML) is a graphical language commonly used to specify, visualize, construct, and document the artefacts of software-intensive systems. However, UML is an all-purpose modelling language and does not have notations to distinguish context-awareness requirements from other system requirements. This is critical for the specification, visualization, construction and documentation of context-aware computing systems because context-awareness requirements are highly important in these systems. This thesis proposes an extension of UML diagrams to cater for the specification, visualization, construction and documentation of context-aware computing systems, where new notations are introduced to model context-awareness requirements distinctly from other system requirements. The contributions of this work can be summarized as follows: (i) A context-aware use case diagram is a new notion which merges into a single diagram the traditional use case diagram (that describes the functions of an application) and the use context diagram, which specifies the context information upon which the behaviours of these functions depend. (ii) A novel notion known as a context-aware activity diagram is presented, which extends traditional UML activity diagrams to enable the representation of context objects, context constraints and adaptation activities. Context constraints express conditions upon context object attributes that trigger adaptation activities; adaptation activities are activities that must be performed in response to specific changes in the system’s context. (iii) A novel notion known as the context-aware class diagram is presented, which extends traditional UML class diagrams to enable the representation of context information that affects the behaviours of a class. A new relationship, called utilisation, between a UML class and a context class is used to model context objects, meaning that the behaviours of the UML class depend upon the context information represented by the context class. Hence a context-aware class diagram is a rich and expressive language that distinctively depicts both the structure of classes and that of the contexts upon which they depend. The pragmatics of the proposed approach are demonstrated using two real-world case studies
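
    The thesis defines these notions as UML notation rather than code, but a code-level analogue may help fix the idea of the utilisation relationship and of context constraints triggering adaptation activities. The Python sketch below is an illustrative assumption only; the class names and the indoor-positioning example are invented.

```python
# Code-level analogue of a context class, a utilisation relationship, and a
# context constraint that triggers an adaptation activity. Illustrative only.

from dataclasses import dataclass

@dataclass
class LocationContext:              # a "context class"
    latitude: float
    longitude: float
    indoors: bool

class NavigationService:            # a UML class that *utilises* the context
    def __init__(self, context: LocationContext):
        self.context = context      # the utilisation relationship

    def route_mode(self):
        # context constraint -> adaptation activity
        if self.context.indoors:
            return "switch to indoor, Wi-Fi based positioning"
        return "use GPS positioning"

nav = NavigationService(LocationContext(55.87, -4.29, indoors=True))
print(nav.route_mode())             # adaptation decided by the context
```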

    Paradoxes of interactivity: perspectives for media theory, human-computer interaction, and artistic investigations

    Current findings from anthropology, genetics, prehistory, cognitive and neuroscience indicate that human nature is grounded in a co-evolution of tool use, symbolic communication, social interaction and cultural transmission. Digital information technology has recently entered as a new tool in this co-evolution, and will probably have the strongest impact on shaping the human mind in the near future. A common effort from the humanities, the sciences, art and technology is necessary to understand this ongoing co-evolutionary process. Interactivity is a key for understanding the new relationships formed by humans with social robots as well as interactive environments and wearables underlying this process. Of special importance for understanding interactivity are human-computer and human-robot interaction, as well as media theory and New Media Art. "Paradoxes of Interactivity" brings together reflections on "interactivity" from different theoretical perspectives, the interplay of science and art, and recent technological developments for artistic applications, especially in the realm of sound

    Agoric computation: trust and cyber-physical systems

    In the past two decades advances in miniaturisation and economies of scale have led to the emergence of billions of connected components that have provided both a spur and a blueprint for the development of smart products acting in specialised environments which are uniquely identifiable, localisable, and capable of autonomy. Adopting the computational perspective of multi-agent systems (MAS) as a technological abstraction, married with the engineering perspective of cyber-physical systems (CPS), has provided fertile ground for designing, developing and deploying software applications in smart automated contexts such as manufacturing, power grids, avionics, healthcare and logistics, capable of being decentralised, intelligent, reconfigurable, modular, flexible, robust, adaptive and responsive. Current agent technologies are, however, ill suited for information-based environments, making it difficult to formalise and implement multi-agent systems based on inherently dynamical functional concepts such as trust and reliability, which present special challenges when scaling from small to large systems of agents. To overcome such challenges, it is useful to adopt a unified approach which we term agoric computation, integrating logical, mathematical and programming concepts towards the development of agent-based solutions based on recursive, compositional principles, where smaller systems feed via directed information flows into larger hierarchical systems that define their global environment. Considering information as an integral part of the environment naturally defines a web of operations where components of a system are wired in some way and each set of inputs and outputs is allowed to carry some value. These operations are stateless abstractions and procedures that act on stateful cells that accumulate partial information, and it is possible to compose such abstractions into higher-level ones, using a publish-and-subscribe interaction model that keeps track of update messages between abstractions and values in the data. In this thesis we review the logical and mathematical basis of such abstractions and take steps towards the software implementation of agoric modelling as a framework for simulation and verification of the reliability of increasingly complex systems, and report on experimental results related to a few select applications, such as stigmergic interaction in mobile robotics, integrating raw data into agent perceptions, trust and trustworthiness in orchestrated open systems, computing the epistemic cost of trust when reasoning in networks of agents seeded with contradictory information, and trust models for distributed ledgers in the Internet of Things (IoT); and provide a roadmap for future developments of our research
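
    The cell/operation picture sketched in this abstract — stateless operations wired between stateful cells that accumulate partial information, with updates propagated through a publish-and-subscribe discipline — can be illustrated by the following minimal Python fragment; the names and the trust-combination example are assumptions made for illustration.

```python
# Illustrative sketch: cells hold accumulated values, operations subscribe to
# their input cells and publish a result when all inputs are available.

class Cell:
    def __init__(self, value=None):
        self.value = value
        self.subscribers = []       # operations notified on update

    def publish(self, value):
        if value != self.value:     # only propagate genuine updates
            self.value = value
            for op in self.subscribers:
                op.notify()

class Operation:
    """Stateless abstraction wiring input cells to an output cell."""
    def __init__(self, fn, inputs, output):
        self.fn, self.inputs, self.output = fn, inputs, output
        for cell in inputs:
            cell.subscribers.append(self)

    def notify(self):
        if all(c.value is not None for c in self.inputs):
            self.output.publish(self.fn(*(c.value for c in self.inputs)))

# Example composition: combined trust as the minimum of two partial estimates.
direct, referred, trust = Cell(), Cell(), Cell()
Operation(min, [direct, referred], trust)
direct.publish(0.8)
referred.publish(0.6)
print(trust.value)                  # 0.6
```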

    Paradoxes of Interactivity

    Current findings from anthropology, genetics, prehistory, cognitive and neuroscience indicate that human nature is grounded in a co-evolution of tool use, symbolic communication, social interaction and cultural transmission. Digital information technology has recently entered as a new tool in this co-evolution, and will probably have the strongest impact on shaping the human mind in the near future. A common effort from the humanities, the sciences, art and technology is necessary to understand this ongoing co-evolutionary process. Interactivity is a key for understanding the new relationships formed by humans with social robots as well as interactive environments and wearables underlying this process. Of special importance for understanding interactivity are human-computer and human-robot interaction, as well as media theory and New Media Art. »Paradoxes of Interactivity« brings together reflections on »interactivity« from different theoretical perspectives, the interplay of science and art, and recent technological developments for artistic applications, especially in the realm of sound