    The Impact of Petri Nets on System-of-Systems Engineering

    The successful engineering of a large-scale system-of-systems project towards deterministic behaviour depends on integrating autonomous components using international communications standards in accordance with dynamic requirements. To date, such engineering has been unsuccessful: no combination of top-down and bottom-up engineering perspectives is adopted, and the information exchange protocols and interfaces between components are not precisely specified. Various approaches, such as modelling and architecture frameworks, make positive contributions to system-of-systems specification, but their successful implementation remains a problem. One of the most popular modelling notations available for specifying systems, UML, is intuitive and graphical but also ambiguous and imprecise. While it supplies a range of diagrams to represent a system under development, UML lacks simulation and exhaustive verification capability. This shortfall in UML has received little attention in the context of systems-of-systems, and there are two major research issues: 1. where the dynamic, behavioural diagrams of UML can and cannot be used to model and analyse a system-of-systems; 2. how Petri nets can be used to improve the specification and analysis of the dynamic model of a system-of-systems specified using UML. This thesis presents the strengths and weaknesses of Petri nets in relation to the specification of systems-of-systems and shows how Petri net models can be used instead of conventional UML Activity Diagrams. The model of the system-of-systems can then be analysed and verified using Petri net theory. The Petri net formalism of behaviour is demonstrated using two case studies from the military domain. The first case study uses Petri nets to specify and analyse a close air support mission; it concludes by indicating the strengths, weaknesses, and shortfalls of the proposed formalism in system-of-systems specification. The second case study considers the specification of a military exchange network parameters problem, and the results are compared with the strengths and weaknesses identified in the first case study. Finally, the results of the research are formulated as a Petri net enhancement to UML (mapping existing activity diagram elements to Petri net elements) to meet the needs of system-of-systems specification, verification and validation.
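
    To make the proposed mapping concrete, the sketch below shows the common activity-diagram-to-Petri-net encoding the abstract alludes to: actions become transitions, control-flow edges become places, and tokens mark progress, so forks and joins can be analysed by firing transitions. The net structure and mission fragment are illustrative assumptions, not the thesis's actual case-study model.

    ```python
    # Minimal Petri net: places hold tokens; a transition fires by
    # consuming one token from each input place and producing one
    # token in each output place.
    class PetriNet:
        def __init__(self):
            self.marking = {}        # place -> token count
            self.transitions = {}    # name -> (input places, output places)

        def add_transition(self, name, inputs, outputs):
            self.transitions[name] = (inputs, outputs)
            for p in inputs + outputs:
                self.marking.setdefault(p, 0)

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking[p] >= 1 for p in inputs)

        def fire(self, name):
            assert self.enabled(name), f"{name} is not enabled"
            inputs, outputs = self.transitions[name]
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] += 1

    # Hypothetical mission fragment: a request forks into two concurrent
    # activities (a fork node), which are joined before engagement.
    net = PetriNet()
    net.add_transition("request",   ["start"],      ["p1", "p2"])  # fork
    net.add_transition("authorise", ["p1"],         ["p3"])
    net.add_transition("locate",    ["p2"],         ["p4"])
    net.add_transition("engage",    ["p3", "p4"],   ["done"])      # join
    net.marking["start"] = 1

    for t in ["request", "authorise", "locate", "engage"]:
        net.fire(t)
    print(net.marking)   # one token now rests in 'done'
    ```

    Unlike the corresponding activity diagram, this model is directly executable, and standard Petri net theory (reachability, boundedness, deadlock analysis) can be applied to it.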

    A web-based collaborative decision making system for construction project teams using fuzzy logic

    In the construction industry, the adoption of concurrent engineering principles requires the development of effective enabling IT tools. Such tools need to address specific areas of need in the implementation of concurrent engineering in construction, and collaborative decision-making is an important area in this regard. A review of existing work has shown that none of the existing approaches to collaborative decision-making adequately addresses the needs of distributed construction project teams. The review also reveals that fuzzy logic offers great potential for application to collaborative decision-making. This thesis describes a Web-based collaborative decision-making system for construction project teams using fuzzy logic. Fuzzy logic is applied to tackle uncertainty and imprecision during the decision-making process. The prototype system is Web-based to cope with cases where project team members are geographically distributed and physical meetings are inconvenient or expensive. The prototype was developed into Web-based software using Java and allows a virtual meeting to be held within a construction project team via a client-server system. The prototype system also supports objectivity in group decision-making, and the approach it encapsulates can be used for generic decision-making scenarios. The system implementation revealed that collaborative decision-making within a virtual construction project team can be significantly enhanced by the use of a fuzzy-based approach. A generic scenario and a construction scenario were used to evaluate the system, and the evaluation confirmed that it offers many benefits in facilitating collaborative decision-making in construction. It is concluded that the prototype decision-making system represents a unique and innovative approach to collaborative decision-making in construction project teams: it not only contributes to the implementation of concurrent engineering in construction, but also represents a substantial advance over existing approaches.
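
    As an illustration of the kind of fuzzy aggregation such a system might perform, the sketch below combines imprecise linguistic ratings from distributed team members into a crisp group score. The linguistic scale, triangular fuzzy numbers and centroid defuzzification are assumptions for illustration, not necessarily the thesis's exact scheme.

    ```python
    # Each team member rates an option with a linguistic term; terms map
    # to triangular fuzzy numbers (low, mode, high) on [0, 1]. The team
    # rating is the fuzzy average, and a crisp score is recovered by
    # centroid defuzzification of the resulting triangle.
    TERMS = {
        "poor":      (0.0, 0.0, 0.25),
        "fair":      (0.0, 0.25, 0.5),
        "good":      (0.25, 0.5, 0.75),
        "very good": (0.5, 0.75, 1.0),
        "excellent": (0.75, 1.0, 1.0),
    }

    def fuzzy_average(ratings):
        tris = [TERMS[r] for r in ratings]
        n = len(tris)
        return tuple(sum(t[i] for t in tris) / n for i in range(3))

    def defuzzify(tri):
        # Centroid of a triangle is the mean of its three vertices.
        return sum(tri) / 3

    votes = {"option A": ["good", "very good", "excellent", "good"],
             "option B": ["fair", "good", "good", "poor"]}
    scores = {opt: defuzzify(fuzzy_average(rs)) for opt, rs in votes.items()}
    print(max(scores, key=scores.get), scores)   # option A wins here
    ```

    The point of the fuzzy representation is that members need not commit to precise numbers; uncertainty is carried through the aggregation and only collapsed at the final ranking step.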

    Reflective mobile middleware for context-aware applications

    The increasing popularity of mobile devices, such as mobile phones and personal digital assistants, and advances in wireless networking technologies, are enabling new classes of applications that present challenging problems to application designers. Applications have to be aware of, and adapt to, variations in the execution context, such as fluctuating network bandwidth and decreasing battery power, in order to deliver a good quality of service to their users. We argue that building applications directly on top of the network operating system would be extremely tedious and error-prone, as application developers would have to deal with these issues explicitly, and would consequently be distracted from the actual requirements of the application they are building. Rather, a middleware layered between the network operating system and the application should provide application developers with abstractions and mechanisms to deal with these issues. We investigate the principle of reflection and demonstrate how it can be used to support context-awareness and dynamic adaptation to context changes. We offer application engineers an abstraction of middleware as a dynamically customisable service provider, where each service can be delivered using different policies when requested in different contexts. Based on this abstraction, current middleware behaviour, with respect to a particular application, is reified in an application profile, and made accessible to the application for run-time inspection and adaptation. Applications can use the meta-interface that the middleware provides to change the information encoded in their profile, thus tailoring middleware behaviour to the user's needs. However, while doing so, conflicts may arise; different users may have different quality-of-service needs, and applications, in an attempt to fulfil these needs, may customise middleware behaviour in conflicting ways. These conflicts have to be resolved in order to allow applications to come to an agreement, and thus be able to engage in successful collaborations. We demonstrate how microeconomic techniques can be used to treat these kinds of conflicts. We offer an abstraction of the mobile setting as an economy, where applications compete to have a service delivered according to their quality-of-service needs. We have designed a mechanism where middleware plays the role of the auctioneer, collecting bids from the applications and delivering the service using the policy that maximises social welfare; that is, the one that delivers, on average, the best quality-of-service. We formalise the principles discussed above, namely reflection to support context-awareness and microeconomic techniques to support conflict resolution. To demonstrate their effectiveness in fostering the development of context-aware applications, we discuss a middleware architecture and implementation (CARISMA) that embeds these principles, and report on performance and usability results obtained during a thorough evaluation stage.
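
    The auction step can be made concrete with a small sketch: each application bids its utility for every candidate delivery policy, and the middleware, acting as auctioneer, selects the policy that maximises total utility (social welfare). Policy names and bid values below are illustrative, not CARISMA's actual interface.

    ```python
    # bids: {application: {policy: utility}}. The auctioneer sums the
    # utilities each policy would yield across all applications and
    # picks the policy with the highest total (the social welfare).
    def auction(bids):
        policies = set().union(*(b.keys() for b in bids.values()))
        welfare = {p: sum(b.get(p, 0.0) for b in bids.values())
                   for p in policies}
        return max(welfare, key=welfare.get), welfare

    bids = {
        "mail":   {"plaintext": 0.9, "compressed": 0.4},
        "news":   {"plaintext": 0.2, "compressed": 0.8},
        "backup": {"plaintext": 0.1, "compressed": 0.9},
    }
    policy, welfare = auction(bids)
    print(policy, welfare)   # 'compressed' maximises total utility here
    ```

    The individually preferred policy of one application (here, "mail") can lose the auction; the mechanism trades individual preferences for the best average quality of service across applications.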

    Adapting mobile systems using logical mobility primitives

    Mobile computing devices, such as personal digital assistants and mobile phones, are becoming increasingly popular, smaller, more capable and even fashionable personal items. Combined with the recent advent of wireless networking techniques, this equips users with mobile devices of significant computational ability that can wirelessly access information by dynamically connecting to many different networks. Despite the ubiquity of mobile devices, mobile systems are built using monolithic architectures, use a small set of predefined interaction paradigms and do not exploit or adapt to the dynamicity of their local or remote context. Applications deployed on mobile devices face considerable challenges posed by their changing surroundings. One of the main peculiarities of mobile devices is heterogeneity, which may occur in software, hardware and network protocols. Mobile systems may carry a large number of different applications, use different operating systems and middleware and, often, have more than one network interface. A further challenge is their considerable variation in the computational resources available, such as battery power, CPU speed, network bandwidth and volatile and persistent memory. Moreover, mobile computing systems are highly dynamic in terms of their surroundings, implying that the requirements for applications deployed on a mobile device are a moving target. Changes in the requirements (such as integration with a new service) may require changes to the application, and consequently the application behaviour may need to adapt. This thesis argues that the potential of the ubiquity of mobile devices cannot be realised using static and monolithic architectures, as mobile systems need to be able to adapt to accommodate changes to their environment. It investigates the use of three technologies to offer adaptation to mobile devices: logical mobility techniques, component systems and middleware technologies. More specifically, this thesis presents the SATIN (System Adaptation Targeting Integrated Networks) component metamodel, a lightweight local component metamodel that offers the flexible use of logical mobility primitives. The metamodel is instantiated to build the SATIN middleware system, a component-based mobile computing middleware that uses the mobility primitives exported by the metamodel to reconfigure itself and the applications running on top of it. The suitability of SATIN for the creation of adaptable mobile systems is demonstrated by using it to implement and evaluate a number of applications showing different aspects of adaptation. Moreover, existing projects are reengineered to run as SATIN components, showing the flexibility of the approach and the advantages gained over the originals.
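
    The core logical mobility primitive, shipping code to a host at runtime and binding it into the local container, might be sketched as follows. SATIN itself is Java-based and its metamodel is richer; the container and component names here are hypothetical stand-ins.

    ```python
    # Sketch of a container that receives component code as text (e.g.
    # over the network), materialises it in a fresh module and registers
    # it, so the system can reconfigure itself without redeployment.
    import types

    class Container:
        def __init__(self):
            self.components = {}

        def deploy(self, name, source):
            module = types.ModuleType(name)
            exec(source, module.__dict__)          # materialise shipped code
            self.components[name] = module.Component()

        def call(self, name, *args):
            return self.components[name].handle(*args)

    # Code received from a remote host (a hypothetical codec component).
    remote_source = """
    class Component:
        def handle(self, text):
            return text.upper()
    """

    container = Container()
    container.deploy("codec", remote_source)
    print(container.call("codec", "adapt me"))     # ADAPT ME
    ```

    The same primitive that installs new application components can also replace parts of the middleware itself, which is what enables the self-reconfiguration described above.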

    Automated Unit Testing of Evolving Software

    As software programs evolve, developers need to ensure that new changes do not affect the originally intended functionality of the program. To increase their confidence, developers commonly write unit tests along with the program and execute them after a change is made. However, manually writing these unit tests is difficult and time-consuming, and as their number increases, so does the cost of executing and maintaining them. Automated test generation techniques have been proposed in the literature to assist developers in the endeavour of writing these tests. However, it remains an open question how well these tools can help with fault finding in practice, and maintaining the automatically generated tests may require extra effort compared to human-written ones. This thesis evaluates the effectiveness of a number of existing automatic unit test generation techniques at detecting real faults, and explores how these techniques can be improved. In particular, we present a novel multi-objective search-based approach for generating tests that reveal changes across two versions of a program. We then investigate whether these tests can be used such that no maintenance effort is necessary. Our results show that, overall, state-of-the-art test generation tools can indeed be effective at detecting real faults: collectively, the tools revealed more than half of the bugs we studied. We also show that our proposed alternative technique, which is better suited to the problem of revealing changes, can detect more faults, and does so more frequently. However, we also find that for a majority of object-oriented programs, even a random search can achieve good results. Finally, we show that such change-revealing tests can be generated on demand in practice, without requiring them to be maintained over time.
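
    The underlying change-revealing oracle can be sketched simply: a generated input reveals a change if the two program versions disagree on its outcome (return value or raised exception). The thesis layers multi-objective search on top of this idea; in the sketch below a plain random generator stands in for the search.

    ```python
    # A test input is change-revealing if old and new versions disagree
    # on its outcome, where an outcome is either a returned value or the
    # type of a raised exception.
    import random

    def outcome(fn, arg):
        try:
            return ("ok", fn(arg))
        except Exception as e:
            return ("raised", type(e).__name__)

    def reveal_change(old, new, trials=1000):
        for _ in range(trials):
            arg = random.randint(-100, 100)
            if outcome(old, arg) != outcome(new, arg):
                return arg                     # change-revealing input
        return None

    # Two toy versions that differ only at x == 0.
    old_version = lambda x: 10 // x
    new_version = lambda x: 10 // x if x != 0 else 0

    print(reveal_change(old_version, new_version))  # very likely prints 0
    ```

    Because the oracle is the cross-version difference itself, such tests need not encode expected values by hand, which is what allows them to be regenerated on demand rather than maintained.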

    Adaptive heterogeneous parallelism for semi-empirical lattice dynamics in computational materials science.

    With the variability in performance of the multitude of parallel environments available today, the conceptual overhead created by the need to anticipate runtime information to make design-time decisions has become overwhelming. Performance-critical applications and libraries carry implicit assumptions based on incidental metrics that are not portable to emerging computational platforms or even to alternative contemporary architectures. Furthermore, the significance of runtime concerns such as makespan, energy efficiency and fault tolerance depends on the situational context. This thesis presents a case study in the application of both Mattson's prescriptive pattern-oriented approach and the more principled structured parallelism formalism to the computational simulation of inelastic neutron scattering spectra on hybrid CPU/GPU platforms. The original ad hoc implementation, as well as new pattern-based and structured implementations, are evaluated for relative performance and scalability. Two new structural abstractions are introduced to facilitate adaptation by lazy optimisation and runtime feedback. A deferred-choice abstraction represents a unified space of alternative structural program variants, allowing static adaptation through model-specific exhaustive calibration with regard to the extra-functional concerns of runtime, average instantaneous power and total energy usage. Instrumented queues serve as a mechanism for structural composition and provide a representation of extra-functional state that allows the realisation of a market-based decentralised coordination heuristic for competitive resource allocation and of the Lyapunov drift algorithm for cooperative scheduling.
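
    A deferred-choice abstraction might look like the following sketch: alternative structural variants of the same computation are held open, calibrated against a measured extra-functional concern (wall-clock runtime here; power and energy in the thesis), and the best variant is committed to for subsequent calls. Names and the calibration policy are illustrative assumptions, not the thesis's framework.

    ```python
    # A DeferredChoice holds alternative implementations of the same
    # computation; calibrate() times each on a sample input and commits
    # to the fastest variant for all later calls.
    import time

    class DeferredChoice:
        def __init__(self, *variants):
            self.variants = variants
            self.chosen = None

        def calibrate(self, sample_input):
            timings = []
            for v in self.variants:
                start = time.perf_counter()
                v(sample_input)
                timings.append((time.perf_counter() - start, v))
            self.chosen = min(timings, key=lambda t: t[0])[1]

        def __call__(self, x):
            assert self.chosen is not None, "calibrate() first"
            return self.chosen(x)

    sum_loop = lambda xs: sum(xs)                        # sequential variant
    sum_pair = lambda xs: sum(xs[::2]) + sum(xs[1::2])   # "split" variant

    total = DeferredChoice(sum_loop, sum_pair)
    total.calibrate(list(range(100_000)))
    print(total(list(range(10))))   # 45, via whichever variant won
    ```

    Swapping the timing measurement for a power or energy probe changes the concern being optimised without touching the variants, which is the portability argument the abstract makes.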

    User interfaces in space science instrumentation

    This thesis examines user interaction with instrumentation in the specific context of space science. It gathers together existing practice in machine interfaces, looks at potential future usage, and recommends a new approach to space science projects with the intention of maximising their science return. It first takes a historical perspective on user interfaces and on ways of defining and measuring the science return of a space instrument. Choices of research methodology are considered. Implementation details such as the concepts of usability, mental models, affordance and presentation of information are described, and examples of existing interfaces in space science are given. A set of parameters for use in analysing and synthesising a user interface is derived from a set of case studies of diverse failures and from previous work. A general space science user analysis is made by looking at typical practice, and an interview-plus-persona technique is used to match groups of users with interface designs. An examination is made of designs in the field of astronomical instrumentation interfaces, showing the evolution of current concepts and including ideas capable of sustaining progress in the future. The parameters developed earlier are then tested against several established interfaces in the space science context to give a degree of confidence in their use. The concept of a simulator that guides the development of an instrument over the whole lifecycle is described, and the idea is proposed that better instrumentation would result from more efficient use of the resources available. The ideas in this thesis are then brought together in a proposed new approach to a typical development programme, with an emphasis on user interaction. The conclusion shows that there is significant room for improvement in the science return from space instrumentation through attention to the user interface.

    Wiki-health: from quantified self to self-understanding

    Today, healthcare providers are experiencing explosive growth in data, and medical imaging represents a significant portion of that data. Meanwhile, the pervasive use of mobile phones and the rising adoption of sensing devices, which enable people to collect data independently at any time or place, are leading to a torrent of sensor data. The scale and richness of the sensor data currently being collected and analysed is rapidly growing. The key challenge we face is how to effectively manage and make use of this abundance of easily-generated and diverse health data. This thesis investigates the challenges posed by the explosive growth of available healthcare data and proposes a number of potential solutions. As a result, a big data service platform, named Wiki-Health, is presented to provide a unified solution for collecting, storing, tagging, retrieving, searching and analysing personal health sensor data. Additionally, it allows users to reuse and remix data, along with analysis results and analysis models, to make health-related knowledge discovery more available to individual users on a massive scale. To tackle the challenge of efficiently managing the high volume and diversity of big data, Wiki-Health introduces a hybrid data storage approach capable of storing structured, semi-structured and unstructured sensor data and sensor metadata separately. A multi-tier cloud storage system, CACSS, has been developed and serves as a component of the Wiki-Health platform, allowing it to manage the storage of unstructured and semi-structured data, such as medical imaging files. CACSS provides comprehensive features such as global data de-duplication, performance-awareness and data caching services. The design of this hybrid approach allows Wiki-Health to handle heterogeneous formats of sensor data. To evaluate the proposed approach, we have developed an ECG-based health monitoring service and a virtual sensing service on top of the Wiki-Health platform. The two services demonstrate the feasibility and potential of using the Wiki-Health framework to enable better utilisation and comprehension of the vast amounts of sensor data available from different sources, and both show significant potential for real-world applications.
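
    The hybrid storage routing can be sketched as follows: structured sensor metadata goes to a queryable store, while bulky unstructured payloads such as medical images go to an object store, de-duplicated globally by content hash. Class and method names are hypothetical, not Wiki-Health's or CACSS's actual API.

    ```python
    # Structured metadata is kept queryable; unstructured payloads are
    # stored once per unique content hash (global de-duplication) and
    # referenced from the metadata records.
    import hashlib

    class HybridStore:
        def __init__(self):
            self.metadata = []     # structured store (stand-in for a DB)
            self.objects = {}      # object store: content hash -> bytes

        def put(self, meta, payload):
            digest = hashlib.sha256(payload).hexdigest()
            if digest not in self.objects:        # de-duplicate
                self.objects[digest] = payload
            self.metadata.append({**meta, "object": digest})
            return digest

        def query(self, **filters):
            return [m for m in self.metadata
                    if all(m.get(k) == v for k, v in filters.items())]

    store = HybridStore()
    scan = b"...image bytes..."
    store.put({"user": "u1", "kind": "ecg", "hz": 250}, b"...samples...")
    store.put({"user": "u1", "kind": "mri"}, scan)
    store.put({"user": "u2", "kind": "mri"}, scan)    # stored only once
    print(len(store.objects), store.query(user="u1"))
    ```

    Splitting the two stores lets each be scaled and tiered independently, which is the rationale for the multi-tier CACSS component described above.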

    Dynamic analysis for concurrent modern C/C++ applications

    Concurrent programs are executed by multiple threads that run simultaneously. While this allows programs to run more efficiently by utilising multiple processors, it brings with it numerous complications. For example, a program may behave unpredictably or erroneously when multiple threads modify the same memory location in an uncoordinated manner. Issues such as this are difficult to avoid and, when introduced, can break the program in unpredictable ways. Programmers therefore often turn to automated tools to aid in the detection of concurrency bugs. The work presented in this thesis aims to provide methods that aid the creation of tools for finding and explaining concurrency bugs. In particular, the following studies have been conducted. Dynamic race detection for C/C++11: with the introduction of a weak memory model in C++, many tools that provide dynamic race detection have become outdated and are unable to adequately identify data races. This work updates an existing data race detection algorithm so that it can identify data races according to this new definition; a method for allowing programs to explore many of the weak behaviours that the new memory model permits is also provided. Record and replay: much work has gone into record and replay, but most of it focuses on whole-system replay, whereby a tool aims to record as much of the program execution as possible. In contrast, the work presented here aims to record as little as possible. This sparse approach has many interesting implications: some programs that were previously out of reach for record and replay become tractable, and vice versa. To support it, controlled scheduling is introduced that is capable of applying different scheduling strategies; combined with record and replay, this helps to root out bugs. Tool support: both of the above techniques have been implemented in a tool, tsan11rec, which builds on the tsan dynamic race detection tool. A large experimental evaluation is presented, investigating the effectiveness of the enhanced data race detection algorithm when applied to the Firefox and Chromium web browsers, and of the novel approach to record and replay when applied to a diverse set of concurrent applications.
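
    The happens-before checking that underlies this family of race detectors can be sketched with vector clocks: two accesses to the same location by different threads race unless ordered by a release/acquire (or lock) synchronisation. The sketch below deliberately omits what the thesis adds, namely the handling of C/C++11 relaxed atomics and the weak memory model.

    ```python
    # Each thread keeps a vector clock. An earlier access (t0, c0) to a
    # location happens-before a later access by thread `tid` only if
    # tid's clock has absorbed t0's clock up to c0; otherwise they race.
    class Detector:
        def __init__(self, n_threads):
            self.vc = [[0] * n_threads for _ in range(n_threads)]
            for t in range(n_threads):
                self.vc[t][t] = 1
            self.last_access = {}    # location -> list of (tid, clock)

        def _ordered_before(self, earlier, tid):
            t0, c0 = earlier
            return self.vc[tid][t0] >= c0

        def access(self, tid, loc):
            for prev in self.last_access.get(loc, []):
                if prev[0] != tid and not self._ordered_before(prev, tid):
                    print(f"race on {loc}: threads {prev[0]} and {tid}")
            self.last_access.setdefault(loc, []).append(
                (tid, self.vc[tid][tid]))

        def release_acquire(self, src, dst):
            # e.g. src unlocks a mutex that dst then locks: dst joins
            # src's clock, and src advances its own component.
            self.vc[dst] = [max(a, b) for a, b in zip(self.vc[dst],
                                                      self.vc[src])]
            self.vc[src][src] += 1

    d = Detector(2)
    d.access(0, "x")
    d.release_acquire(0, 1)   # synchronisation orders the next access
    d.access(1, "x")          # no report: ordered after thread 0's access
    d.access(1, "y")
    d.access(0, "y")          # reported: no ordering from thread 1 to 0
    ```

    The C/C++11 extension the abstract describes must additionally decide, per atomic operation and memory order, which of these clock joins actually take place, which is precisely where pre-C++11 detectors report wrong results.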