
    Design, modelling, simulation and integration of cyber physical systems: Methods and applications

    The main drivers for the development and evolution of Cyber Physical Systems (CPS) are the reduction of development costs and time along with the enhancement of the designed products. The aim of this survey paper is to provide an overview of different types of systems and the associated transition process from mechatronics to CPS and cloud-based (IoT) systems. It further considers the requirement that methodologies for CPS design should be part of a multi-disciplinary development process within which designers focus not only on the separate physical and computational components, but also on their integration and interaction. Challenges related to CPS design are therefore considered in the paper from the perspectives of the physical processes, computation and integration respectively. Illustrative case studies are selected from different system levels, starting with a description of the overarching concept of Cyber Physical Production Systems (CPPSs). The analysis and evaluation of the specific properties of a sub-system using a condition monitoring system, important for maintenance purposes, is then given for a wind turbine.

    Dynamic Distributed Simulation of DEVS Models on the OSGi Service Platform

    Interoperability among simulators is one of the key factors in distributed simulations. Several interoperability infrastructures such as HLA and DEVS/SOA have been utilised, but most of them do not provide any dynamics. This paper introduces the use of the OSGi service platform as universal middleware for dynamic distributed simulation of DEVS models. We have designed and implemented the DEVS/OSGi simulation framework, an approach similar to DEVS/SOA but relying on an integrated service-oriented and protocol-independent architecture. It enables standardised plug-and-play capabilities and dynamic reconfiguration within distributed simulations. The architecture and implementation have been validated in an analytical context against a traffic simulation model. We conclude that the standardised interoperability and run-time dynamics provided by the OSGi service platform are highly valuable for distributed simulations.
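    The plug-and-play behaviour described above rests on publishing simulators as OSGi services that other bundles can discover and withdraw at run time. The following is only an illustrative sketch of that idea, not the paper's actual code: the DevsSimulator interface and TrafficModelSimulator class are invented names, and only the org.osgi.framework API is real.

```java
// Hedged sketch: exposing a DEVS simulator as an OSGi service.
// DevsSimulator and TrafficModelSimulator are hypothetical names for illustration.
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

interface DevsSimulator {                       // assumed service contract
    void simulate(double until);                // advance the DEVS model to time 'until'
}

class TrafficModelSimulator implements DevsSimulator {
    public void simulate(double until) { /* run the coupled traffic model */ }
}

public class SimulatorActivator implements BundleActivator {
    private ServiceRegistration<DevsSimulator> registration;

    @Override
    public void start(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("devs.model", "traffic");     // lets clients select a model via a service filter
        // Publishing the service is what enables plug-and-play: coordinators can
        // look the simulator up, and stopping the bundle withdraws it again.
        registration = context.registerService(DevsSimulator.class,
                new TrafficModelSimulator(), props);
    }

    @Override
    public void stop(BundleContext context) {
        if (registration != null) {
            registration.unregister();          // dynamic reconfiguration: simulator leaves the simulation
        }
    }
}
```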

    Perpetual requirements engineering

    This dissertation attempts to make a contribution within the fields of distributed systems, security, and formal verification. We provide a way to formally assess the impact of a given change in three different contexts. We have developed a logic based on Lewis's counterfactual logic. First we show how our approach is applied to a standard sequential programming setting. Then we show how a modified version of the logic can be used in the context of reactive systems and sensor networks. Last but not least, we show how this logic can be used in the context of security systems. Traditionally, change impact analysis has been viewed as an area of traditional software engineering. Software artifacts (usually source code) are modified in response to a change in user requirements. Aside from making sure that the changes are inherently correct (testing and verification), programmers (software engineers) need to make sure that the introduced changes are coherent with those parts of the system that were not affected by the artifact modification. The latter is generally achieved by establishing a dependency relation between software artifacts. In rough outline, the process of change management consists of projecting the transitive closure of this dependency relation based on the set of artifacts that have actually changed and assessing how the related artifacts changed. This process generally occurs after the affected artifacts are changed, and undesired secondary effects are usually found during the testing phase, after the changes have been incorporated. In cases where there is a certain level of criticality, there is always a division between production and development environments. Change management (whether automatic, tool-driven, or completely manual) can introduce extraneous defects into any of the changed software life-cycle artifacts. The testing phase tries to eradicate a relatively large portion of the undesired defects introduced by change. However, traditional testing techniques are limited by their coverage strength. Therefore, even when maximum coverage is guaranteed there is always a non-zero probability of having secondary effects prior to a change.
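    The dependency-projection step mentioned above is essentially a reachability computation. As a minimal sketch under assumed names and representation (none of this is taken from the dissertation), the impacted set can be obtained by following dependency edges transitively from the artifacts that actually changed:

```java
// Hedged sketch of change impact propagation over a dependency relation.
// The artifact names and map representation are illustrative assumptions.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ChangeImpact {
    /**
     * @param dependents maps an artifact to the artifacts that depend on it
     * @param changed    the artifacts that were actually modified
     * @return every artifact reachable from the changed set, i.e. the transitive closure
     */
    public static Set<String> impactedArtifacts(Map<String, Set<String>> dependents,
                                                Set<String> changed) {
        Set<String> impacted = new HashSet<>(changed);
        Deque<String> work = new ArrayDeque<>(changed);
        while (!work.isEmpty()) {
            String artifact = work.pop();
            for (String dep : dependents.getOrDefault(artifact, Set.of())) {
                if (impacted.add(dep)) {        // follow each dependency edge only once
                    work.push(dep);
                }
            }
        }
        return impacted;
    }
}
```

    For example, with dependents = {parser -> {typechecker}, typechecker -> {codegen}} and changed = {parser}, the method returns {parser, typechecker, codegen}; assessing *how* each impacted artifact changes is the part the dissertation addresses with its counterfactual logic.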

    A Lightweight and Flexible Mobile Agent Platform Tailored to Management Applications

    Mobile Agents (MAs) represent a distributed computing technology that promises to address the scalability problems of centralized network management. A critical issue that will affect the wider adoption of the MA paradigm in management applications is the development of MA Platforms (MAPs) expressly oriented to distributed management. However, most of the available platforms impose a considerable burden on network and system resources and also lack essential functionality. In this paper, we discuss the design considerations and implementation details of a complete MAP research prototype that sufficiently addresses all the aforementioned issues. Our MAP has been implemented in Java and tailored for network and systems management applications. Comment: 7 pages, 5 figures; Proceedings of the 2006 Conference on Mobile Computing and Wireless Communications (MCWC'2006).

    Modern Distribution Management System and Voltage VAR Control

    This paper describes the modern Distribution Management System (DMS) and Voltage/VAR Control (VVC) as one of its important components. The importance of DMS with respect to recent changes such as renewable energy sources, distributed generation and demand response is significant for complete power system stability and control. In this paper VVC, as one of the most important applications in DMS, is explained and analyzed. VVC uses power system control equipment and calculates a new optimal operational state. A typical VVC objective function is the minimization of system power losses, violations of bus voltage limits, feeder capacity limits, or a combination of these. Changes of controllable devices are represented through their injected currents, used in the current iteration method for power flow. Voltage/VAR control is tested on a modified IEEE 13 test network, and the results show that proper adjustments of OLTC transformers, capacitors and DG significantly reduce power losses while satisfying all operation constraints.
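    A weighted sum is one common way to combine such objectives. As a hedged sketch only (the weights, penalty terms and symbols below are assumptions, not the paper's exact formulation), the VVC optimisation over the control settings u (OLTC tap positions, capacitor stages, DG reactive power set-points) can be written as:

```latex
\min_{u}\; J(u) \;=\;
    w_{1}\, P_{\mathrm{loss}}(u)
  \;+\; w_{2} \sum_{i \in \mathcal{B}}
        \max\bigl(0,\; V_{i}(u) - V_{i}^{\max},\; V_{i}^{\min} - V_{i}(u)\bigr)
  \;+\; w_{3} \sum_{f \in \mathcal{F}}
        \max\bigl(0,\; |S_{f}(u)| - S_{f}^{\max}\bigr)
```

    Here B and F denote the sets of buses and feeders, and the weights w1, w2, w3 trade off network losses against voltage-limit and feeder-loading violations; setting one weight to zero recovers the single-objective variants mentioned in the abstract.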

    CRISTAL: A practical study in designing systems to cope with change

    Software engineers frequently face the challenge of developing systems whose requirements are likely to change in order to adapt to organizational reconfigurations or other external pressures. Evolving requirements present difficulties, especially in environments in which business agility demands shorter development times and responsive prototyping. This paper uses a study from CERN in Geneva to address these challenges by employing a 'description-driven' approach that is responsive to changes in user requirements and that facilitates dynamic system reconfiguration. The study describes how handling descriptions of objects in practice alongside their instances (making the objects self-describing) can mediate the effects of evolving user requirements on system development. This paper reports on and draws lessons from the practical use of a description-driven system over time. It also identifies lessons that can be learned from adopting such a self-describing, description-driven approach in future software development. © 2014 Elsevier Ltd.
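    The core of the self-describing idea is that each domain object carries a reference to a description object that is itself stored as editable data, so a requirements change becomes a change to descriptions rather than to compiled code. The sketch below is a loose illustration of that pattern under invented names; it is not CRISTAL's actual API or data model.

```java
// Hedged sketch of a description-driven (self-describing) object pair.
// ItemDescription and Item are hypothetical classes for illustration only.
import java.util.LinkedHashMap;
import java.util.Map;

class ItemDescription {                          // the "meta" object, editable at run time
    final String type;
    final Map<String, String> propertyTypes = new LinkedHashMap<>();
    ItemDescription(String type) { this.type = type; }
    ItemDescription property(String name, String valueType) {
        propertyTypes.put(name, valueType);
        return this;
    }
}

class Item {                                     // the instance, validated against its description
    final ItemDescription description;
    final Map<String, Object> properties = new LinkedHashMap<>();
    Item(ItemDescription description) { this.description = description; }
    void set(String name, Object value) {
        if (!description.propertyTypes.containsKey(name)) {
            throw new IllegalArgumentException(
                name + " is not defined by description '" + description.type + "'");
        }
        properties.put(name, value);
    }
}

public class DescriptionDrivenDemo {
    public static void main(String[] args) {
        ItemDescription detector = new ItemDescription("DetectorPart")
                .property("serialNumber", "String")
                .property("voltage", "Double");
        Item part = new Item(detector);
        part.set("serialNumber", "X-0042");      // accepted: defined by the description
        // A new requirement is met by editing the description data, not the classes:
        detector.property("firmwareVersion", "String");
        part.set("firmwareVersion", "1.2.0");    // now accepted without recompilation
    }
}
```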