
    Aspect-oriented fault tolerance for real-time embedded systems

    Real-time embedded systems for safety-critical applications have to introduce fault tolerance mechanisms in order to cope with hardware and software errors. Fault tolerance is usually applied by means of redundancy and diversity. Redundant hardware implies the establishment of a distributed system executing a set of fault tolerance strategies by software, and may also employ some form of diversity, by using different variants or versions for the same processing. This paper describes our approach to introducing fault tolerance in distributed embedded systems applications using aspect-oriented programming (AOP). A real-time operating system supporting middleware thread communication was integrated with a fault-tolerant framework. The introduction of fault tolerance in the system is performed by AOP at the application thread level. The advantages of this approach include higher modularization, less effort for legacy system evolution and better configurability for testing and product line development. This work has been tested and evaluated successfully in several fault-tolerant configurations and presented no significant performance or memory footprint costs. Fundação para a Ciência e a Tecnologia (FCT)
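
    As a loose illustration of weaving fault tolerance around application threads rather than into them, the sketch below uses a Python decorator to add a time-redundancy recovery policy to a task; the decorator, retry counts and the sensor task are hypothetical examples, not the paper's AOP framework (which targets aspect weaving on an RTOS with middleware thread communication).

```python
# Hedged sketch: cross-cutting fault tolerance kept outside the task body,
# analogous to aspect-level interception at the application thread level.
# The recovery policy and task are illustrative placeholders.
import threading


def with_recovery(retries=2, fallback=None):
    """Re-execute a task on failure (time redundancy), then degrade gracefully."""
    def decorator(task):
        def wrapped(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return task(*args, **kwargs)
                except Exception as err:  # hardware/software error surfaced as exception
                    print(f"{task.__name__}: attempt {attempt} failed: {err}")
            return fallback  # degraded value after exhausting all retries
        return wrapped
    return decorator


@with_recovery(retries=2, fallback=0.0)
def read_sensor():
    # Application logic only; no fault-handling code is tangled in here.
    return 42.0


threading.Thread(target=read_sensor).start()
```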

    Adaptive On-the-Fly Changes in Distributed Processing Pipelines

    Distributed data processing systems have become the standard means for big data analytics. These systems are based on processing pipelines where operations on data are performed in a chain of consecutive steps. Normally, the operations performed by these pipelines are set at design time, and any changes to their functionality require the applications to be restarted. This is not always acceptable, for example, when we cannot afford downtime or when a long-running calculation would lose significant progress. The introduction of variation points into distributed processing pipelines allows for on-the-fly updating of individual analysis steps. In this paper, we extend such basic variation point functionality to provide fully automated reconfiguration of the processing steps within a running pipeline through an automated planner. We have enabled pipeline modeling through constraints. Based on these constraints, we not only ensure that configurations are type-compatible but also verify that the expected pipeline functionality is achieved. Furthermore, automating the reconfiguration process simplifies its use, in turn allowing users with less development experience to make changes. The system can automatically generate and validate pipeline configurations that achieve a specified goal, selecting from operation definitions available at planning time. It then automatically integrates these configurations into the running pipeline. We verify the system by testing a proof-of-concept implementation. The proof of concept also shows promising results when reconfiguration is performed frequently.
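
    As a rough illustration of planning over operation definitions under type constraints, the minimal sketch below searches for a chain of type-compatible operations that turns a source type into a goal type; the operation names, types and breadth-first search are invented for the example and are not the paper's planner or pipeline model.

```python
# Illustrative sketch of constraint-based pipeline planning: each operation
# declares the type it consumes and produces, and a bounded breadth-first
# search assembles a type-compatible chain reaching the goal type.
from dataclasses import dataclass


@dataclass(frozen=True)
class Operation:
    name: str
    consumes: str
    produces: str


OPERATIONS = [
    Operation("parse", "raw", "records"),
    Operation("filter", "records", "records"),
    Operation("aggregate", "records", "summary"),
]


def plan(source: str, goal: str, ops=OPERATIONS, max_len=4):
    """Return the first chain of operations that maps `source` to `goal`."""
    frontier = [(source, [])]
    for _ in range(max_len):
        next_frontier = []
        for current, chain in frontier:
            if current == goal:
                return chain
            for op in ops:
                if op.consumes == current and op not in chain:
                    next_frontier.append((op.produces, chain + [op]))
        frontier = next_frontier
    return None


print([op.name for op in plan("raw", "summary")])  # ['parse', 'aggregate']
```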

    Model-Based Performance Anticipation in Multi-tier Autonomic Systems: Methodology and Experiments

    http://www.thinkmind.org/download.php?articleid=netser_v3_n34_2010_3
    This paper advocates for the introduction of performance awareness in autonomic systems. Our goal is to introduce performance prediction of a possible target configuration when a self-* feature is planning a system reconfiguration. We propose a global and partially automated process based on queue and queuing network modelling. This process includes decomposing a distributed application into black boxes, identifying the queue model for each black box and assembling these models into a queuing network according to the candidate target configuration. Finally, performance prediction is performed either through simulation or analysis. This paper sketches the global process and focuses on the black-box model identification step. This step is automated thanks to a load testing platform enhanced with a workload control loop. Model identification is based on statistical tests. The identified models are then used in performance prediction of autonomic system configurations. This paper describes the whole process through a practical experiment with a multi-tier application.
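
    To make the modelling step concrete, here is a minimal sketch assuming each black box has been identified as an M/M/1 queue and the candidate configuration is an open chain of tiers; the tier names, service rates and workload are invented numbers, not measurements or models from the paper.

```python
# Hedged sketch: each tier is treated as an M/M/1 "black box" with an
# identified service rate, and the candidate configuration's end-to-end
# response time is predicted analytically by summing per-tier delays.

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue (requires arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("tier saturated: arrival rate exceeds capacity")
    return 1.0 / (service_rate - arrival_rate)


# Candidate target configuration: per-tier service rates (req/s) identified
# by load testing each black box in isolation (placeholder values).
tiers = {"web": 220.0, "app": 150.0, "db": 90.0}
workload = 60.0  # offered load in requests per second

predicted = sum(mm1_response_time(workload, mu) for mu in tiers.values())
print(f"predicted end-to-end response time: {predicted * 1000:.1f} ms")
```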

    Scaling a Kubernetes Cluster

    Kubernetes is a container orchestration tool that has become widely adopted for deploying and scaling containers. Devatus Oy as well as their subsidiary company Fliq Oy are interested in knowing how containerized applications can be scaled on Kubernetes. The objective of this thesis is to research how a Kubernetes cluster can be scaled, as well as the containerized applications running on Kubernetes. This thesis begins with an introduction to the background knowledge needed to understand what Kubernetes is. Cloud computing and distributed systems are introduced, since Kubernetes is a distributed system used for the most part in cloud environments. Furthermore, distributed applications and workloads are introduced through the concept of microservices. The concept of containerizing applications is thoroughly introduced to understand the runtime environment of the applications deployed to Kubernetes. Finally, the Kubernetes architecture as well as its main components are introduced to understand how container orchestration works. The research on Kubernetes scalability is divided into three parts. The first part consists of researching how containerized applications can be scaled on Kubernetes. The second part focuses on how the Kubernetes cluster itself can be scaled. The final part consists of load testing one of Fliq’s example REST API applications deployed to a local Kubernetes cluster. The purpose of load testing is to gain further insight into scaling applications running on Kubernetes. Load test results are compared between the initial deployment configuration and the scaled application. The load test results show that containerized applications can be scaled both vertically and horizontally. Vertical scaling can be achieved by increasing the CPU and RAM resource requests and limits for a Pod. Horizontal scaling can be achieved by increasing Pod replicas and placing a Service in front of the Pods that load balances the incoming traffic. Load test results show that both vertical and horizontal scaling can increase the number of users supported by an application deployed to Kubernetes. Scaling horizontally is preferred for Fliq’s example REST API since it decreased the average response time and increased throughput.
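
    As a concrete sketch of the two scaling strategies, the snippet below uses the official Kubernetes Python client to patch a Deployment's replica count (horizontal scaling) and its container resource requests and limits (vertical scaling); the Deployment name "rest-api", the namespace and the resource figures are placeholders rather than values from the thesis, and running it requires kubeconfig access to a cluster.

```python
# Hedged sketch of horizontal and vertical scaling with the Kubernetes
# Python client. Names, namespace and resource figures are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
apps = client.AppsV1Api()

# Horizontal scaling: raise the replica count; a Service in front of the
# Pods load-balances incoming traffic across the replicas.
apps.patch_namespaced_deployment(
    name="rest-api",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Vertical scaling: raise the CPU/RAM requests and limits of the container.
apps.patch_namespaced_deployment(
    name="rest-api",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [{
        "name": "rest-api",
        "resources": {
            "requests": {"cpu": "500m", "memory": "512Mi"},
            "limits": {"cpu": "1", "memory": "1Gi"},
        },
    }]}}}},
)
```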

    Scenarios-based testing of systems with distributed ports

    Copyright © 2011 John Wiley & Sons. Distributed systems are usually composed of several distributed components that communicate with their environment through specific ports. When testing such a system we separately observe sequences of inputs and outputs at each port rather than a global sequence and potentially cannot reconstruct the global sequence that occurred. Typically, the users of such a system cannot synchronise their actions during use or testing. However, the use of the system might correspond to a sequence of scenarios, where each scenario involves a sequence of interactions with the system that, for example, achieves a particular objective. When this is the case there may be a significant delay between two scenarios, which effectively allows the users of the system to synchronise between scenarios. If we represent the specification of the global system by using a state-based notation, we say that a scenario is any sequence of events that happens between two of these operations. We can encode scenarios in two different ways. The first approach consists of marking some of the states of the specification to denote these synchronisation points. It transpires that there are two ways to interpret such models and these lead to two implementation relations. The second approach consists of adding a set of traces to the specification to represent the traces that correspond to scenarios. We show that these two approaches have similar expressive power by providing an encoding from marked states to sets of traces. In order to assess the appropriateness of our new framework, we show that it represents a conservative extension of previous implementation relations defined in the context of the distributed test architecture: if we consider that all the states are marked then we simply obtain ioco (the classical relation for single-port systems) while if no state is marked then we obtain dioco (our previous relation for multi-port systems). Finally, we concentrate on the study of controllable test cases, that is, test cases such that each local tester knows exactly when to apply inputs. We give two notions of controllable test cases, define an implementation relation for each of these notions, and relate them. We also show how we can decide whether a test case satisfies these conditions. Research partially supported by the Spanish MEC project TESIS (TIN2009-14312-C02-01), the UK EPSRC project Testing of Probabilistic and Stochastic Systems (EP/G032572/1), and the UCM-BSCH programme to fund research groups (GR58/08 - group number 910606)
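
    To illustrate the observability problem that motivates these implementation relations (without reproducing the paper's formalism), the short sketch below shows how two different global traces can have identical local projections at each port, so distributed testers cannot reconstruct the global order; the event encoding is invented for the example.

```python
# Minimal sketch of the observability issue behind multi-port testing:
# each local tester only sees its port's projection of the global trace,
# so distinct global interleavings can be indistinguishable. The event
# encoding is illustrative, not the paper's formal framework.

def project(trace, port):
    """Local observation at one port: the subsequence of its events."""
    return [event for p, event in trace if p == port]


def indistinguishable(trace_a, trace_b, ports):
    """True if no local tester can tell the two global traces apart."""
    return all(project(trace_a, p) == project(trace_b, p) for p in ports)


# Two different global interleavings with identical local projections.
t1 = [(1, "?x"), (2, "!y"), (1, "!z")]
t2 = [(2, "!y"), (1, "?x"), (1, "!z")]
print(indistinguishable(t1, t2, ports=[1, 2]))  # True
```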

    Requirements to Testing of Power System Services Provided by DER Units

    The present report forms the Project Deliverable ‘D 2.2’ of the DERlab NoE project, supported by the EC under Contract No. SES6-CT-518299 NoE DERlab. The present document discusses the power system services that may be provided by DER units and the related methods to test the services actually provided, both at component level and at system level.

    International White Book on DER Protection : Review and Testing Procedures

    This white book provides an insight into the issues surrounding the impact of increasing levels of DER on the generator and network protection and the resulting necessary improvements in protection testing practices. Particular focus is placed on ever increasing inverter-interfaced DER installations and the challenges of utility network integration. This white book should also serve as a starting point for specifying DER protection testing requirements and procedures. A comprehensive review of international DER protection practices, standards and recommendations is presented. This is accompanied by the identification of the main performance challenges related to these protection schemes under varied network operational conditions and the nature of DER generator and interface technologies. Emphasis is placed on the importance of dynamic testing that can only be delivered through laboratory-based platforms such as real-time simulators, integrated substation automation infrastructure and flexible, inverter-equipped testing microgrids. To this end, the combination of flexible network operation and new DER technologies underlines the importance of utilising the laboratory testing facilities available within the DERlab Network of Excellence. This not only informs the shaping of new protection testing and network integration practices by end users but also enables the process of de-risking new DER protection technologies. In order to support the issues discussed in the white book, a comparative case study between UK and German DER protection and scheme testing practices is presented. This also highlights the level of complexity associated with standardisation and approval mechanisms adopted by different countries

    European White Book on Real-Time Power Hardware in the Loop Testing: DERlab Report No. R-005.0

    The European White Book on Real-Time Power Hardware-in-the-Loop testing is intended to serve as a reference document on the future of testing of electrical power equipment, with specific focus on the emerging hardware-in-the-loop activities and application thereof within testing facilities and procedures. It will provide an outlook on how this powerful tool can be utilised to support the development, testing and validation of specifically DER equipment. It aims to report on international experience gained thus far and provides case studies on developments and specific technical issues, such as the hardware/software interface. This white book complements the already existing series of DERlab European white books, covering topics such as grid-inverters and grid-connected storage.