
    Creating Custom Containers with Generative Techniques

    Component containers are a key part of mainstream component technologies and play an important role in separating non-functional concerns from the core component logic. This paper addresses two different aspects of containers. First, it shows how generative programming techniques, using AspectC++ and metaprogramming, can be used to generate stubs and skeletons without the need for special compilers or interface description languages. Second, the paper describes an approach to creating custom containers by composing different non-functional features. Unlike component technologies such as EJB, which support only a predefined set of container types, this approach allows different combinations of non-functional features to be composed in a container to meet the application's needs.
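
    The composition idea can be illustrated independently of AspectC++. Below is a minimal Python sketch, with hypothetical feature and function names not taken from the paper, showing a custom container assembled from a caller-chosen stack of non-functional features wrapped around a component's core logic.

        # Minimal sketch of composing non-functional features into a custom container.
        # All names (logging_feature, retry_feature, make_container) are illustrative.

        def logging_feature(call):
            def wrapped(*args, **kwargs):
                print("container: invoking component with", args)
                result = call(*args, **kwargs)
                print("container: component returned", result)
                return result
            return wrapped

        def retry_feature(attempts=3):
            def decorate(call):
                def wrapped(*args, **kwargs):
                    last_error = None
                    for _ in range(attempts):
                        try:
                            return call(*args, **kwargs)
                        except Exception as error:
                            last_error = error
                    raise last_error
                return wrapped
            return decorate

        def make_container(component_operation, features):
            """Compose a custom container by stacking only the selected features."""
            call = component_operation
            for feature in reversed(features):   # first feature in the list runs outermost
                call = feature(call)
            return call

        def add(a, b):   # stand-in for the component's core business logic
            return a + b

        container = make_container(add, [logging_feature, retry_feature(attempts=2)])
        print(container(2, 3))

    In this style the set of container types is not fixed in advance: each assembly picks whichever combination of features its non-functional requirements call for.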

    Adaptation Timing in Self-Adaptive Systems

    Software-intensive systems are increasingly expected to operate under changing and uncertain conditions, including not only varying user needs and workloads but also fluctuating resource capacity. Self-adaptation is an approach that aims to address this problem, giving systems the ability to change their behavior and structure to adapt to changes in themselves and their operating environment without human intervention. Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, some adaptation tactics—atomic adaptation actions that leave the system in a consistent state—have latency and take some time to produce their effect. In that case, reactive adaptation causes the system to lag behind environment changes. Worse, a long-running adaptation action may prevent the system from performing other adaptations until it completes, further limiting its ability to deal effectively with environment changes. To address these limitations and improve the effectiveness of self-adaptation, we present proactive latency-aware adaptation, an approach that considers the timing of adaptation by (i) leveraging predictions of the near-future state of the environment to adapt proactively; (ii) considering the latency of adaptation tactics when deciding how to adapt; and (iii) executing tactics concurrently. We have developed three different solution approaches embodying these principles. The first is based on probabilistic model checking, making it inherently able to deal with the stochastic behavior of the environment and guaranteeing optimal adaptation choices over a finite decision horizon. The second uses stochastic dynamic programming to make adaptation decisions and, by performing part of the required computations offline, achieves a speedup of an order of magnitude over the first approach without compromising optimality. The third makes adaptation decisions based on repertoires of adaptation strategies—predefined compositions of adaptation tactics. This approach is more scalable than the other two because the solution space is smaller, allowing an adaptive system to reap some of the benefits of proactive latency-aware adaptation even when the number of ways in which it could adapt is too large for the other approaches to consider every possibility. We evaluate the approach using two different classes of systems with different adaptation goals and different repertoires of adaptation strategies. One is a web system whose adaptation goal is utility maximization. The other is a cyber-physical system operating in a hostile environment, in which self-adaptation must not only maximize the reward gained but also keep the probability of surviving a mission above a threshold. In both cases, our results show that proactive latency-aware adaptation improves the effectiveness of self-adaptation with respect to reactive time-agnostic adaptation.
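
    As a rough illustration of the latency-aware part of the decision problem, the following Python sketch compares tactics by when their effect would actually be available relative to a predicted load trace. It is a toy greedy lookahead with hypothetical tactics and numbers, not any of the three solution approaches (probabilistic model checking, stochastic dynamic programming, or strategy repertoires) described above.

        # Toy sketch: a tactic started now only takes effect after its latency, so
        # evaluate each tactic against the capacity that will actually be available
        # at each point of the predicted horizon. Illustrative only.

        from dataclasses import dataclass

        @dataclass
        class Tactic:
            name: str
            latency: int        # decision intervals before the tactic takes effect
            capacity_gain: int

        def overload_intervals(predicted_load, capacity, tactic):
            """Count intervals where predicted load exceeds available capacity."""
            count = 0
            for t, load in enumerate(predicted_load):
                effective = capacity
                if tactic is not None and t >= tactic.latency:
                    effective += tactic.capacity_gain
                if load > effective:
                    count += 1
            return count

        def choose_tactic(predicted_load, capacity, tactics):
            best, best_score = None, overload_intervals(predicted_load, capacity, None)
            for tactic in tactics:
                score = overload_intervals(predicted_load, capacity, tactic)
                if score < best_score:
                    best, best_score = tactic, score
            return best

        tactics = [Tactic("add_server", latency=3, capacity_gain=50),
                   Tactic("reduce_fidelity", latency=0, capacity_gain=20)]
        forecast = [60, 70, 85, 95, 110]     # predicted requests per interval
        print(choose_tactic(forecast, capacity=80, tactics=tactics))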

    Model-Driven Performance Analysis

    Model-Driven Engineering (MDE) is an approach to develop software systems by creating models and applying automated transformations to them to ultimately generate the implementation for a target platform. Although the main focus of MDE is on the generation of code, it is also necessary to support the analysis of the designs with respect to quality attributes such as performance. To complement the model-to-implementation path of MDE approaches, an MDE tool infrastructure should provide what we call model-driven analysis. This paper describes an approach to model-driven analysis based on reasoning frameworks. In particular, it describes a performance reasoning framework that can transform a design into a model suitable for analysis of real-time performance properties with different evaluation procedures, including rate monotonic analysis and simulation. The concepts presented in this paper have been implemented in the PACC Starter Kit, a development environment that supports code generation and analysis from the same models.
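
    As a concrete example of the kind of evaluation procedure such a reasoning framework can automate, the sketch below applies the classic Liu and Layland rate monotonic utilization bound to a hypothetical set of periodic tasks. It is a textbook sufficient test, not the implementation used in the PACC Starter Kit.

        # Classic rate monotonic schedulability test (utilization bound).
        # The bound is sufficient but not necessary; task parameters are hypothetical.

        def rm_utilization_bound(n):
            """Least upper bound on total utilization for n periodic tasks under RMA."""
            return n * (2 ** (1.0 / n) - 1)

        def rm_schedulable(tasks):
            """tasks: list of (worst-case execution time, period), deadlines = periods."""
            utilization = sum(c / t for c, t in tasks)
            return utilization <= rm_utilization_bound(len(tasks)), utilization

        tasks = [(1, 4), (2, 8), (1, 20)]    # same time unit for C and T
        ok, u = rm_schedulable(tasks)
        print(f"U = {u:.3f}, bound = {rm_utilization_bound(len(tasks)):.3f}, schedulable: {ok}")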

    Overview of the Lambda-* Performance Reasoning Frameworks

    The Predictable Assembly from Certifiable Code (PACC) Initiative at the Carnegie Mellon Software Engineering Institute is developing methods and technologies to enable the production of software with predictable behavior by making the application of analytic methods accessible to software engineering practitioners. The use of reasoning frameworks is a means to achieving this goal. A reasoning framework is a packaging of an analysis theory along with other important elements that are needed for its application, such as methods for creating analysis models and evaluating them. Lambda-* is a suite of performance reasoning frameworks founded on the principles of Generalized Rate Monotonic Analysis (GRMA) for predicting the average and worst-case latency of periodic and stochastic tasks in real-time systems. Lambda-* can be applied to many different uniprocessor real-time systems having a mix of tasks with hard and soft deadlines and periodic and stochastic event interarrivals. Examples include embedded control systems (e.g., avionic, automotive, robotic) and multimedia systems (e.g., audio mixing). This report provides an overview of the Lambda-* performance reasoning frameworks, their current capabilities, and ongoing research. The Lambda-* reasoning frameworks have been implemented as part of the PACC Starter Kit (PSK), a development environment that integrates a collection of technologies to enable the development of software with predictable runtime behavior.
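
    To indicate the style of prediction these frameworks package, the sketch below runs the standard worst-case response time analysis for fixed-priority periodic tasks (the exact counterpart of the utilization-bound test above). The task set is hypothetical and the code is not the Lambda-* implementation; the actual frameworks also cover stochastic arrivals and average-case latency, which this sketch does not.

        # Worst-case response time analysis for fixed-priority periodic tasks.
        # tasks: list of (C, T) pairs, highest priority first, deadlines = periods.

        import math

        def response_time(task_index, tasks):
            c_i, t_i = tasks[task_index]
            r = c_i
            while True:
                interference = sum(math.ceil(r / t_j) * c_j
                                   for c_j, t_j in tasks[:task_index])
                r_next = c_i + interference
                if r_next == r:
                    return r          # converged: worst-case response time
                if r_next > t_i:
                    return None       # deadline (= period) would be missed
                r = r_next

        tasks = [(1, 4), (2, 8), (3, 20)]
        for i, (c, t) in enumerate(tasks):
            print(f"task {i}: C={c}, T={t}, worst-case response time = {response_time(i, tasks)}")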

    An Optimal Real-Time Voltage and Frequency Scaling for Uniform Multiprocessors

    Power consumption is an increasing concern in real-time systems that operate on battery power or require heat dissipation to keep the system at its operating temperature. Today, most processors allow software to change their frequency and voltage of operation to reduce their power consumption. Frequency scaling in real-time systems must be done in a way that ensures that the tasks' deadlines are met. In this paper we present the Growing Minimum Frequency (GMF) algorithm for voltage and frequency scaling on uniform multiprocessors for real-time systems. The algorithm runs in polynomial time and computes the optimal voltage and frequency assignment, achieving better power efficiency than previous algorithms. We present the optimality proof and evaluate the practical improvement over previous algorithms with simulated task sets. Our evaluation shows up to 30% power efficiency improvement over previous algorithms.
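
    The sketch below is a much-simplified uniprocessor illustration of the underlying trade-off, not the GMF algorithm itself: under EDF with implicit deadlines, a task set with utilization U at the maximum frequency remains schedulable at any frequency of at least U times f_max, and dynamic power falls roughly with the cube of frequency when voltage is scaled along with it. All numbers are hypothetical.

        # Simplified single-processor frequency scaling example (not GMF).

        def minimum_feasible_frequency(tasks, f_max):
            """tasks: (worst-case cycles, period in seconds) pairs; returns the lowest
            frequency (Hz) keeping EDF utilization at or below 1."""
            utilization_at_fmax = sum(cycles / (f_max * period) for cycles, period in tasks)
            return utilization_at_fmax * f_max

        tasks = [(2e6, 0.010), (5e6, 0.050)]     # hypothetical cycle demands and periods
        f_max = 1e9                              # 1 GHz
        f_min = minimum_feasible_frequency(tasks, f_max)
        print(f"minimum feasible frequency: {f_min / 1e6:.0f} MHz")
        print(f"approximate dynamic power ratio vs. f_max: {(f_min / f_max) ** 3:.3f}")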

    Statistical-Based WCET Estimation and Validation

    In this paper we present a measurement-based approach that produces both a WCET (Worst-Case Execution Time) estimate and a prediction of the probability that a future execution time will exceed that estimate. Our statistical approach uses extreme value theory to build a model of the tail behavior of the measured execution times. We validate our approach using an industrial data set comprising over 150 sampled components and nearly 200 million sample execution times. Each trace is divided into two segments, with one used to make the WCET estimate and the second used to check our prediction of the fraction of future execution time samples that exceed that estimate. We show that, compared to WCET estimates derived from the worst-case observed time, our WCET estimates significantly improve the ability to predict the probability that the estimate is exceeded.
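
    The statistical idea can be sketched in Python with NumPy and SciPy: fit a Generalized Pareto distribution to exceedances over a high threshold, derive a WCET estimate from a target exceedance probability, and check that prediction against a held-out segment. The data below are synthetic and the threshold and probability levels arbitrary; this is the general peaks-over-threshold recipe, not the paper's exact procedure or industrial data set.

        # Extreme value theory sketch for WCET estimation (synthetic data, illustrative).

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        samples = rng.lognormal(mean=3.0, sigma=0.25, size=100_000)   # fake execution times

        estimate_set, validate_set = samples[:50_000], samples[50_000:]

        threshold = np.quantile(estimate_set, 0.99)                   # peaks over threshold
        excesses = estimate_set[estimate_set > threshold] - threshold
        shape, loc, scale = genpareto.fit(excesses, floc=0.0)

        # WCET estimate at a 1-in-100,000 target exceedance probability:
        # P(X > x) = P(X > threshold) * P(excess > x - threshold) = 0.01 * 0.001
        wcet_estimate = threshold + genpareto.ppf(1 - 0.001, shape, loc=loc, scale=scale)
        predicted_exceedance = 0.01 * genpareto.sf(wcet_estimate - threshold, shape, loc=loc, scale=scale)
        observed_exceedance = np.mean(validate_set > wcet_estimate)
        print(f"WCET estimate: {wcet_estimate:.1f}")
        print(f"predicted exceedance: {predicted_exceedance:.1e}, observed: {observed_exceedance:.1e}")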

    Applicability of General Scenarios to the Architecture Tradeoff Analysis Method

    The SEI has been developing a list of general scenarios to characterize quality attributes. The SEI has also been conducting Architecture Tradeoff Analysis Method (ATAM) evaluations. One output of an ATAM evaluation is a collection of scenarios that relate to quality attribute requirements for the specific system being evaluated. In this report, we compare the scenarios elicited from five ATAM evaluations with the general scenarios used to characterize the quality attributes. This effort was designed to validate the coverage of the existing set of general scenarios and to analyze trends in the risks uncovered in ATAM reports.

    Using Containers to Enforce Smart Constraints for Performance in Industrial Systems

    Today, software engineering is concerned less with individual programs than with large-scale networks of interacting programs. For large-scale networks, engineering problems emerge that go well beyond functional correctness (the purview of programming) and encompass equally crucial nonfunctional qualities such as security, performance, availability, and fault tolerance. A pivotal challenge, then, is to provide techniques to routinely construct systems that have predictable nonfunctional quality. These techniques impose constraints on the problem being solved and on the form solutions can take. This technical note shows how smart constraints can be embedded in software infrastructure, so that systems conforming to those constraints are predictable by construction.
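
    A small Python sketch of the idea, with hypothetical names rather than the technical note's actual infrastructure: the container accepts only components that declare the properties its performance analysis needs and rejects assemblies that violate the constraint, so any assembly it does accept is analyzable by construction.

        # Illustrative container that enforces a smart constraint at assembly time.

        class ConstraintViolation(Exception):
            pass

        class PredictableContainer:
            def __init__(self, utilization_bound=0.69):
                self.bound = utilization_bound
                self.components = []

            def register(self, name, wcet, period):
                if wcet is None or period is None:
                    raise ConstraintViolation(f"{name}: WCET and period must be declared")
                utilization = sum(c / t for _, c, t in self.components) + wcet / period
                if utilization > self.bound:
                    raise ConstraintViolation(f"{name}: utilization {utilization:.2f} exceeds bound")
                self.components.append((name, wcet, period))

        container = PredictableContainer()
        container.register("sensor", wcet=1, period=10)
        container.register("controller", wcet=2, period=20)
        try:
            container.register("logger", wcet=9, period=10)   # would overload the assembly
        except ConstraintViolation as err:
            print("rejected:", err)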

    Statistical Models for Empirical Component Properties and Assembly-Level Property Predictions: Toward Standard Labeling

    One risk inherent in the use of software components has been that the behavior of assemblies of components is discovered only after their integration. The objective of our work is to enable designers to use known (and certified) component properties as parameters to models that can be used to predict assembly-level properties. Our concern in this paper is with empirical component properties and compositional reasoning, rather than formal properties and reasoning. Empirical component properties must be measured; assessing the effectiveness of predictions based on these properties also involves measurement. This, in turn, introduces systematic and random measurement error. As a consequence, statistical models are needed to describe empirical component properties and predictions. In this position paper, we identify the statistical models that we have found useful in our research, and which we believe can form a basis for standard industry labels for component properties and prediction theories.
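
    One concrete form such a statistical label can take is a measured mean together with a prediction interval that reflects random measurement error. The sketch below, using synthetic latency measurements and a normal-error assumption that the paper does not necessarily make, computes a 95% prediction interval for the next observation of a component property.

        # Statistical label sketch: mean latency plus a 95% prediction interval.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        measurements = rng.normal(loc=12.0, scale=0.8, size=30)   # synthetic latencies (ms)

        n = len(measurements)
        mean = measurements.mean()
        s = measurements.std(ddof=1)

        t = stats.t.ppf(0.975, df=n - 1)
        half_width = t * s * np.sqrt(1 + 1 / n)    # prediction interval for one new sample
        print(f"label: {mean:.2f} ms, 95% prediction interval "
              f"[{mean - half_width:.2f}, {mean + half_width:.2f}] ms")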

    Enabling Predictable Assembly

    Demands for increased functionality, better quality, and faster time-to-market in software products continue to increase. Component-based development is the software industry’s response to these demands. The industry has developed technologies such as EJB and CORBA to assemble components that are created in isolation. Component technologies available today allow designers to plug components together, but do little to allow the developer to reason about how well they will play together. Predictable assembly focuses on issues related to assembling component-based systems that predictably meet their quality attribute requirements. This paper introduces prediction-enabled component technology (PECT) as a means of packaging predictable assembly as a deployable product. A PECT is the integration of a component technology with one or more analysis technologies. Analysis technologies support prediction of assembly properties and also identify required component properties and their certifiable descriptions. This report describes the major structures of a PECT. It then discusses the means of validating the predictive powers of a PECT, which provides measurably bounded trust in design-time predictions. Last, it demonstrates these concepts in an illustrative model problem: predicting the average end-to-end latency of a ‘soft’ real-time application built from off-the-shelf software components.
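
    The flavor of the model problem can be conveyed with a toy sketch: each component carries a measured latency label, the assembly-level prediction composes those labels along the invocation path, and validation compares the prediction with an observation of the assembled system. The components, numbers, and simple additive composition below are hypothetical and far cruder than the performance reasoning actually used in a PECT.

        # Toy assembly-level latency prediction from component labels (illustrative).

        component_labels = {        # hypothetical per-component mean latencies (ms)
            "reader": 2.1,
            "filter": 5.4,
            "writer": 1.7,
        }

        def predict_pipeline_latency(pipeline, labels):
            """Predicted average end-to-end latency of components invoked in sequence."""
            return sum(labels[name] for name in pipeline)

        predicted = predict_pipeline_latency(["reader", "filter", "writer"], component_labels)
        observed = 9.6              # hypothetical measurement of the assembled system
        error = abs(predicted - observed) / observed
        print(f"predicted {predicted:.1f} ms, observed {observed:.1f} ms, error {error:.1%}")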