
    Dynamic Model-based Management of Service-Oriented Infrastructure.

    Models are an effective tool for systems and software design. They allow software architects to abstract away irrelevant details. These qualities are also useful for the technical management of networks, systems, and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over a distributed, heterogeneous service infrastructure that enable its automated management. We propose using the managed system as a source of dynamically generated runtime models, and decomposing management processes into compositions of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back onto the system. We illustrate these concepts with a distributed service update operation.
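    To make the described pipeline concrete, the following is a minimal sketch under invented assumptions: every name (ServiceModel, observed_model, plan_update, apply_actions) is hypothetical and the paper does not publish this code. It only illustrates the idea of an instrumentation layer producing a runtime model, a model transformation planning abstract actions, and the same layer interpreting those actions on the system.

```python
# Illustrative sketch only; all names are hypothetical, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class ServiceModel:
    name: str
    version: str
    depends_on: list = field(default_factory=list)

def observed_model(inventory):
    """Instrumentation layer: build a runtime model of the live system."""
    return [ServiceModel(s["name"], s["version"], s.get("deps", [])) for s in inventory]

def plan_update(model, target_versions):
    """Model transformation: compare the runtime model against target versions
    and emit abstract management actions, independent of low-level details."""
    actions = []
    for svc in model:
        wanted = target_versions.get(svc.name)
        if wanted and wanted != svc.version:
            actions.append(("update", svc.name, svc.version, wanted))
    return actions

def apply_actions(actions):
    """Instrumentation layer again: interpret the planned actions on the system."""
    for kind, name, old, new in actions:
        print(f"{kind} {name}: {old} -> {new}")  # stand-in for real deployment calls

# Example: a distributed service update, as in the paper's illustration.
model = observed_model([{"name": "billing", "version": "1.2", "deps": ["db"]},
                        {"name": "db", "version": "5.0"}])
apply_actions(plan_update(model, {"billing": "1.3"}))
```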

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigms. We propose a basis of common terminology and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations and approaches of these paradigms, shed light on the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions.
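    As a concrete stand-in for the K-means Ogre mentioned above, here is a minimal NumPy sketch of the kernel; the paper benchmarks existing HPC and Hadoop implementations rather than this code, which is included only to show what the workload computes.

```python
# Minimal K-means kernel, for illustration of the "Ogre" only.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: label each point with its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated 2-D blobs, clustered into k = 2 groups.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
centers, labels = kmeans(pts, k=2)
print(centers)
```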

    Size Matters: Microservices Research and Applications

    In this chapter we offer an overview of microservices, providing the introductory information that a reader should know before continuing with this book. We introduce the idea of microservices and discuss some of the current research challenges and real-life software applications where the microservice paradigm plays a key role. We have identified a set of areas where both researchers and developers can propose new ideas and technical solutions.

    Domain-Type-Guided Refinement Selection Based on Sliced Path Prefixes

    Abstraction is a successful technique in software verification, and interpolation on infeasible error paths is a successful approach to automatically detect the right level of abstraction in counterexample-guided abstraction refinement. Because the interpolants have a significant influence on the quality of the abstraction, and thus the effectiveness of the verification, an algorithm for deriving the best possible interpolants is desirable. We present an analysis-independent technique that makes it possible to extract several alternative sequences of interpolants from one given infeasible error path, if there are several reasons for infeasibility in the error path. We take as input the given infeasible error path and apply a slicing technique to obtain a set of error paths that are more abstract than the original error path but still infeasible, each for a different reason. The (more abstract) constraints of the new paths can be passed to a standard interpolation engine in order to obtain a set of interpolant sequences, one for each new path. The analysis can then choose from this set of interpolant sequences and select the most appropriate one, instead of being bound to the single interpolant sequence that the interpolation engine would normally return. For example, we can select based on the domain types of variables in the interpolants, prefer to avoid loop counters, or compare with templates for potential loop invariants, and thus control what kind of information occurs in the abstraction of the program. We implemented the new algorithm in the open-source verification framework CPAchecker and show that our proof-technique-independent approach yields a significant improvement in the effectiveness and efficiency of the verification process.
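    The selection step described above can be pictured with a toy heuristic. The sketch below is not CPAchecker code (which is written in Java); the scoring function and the representation of interpolants as sets of variable names are invented purely to illustrate preferring sequences that avoid loop counters.

```python
# Toy illustration only; not the paper's implementation or heuristic.
def score(interpolant_sequence, loop_counters):
    """Count how often loop-counter variables appear across the sequence."""
    return sum(len(interp & loop_counters) for interp in interpolant_sequence)

def select(sequences, loop_counters):
    """Prefer the interpolant sequence that mentions loop counters least,
    so the abstraction tracks more stable facts (e.g. potential invariants)."""
    return min(sequences, key=lambda seq: score(seq, loop_counters))

# Two alternative sequences extracted from sliced prefixes of one infeasible path;
# each interpolant is modelled here simply as the set of variables it mentions.
seq_a = [{"i"}, {"i", "n"}]        # talks about the loop counter i
seq_b = [{"flag"}, {"flag", "n"}]  # talks about a status flag instead
print(select([seq_a, seq_b], loop_counters={"i"}))  # -> seq_b
```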

    Towards ontology based event processing


    Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis

    Even with impressive advances in automated formal methods, certain problems in system verification and synthesis remain challenging. Examples include the verification of quantitative properties of software involving constraints on timing and energy consumption, and the automatic synthesis of systems from specifications. The major challenges include environment modeling, incompleteness in specifications, and the complexity of the underlying decision problems. This position paper proposes sciduction, an approach that tackles these challenges by integrating inductive inference, deductive reasoning, and structure hypotheses. Deductive reasoning, which leads from general rules or concepts to conclusions about specific problem instances, includes techniques such as logical inference and constraint solving. Inductive inference, which generalizes from specific instances to yield a concept, includes algorithmic learning from examples. Structure hypotheses are used to define the class of artifacts, such as invariants or program fragments, generated during verification or synthesis. Sciduction constrains inductive and deductive reasoning using structure hypotheses and actively combines the two: for instance, deductive techniques generate examples for learning, and inductive reasoning is used to guide the deductive engines. We illustrate this approach with three applications: (i) timing analysis of software, (ii) synthesis of loop-free programs, and (iii) controller synthesis for hybrid systems. Some future applications are also discussed.
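    A deliberately tiny guess-and-check loop can illustrate how the deductive and inductive steps interact in this style. The sketch below is not the paper's algorithm: the structure hypothesis (programs of the form x + c for a constant c) is invented for the example, and a real instantiation would use an SMT solver for the checking step.

```python
# Toy sketch of an inductive/deductive loop under an invented structure hypothesis.
def spec(x):
    """The specification the synthesized program must satisfy."""
    return x + 7

def check(c, domain=range(-50, 50)):
    """'Deductive' step (stand-in for an SMT solver): look for a counterexample
    to the candidate program lambda x: x + c over a bounded domain."""
    for x in domain:
        if x + c != spec(x):
            return x          # counterexample found
    return None               # candidate verified on this domain

def learn(examples):
    """'Inductive' step: generalize the constant c from input/output examples,
    using the structure hypothesis that the program has the form x + c."""
    x, y = examples[-1]
    return y - x

examples = [(0, spec(0))]
while True:
    c = learn(examples)
    cex = check(c)
    if cex is None:
        break
    examples.append((cex, spec(cex)))   # counterexamples feed the learner
print("synthesized: lambda x: x +", c)
```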