173 research outputs found
A Generic Model Driven Methodology for Extending Component Models
Software components have interesting properties for the development of scientific applications, such as easing code reuse and code coupling. In classical component models, however, component assemblies remain tightly coupled to the execution resources they target. Dedicated concepts have therefore been proposed to abstract assemblies from resources and to enable high-performance component implementations. These concepts have not achieved widespread use, mainly because of the lack of a suitable approach for extending component models. Existing approaches -- based on ad-hoc modifications of component run-times or compilation chains -- are complex, difficult to port from one implementation to another, and prevent mixing distinct extensions in a single model. An interesting trend for separating application logic from the underlying execution resources exists; it is based on meta-modeling and on the manipulation of the resulting models. This report studies how a model-driven approach could be applied to implement abstract concepts in component models. The proposed approach is based on a two-step transformation from an abstract model to a concrete one. In the first step, all abstract concepts of the source model are rewritten using the limited set of abstract concepts of an intermediate model. In the second step, resources are taken into account to transform these intermediate concepts into concrete ones. A prototype implementation is described to evaluate the feasibility of this approach.
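The two-step transformation can be pictured as two table-driven rewrites: one from abstract to intermediate concepts, one from intermediate to concrete, resource-specific concepts. The sketch below is purely illustrative; every concept name and rule is an invented assumption, not the report's actual meta-model.

```python
# Minimal sketch of the two-step transformation described above.
# All concept names and rewrite rules are invented for illustration.

# Step 1: rewrite each abstract concept into a limited set of
# resource-independent intermediate concepts.
REWRITE_RULES = {
    "parallel_component": ["replicated_instance", "collective_port"],
    "shared_data_port": ["local_buffer", "sync_link"],
}

# Step 2: map intermediate concepts to concrete ones, taking the
# target execution resource into account.
CONCRETE_MAP = {
    ("replicated_instance", "cluster"): "mpi_process_group",
    ("collective_port", "cluster"): "mpi_collective_call",
    ("replicated_instance", "smp"): "thread_pool",
    ("collective_port", "smp"): "shared_memory_channel",
    ("local_buffer", "cluster"): "distributed_buffer",
    ("sync_link", "cluster"): "message_channel",
    ("local_buffer", "smp"): "heap_buffer",
    ("sync_link", "smp"): "mutex_link",
}

def transform(abstract_concepts, resource):
    # step 1: abstract -> intermediate
    intermediate = [i for c in abstract_concepts for i in REWRITE_RULES[c]]
    # step 2: intermediate -> concrete, resource-aware
    return [CONCRETE_MAP[(i, resource)] for i in intermediate]

print(transform(["parallel_component"], "cluster"))
# ['mpi_process_group', 'mpi_collective_call']
```

The point of the split is that step 1 is reusable across resources, while only the small step-2 table changes per target platform.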
Model-driven Fault-Tolerance Provisioning for Component-based Distributed Real-time Embedded Systems
Analysis and Development of Instrument Software Paradigms: Conception and Implementation of a New Instrument Control and Data Acquisition System, Proven by Material Scientific Applications
During the last 50 years, the quality of analysis methods in many scientific disciplines has been enhanced by electronic applications, automation and data processing. While the features, performance and usability of these processes have been continually enhanced, it is conspicuous that the majority of institutes operate their own proprietary software. This situation arises for both historical and financial reasons, plus a wish to retain autonomy, fuelled by the requirement for a system that remains compatible with both new and legacy hardware.
This thesis reviews the commonly used scientific software systems and their stakeholders and tries to identify generic problems. The demands on instrument systems are summarized in a requirement specification. Based on these requirements, a basic concept is developed that reflects the current state of the art in software design and which may provide a blueprint for instrument system architectures. The results are used to create a proof-of-concept implementation. Core to this approach is an application server that comes with a container, which makes use of the Inversion-of-Control pattern to loosely couple and execute components. These do not need to implement fixed interfaces and are thus decoupled from a specific use case. Components can, for example, be proxies that control and acquire data from legacy hardware, perform calculations, provide a human-machine interface or act as storage. They are dynamically wired to experiments using XML-based Assembly files. Both Assemblies and Components can be published using a central store on a collaboration platform and shared by the community. This increases reusability and allows the use of existing Assemblies with new hardware by simply replacing the hardware proxy modules.
Example components have been provided for access to legacy and new instrument hardware, the storage of results in the NeXus format, data reduction, simulation with McStas, the execution of customizable scans and the visualization of data.
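The core mechanism described above, components wired together at run time from an XML Assembly file via Inversion of Control, can be sketched in a few lines. The element names, attributes and component classes below are invented for illustration and are not the thesis's actual Assembly schema.

```python
# Illustrative sketch of XML-driven component wiring in the spirit of
# the Assembly files described above; all names are invented.
import xml.etree.ElementTree as ET

ASSEMBLY = """
<assembly>
  <component id="motor" class="HardwareProxy"/>
  <component id="scan"  class="ScanController"/>
  <wire from="scan" to="motor"/>
</assembly>
"""

class HardwareProxy:
    # stands in for a proxy driving legacy instrument hardware
    def move(self, pos):
        return f"moved to {pos}"

class ScanController:
    # a component that drives whatever targets it is wired to
    def __init__(self):
        self.targets = []
    def run(self, positions):
        return [t.move(p) for t in self.targets for p in positions]

REGISTRY = {"HardwareProxy": HardwareProxy, "ScanController": ScanController}

def load_assembly(xml_text):
    """Instantiate and wire components from an Assembly description."""
    root = ET.fromstring(xml_text)
    comps = {c.get("id"): REGISTRY[c.get("class")]()
             for c in root.findall("component")}
    for w in root.findall("wire"):
        comps[w.get("from")].targets.append(comps[w.get("to")])
    return comps

comps = load_assembly(ASSEMBLY)
print(comps["scan"].run([1, 2]))  # ['moved to 1', 'moved to 2']
```

Swapping `HardwareProxy` for a proxy to new hardware requires only a change in the Assembly file, which is the reusability argument made above.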
Component-based control system development for agile manufacturing machine systems
It is now commonly accepted that manufacturers of the 21st century, including machine suppliers and system integrators, will need to compete in global marketplaces that are frequently shifting and fragmenting, with new technologies continuously emerging. Future production machines and manufacturing systems need to offer the "agility" required to respond to product changes and the ability to reconfigure. The primary aim of this research is to advance studies in machine control system design, in the context of the European project VIR-ENG - "Integrated Design, Simulation and Distributed Control of Agile Modular Machinery".
Design-time performance analysis of component-based real-time systems
In current real-time systems, performance metrics are among the most challenging properties to specify, predict and measure. Performance properties depend on various factors, such as the environmental context, load profile, middleware, operating system, hardware platform and the sharing of internal resources. Performance failures and unsatisfied performance requirements cause delays, cost overruns, and even the abandonment of projects. To avoid such performance-related project failures, the performance properties should be obtained and analyzed already at the early design phase of a project. In this thesis we employ principles of component-based software engineering (CBSE), which enable building software systems from individual components. The advantage of CBSE is that individual components can be modeled, reused and traded. The main objective of this thesis is to develop a method that enables prediction of the performance properties of a system, based on the performance properties of the involved individual components. The prediction method serves rapid prototyping and performance analysis of the architecture or related alternatives, without the usual testing and implementation stages. The involved research questions are as follows. How should the behaviour and performance properties of individual components be specified in order to enable automated composition of these properties into an analyzable model of a complete system? How can the models of individual components be synthesized into a model of a complete system in an automated way, such that the resulting system model can be analyzed against the performance properties? The thesis presents a new framework called DeepCompass, which realizes the concept of predictable assembly throughout all phases of the system design. The cornerstones of the framework are the composable models of individual software components and hardware blocks.
The models are specified at the component development time and shipped in a component package. At the component composition phase, the models of the constituent components are synthesized into an executable system model. Since the thesis focuses on performance properties, we introduce performance-related types of component models, such as behaviour, performance and resource models. The dynamics of the system execution are captured in scenario models. The essential advantage of the introduced models is that, through the behaviour of individual components and scenario models, the behaviour of the complete system is synthesized in the executable system model. Further simulation-based analysis of the obtained executable system model provides application-specific and system-specific performance property values. To support the performance analysis, we have developed a CARAT software toolkit that provides and automates the algorithms for model synthesis and simulation. Besides this, the toolkit provides graphical tools for designing alternative architectures and visualization of obtained performance properties. We have conducted an empirical case study on the use of scenarios in the industry to analyze the system performance at the early design phase. It was found that industrial architects make extensive use of scenarios for performance evaluation. Based on the inputs of the architects, we have provided a set of guidelines for identification and use of performance-critical scenarios. At the end of this thesis, we have validated the DeepCompass framework by performing three case studies on performance prediction of real-time systems: an MPEG-4 video decoder, a Car Radio Navigation system and a JPEG application. For each case study, we have constructed models of the individual components, defined the SW/HW architecture, and used the CARAT toolkit to synthesize and simulate the executable system model. 
The simulation provided the predicted performance properties, which we later compared with the actual performance properties of the realized systems. With respect to resource-usage properties and average task latencies, the prediction error proved to be within 30% of the actual performance. Concerning the peak loads on the processor nodes, the actual values were sometimes three times larger than the predicted values. In conclusion, the framework has proven to be effective for rapid architecture prototyping and performance analysis of a complete system. This is supported by the fact that, in the case studies, we spent on average no more than 4-5 days for a complete iteration cycle, including the design of several architecture alternatives. The framework can handle different architectural styles, which makes it widely applicable. A conceptual limitation of the framework is that it assumes that the models of individual components are already available at the design phase.
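The idea of predictable assembly, composing per-component performance models plus a scenario model into a system-level prediction, can be reduced to a toy calculation. The component names, numbers and composition rules below are invented assumptions for illustration; they are not DeepCompass or CARAT models.

```python
# Hedged sketch of predictable assembly: per-component performance
# models composed, via a scenario model, into system-level estimates.
# All names and numbers are invented for illustration.

# Per-component performance model: resources consumed per invocation.
COMPONENT_MODELS = {
    "decoder":  {"cpu_cycles": 400_000, "mem_kb": 512},
    "renderer": {"cpu_cycles": 250_000, "mem_kb": 1024},
}

# Scenario model: sequence of (component, invocations per frame).
SCENARIO = [("decoder", 1), ("renderer", 2)]

def predict(scenario, models, cpu_hz):
    # CPU demand composes additively along the scenario...
    cycles = sum(models[c]["cpu_cycles"] * n for c, n in scenario)
    # ...while memory composes as a maximum (peak usage).
    mem = max(models[c]["mem_kb"] for c, _ in scenario)
    return {"latency_ms": 1000 * cycles / cpu_hz, "peak_mem_kb": mem}

print(predict(SCENARIO, COMPONENT_MODELS, cpu_hz=100_000_000))
# {'latency_ms': 9.0, 'peak_mem_kb': 1024}
```

Even this toy shows why peak loads are harder to predict than averages, as reported above: additive composition smooths out the bursts that a simple maximum does not capture.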
Emergence through conflict: the Multi-Disciplinary Design System (MDDS)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2009. Includes bibliographical references (p. 413-430). This dissertation proposes a framework and a group of systematic methodologies to construct a computational Multi-Disciplinary Design System (MDDS) that can support the design of complex systems within a variety of domains. The way in which the resulting design system is constructed, and the capabilities it brings to bear, are totally different from the methods used in traditional sequential design. The MDDS embraces diverse areas of research that include design science, systems theory, artificial intelligence, design synthesis and generative algorithms, mathematical modeling and disciplinary analyses, optimization theory, data management and model integration, and experimental design, among many others. There are five phases to generate the MDDS. These phases involve decomposition, formulation, modeling, integration, and exploration. They are not carried out sequentially, but rather in a continuous move back and forth between the different phases. The process of building the MDDS begins with a top-down decomposition of a design concept. The design, seen as an object, is decomposed into its components and aspects, while the design, seen as a process, is decomposed into developmental levels and design activities. Then, based on the process decomposition, the architecture of the MDDS is formulated into hierarchical levels, each of which comprises a group of design cycles that include design modules at different degrees of abstraction. Based on the design-object decomposition, the design activities, which include synthesis, analysis, evaluation and optimization, are modeled within the design modules. Subsequently, through a bottom-up approach, the design modules are integrated into a data-flow network.
This network forms the MDDS as an integrated system that acts as a holistic, structured functional unit that explores the design space in search of satisfactory solutions. The MDDS emergent properties are not detectable through the properties and behaviors of its parts, and can only be enucleated through a holistic approach. The MDDS is an adaptable system that is continuously dependent on, and responsive to, the uncertainties of the design process. The evolving MDDS is thus characterized as a multi-level, multi-module, multi-variable and multi-resolution system. Although the MDDS framework is intended to be domain-independent, several MDDS prototypes were developed within this dissertation to generate exploratory building designs. By Anas Alfaris, Ph.D.
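The pipeline of design activities described above (synthesis, analysis, evaluation) integrated bottom-up into a network that searches for satisfactory solutions can be sketched as follows. Everything here, the module functions, the cost model and the candidates, is a hypothetical illustration, not a model from the dissertation.

```python
# Illustrative sketch (not from the dissertation): design modules
# composed into a small data-flow network that filters the design
# space for satisfactory (not optimal) solutions.

def synthesis(params):
    # generate a candidate design from a parameter set
    return {"floors": params["floors"], "width": params["width"]}

def analysis(design):
    # hypothetical disciplinary analysis: a toy cost estimate
    return design["floors"] * design["width"] * 10

def evaluation(cost, budget):
    # satisficing criterion rather than optimization
    return cost <= budget

def explore(candidates, budget):
    # run each candidate through the module network, keep satisficers
    ok = []
    for p in candidates:
        design = synthesis(p)
        if evaluation(analysis(design), budget):
            ok.append(design)
    return ok

candidates = [{"floors": f, "width": 20} for f in (2, 5, 10)]
print(explore(candidates, budget=1500))
# [{'floors': 2, 'width': 20}, {'floors': 5, 'width': 20}]
```

In the actual MDDS the modules sit in a much larger multi-level network and feed back into one another; the sketch only shows the data-flow composition idea.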
Increasing Reuse in Component Models through Genericity
A current limitation to component reusability is that component models are designed to describe a deployed assembly and thus bind the behavior of a component to the data types it manipulates. This paper studies the feasibility of supporting genericity within component models, including generic component and port types. The proposed approach works by extending the meta-model of an existing component model. It is applied to the SCA component model; a working prototype shows its feasibility.
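The reusability gain from generic component and port types can be illustrated with ordinary parametric polymorphism. The sketch below is a loose analogy in Python's `typing` module, not the paper's SCA meta-model extension; `Port` and `Buffer` are invented names.

```python
# Sketch of genericity in a component model: a component whose
# behaviour is independent of the data type its ports carry.
# Names are invented; this is an analogy, not the SCA extension.
from typing import Generic, TypeVar

T = TypeVar("T")

class Port(Generic[T]):
    """A typed port; T is the data type it transports."""
    def __init__(self):
        self.value = None
    def send(self, v: T):
        self.value = v
    def receive(self) -> T:
        return self.value

class Buffer(Generic[T]):
    """A reusable component: defined once, instantiated at many types."""
    def __init__(self):
        self.inp: "Port[T]" = Port()
        self.out: "Port[T]" = Port()
    def step(self):
        # behaviour does not depend on T, so the component is reusable
        self.out.send(self.inp.receive())

# The same component definition instantiated at a concrete data type.
b = Buffer[int]()
b.inp.send(42)
b.step()
print(b.out.receive())  # 42
```

Without genericity, one `Buffer` variant per data type would have to be written and deployed, which is exactly the reuse limitation the paper targets.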
Infrastructure for Deployment of Heterogeneous Component-based Applications
Deployment is a process which involves all actions performed with an application after it is released. Traditionally, deployment has been addressed for each component model separately (if at all), even though most of the concepts are the same. The Deployment and Configuration of Component-based Distributed Applications Specification released by the OMG proposes a unified approach that can be tailored to different component models. This thesis focuses on the execution phases of the deployment process. It presents a generic deployment runtime based on the OMG specification. The main objective is to elaborate support for multiple component models and, subsequently, support for heterogeneous applications consisting of components implemented in different component models. This has been achieved through a system of extensions which isolates component-model specifics from the runtime. Even though the OMG specification was not originally intended to support heterogeneous applications, the implementation deviates from it only in a few points. In all such cases, the thesis presents an analysis of the situation and the rationale for the deviation.
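The extension mechanism described above, a generic runtime that delegates component-model specifics to pluggable extensions, follows a familiar plugin pattern. The sketch below is an invented illustration; the class and method names are not from the thesis or the OMG specification.

```python
# Hedged sketch of isolating component-model specifics behind an
# extension interface, so one runtime can deploy heterogeneous
# applications. All names are invented for illustration.
from abc import ABC, abstractmethod

class ModelExtension(ABC):
    """One extension per supported component model."""
    @abstractmethod
    def launch(self, component_name: str) -> str: ...

class EjbExtension(ModelExtension):
    def launch(self, component_name):
        return f"EJB container started {component_name}"

class SofaExtension(ModelExtension):
    def launch(self, component_name):
        return f"SOFA runtime started {component_name}"

class DeploymentRuntime:
    """Generic runtime: knows deployment plans, not component models."""
    def __init__(self):
        self.extensions = {}
    def register(self, model, ext):
        self.extensions[model] = ext
    def deploy(self, plan):
        # a heterogeneous plan mixes components from several models
        return [self.extensions[model].launch(name) for name, model in plan]

rt = DeploymentRuntime()
rt.register("ejb", EjbExtension())
rt.register("sofa", SofaExtension())
print(rt.deploy([("Billing", "ejb"), ("Logger", "sofa")]))
```

The runtime never branches on a model name itself; adding a new component model means registering one more extension, which is the isolation the thesis aims for.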