
    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture as an integration and platform technology form a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of model-driven developed service systems.
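    As a rough illustration of the kind of measurement the abstract describes, the sketch below stores service invocations as time-stamped (valid-time) records and aggregates them into performance metrics; all class and function names are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: empirically measured service invocations stored as
# valid-time records and aggregated into performance metrics. Hypothetical names.
from dataclasses import dataclass
from statistics import mean

@dataclass
class InvocationRecord:
    service: str   # name of the invoked service operation
    start: float   # valid-time start of the invocation (seconds)
    end: float     # valid-time end of the invocation (seconds)

    @property
    def response_time(self) -> float:
        return self.end - self.start

def metrics_in_window(records, t_from, t_to):
    """Mean response time and throughput for invocations inside [t_from, t_to]."""
    in_window = [r for r in records if r.start >= t_from and r.end <= t_to]
    if not in_window:
        return {"mean_response_time": None, "throughput": 0.0}
    return {
        "mean_response_time": mean(r.response_time for r in in_window),
        "throughput": len(in_window) / (t_to - t_from),  # invocations per second
    }

# Example: three measured invocations of a composed service
log = [
    InvocationRecord("orderService", 0.00, 0.12),
    InvocationRecord("orderService", 0.50, 0.61),
    InvocationRecord("paymentService", 0.70, 1.05),
]
print(metrics_in_window(log, 0.0, 2.0))
```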

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture as an integration and platform technology form a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.

    Detecting the Onset of Dementia using Context-Oriented Architecture

    In the last few years, Aspect-Oriented Software Development (AOSD) and Context-Oriented Software Development (COSD) have become interesting alternatives for the design and construction of self-adaptive software systems. An analysis of these technologies shows that they all employ the principle of separation of concerns, Model-Driven Architecture (MDA) and Component-Based Software Development (CBSD) for building high-quality software systems. In general, the ultimate goal of these technologies is to reduce development cost and effort while improving the adaptability and dependability of software systems. COSD has emerged as a generic development paradigm for constructing self-adaptive software by integrating MDA with a context-oriented component model. The self-adaptive applications are developed using a Context-Oriented Component-based Application Model-Driven Architecture (COCA-MDA), which generates an Architecture Description Language (ADL) description presenting the architecture as a component-based software system. COCA-MDA enables developers to modularise the application based on its context-dependent behaviours and to separate the context-dependent functionality from the context-free functionality of the application. In this article, we study the impact of the decomposition mechanism performed in MDA approaches on software self-adaptability. We argue that a significant advance in software modularity based on context information can increase software adaptability and improve performance and modifiability.
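    The separation of context-dependent from context-free functionality can be pictured with a small sketch. The Python below is purely illustrative and uses hypothetical names rather than the COCA-MDA artifacts: the context-free core delegates to pluggable behaviour components that are swapped when the observed context changes.

```python
# Illustrative sketch only (hypothetical names, not the COCA-MDA API): the
# context-free core delegates context-dependent behaviour to pluggable
# components selected at runtime from the observed context.
from abc import ABC, abstractmethod

class MonitoringBehaviour(ABC):
    """Context-dependent part: one component per context of interest."""
    @abstractmethod
    def assess(self, sensor_reading: float) -> str: ...

class HomeBehaviour(MonitoringBehaviour):
    def assess(self, sensor_reading: float) -> str:
        return "normal" if sensor_reading < 0.7 else "alert caregiver"

class OutdoorBehaviour(MonitoringBehaviour):
    def assess(self, sensor_reading: float) -> str:
        return "normal" if sensor_reading < 0.4 else "alert caregiver and share location"

class MonitoringApp:
    """Context-free core: unaware of which behaviour variant is active."""
    def __init__(self):
        self._behaviours = {"home": HomeBehaviour(), "outdoor": OutdoorBehaviour()}
        self._active = self._behaviours["home"]

    def on_context_change(self, context: str):
        # Adaptation: swap the context-dependent component; the core stays untouched.
        self._active = self._behaviours[context]

    def handle(self, sensor_reading: float) -> str:
        return self._active.assess(sensor_reading)

app = MonitoringApp()
print(app.handle(0.5))           # home context
app.on_context_change("outdoor")
print(app.handle(0.5))           # same reading, different decision outdoors
```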

    GRAPHICAL MODELING AND SIMULATION OF A HYBRID HETEROGENEOUS AND DYNAMIC SINGLE-CHIP MULTIPROCESSOR ARCHITECTURE

    A single-chip, hybrid, heterogeneous, and dynamic shared-memory multiprocessor architecture is being developed which may be used for real-time and non-real-time applications. This architecture can execute any application described by a dataflow (process flow) graph of any topology; it can also dynamically reconfigure its structure at the node and processor-architecture levels and reallocate its resources to maximize performance and to increase reliability and fault tolerance. Dynamic change in the architecture is triggered by changes in parameters such as application input data rates, process execution times, and process request rates. The architecture is a Hybrid Data/Command Driven Architecture (HDCA). It operates as a dataflow architecture, but at the process level rather than the instruction level. This thesis focuses on the development, testing and evaluation of new graphical software (hdca), developed first to perform a static resource allocation for the architecture to meet the timing requirements of an application, and then to simulate the architecture executing the application using the statically assigned resources and parameters. While simulating the architecture executing an application, the software graphically and dynamically displays parameters and mechanisms important to the architecture's operation and performance. The new graphical software is able to show the system- and node-level dynamic capability of the HDCA. The newly developed software can model a fixed or varying input data rate, and it also allows fault-tolerance analysis of the architecture.
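    A toy sketch of the process-level dataflow idea, with hypothetical names and a deliberately naive policy: a node's processor allocation is increased when the observed input data rate outgrows its service capacity, loosely mirroring the dynamic reallocation the simulator visualises.

```python
# Illustrative sketch only (hypothetical names): a process-level dataflow node
# whose processor allocation grows when the input data rate outpaces its
# service rate, mimicking HDCA-style dynamic resource reallocation.
class ProcessNode:
    def __init__(self, name, service_rate_per_proc, processors=1):
        self.name = name
        self.service_rate_per_proc = service_rate_per_proc  # tokens/s per processor
        self.processors = processors
        self.queue = 0

    def step(self, arrivals):
        # One simulated second: enqueue arrivals, serve what capacity allows.
        self.queue += arrivals
        served = min(self.queue, self.processors * self.service_rate_per_proc)
        self.queue -= served
        return served

def reallocate(node, free_pool):
    # Naive policy: grab a spare processor whenever the backlog exceeds
    # one second of current service capacity.
    if free_pool and node.queue > node.processors * node.service_rate_per_proc:
        node.processors += 1
        free_pool.pop()

node, spare_processors = ProcessNode("filter", service_rate_per_proc=5), [1, 2]
for t, arrivals in enumerate([4, 4, 12, 12, 12]):   # input data rate rises at t=2
    node.step(arrivals)
    reallocate(node, spare_processors)
    print(f"t={t}s queue={node.queue} processors={node.processors}")
```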

    NL4Py: Agent-Based Modeling in Python with Parallelizable NetLogo Workspaces

    NL4Py is NetLogo controller software for Python, for the rapid, parallel execution of NetLogo models. NL4Py provides both headless (no graphical user interface) and GUI NetLogo workspace control through Python. Spurred on by the increasing availability of open-source computation and machine learning libraries on the Python Package Index, there is an increasing demand for such rapid, parallel execution of agent-based models through Python. NetLogo, being the language of choice for a majority of agent-based modeling driven research projects, requires integration with Python for researchers looking to perform statistical analyses of agent-based model output using these libraries. Unfortunately, until the recent introduction of PyNetLogo, and now NL4Py, such a controller was unavailable. This article provides a detailed introduction to the usage of NL4Py and explains its client-server software architecture, highlighting architectural differences from PyNetLogo. A step-by-step demonstration of global sensitivity analysis and parameter calibration of the Wolf Sheep Predation model is then performed through NL4Py. Finally, NL4Py's performance is benchmarked against PyNetLogo and its combination with IPyParallel, and shown to provide significant savings in execution time over both configurations.
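    A minimal usage sketch (not taken from the article): the NL4Py calls below follow the library's documented controller pattern, but the exact function names vary between NL4Py versions, so treat them as assumptions and check the release you install.

```python
# Usage sketch; the NL4Py API names (startServer, newNetLogoHeadlessWorkspace,
# openModel, command, report) are assumptions here and differ across versions.
import nl4py

nl4py.startServer("/path/to/NetLogo")          # point at a local NetLogo install

def run_once(initial_sheep):
    ws = nl4py.newNetLogoHeadlessWorkspace()   # headless: no GUI required
    ws.openModel("Wolf Sheep Predation.nlogo")
    ws.command(f"set initial-number-sheep {initial_sheep}")
    ws.command("setup")
    ws.command("repeat 100 [ go ]")
    return ws.report("count sheep")

# Many such workspaces can be driven in parallel, e.g. from a thread pool,
# which is where the reported speed-ups over a single workspace come from.
print([run_once(n) for n in (50, 100, 150)])
```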

    A Component-Based Middleware for a Reliable Distributed and Reconfigurable Spacecraft Onboard Computer

    Emerging applications for space missions require increasing processing performance from the onboard computers. DLR's project “Onboard Computer - Next Generation” (OBC-NG) develops a distributed, reconfigurable computer architecture to provide increased performance while maintaining the high reliability of classical spacecraft computer architectures. Growing system complexity requires an advanced onboard middleware that handles distributed (real-time) applications and error mitigation by reconfiguration. The OBC-NG middleware follows the Component-Based Software Engineering (CBSE) approach. Using composite components, applications and management tasks can easily be distributed and relocated on the processing nodes of the network. Additionally, reuse of components for future missions is facilitated. This paper presents the flexible middleware architecture, the composite component framework, the middleware services and the model-driven Application Programming Interface (API) design of OBC-NG. Tests are conducted to validate the middleware concept and to investigate the reconfiguration efficiency as well as the reliability of the system. A relevant use case shows the advantages of CBSE for the development of distributed reconfigurable onboard software.
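    A small illustrative sketch, with hypothetical names rather than the OBC-NG API, of error mitigation by reconfiguration: tasks packaged as components with a uniform lifecycle can be relocated by the middleware to another processing node when their host node is marked faulty.

```python
# Illustrative sketch only (hypothetical names, not the OBC-NG middleware API):
# components expose a uniform lifecycle so the middleware can relocate them
# when a processing node fails.
class Component:
    def __init__(self, name):
        self.name, self.node = name, None
    def start(self, node):
        self.node = node
    def stop(self):
        self.node = None

class Middleware:
    def __init__(self, nodes):
        self.healthy = set(nodes)
        self.deployed = {}                       # component -> node

    def deploy(self, comp, node):
        comp.start(node)
        self.deployed[comp] = node

    def node_failed(self, bad_node):
        """Error mitigation by reconfiguration: move affected components."""
        self.healthy.discard(bad_node)
        for comp, node in list(self.deployed.items()):
            if node == bad_node:
                comp.stop()
                target = next(iter(self.healthy))  # naive placement policy
                self.deploy(comp, target)

mw = Middleware(["node-A", "node-B", "node-C"])
cam = Component("payload-camera-control")
mw.deploy(cam, "node-A")
mw.node_failed("node-A")
print(cam.name, "now runs on", cam.node)
```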

    Master of Science

    To address the need to understand and optimize the performance of complex applications and to achieve sustained application performance across different architectures, we need performance models and tools that can quantify the theoretical performance and the resultant gap between theoretical and observed performance. This thesis proposes a benchmark-driven Roofline Model Toolkit to provide theoretical and achievable performance, and their resultant gap, for multicore, manycore, and accelerated architectures. Roofline micro benchmarks are specialized to quantify the behavior of different architectural features. Compared to previous work on performance characterization, these micro benchmarks focus on capturing the performance of each level of the memory hierarchy, along with thread-level parallelism (TLP), instruction-level parallelism (ILP), and explicit Single Instruction, Multiple Data (SIMD) parallelism, measured in the context of the compilers and runtime environment on the target architecture. We also developed benchmarks to explore detailed memory subsystem behaviors and to evaluate parallelization overhead. Beyond on-chip performance, we measure sustained Peripheral Component Interconnect Express (PCIe) throughput with four Graphics Processing Unit (GPU) memory management mechanisms. By combining results from the architecture characterization with the Roofline Model based solely on architectural specifications, this work offers insights for performance prediction of current and future architectures and their software systems. To that end, we instrument three applications and plot their resultant performance on the corresponding Roofline Model when run on a Blue Gene/Q architecture.
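    The Roofline bound itself is a one-line formula; the toolkit's contribution is measuring its machine-specific terms empirically. The sketch below evaluates the bound for a few arithmetic intensities, using made-up peak numbers as placeholders.

```python
# Roofline model: attainable performance is capped by either peak compute or
# peak memory bandwidth times arithmetic intensity. Peak values here are
# hypothetical placeholders standing in for empirically measured ceilings.
def roofline(arithmetic_intensity, peak_gflops, peak_bandwidth_gbs):
    """Attainable GFLOP/s = min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, peak_bandwidth_gbs * arithmetic_intensity)

PEAK_GFLOPS = 200.0        # hypothetical measured compute ceiling
PEAK_BW_GBS = 100.0        # hypothetical measured DRAM bandwidth

for ai in (0.25, 1.0, 2.0, 8.0):   # FLOPs per byte moved from DRAM
    bound = roofline(ai, PEAK_GFLOPS, PEAK_BW_GBS)
    regime = "memory-bound" if bound < PEAK_GFLOPS else "compute-bound"
    print(f"AI={ai:>4}: attainable {bound:6.1f} GFLOP/s ({regime})")
```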

    Non-functional property analysis using UML2.0 and model transformations

    Real-time embedded architectures consist of software and hardware parts. Meeting non-functional constraints (e.g., real-time constraints) greatly depends on the mappings from the system functionalities to software and hardware components. Thus, there is a strong demand for precise architecture and allocation modeling, amenable to performance analysis. The report proposes a model-driven approach for assessing the quality of allocations of system functionalities to the architecture. We consider two technical domains: the UML domain for the definition of the model elements (for both description and analysis), and an analysis domain, external to UML, used for formal verification. The report defines three meta-models, one for each domain, and provides automated transformations within and between these domains. Special attention is then paid to temporal property analysis, based on a particular analysis model: Modular and Hierarchical Time Petri Nets.
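    To make the transformation idea concrete, here is an illustrative sketch with hypothetical metamodels (not the report's): each action in a small UML-like activity chain, annotated with an execution-time bound, is mapped to a Time Petri Net fragment consisting of a place, a timed transition, and a successor place.

```python
# Illustrative sketch only (hypothetical metamodels): a UML-like activity chain
# with execution-time bounds is transformed into a Time Petri Net where each
# action becomes a timed transition between "done" places.
from dataclasses import dataclass, field

@dataclass
class UmlAction:
    name: str
    exec_time: tuple          # (min, max) execution-time constraint in ms

@dataclass
class TimePetriNet:
    places: list = field(default_factory=list)
    transitions: list = field(default_factory=list)   # (source place, interval, target place)

def uml_to_tpn(actions):
    net = TimePetriNet()
    prev_place = "start"
    net.places.append(prev_place)
    for a in actions:
        done = f"{a.name}_done"
        net.places.append(done)
        net.transitions.append((prev_place, a.exec_time, done))
        prev_place = done
    return net

workflow = [UmlAction("acquire", (2, 5)), UmlAction("filter", (1, 3)), UmlAction("actuate", (4, 8))]
print(uml_to_tpn(workflow).transitions)
```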
