
    Towards Integrated Variant Management in Global Software Engineering: An Experience Report

    In the automotive domain, customer demands and market constraints are increasingly realized by electric/electronic components and the corresponding software. Variant traceability in software product lines (SPLs) is crucial for tasks such as change impact analysis, especially in complex global software projects. In addition, traceability concepts must be extended by partly automated variant configuration mechanisms to handle restrictions and dependencies between variants. Such a variant configuration mechanism helps to reduce complexity when configuring a valid variant and to establish explicit documentation of dependencies between components. However, integrated variant management has not been sufficiently addressed so far. In particular, the growing number of software variants requires an examination of traceable and configurable software variants over the software lifecycle. This paper emphasizes variant traceability achievements in a large global software engineering project, elaborates on remaining challenges, and evaluates the industrial use of integrated variant management based on these experiences.
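
    As a rough illustration of the kind of partly automated configuration support described above, a candidate variant can be validated against requires/excludes constraints between components. The following is a minimal sketch; the component names and constraint sets are hypothetical and not taken from the project:

```python
# Minimal sketch of constraint-based variant configuration.
# Component names and constraints are hypothetical examples,
# not taken from the project described in the paper.

requires = {
    "adaptive_cruise": {"radar_ecu"},       # selecting A forces B
    "lane_keeping": {"camera_ecu"},
}
excludes = {
    ("base_engine_sw", "sport_engine_sw"),  # mutually exclusive variants
}

def validate(selection: set[str]) -> list[str]:
    """Return a list of human-readable constraint violations."""
    errors = []
    for comp in selection:
        for dep in requires.get(comp, ()):
            if dep not in selection:
                errors.append(f"{comp} requires {dep}")
    for a, b in excludes:
        if a in selection and b in selection:
            errors.append(f"{a} excludes {b}")
    return errors

print(validate({"adaptive_cruise", "lane_keeping", "camera_ecu"}))
# -> ['adaptive_cruise requires radar_ecu']
```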

    A Pattern-based Approach towards Modular Safety Analysis and Argumentation

    Safety standards recommend (if not dictate) performing many analyses during the concept phase of development, as well as the early adoption of multiple measures at the architectural design level. In practice, the reuse of architectural measures or safety mechanisms is widespread, especially in well-understood domains, as is reusing the corresponding safety cases that document and prove the fulfillment of the underlying safety goals. Safety cases in the automotive domain are not well integrated into architectural models and as such provide neither comprehensible and reproducible argumentation nor any evidence of argument correctness. Reuse is mostly ad hoc; loss of knowledge and traceability and a lack of consistency and process maturity are its most widely cited drawbacks. Using a simplified description of software functions and their most common error-management subtypes (avoidance, detection, handling, etc.), we propose to define a pattern library covering known solution algorithms and architectural measures/constraints in a seamless, holistic, model-based approach with corresponding tool support. Each pattern library entry would comprise the requirement the pattern covers and the architectural elements/measures/constraints required, and may include deployment or scheduling strategies as well as a supporting safety-case template, which would then be integrated into existing development environments. This paper explores this approach using an illustrative example.
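
    To make the idea of a pattern library concrete, each entry can be imagined as a record linking the covered requirement, the error-management subtype, the required architectural measures, and a safety-case template. A minimal sketch under that assumption (all field names and example content are illustrative, not from the paper):

```python
from dataclasses import dataclass, field
from enum import Enum

class ErrorMgmt(Enum):          # error-management subtypes named in the abstract
    AVOIDANCE = "avoidance"
    DETECTION = "detection"
    HANDLING = "handling"

@dataclass
class SafetyPattern:
    """One entry of the pattern library (field names are illustrative)."""
    name: str
    covered_requirement: str            # the requirement the pattern covers
    subtype: ErrorMgmt
    architectural_measures: list[str]   # elements/measures/constraints required
    safety_case_template: str           # argument template to instantiate
    deployment_hints: list[str] = field(default_factory=list)

library = [
    SafetyPattern(
        name="range_check",
        covered_requirement="Detect out-of-range sensor values",
        subtype=ErrorMgmt.DETECTION,
        architectural_measures=["plausibility monitor", "signal qualifier"],
        safety_case_template="G1: value range violations are detected ...",
    ),
]

# Retrieve all detection patterns, e.g. when arguing a detection goal:
detection = [p for p in library if p.subtype is ErrorMgmt.DETECTION]
```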

    Forecasting dose-time profiles of solar particle events using a dosimetry-based Bayesian forecasting methodology

    A dosimetry-based Bayesian methodology for forecasting astronaut radiation doses in deep space due to radiologically significant solar particle event proton fluences is developed. Three non-linear sigmoidal growth curves (Gompertz, Weibull, logistic) are used with hierarchical, non-linear regression models to forecast solar particle event dose-time profiles from doses obtained early in the development of the event. Since there are no detailed measurements of dose versus time for actual events, surrogate dose data are provided by calculational methods. Proton fluence data are used as input to the deterministic, coupled neutron-proton space radiation computer code, BRYNTRN, for transporting protons and their reaction products (protons, neutrons, ²H, ³H, ³He, and ⁴He) through aluminum shielding material and water. Calculated doses and dose rates for ten historical solar particle events are used as the input data by grouping similar historical solar particle events, using asymptotic dose and maximum dose rate as the grouping criteria. These historical data are then used to lend strength to predictions of dose and dose rate-time profiles for new solar particle events. Bayesian inference techniques are used to make parameter estimates and predictive forecasts. Because the numerical integrations needed to calculate posterior parameter distributions and posterior predictive distributions are difficult to perform directly, Markov chain Monte Carlo (MCMC) methods are used to sample from the posterior distributions. Hierarchical, non-linear regression models provide useful predictions of asymptotic dose and dose-time profiles for the November 8, 2000 and August 12, 1989 solar particle events. Predicted dose rate-time profiles are adequate for the November 8, 2000 solar particle event. Predictions of dose rate-time profiles for the August 12, 1989 solar particle event suffer due to its more complex dose rate-time profile. Model assessment indicates adequate fits of the data. Model comparison results clearly indicate a preference for the Weibull model for both events. Forecasts provide a valuable tool to space operations planners when making recommendations concerning operations in which radiological exposure might jeopardize personal safety or mission completion. This work demonstrates that Bayesian inference methods can be used to make forecasts of dose and dose rate-time profiles early in the evolution of solar particle events. Bayesian inference methods provide a coherent methodology for quantifying uncertainty, and hierarchical models provide a natural framework for predicting dose and dose rate-time profiles of new solar particle events.
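
    For intuition, the Weibull growth curve favoured by the model comparison can be written D(t) = D_inf · (1 − exp(−(t/τ)^k)), and posterior samples of its parameters can be drawn with MCMC. Below is a minimal sketch using a random-walk Metropolis sampler on synthetic data; the priors, step sizes, noise level, and data are illustrative assumptions, not the paper's hierarchical model or surrogate dose data:

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_dose(t, d_inf, tau, k):
    """Weibull growth curve for cumulative dose D(t)."""
    return d_inf * (1.0 - np.exp(-(t / tau) ** k))

# Synthetic "early event" dose observations (illustrative, not event data).
t_obs = np.linspace(1.0, 12.0, 12)              # hours into the event
true = dict(d_inf=20.0, tau=15.0, k=1.8)
y_obs = weibull_dose(t_obs, **true) + rng.normal(0.0, 0.2, t_obs.size)

def log_posterior(theta, sigma=0.2):
    d_inf, tau, k = theta
    if d_inf <= 0 or tau <= 0 or k <= 0:        # flat priors on positives
        return -np.inf
    resid = y_obs - weibull_dose(t_obs, d_inf, tau, k)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood

# Random-walk Metropolis over (d_inf, tau, k).
theta = np.array([10.0, 10.0, 1.0])
lp = log_posterior(theta)
samples = []
for i in range(20000):
    prop = theta + rng.normal(0.0, [0.5, 0.5, 0.05])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:                               # discard burn-in
        samples.append(theta.copy())

post = np.array(samples)
print("posterior mean of asymptotic dose D_inf:", post[:, 0].mean())
```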

    Curracurrong: a stream processing system for distributed environments

    Advances in technology have given rise to applications that are deployed on wireless sensor networks (WSNs), the cloud, and the Internet of things. Many emerging applications, including sensor-based monitoring, web traffic processing, and network monitoring, collect large amounts of data as unbounded sequences of events and process them to generate new sequences of events. Such applications need a programming model that can process large amounts of data with minimal latency; for this purpose, stream programming, among other paradigms, is ideal. However, stream programming needs to be adapted to meet the challenges inherent in running it in distributed environments. These challenges include the need for a modern domain-specific language (DSL), the placement of computations in the network to minimise energy costs, and timeliness in real-time applications. To overcome these challenges we developed a stream programming model that provides an easy-to-use programming interface, energy-efficient actor placement, and timeliness. This thesis presents Curracurrong, a stream data processing system for distributed environments. In Curracurrong, a query is represented as a stream graph of stream operators and communication channels. Curracurrong provides an extensible stream operator library and adapts to a wide range of applications. It uses an energy-efficient placement algorithm that optimises communication and computation. We extend the placement problem to support dynamically changing networks, and develop a dynamic program with polynomially bounded runtime to solve the placement problem. In many stream-based applications, real-time data processing is essential. We propose an approach that measures time delays in stream query processing; this model measures the total computational time from input to output of a query, i.e., the end-to-end delay.
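
    The query model, a stream graph of operators connected by channels with end-to-end delay measured from input to output, can be sketched generically as follows. This is not Curracurrong's actual API; the operator class and the timestamping scheme are illustrative assumptions:

```python
import time

class Operator:
    """A stream operator: transforms each event and forwards it downstream."""
    def __init__(self, fn, downstream=None):
        self.fn, self.downstream = fn, downstream

    def push(self, event):
        out = self.fn(event)
        if self.downstream is not None:
            self.downstream.push(out)

# Build a tiny query graph: source -> map -> sink.
delays = []
sink = Operator(lambda ev: delays.append(time.perf_counter() - ev["t_in"]))
mapper = Operator(lambda ev: {**ev, "value": ev["value"] * 2}, downstream=sink)

for v in range(1000):
    # Stamp each event on entry; the sink computes the end-to-end delay.
    mapper.push({"value": v, "t_in": time.perf_counter()})

print(f"mean end-to-end delay: {sum(delays) / len(delays) * 1e6:.1f} µs")
```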

    Model-Based Engineering of Collaborative Embedded Systems

    This Open Access book presents the results of the "Collaborative Embedded Systems" (CrESt) project, which aimed to adapt and complement the modeling techniques of the SPES development methodology to cope with the challenges posed by the dynamic structures of collaborative embedded systems (CESs). In order to manage the high complexity of the individual systems and of the interaction structures formed dynamically at runtime, advanced and powerful development methods are required that extend the current state of the art in the development of embedded systems and cyber-physical systems. The methodological contributions of the project support the effective and efficient development of CESs in dynamic and uncertain contexts, with special emphasis on the reliability and variability of individual systems and the creation of networks of such systems at runtime. The project was funded by the German Federal Ministry of Education and Research (BMBF), and the case studies are therefore selected from areas that are highly relevant for Germany's economy (automotive, industrial production, power generation, and robotics). The book also supports the digitalization of complex and transformable industrial plants in the context of the German government's "Industry 4.0" initiative, and the project results provide a solid foundation for implementing the German government's high-tech strategy "Innovations for Germany" in the coming years.

    Microsimulation as an Instrument to Evaluate Economic and Social Programmes

    In recent years, microsimulation models (MSMs) have been increasingly applied in quantitative analyses of the individual impacts of economic and social policies. This paper discusses the suitability of microsimulation as an instrument to analyze the main and side impacts of policies at the individual level by characterizing: the general approach and principles of the two main microsimulation approaches, static and dynamic (cross-section and life-cycle) microsimulation; the structure of MSMs with institutional regulations and behavioural responses; panel data and behavioural change; deterministic and stochastic microsimulation; and the 4M-strategy of combining microtheory, microdata, microestimation, and microsimulation, pinpointing applications and recent developments. To demonstrate the evaluation of economic and social programmes by microsimulation, two examples are briefly described: a dynamic (cross-section and life-cycle) microsimulation of the German retirement pension reform, and a combined static/dynamic microsimulation of the recent German tax reform with its behavioural impacts on the formal and informal economic activities of private households. The paper closes with concluding remarks about some future developments.
    Keywords: microsimulation; evaluation of economic and social-political programmes
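
    As a toy illustration of the static approach mentioned above, a policy rule is applied record by record to weighted microdata and the individual outcomes are aggregated. Both the households and the two-bracket tax schedule below are invented for illustration and bear no relation to the paper's German reform models:

```python
# Static microsimulation sketch: apply a policy rule record by record,
# then aggregate with survey weights. Data and tax schedule are invented.

households = [
    # (gross annual income in EUR, survey weight = households represented)
    (18_000, 950.0),
    (42_000, 1200.0),
    (95_000, 400.0),
]

def tax_old(income):
    return 0.25 * income                  # flat 25% (hypothetical baseline)

def tax_new(income):
    # Hypothetical reform: 15% up to 30,000 EUR, 35% above.
    return 0.15 * min(income, 30_000) + 0.35 * max(income - 30_000, 0)

total_change = sum(w * (tax_new(y) - tax_old(y)) for y, w in households)
print(f"aggregate revenue change: {total_change:,.0f} EUR")

for y, w in households:
    print(f"income {y:>7,}: tax change {tax_new(y) - tax_old(y):+9,.0f} EUR")
```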