
    Frequency- and time-domain stochastic analysis of lossy and dispersive interconnects in a SPICE-like environment

    This paper presents an improvement of the state-of-the-art polynomial chaos (PC) modeling of high-speed interconnects with parameter uncertainties via SPICE-like tools. While the previous model, owing to its mathematical formulation, was limited to lossless lines, the introduction of modified classes of polynomials yields a formulation that accounts for losses and dispersion as well. Thanks to this, the new implementation can also take full advantage of combining the PC technique with macromodels that accurately describe the interconnect properties. An application example, the stochastic analysis of an on-chip line, validates and demonstrates the improved method.
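    The core of the PC technique is expanding the stochastic response in an orthogonal polynomial basis, after which statistics follow directly from the expansion coefficients. As a minimal, self-contained sketch (using a made-up scalar response in place of the interconnect model, which the abstract does not specify), a Legendre expansion for a uniform parameter looks like this:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical response: attenuation-like quantity driven by a normalized
# uncertain parameter xi ~ Uniform(-1, 1). Illustrative only, not the
# paper's interconnect model.
def response(xi):
    return np.exp(-0.5 * (1.0 + 0.3 * xi))

order = 4
nodes, weights = legendre.leggauss(order + 1)  # Gauss-Legendre quadrature

# Project onto the Legendre PC basis for a uniform input:
# c_k = (2k+1)/2 * integral of f(xi) * P_k(xi) over [-1, 1].
coeffs = []
for k in range(order + 1):
    Pk = legendre.Legendre.basis(k)
    ck = (2 * k + 1) / 2.0 * np.sum(weights * response(nodes) * Pk(nodes))
    coeffs.append(ck)

# Statistics come straight from the coefficients:
# E[f] = c_0 and Var[f] = sum_{k>=1} c_k^2 / (2k + 1).
mean = coeffs[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
print(mean, var)
```

    A low expansion order suffices here because the response varies smoothly with the parameter; the paper's contribution is constructing modified bases so the same machinery applies to lossy, dispersive lines.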

    Impact on signal integrity of interconnect variabilities

    In this paper, literature results on the statistical simulation of lossy and dispersive interconnect networks with uncertain physical properties are extended to general nonlinear circuits. The approach is based on the expansion of circuit voltages and currents into polynomial chaos approximations. The derivation of deterministic circuit equivalents for nonlinear components makes it possible to retrieve the unknown expansion coefficients with a single circuit simulation, which can be carried out via standard SPICE-type solvers. These coefficients provide direct statistical information. The methodology allows the inclusion of arbitrary nonlinear elements and is validated via transmission-line networks terminated by diodes and driven by inverters.
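    To illustrate how coefficients from a single simulation "provide direct statistical information", suppose one SPICE-type run of the augmented deterministic circuit returned the PC coefficients of a node voltage in a Hermite basis (for a Gaussian parameter). The numbers below are invented for illustration; mean, variance, and even the full distribution then follow without further circuit simulations:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e

# Made-up PC coefficients of a node voltage in the probabilists' Hermite
# basis He_k, for a standard Gaussian parameter xi (illustrative values).
v_coeffs = np.array([1.20, 0.15, -0.02, 0.005])

# E[He_j He_k] = k! * delta_jk under the standard Gaussian weight, so:
mean = v_coeffs[0]
var = sum(c**2 * math.factorial(k) for k, c in enumerate(v_coeffs) if k > 0)

# The full distribution comes from cheaply sampling the PC surrogate,
# with no additional circuit simulations.
rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)
samples = hermite_e.hermeval(xi, v_coeffs)
print(mean, var, np.percentile(samples, 95))
```

    The expensive step is the one augmented circuit solve; everything statistical afterwards is closed-form or surrogate sampling.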

    Automatic allocation of safety requirements to components of a software product line

    Safety-critical systems developed as part of a product line must still comply with safety standards. Standards use the concept of Safety Integrity Levels (SILs) to drive the assignment of system safety requirements to components of a system under design. However, for a Software Product Line (SPL), the safety requirements that need to be allocated to a component may vary between products. Variation in design can indeed change the possible hazards incurred in each product and their causes, and can alter the safety requirements placed on individual components in different SPL products. Establishing common SILs for components of a large-scale SPL by considering all possible usage scenarios is desirable for economies of scale, but it also poses challenges to the safety engineering process. In this paper, we propose a method for automatic allocation of SILs to components of a product line. The approach is applied to a Hybrid Braking System SPL design.
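    A deliberately simplified sketch of the cross-product allocation idea (not the paper's algorithm, and with hypothetical product and component names): each product derives per-component SIL demands, and a component shared across products must satisfy the most stringent demand placed on it by any product that includes it.

```python
# Hypothetical Hybrid-Braking-System-like products with per-component
# SIL demands (component names and values are invented for illustration).
product_requirements = {
    "four_wheel_braking": {"wheel_node_controller": 4, "comms_bus": 3},
    "front_wheel_braking": {"wheel_node_controller": 3, "comms_bus": 2},
}

def allocate_sils(requirements):
    """Assign each shared component the highest SIL any product demands."""
    allocation = {}
    for sils in requirements.values():
        for component, sil in sils.items():
            allocation[component] = max(allocation.get(component, 0), sil)
    return allocation

print(allocate_sils(product_requirements))
# {'wheel_node_controller': 4, 'comms_bus': 3}
```

    The paper's contribution lies in automating this analysis over all usage scenarios of a large-scale SPL, where the hazard causes themselves vary per product; the max-over-products rule above only conveys the flavor of the problem.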

    Taming Uncertainty in the Assurance Process of Self-Adaptive Systems: a Goal-Oriented Approach

    Goals are first-class entities in a self-adaptive system (SAS), as they guide the self-adaptation. An SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. Uncertainty typically makes it infeasible to provide assurances for SAS goals exclusively at design time. This calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising: our approach is able to systematically tame multiple classes of uncertainty, and it is effective and efficient in providing assurances for the goals of self-adaptive systems.
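    The runtime side of the process can be pictured with a toy parametric formula. The sketch below assumes a serial composition of tasks whose overall reliability is the product of per-task success probabilities; the function name, parameters, and threshold are invented, and the actual formulae in the paper are derived by symbolic model checking of the annotated goal model, not hand-written like this:

```python
# Assumed parametric reliability formula for a serial task composition
# (a stand-in for a formula generated at design time by symbolic model
# checking; names and values are hypothetical).
def goal_reliability(p_collect, p_analyze, p_report):
    return p_collect * p_analyze * p_report

RELIABILITY_GOAL = 0.90  # illustrative bound from the goal model

# At runtime, monitored estimates resolve the uncertain parameters and
# the evaluated formula steers self-adaptation via the synthesized policies.
monitored = {"p_collect": 0.99, "p_analyze": 0.95, "p_report": 0.97}
r = goal_reliability(**monitored)
if r < RELIABILITY_GOAL:
    print(f"reliability {r:.3f} below goal -> trigger adaptation policy")
else:
    print(f"reliability {r:.3f} meets goal")
```

    The point of generating the formula symbolically at design time is precisely that this runtime evaluation is a cheap numeric substitution rather than a fresh model-checking run.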

    Modeling the Effect of Oceanic Internal Waves on the Accuracy of Multibeam Echosounders

    When ray bending corrections are applied to multibeam echosounder (MBES) data, it is assumed that the varying layers of sound speed lie along horizontally stratified planes. In many areas, internal waves occur at the interface where the water’s density changes abruptly (a pycnocline); this density gradient is often associated with a strong gradient in sound speed (a velocline). The internal wave introduces uncertainty into the echo soundings through two mechanisms: (1) tilting of the velocline, and (2) vertical oscillation of the velocline’s depth. A model has been constructed to examine how these effects degrade the accuracy of MBES measurements. The model numerically simulates the 3D ray paths of MBES soundings for a synthetic flat seafloor, as though the soundings had been collected through a user-defined internal wave. Along with sound speed information, the ray paths are used to estimate travel times, which are then used as inputs for a conventional 2D ray trace. The discrepancy between the 3D and 2D ray-traced solutions serves as an estimate of uncertainty. The same software can be extended to model the expected anomalies associated with tidal fronts and other phenomena that result in significant tilting or oscillation of the velocline. A case study was undertaken using observed internal wave parameters on the Scotian Shelf. The case study examines how survey design parameters such as line spacing, direction of survey lines, and water column sampling density can influence the uncertainty introduced by internal waves. In particular, an examination is undertaken in which 2D ray tracing models are augmented with MBES water column imaging of the velocline. The investigation shows that internal waves have the potential to cause vertical uncertainties exceeding IHO standards and that the uncertainty can potentially be mitigated through appropriate survey design.
Results from the case study also indicate that acoustic tracking of the velocline has the potential to counteract the effects of internal waves through augmentation of 2D ray tracing models. This technique is promising; however, much more research and field testing is required to ascertain the practicality, reliability, and repeatability of such an approach.
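    The baseline 2D correction that the internal-wave model perturbs can be sketched compactly. Assuming horizontally stratified, piecewise-constant layers (the profile values below are invented), Snell's law keeps sin(theta)/c constant along the ray, which gives the horizontal offset and travel time through the stack:

```python
import math

# Illustrative sound-speed profile: (thickness in m, sound speed in m/s).
# Values are made up; a real profile would come from a sound velocity cast.
layers = [
    (10.0, 1500.0),
    (15.0, 1480.0),
    (25.0, 1495.0),
]

def trace_ray(launch_angle_deg, layers):
    """2D ray trace through flat layers; returns (horizontal offset m, travel time s)."""
    theta0 = math.radians(launch_angle_deg)       # angle from vertical
    ray_const = math.sin(theta0) / layers[0][1]   # Snell invariant sin(theta)/c
    x, t = 0.0, 0.0
    for thickness, c in layers:
        sin_theta = ray_const * c
        if sin_theta >= 1.0:
            raise ValueError("ray turns before reaching the bottom of the stack")
        cos_theta = math.sqrt(1.0 - sin_theta**2)
        path = thickness / cos_theta              # slant range within the layer
        x += path * sin_theta                     # horizontal advance
        t += path / c                             # travel time in the layer
    return x, t

x, t = trace_ray(45.0, layers)
print(x, t)
```

    The thesis' 3D simulation replaces the flat-layer assumption with a tilted, oscillating velocline and compares the resulting travel times against this kind of stratified 2D solution to quantify the sounding uncertainty.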

    Consistent Feature-Model-Driven Software Product Line Evolution

    SPLs are an approach to managing families of closely related software systems in terms of configurable functionality. A feature model captures common and variable functionalities of an SPL on a conceptual level in terms of features. Reusable artifacts, such as code, documentation, or tests, are related to features using a feature-artifact mapping. A product of an SPL can be derived by selecting features in a configuration. Over the course of time, SPLs and their artifacts are subject to change. As SPLs are particularly complex, their evolution is a challenging task. Consequently, SPL evolution must be thoroughly planned well in advance. However, plans typically do not turn out as expected and, thus, replanning is required. Feature models lend themselves to driving SPL evolution. However, replanning of feature-model evolution can lead to inconsistencies, and feature-model anomalies may be introduced during evolution. Along with feature-model evolution, other SPL artifacts, especially configurations, need to evolve consistently. This thesis provides a remedy to the aforementioned challenges by presenting an approach for consistent evolution of SPLs. The main contributions of this thesis fall into three key areas: planning and replanning feature-model evolution, analyzing feature-model evolution, and consistent SPL artifact evolution. As a starting point for SPL evolution, we introduce Temporal Feature Models (TFMs) that allow capturing the entire evolution timeline of a feature model in one artifact, i.e., past history, present changes, and planned evolution steps. We provide an execution semantics of feature-model evolution operations that guarantees consistency of feature-model evolution timelines. To keep feature models free from anomalies, we introduce analyses to detect anomalies in feature-model evolution timelines and explain these anomalies in terms of their causing evolution operations.
To enable consistent SPL artifact evolution, we generalize the concept of modeling evolution timelines in TFMs to be applicable to any modeling language. Moreover, we provide a methodology that enables the engineers involved to define and use guidance for configuration evolution.

    Nonparametric Econometrics: The np Package

    We describe the R np package via a series of applications that may be of interest to applied econometricians. The np package implements a variety of nonparametric and semiparametric kernel-based estimators that are popular among econometricians. There are also procedures for nonparametric tests of significance and consistent model specification tests for parametric mean regression models and parametric quantile regression models, among others. The np package focuses on kernel methods appropriate for the mix of continuous, discrete, and categorical data often found in applied settings. Data-driven methods of bandwidth selection are emphasized throughout, though we caution the user that data-driven bandwidth selection methods can be computationally demanding.
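    The np package itself is an R library; purely to illustrate the kind of kernel estimator it provides, here is a from-scratch Nadaraya-Watson regression with a Gaussian kernel in Python and a fixed bandwidth (whereas np emphasizes data-driven bandwidth selection, e.g. cross-validation, and handles mixed continuous/discrete/categorical data):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Kernel-weighted local average with a Gaussian kernel (fixed bandwidth)."""
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)                      # Gaussian kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Synthetic example: noisy sine curve.
rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

grid = np.linspace(0, 2 * np.pi, 50)
fit = nadaraya_watson(x, y, grid, bandwidth=0.4)
err = np.max(np.abs(fit[10:40] - np.sin(grid[10:40])))  # interior error
print(err)
```

    The bandwidth is the critical tuning parameter: too small and the fit chases noise, too large and it oversmooths, which is why the package makes data-driven selection central despite its computational cost.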