9 research outputs found

    How OEMs and Suppliers can face the Network Integration Challenges

    Systems integration is a major challenge in many industries. Systematic analysis of the complex integration effects, especially with respect to timing and performance, significantly improves the design process, enables optimizations, increases the quality and profitability of a product, and improves supply-chain communication. This paper surveys a set of experiments we have conducted on a real-world automotive communication network using our new SymTA/S schedulability analysis technology. We demonstrate that, and how, analysis technology helps to answer key integration questions while carefully respecting the established business models.

    Quantifying the Flexibility of Real-Time Systems

    In this paper we define the flexibility of a system as its capability to schedule a new task, and we present an approach to quantify this flexibility. More importantly, we show that under certain conditions it is possible to identify the task that will directly limit a possible software update. If obtained at design time, such a result can be used to adjust the system design by giving more slack to the limiting task. We illustrate how these results apply to a simple system.
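
As a rough, illustrative sketch of this notion of flexibility (not the paper's actual method): for fixed-priority periodic tasks one can ask for the largest worst-case execution time (WCET) a new task at a given period could have while every task still meets its deadline, using classic response-time analysis. All task parameters below are invented for illustration.

```python
import math

def response_time(tasks, i):
    """Iterative response-time analysis for task i under fixed-priority
    scheduling. tasks = [(wcet, period)], index 0 = highest priority,
    deadlines assumed equal to periods. Returns None on a deadline miss."""
    C, T = tasks[i]
    R = C
    while True:
        R_new = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_new > T:          # deadline miss: not schedulable
            return None
        if R_new == R:         # fixed point reached
            return R
        R = R_new

def max_new_wcet(tasks, T_new):
    """Flexibility w.r.t. a new task: the largest integer WCET a new task
    with period T_new can have, under rate-monotonic priority ordering,
    so that the whole task set remains schedulable."""
    best = 0
    for C in range(1, T_new + 1):
        candidate = sorted(tasks + [(C, T_new)], key=lambda t: t[1])
        if all(response_time(candidate, i) is not None
               for i in range(len(candidate))):
            best = C
        else:
            break
    return best
```

For example, two tasks (WCET 1, period 4) and (WCET 1, period 5) leave room for a new task of WCET 5 at period 10 in this model; the task at period 5 would be the first to limit a further update.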

    Exact Scalable Sensitivity Analysis for the Next Release Problem

    The nature of the requirements analysis problem, based as it is on uncertain and often inaccurate estimates of costs and effort, makes sensitivity analysis important. Sensitivity analysis allows the decision maker to identify those requirements and budgets that are particularly sensitive to misestimation. However, finding scalable sensitivity analysis techniques is not easy because the underlying optimization problem is NP-hard. This article introduces an approach to sensitivity analysis based on exact optimization. We implemented this approach as a tool, OATSAC, which allowed us to experimentally evaluate the scalability and applicability of Requirements Sensitivity Analysis (RSA). Our results show that OATSAC scales sufficiently well for practical applications of RSA. We also show how the sensitivity analysis can yield insights into difficult and otherwise obscure interactions between budgets, requirements costs, and estimate inaccuracies using a real-world case study.
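
The underlying optimization problem, often formulated as the Next Release Problem, can be cast as 0/1 knapsack: select the subset of requirements that maximizes stakeholder value within a cost budget, then re-solve under perturbed cost estimates to see which requirements the optimal selection is sensitive to. The sketch below uses invented values and a plain dynamic-programming solver; it is not OATSAC itself.

```python
def solve_nrp(values, costs, budget):
    """Exact 0/1 knapsack DP: maximize total value under an integer budget.
    Returns (best_value, sorted list of selected requirement indices)."""
    n = len(values)
    dp = [0] * (budget + 1)
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):
            if dp[b - costs[i]] + values[i] > dp[b]:
                dp[b] = dp[b - costs[i]] + values[i]
                choice[i][b] = True
    sel, b = [], budget          # backtrack the selected set
    for i in range(n - 1, -1, -1):
        if choice[i][b]:
            sel.append(i)
            b -= costs[i]
    return dp[budget], sorted(sel)

def cost_sensitivity(values, costs, budget, delta=1):
    """Brute-force sensitivity check: which requirements change the optimal
    selection if their cost estimate is off by +delta?"""
    _, base = solve_nrp(values, costs, budget)
    sensitive = []
    for i in range(len(costs)):
        perturbed = costs.copy()
        perturbed[i] += delta
        _, sel = solve_nrp(values, perturbed, budget)
        if sel != base:
            sensitive.append(i)
    return sensitive
```

With values [10, 6, 5], costs [4, 3, 2], and budget 6, the optimum picks requirements 0 and 2, and a one-unit misestimate of either of their costs flips the selection, while requirement 1 is insensitive.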

    Timing in Technical Safety Requirements for System Designs with Heterogeneous Criticality Requirements

    Traditionally, timing requirements as (technical) safety requirements have been avoided through clever functional designs. New vehicle automation concepts and other applications, however, make this harder or even impossible and challenge design automation for cyber-physical systems to provide a solution. This thesis takes up this challenge by introducing cross-layer dependency analysis to relate timing dependencies in the bounded execution time (BET) model to the functional model of the artifact. In doing so, the analysis is able to reveal where timing dependencies may violate freedom-from-interference requirements on the functional layer and other intermediate model layers. For design automation this leaves the challenge of how such dependencies can be avoided, or at least bounded, such that the design is feasible: the results are synthesis strategies for implementation requirements and a system-level placement strategy for run-time measures to avoid potentially catastrophic consequences of timing dependencies that are not eliminated from the design. Their applicability is shown in experiments and case studies.
    However, all the proposed run-time measures, as well as very strict implementation requirements, become ever more expensive in design effort for contemporary embedded systems due to their complexity. Hence, the second part of this thesis reflects on the design aspect rather than the analysis aspect of embedded systems and proposes a timing-predictable design paradigm based on System-Level Logical Execution Time (SL-LET). Leveraging a timing-design model in SL-LET, the proposed methods from the first part can now be applied to improve the quality of a design: timing error handling can be separated from the run-time methods and from the implementation requirements intended to guarantee them. The thesis therefore introduces timing diversity as a timing-predictable execution scheme that handles timing errors without having to deal with them in the implemented application. An automotive 3D-perception case study demonstrates the applicability of timing diversity to ensure predictable end-to-end timing while masking certain types of timing errors.

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
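
The simulated target path can be illustrated with a small Bézier evaluator; this is a generic De Casteljau sketch, not the dissertation's code, and the control points below are invented.

```python
def bezier_point(control_points, t):
    """De Casteljau evaluation of a Bézier curve at parameter t in [0, 1].
    Works for any dimension and any number of control points."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # repeatedly interpolate between consecutive points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def sample_path(control_points, n):
    """Sample n + 1 target positions evenly in parameter space."""
    return [bezier_point(control_points, k / n) for k in range(n + 1)]
```

A cubic curve with control points (0,0), (1,2), (3,2), (4,0) yields, at t = 0.5, the point (2.0, 1.5), matching the Bernstein form (P0 + 3·P1 + 3·P2 + P3) / 8.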

    Mining Event Traces from Real-time Systems for Anomaly Detection

    Real-time systems are a significant class of applications, poised to grow even further as autonomous vehicles and the Internet of Things (IoT) become a reality. The computation and communication tasks of the underlying embedded systems must comply with strict timing and safety requirements, as undetected defects in these systems may lead to catastrophic failures. The runtime behavior of these systems is prone to uncertainties arising from dynamic workloads and extra-functional conditions that affect both the software and hardware over the course of their deployment, e.g., unscheduled firmware updates, communication channel saturation, power-saving mode switches, or external malicious attacks. Operation in such unpredictable environments prevents the detection of anomalous behavior using traditional formal modeling and analysis techniques, as these generally consider the worst case and tend to be overly conservative. To overcome these limitations, and primarily motivated by the increasing availability of traces generated from real-time embedded systems, this thesis presents TRACMIN (Trace Mining using Arrival Curves), an anomaly detection approach that empirically constructs arrival curves from event traces to capture the recurrent behavior and intrinsic features of a given real-time system. The thesis uses TRACMIN to fill the gap between formal analysis techniques for real-time systems and trace mining approaches, which lack expressive, human-readable, and scalable methods. The thesis presents definitions, metrics, and tools that employ statistical learning techniques to cluster and classify traces generated from different modes of normal operation versus anomalous traces.
    Experiments with multiple datasets from deployed real-time embedded systems facing performance degradation and hardware misconfiguration anomalies demonstrate the feasibility and viability of our approach on timestamped event traces generated by an industrial real-time operating system. Acknowledging the high computational expense of constructing empirical arrival curves, the thesis provides a rapid algorithm that achieves the desired scalability on lengthy traces, paving the way for adoption in research and industry. Finally, the thesis presents a robustness analysis for the arrival-curve models by employing theories of demand-bound functions from the scheduling domain. The analysis provides bounds on how much disruption a real-time system modeled with our approach can tolerate before being declared anomalous, which is crucial for specification and certification purposes. In conclusion, TRACMIN combines empirical and theoretical methods to provide a concrete anomaly detection framework that uses robust models of arrival curves, scalably constructed from event traces, to detect anomalies that affect the recurrent behavior of a real-time system.
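
The core construction, an empirical upper arrival curve, can be sketched as follows: for a window width, it is the maximum number of events observed in any window of that width anywhere in the trace. The function names, the closed-window convention, and the threshold-based anomaly check below are our own simplification, not TRACMIN's actual algorithm.

```python
from bisect import bisect_right

def upper_arrival_curve(timestamps, window):
    """Empirical upper arrival curve: the maximum number of events observed
    in any closed time window [t, t + window] over the whole trace.
    Candidate windows need only start at event timestamps."""
    ts = sorted(timestamps)
    return max((bisect_right(ts, t + window) - i for i, t in enumerate(ts)),
               default=0)

def is_anomalous(trace, reference, windows):
    """Flag a trace whose empirical curve exceeds a reference bound.
    reference maps window width -> maximum allowed event count."""
    return any(upper_arrival_curve(trace, w) > reference[w] for w in windows)
```

For a trace with events at 0, 1, 2, 10, 10.5, 11, the densest 1-unit window holds three events, so a reference bound of two events per unit window flags the trace as anomalous.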

    Compositional Scheduling Analysis Using Standard Event Models

    Embedded real-time systems must meet a variety of timing requirements, such as deadlines and limited load or bandwidth. These properties depend heavily on interactions between tasks and on the scheduling of tasks and communications. Unfortunately, the current practice of specialization and re-use results in increasingly heterogeneous systems, which specifically complicates the scheduling analysis problem. Today's best practice of timed simulation is increasingly unreliable, mainly because the corner cases are extremely difficult to find and debug. As an alternative, a variety of systematic and formal approaches to scheduling analysis have been proposed. Most of them, however, are either limited to sub-problems or use unwieldy and complex models that distract designers in practice. This thesis presents a novel, structured analysis procedure that a) can cope with the increasing complexity and heterogeneity of embedded systems, b) provides the modularity and flexibility that the established, re-use-driven system integration style requires, and c) facilitates system integration using a comprehensible analytical model. The approach uses intuitive and standardized event models to represent the interfaces between different components and their scheduling. The clear interface structure allows, for the first time, the modular composition of heterogeneous sub-system analysis techniques. This provides designers with the flexibility to use their preferred scheduling and analysis techniques locally without compromising global scheduling analysis. The new analysis procedure has been implemented in the SymTA/S tool. As it can be efficiently applied in practice, it provides a serious and promising complement to simulation.
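
For a periodic event stream with jitter, the standard event models referred to above bound the number of events in any time window of length dt by eta+(dt) = ceil((dt + J) / P) from above and eta-(dt) = max(0, floor((dt - J) / P)) from below, where P is the period and J the jitter. A minimal sketch (the function names are ours, and the minimum-distance refinement used for large jitter is omitted):

```python
import math

def eta_plus(dt, period, jitter=0.0):
    """Upper bound on the number of events of a periodic-with-jitter event
    stream in any half-open window of length dt."""
    if dt <= 0:
        return 0
    return math.ceil((dt + jitter) / period)

def eta_minus(dt, period, jitter=0.0):
    """Lower bound on the number of events guaranteed in any window of
    length dt."""
    if dt <= 0:
        return 0
    return max(0, math.floor((dt - jitter) / period))
```

For a stream with period 5 and jitter 2, a window of length 10 sees between one and three events; with zero jitter the model degenerates to a strictly periodic stream.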

    Search Based Software Project Management

    This thesis investigates the application of the Search Based Software Engineering (SBSE) approach in the field of Software Project Management (SPM). With SBSE approaches, a pool of candidate solutions to an SPM problem is automatically generated and gradually evolved to become increasingly more desirable. The thesis is motivated by the observation from industrial practice that it is much more helpful to the project manager to provide insightful knowledge than exact solutions. We investigate whether SBSE approaches can aid project managers in decision making by not only providing them with desirable solutions, but also illustrating insightful “what-if” scenarios during the phases of project initiation, planning, and enactment. SBSE techniques can automatically “evolve” solutions to software requirement elicitation, project staffing, and scheduling problems. However, the current state-of-the-art computer-aided software project management tools remain limited in several aspects. First, software requirements engineering is plagued by problems associated with unreliable estimates. The estimations made early are assumed to be accurate, but projects are estimated and executed in an environment filled with uncertainties that may lead to delays or disruptions. Second, software project scheduling and staffing are two closely related problems that have been studied separately by most published research in the field of computer-aided software project management, but software project managers are usually confronted with the complex trade-offs and correlations between scheduling and staffing. Last, full attendance of the required staff is usually assumed after the staff have been assigned to the project, but the execution of a project is subject to staff absences because of sickness and turnover, for example.
    This thesis makes the following main contributions: (1) introducing an automated SBSE approach to sensitivity analysis for requirement elicitation, which helps to achieve more accurate estimations by directing extra estimation effort towards error-sensitive requirements and budgets; (2) demonstrating that co-evolutionary approaches can simultaneously co-evolve solutions for both work-package sequencing and project team sizing, with the proposed approach to these two interrelated problems yielding better results than random and single-population evolutionary algorithms; (3) presenting co-evolutionary approaches that can guide the project manager to anticipate and ameliorate the impact of staff absence; (4) investigating seven sets of real-world data on software requirements and software project plans, revealing general insights into, as well as exceptions to, our approach in practice; (5) establishing a tool that implements the above concepts. These contributions support the thesis that automated SBSE tools can be beneficial for solution generation and, most importantly, for providing insightful knowledge for decision making in the practice of software project management.

    Applying Sensitivity Analysis in Real-Time Distributed Systems

    During real-world design of embedded real-time systems, it cannot be expected that all performance data required for scheduling analysis is fully available up front. In such situations, sensitivity analysis is a promising approach to dealing with uncertainties that result from incomplete specifications, early performance estimates, late feature requests, and so on. Sensitivity analysis allows the system designer to keep track of the flexibility of the system, and thus to quickly assess the impact of changes to individual hardware and software components on system performance. In this paper we integrate sensitivity analysis into our system-level performance analysis framework SymTA/S and show its benefits during the design of complex, networked multi-processor embedded real-time systems.
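
One common form of such sensitivity analysis asks by how much a single task's WCET may grow before the system becomes unschedulable. A simplified sketch of the idea (not the SymTA/S implementation; task parameters are invented), combining response-time analysis with an exponential-then-binary search on the scaling factor:

```python
import math

def schedulable(tasks):
    """Fixed-priority response-time test. tasks = [(wcet, period)] sorted
    by descending priority, with deadlines equal to periods."""
    for i, (C, T) in enumerate(tasks):
        R = C
        while True:
            R_new = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
            if R_new > T:       # deadline miss
                return False
            if R_new == R:      # converged
                break
            R = R_new
    return True

def wcet_slack(tasks, i, eps=1e-3):
    """Largest factor by which task i's WCET can be scaled while the task
    set stays schedulable (assumes the unscaled set is schedulable)."""
    def ok(factor):
        scaled = [(C * (factor if k == i else 1.0), T)
                  for k, (C, T) in enumerate(tasks)]
        return schedulable(scaled)

    lo, hi = 1.0, 1.0
    while ok(hi):               # exponential search for an upper bound
        lo, hi = hi, hi * 2
        if hi > 1e6:            # effectively unbounded slack
            return float('inf')
    while hi - lo > eps:        # binary search between good and bad factor
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo
```

For three tasks (1, 4), (1, 5), (2, 10), the lowest-priority task's WCET can grow by a factor of about 2.5 (to 5 time units, giving a response time of exactly 10) before its deadline is missed.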