
    PROACTIVE APPROACH TO THE INCIDENT AND PROBLEM MANAGEMENT IN COMMUNICATION NETWORKS

    A proactive approach to communication network maintenance can enhance the integrity and reliability of communication networks, as well as reduce maintenance costs and the overall number of incidents. This paper presents approaches to problem and incident prevention based on root-cause analysis, aligned with the goal of foreseeing software performance. Implementing a proactive approach requires recognizing the enterprise's current level of maintenance and gaining better insight into available approaches and tools, as well as their comparison, interoperability, integration and further development. The approach we propose and elaborate in this paper rests on the construction of a metamodel of information technology problem management, particularly proactive problem management. The metamodel is derived from the original ITIL specification and presented in an object-oriented fashion using structure (class) diagrams that conform to UML notation. Based on current research, appropriate metrics based on the concept of Key Performance Indicators are suggested.
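
    As a purely illustrative sketch of KPI-style metrics of the kind the abstract suggests for proactive problem management: the record fields and KPI formulas below are assumptions for illustration, not taken from the ITIL-derived metamodel.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical problem record; field names are illustrative, not from the ITIL metamodel.
@dataclass
class ProblemRecord:
    opened: datetime
    closed: Optional[datetime]
    proactive: bool          # found via root-cause/trend analysis rather than from a reported incident
    related_incidents: int   # incidents linked to this problem

def kpi_report(problems: list[ProblemRecord]) -> dict[str, float]:
    """Compute a few illustrative KPIs for proactive problem management."""
    closed = [p for p in problems if p.closed is not None]
    total = len(problems) or 1
    return {
        # Share of problems identified before any incident was reported.
        "proactive_ratio": sum(p.proactive for p in problems) / total,
        # Mean time to resolve a problem, in hours.
        "mean_time_to_resolve_h": sum(
            (p.closed - p.opened).total_seconds() / 3600 for p in closed
        ) / (len(closed) or 1),
        # Average number of incidents caused per problem (lower is better).
        "incidents_per_problem": sum(p.related_incidents for p in problems) / total,
    }
```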

    An analysis of safety evidence management with the Structured Assurance Case Metamodel

    SACM (Structured Assurance Case Metamodel) is a standard for assurance case specification and exchange. It consists of an argumentation metamodel and an evidence metamodel for justifying that a system satisfies certain requirements. For assurance of safety-critical systems, SACM can be used to manage safety evidence and to specify safety cases. The standard is a promising initiative towards harmonizing and improving system assurance practices, but its suitability for safety evidence management needs to be studied further. To this end, this paper studies how SACM 1.1 supports this activity according to requirements from industry and from prior work. We have analysed the notion of evidence in SACM, its evidence lifecycle, the classes and associations of the evidence metamodel, and the link between this metamodel and the argumentation one. As a result, we have identified several improvement opportunities and extension possibilities in SACM.
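
    As a rough illustration of what evidence management on top of an argumentation/evidence split can look like, here is a simplified sketch; the class names, lifecycle states, and the unsupported_claims check are assumptions for illustration, not the actual SACM 1.1 classes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Simplified, illustrative lifecycle; not the SACM 1.1 evidence lifecycle.
class EvidenceState(Enum):
    COLLECTED = auto()
    EVALUATED = auto()
    APPROVED = auto()
    REVOKED = auto()

@dataclass
class EvidenceItem:
    identifier: str
    description: str
    state: EvidenceState = EvidenceState.COLLECTED
    supports_claims: list[str] = field(default_factory=list)  # IDs of argumentation claims

@dataclass
class SafetyCase:
    claims: dict[str, str]          # claim ID -> claim text (argumentation side)
    evidence: list[EvidenceItem]    # evidence side, linked to claims by ID

    def unsupported_claims(self) -> list[str]:
        """Return claims with no approved evidence item linked to them."""
        supported = {
            c
            for e in self.evidence if e.state is EvidenceState.APPROVED
            for c in e.supports_claims
        }
        return [cid for cid in self.claims if cid not in supported]
```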

    A User's Guide to the Brave New World of Designing Simulation Experiments

    Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. In this paper, we discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments (as opposed to real-world experiments) that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system; (2) finding robust decisions or policies; and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical for simulation, such as sequential data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs. Furthermore, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, more complicated metamodels require more simulation runs. For the validation of the metamodel estimated from a specific design, we present several procedures.
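
    A minimal sketch of the workflow discussed here (choose a design, fit a metamodel, validate it), assuming a Latin hypercube design, a second-order polynomial metamodel, and cross-validation as the validation procedure; the toy simulate function and all numeric choices are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Stand-in for an expensive simulation model (illustrative only).
def simulate(x):
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1] + np.random.normal(scale=0.1)

# 1. Space-filling design: Latin hypercube over two inputs in [0, 10] x [0, 5].
sampler = qmc.LatinHypercube(d=2, seed=1)
X = qmc.scale(sampler.random(n=40), l_bounds=[0, 0], u_bounds=[10, 5])
y = np.array([simulate(x) for x in X])

# 2. Fit a second-order polynomial metamodel (response surface) to the runs.
features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = features.fit_transform(X)
metamodel = LinearRegression().fit(X_poly, y)

# 3. Validate the metamodel, here via cross-validated R^2.
scores = cross_val_score(LinearRegression(), X_poly, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```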

    Simulation Modeling to Optimize Personalized Oncology


    Comprehensible and Robust Knowledge Discovery from Small Datasets

    Knowledge Discovery in Databases (KDD) aims to extract useful knowledge from data. The data may represent a set of measurements from a real-world process or a set of input-output values of a simulation model. Two frequently conflicting requirements on the acquired knowledge are that it (1) summarizes the data as accurately as possible and (2) comes in an easily understandable form. Decision trees and subgroup discovery methods provide knowledge summaries in the form of hyperrectangles, which are considered easy to understand. To demonstrate the importance of a comprehensible data summary, we study decentralized smart grid control ("Dezentrale intelligente Netzsteuerung"), a new system that implements demand response in power grids without major changes to the infrastructure. The conventional analysis of this system carried out so far was limited to identical participants and therefore did not reflect reality sufficiently well. We run many simulations with different input values and apply decision trees to the resulting data. The resulting comprehensible data summaries yielded new insights into the behavior of decentralized smart grid control. Decision trees allow describing the system behavior for all input combinations. Sometimes, however, one is not interested in partitioning the entire input space, but in finding regions that lead to a particular output (so-called subgroups). Existing subgroup discovery algorithms usually require large amounts of data to produce stable and accurate output, yet the data collection process is often costly. Our main contribution is improving subgroup discovery from datasets with few observations. Subgroup discovery in simulated data is referred to as scenario discovery. A frequently used algorithm for scenario discovery is PRIM (Patient Rule Induction Method). We propose REDS (Rule Extraction for Discovering Scenarios), a new procedure for scenario discovery. In REDS, we first train an intermediate statistical model and use it to generate a large amount of new data for PRIM. We also describe the underlying statistical intuition. Experiments show that REDS performs much better than PRIM on its own: it reduces the number of required simulation runs by 75% on average. With simulated data, one has perfect knowledge of the input distribution, which REDS requires. To make REDS applicable to real measurement data, we combined it with sampling from an estimated multivariate distribution of the data. We evaluated the resulting method experimentally in combination with different data-generation methods, both for PRIM and for BestInterval, another representative subgroup discovery method. In most cases, our methodology increased the quality of the discovered subgroups.
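
    A minimal sketch of the REDS idea under the assumption, stated above, that the input distribution is known; the random forest as intermediate model, the toy labelling rule, and the run_prim placeholder are illustrative choices, not the thesis' actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Small, expensive-to-obtain dataset: inputs X (here uniform on [0, 1]^3) and a
# binary label y marking the output behaviour of interest (stand-in for a simulator).
X_small = rng.uniform(size=(100, 3))
y_small = (X_small[:, 0] + X_small[:, 1] ** 2 > 1.0).astype(int)

# Step 1 (REDS): train an intermediate statistical model on the few observations.
intermediate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_small, y_small)

# Step 2 (REDS): sample many new points from the *known* input distribution and
# label them with the intermediate model instead of the costly simulator.
X_big = rng.uniform(size=(10_000, 3))
y_big = intermediate.predict(X_big)

# Step 3: run an ordinary scenario-discovery algorithm (PRIM) on the augmented data.
# run_prim is a placeholder for whatever PRIM implementation is available:
# boxes = run_prim(X_big, y_big)
```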

    Assessing and improving quality of QVTo model transformations

    We investigate quality improvement in QVT operational mappings (QVTo) model transformations, one of the languages defined in the OMG standard on model-to-model transformations. Two research questions are addressed. First, how can we assess the quality of QVTo model transformations? Second, how can we develop higher-quality QVTo transformations? To address the first question, we take a bottom-up approach, starting with a broad exploratory study including QVTo expert interviews, a review of existing material, and introspection. We then formalize QVTo transformation quality into a QVTo quality model. The quality model is validated through a survey of a broader group of QVTo developers. We find that although many quality properties recognized as important for QVTo have counterparts in general-purpose languages, a number of them are specific to QVTo or to model transformation languages. To address the second research question, we leverage the quality model to identify developer support tooling for QVTo. We then implement and evaluate one of the tools, namely a code test coverage tool; in designing the tool, we also identify code coverage criteria for QVTo model transformations. The primary contributions of this paper are a QVTo quality model relevant to QVTo practitioners and an open-source code coverage tool already usable by QVTo transformation developers. Secondary contributions are a bottom-up approach to building a quality model, a validation approach leveraging developer perceptions to evaluate quality properties, code test coverage criteria for QVTo, and numerous directions for future research and tooling related to QVTo quality.

    Building Transformation Networks for Consistent Evolution of Interrelated Models

    Complex software systems are described with multiple artifacts, such as code, design diagrams and others. Ensuring their consistency is crucial and can be automated with transformations for pairs of artifacts. We investigate how developers can combine independently developed and reusable transformations into networks that preserve consistency between more than two artifacts. We identify synchronization, compatibility and orchestration as central challenges, and we develop approaches to solve them.

    Formal transformation methods for automated fault tree generation from UML diagrams

    With growing complexity in safety-critical systems, engaging Systems Engineering with System Safety Engineering as early as possible in the system life cycle becomes ever more important to ensure system safety during system development. Assessing the safety and reliability of a system architectural design at an early stage of the system life cycle can bring value to system design by identifying safety issues earlier and maintaining safety traceability throughout the design phase. However, this is not a trivial task and can require upfront investment. Automated transformation from system architecture models to system safety and reliability models offers a potential solution. However, existing methods lack a formal basis, which can lead to unreliable results. Without a formal basis, a Fault Tree Analysis of a system, even if performed concurrently with system design, may not cover all safety-critical aspects of the design. [Continues.]
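
    For orientation, a minimal fault tree sketch with AND/OR gates and a top-event probability computed under an independence assumption; this illustrates generic fault tree analysis as the target formalism, not the formal transformation method developed in the thesis, and all event names and probabilities are made up.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class BasicEvent:
    name: str
    probability: float  # probability of occurrence; basic events assumed independent

@dataclass
class Gate:
    kind: str                                   # "AND" or "OR"
    children: list[Union["Gate", BasicEvent]]

def probability(node: Union[Gate, BasicEvent]) -> float:
    """Top-event probability assuming independent basic events."""
    if isinstance(node, BasicEvent):
        return node.probability
    child_ps = [probability(c) for c in node.children]
    if node.kind == "AND":
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    # OR gate: 1 minus the product of the complements.
    q = 1.0
    for cp in child_ps:
        q *= 1.0 - cp
    return 1.0 - q

# Example: top event occurs if the sensor fails OR both redundant pumps fail.
top = Gate("OR", [
    BasicEvent("sensor", 1e-3),
    Gate("AND", [BasicEvent("pump_A", 1e-2), BasicEvent("pump_B", 1e-2)]),
])
print(probability(top))  # ~1.1e-3
```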