116 research outputs found

    Decomposability in formal conformance testing

    We study the problem of deriving a specification for a third-party component from the specification of the overall system and of the environment in which the component is supposed to reside. In particular, we are interested in using component specifications for conformance testing of black-box components, using the theory of input-output conformance (ioco) testing. We propose and prove sufficient criteria for decomposability, i.e., guaranteeing that components conforming to the derived specification always compose into a correct system with respect to the system specification. We also study criteria for strong decomposability, which ensures that only components conforming to the derived specification can lead to a correct system.
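
    A minimal sketch in Python of the core ioco-style check this work builds on: after every trace of the specification, every output the implementation can produce must also be allowed by the specification. Quiescence and suspension traces are omitted to keep the sketch short, and the coffee-machine transition tables are illustrative assumptions, not the paper's formalism.

        # Toy input-output conformance check over labelled transition systems,
        # encoded as dicts: state -> list of (label, next_state).
        # Inputs are prefixed '?', outputs '!'.

        def after(lts, states, label):
            """States reachable from `states` by one `label` step."""
            return {t for s in states for (a, t) in lts.get(s, []) if a == label}

        def outs(lts, states):
            """Output labels enabled in any of `states`."""
            return {a for s in states for (a, _) in lts.get(s, []) if a.startswith('!')}

        def ioco_violation(impl, spec, start_i, start_s, depth=6):
            """Bounded breadth-first search for a spec trace after which the
            implementation emits a non-allowed output; returns it or None."""
            frontier = [((), {start_i}, {start_s})]
            for _ in range(depth):
                nxt = []
                for trace, qi, qs in frontier:
                    bad = outs(impl, qi) - outs(spec, qs)
                    if bad:
                        return trace + (bad.pop(),)
                    for label in {a for s in qs for (a, _) in spec.get(s, [])}:
                        nxt.append((trace + (label,),
                                    after(impl, qi, label),
                                    after(spec, qs, label)))
                frontier = nxt
            return None

        # Hypothetical example: the spec allows ?coin then !coffee, but the
        # implementation may also (wrongly) answer !tea.
        spec = {0: [('?coin', 1)], 1: [('!coffee', 0)]}
        impl = {0: [('?coin', 1)], 1: [('!coffee', 0), ('!tea', 0)]}
        print(ioco_violation(impl, spec, 0, 0))   # ('?coin', '!tea')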

    Evolution specification evaluation in industrial MDSE ecosystems

    Domain-specific languages (DSLs) allow users to model systems using concepts from a specific domain. Evolution of DSLs triggers co-evolution of the models developed in these languages. As the number of models that need to co-evolve grows, so does the effort required; this is known as the co-evolution problem. We have investigated the extent of the co-evolution problem at ASML [1], a provider of lithography equipment for the semiconductor industry, and described the structure and evolution of a large-scale ecosystem of DSLs. We observed that, due to the large number of artifacts requiring co-evolutionary activity, manual solutions have become infeasible and an automated approach is required. A popular approach for automating co-evolution is the operator-based approach. In this paper we evaluate the operator-based approach on a large-scale industrial case study of 22 DSLs and 95 model-to-model transformations with a revision history of over three years, revealing deficiencies in existing operator libraries. To address these deficiencies, we present a top-down methodology for deriving a complete set of operators.
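
    A minimal Python sketch of the operator-based idea: metamodel changes are recorded as a sequence of small operators, and each operator also knows how to migrate existing instance models, so replaying the history co-evolves every model automatically. The operator names and dictionary model encoding are illustrative assumptions, not the paper's or ASML's actual operator library.

        # Operator-based co-evolution in miniature: each metamodel-edit operator
        # carries a migration step for instance models.

        def rename_field(old, new):
            def migrate(model):
                if old in model:
                    model[new] = model.pop(old)
                return model
            return migrate

        def add_field(name, default):
            def migrate(model):
                model.setdefault(name, default)
                return model
            return migrate

        # Recorded DSL evolution history (hypothetical example).
        history = [rename_field('speed', 'velocity'),
                   add_field('unit', 'mm/s')]

        def co_evolve(models, history):
            for op in history:
                models = [op(m) for m in models]
            return models

        models = [{'speed': 3}, {'speed': 5, 'unit': 'm/s'}]
        print(co_evolve(models, history))
        # [{'velocity': 3, 'unit': 'mm/s'}, {'velocity': 5, 'unit': 'm/s'}]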

    Categorizing Non-Functional Requirements Using a Hierarchy in UML.

    Non-functional requirements (NFRs) are a subset of requirements, the means by which software system developers and clients communicate about the system to be built. This paper has three main parts: first, an overview of how non-functional requirements relate to software engineering, along with a survey of NFRs in the software engineering literature. Second, a collection of 161 NFRs is diagrammed using the Unified Modeling Language (UML), forming a tool with which developers may more easily identify and write additional NFRs. Third, a lesson plan is presented: a learning module intended for an undergraduate software engineering curriculum. The results of presenting this learning module to a class in Spring 2003 are also reported.
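
    One way to make the idea concrete: a diagrammed hierarchy like this can be mirrored as a class hierarchy, so tooling can group and query NFRs by category. The categories and example requirements below are illustrative assumptions, not the paper's actual 161-NFR catalogue.

        # A tiny NFR hierarchy mirroring a UML-style categorization: abstract
        # categories as classes, concrete requirements as instances.

        class NFR:
            def __init__(self, text):
                self.text = text
            def category(self):
                return type(self).__name__

        class Performance(NFR): pass
        class Security(NFR): pass
        class Usability(NFR): pass

        catalogue = [
            Performance('Respond to queries within 200 ms.'),
            Security('Encrypt all data at rest.'),
            Usability('New users complete checkout without training.'),
        ]

        # Group requirements by category, as a developer browsing the
        # hierarchy would.
        by_category = {}
        for nfr in catalogue:
            by_category.setdefault(nfr.category(), []).append(nfr.text)
        print(by_category)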

    Sustainability of systems interoperability in dynamic business networks

    Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering.

    Collaborative networked environments emerged with the spread of the internet, contributing to overcoming past communication barriers and identifying interoperability as an essential property to support business development. When achieved seamlessly, efficiency is increased across the entire product life cycle. However, due to the different sources of knowledge, models and semantics, enterprise organisations experience difficulties exchanging critical information, even when they operate in the same business environments. To solve this issue, most of them try to attain interoperability by establishing peer-to-peer mappings with different business partners, or use neutral data and product standards as the core for information sharing in optimized networks. In current industrial practice, the model mappings that regulate enterprise communications are defined only once, and most of them are hardcoded in the information systems. This solution has been effective and sufficient for static environments, where enterprise and product models remain valid for decades. However, more and more enterprise systems are becoming dynamic, adapting to meet new requirements; a trend that is causing new interoperability disturbances and reducing the efficiency of existing partnerships. Enterprise Interoperability (EI) is a well-established area of applied research that studies these problems and proposes novel approaches and solutions. This PhD work contributes to that research by considering enterprises as complex, adaptive systems, swayed by factors that make interoperability difficult to sustain over time. It proposes analysing complexity as a neighbouring scientific domain in which features of interoperability can be identified and evaluated, as a benchmark for developing a new foundation of EI. This approach draws concepts from complexity science to analyse dynamic enterprise networks and proposes a framework for sustaining systems interoperability, enabling different organisations to evolve at their own pace, answering upcoming requirements while minimizing the negative impact these changes can have on their business environment.
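
    The scaling argument behind "peer-to-peer mappings versus a neutral standard" can be made concrete: with n partners, pairwise mappings grow as n(n-1)/2, while mapping every partner to one neutral model needs only n mappings. A small sketch of this combinatorial point, stated here as general reasoning rather than anything taken from the dissertation:

        # Mapping-maintenance burden in a business network of n partners:
        # peer-to-peer needs one mapping per unordered pair, a neutral
        # standard needs one mapping per partner.

        def peer_to_peer_mappings(n):
            return n * (n - 1) // 2   # one mapping per pair of partners

        def standard_based_mappings(n):
            return n                  # each partner maps once to the standard

        for n in (5, 20, 100):
            print(n, peer_to_peer_mappings(n), standard_based_mappings(n))
        # 5 10 5 / 20 190 20 / 100 4950 100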

    Interface Evaluation for Open System Architectures

    This research develops a deterministic interface evaluation framework (IEF) in support of the principles identified in the Modular Open Systems Approach (MOSA). Interface evaluation in weapon system development requires a Decision Analysis (DA) method capable of handling a continuously growing set of alternatives while functioning with limited availability of senior decision makers. Value-Focused Thinking (VFT) is selected as the method best suited to these constraints. Inputs from the Medium Altitude Unmanned Aircraft System program office are used. An initial value threshold for guiding open-interface decisions is established, based on assessments of 15 historical decision scenarios. Open-interface recommendations for the 15 scenarios are compared to previous program decisions; the model supports past decisions in 5 of the 15 scenarios. A sensitivity analysis then examines the robustness of the framework to changing weights for cost, schedule, and performance, and to the threshold for an open-implementation decision. This evaluation framework provides a repeatable method for key interface evaluation that reflects the values of DoD acquisition leadership and the Open Systems Joint Task Force (OSJTF).
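
    A minimal sketch of the additive value model at the heart of a VFT evaluation: each interface scores a single-dimensional value on cost, schedule, and performance; the weighted sum is compared against the openness threshold; and the weights can be swept for sensitivity analysis. The weights, scores, and threshold below are made-up illustrations, not the framework's elicited values.

        # Value-Focused Thinking in miniature: an additive value model with a
        # decision threshold, plus a crude one-way sensitivity sweep.
        # All numbers are illustrative, not the IEF's elicited values.

        WEIGHTS = {'cost': 0.4, 'schedule': 0.25, 'performance': 0.35}
        THRESHOLD = 0.6          # recommend an open interface above this value

        def total_value(scores, weights=WEIGHTS):
            return sum(weights[k] * scores[k] for k in weights)

        def recommend(scores):
            return 'open' if total_value(scores) >= THRESHOLD else 'closed'

        scenario = {'cost': 0.8, 'schedule': 0.5, 'performance': 0.6}
        print(total_value(scenario), recommend(scenario))   # 0.655 open

        # Sensitivity: vary the cost weight, renormalizing the other two so
        # the weights still sum to one.
        for w_cost in (0.2, 0.4, 0.6):
            rest = 1 - w_cost
            w = {'cost': w_cost,
                 'schedule': rest * 0.25 / 0.60,
                 'performance': rest * 0.35 / 0.60}
            print(w_cost, round(total_value(scenario, w), 3))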

    Relationship between Simulink and Petri Nets


    On the random structure of behavioural transition systems

    Get PDF
    Random graphs have the property that they are very predictable: even by exploring a small part, reliable observations can be made about their structure and size. An unfortunate observation is that standard models for random graphs, such as the Erdős-Rényi model, do not reflect the structure of the graphs we find in behavioural modelling. In this paper we propose an alternative model, which we show to be a better reflection of ‘real’ state spaces. We show how this structure can be used to predict the size of state spaces, and we show that in this model software bugs are much easier to find than in the more standard random graph models. Not only does this give theoretical evidence that testing may be more effective than some believe, it also provides a means to quantify the number of residual errors based on a limited number of test runs.
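
    A sketch of the kind of experiment this argument rests on, using only the standard Erdős-Rényi-style baseline that the paper argues against (the paper's own alternative model is not reproduced here); states, out-degree, and the designated bug state are arbitrary assumptions.

        # Estimate how quickly random exploration from the initial state
        # stumbles on a designated 'bug' state in a random transition system
        # with uniformly chosen successors.
        import random

        def random_lts(n, out_degree=3, seed=0):
            rng = random.Random(seed)
            return [[rng.randrange(n) for _ in range(out_degree)]
                    for _ in range(n)]

        def steps_to_bug(lts, bug, max_steps=100_000, seed=1):
            rng = random.Random(seed)
            state = 0
            for step in range(1, max_steps + 1):
                state = rng.choice(lts[state])
                if state == bug:
                    return step
            return None   # bug not reached within the budget

        lts = random_lts(10_000)
        trials = [steps_to_bug(lts, bug=4321, seed=s) for s in range(20)]
        found = [t for t in trials if t is not None]
        print(f'found in {len(found)}/20 runs, '
              f'mean steps {sum(found) / max(len(found), 1):.0f}')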

    Explainable digital forensics AI: Towards mitigating distrust in AI-based digital forensics analysis using interpretable models

    Courts, legal practitioners, and the general public currently express a notable, and understandable, level of skepticism towards Artificial Intelligence (AI) based digital evidence extraction techniques. Concerns have been raised about the transparency of closed-box AI models and their suitability for use in digital evidence mining. While AI models are firmly rooted in mathematical, statistical, and computational theories, the argument has centered on their explainability and understandability, particularly in terms of how they arrive at their conclusions. This paper examines the issues with closed-box models, the goals of explainability/interpretability, and the methods for achieving it. Most importantly, recommendations for interpretable AI-based digital forensics (DF) investigation are proposed.
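
    One standard interpretability route, sketched: approximate a closed-box classifier with a global surrogate decision tree whose rules an examiner can read and challenge. This sketch assumes scikit-learn and synthetic data; it is an illustration of the general technique, not the paper's method or dataset.

        # Global surrogate: fit a small, readable decision tree to mimic a
        # closed-box model's predictions, so its reasoning can be inspected.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

        black_box = RandomForestClassifier(n_estimators=200,
                                           random_state=0).fit(X, y)

        # Train the surrogate on the black box's outputs, not the true labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # Fidelity: how often the readable tree agrees with the black box.
        fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
        print(f'surrogate fidelity to black box: {fidelity:.2%}')
        print(export_text(surrogate,
                          feature_names=[f'f{i}' for i in range(8)]))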