50 research outputs found

    Synthesis of behavioral models from scenarios


    Executable Model Synthesis and Property Validation for Message Sequence Chart Specifications

    Message sequence charts (MSCs) are a formal language for the specification of scenarios in concurrent real-time systems. The thesis addresses the synthesis of executable object-oriented design-time models from MSC specifications. The synthesis integrates with the software development process; its purpose is to automatically create working prototypes from specifications without error and to create executable models on which properties may be validated. The usefulness of existing algorithms for the synthesis of ROOM (Real-Time Object Oriented Modeling) models from MSCs has been evaluated from the perspective of an applications programmer according to various criteria. A number of new synthesis features have been proposed to address the identified shortcomings and applied to a telephony call management system for illustration. These include the specification and construction of hierarchical structure and behavior of ROOM actors, views, multiple containment, replication, resolution of non-determinism, and automatic coordination. Generalizations and algorithms have been provided. The hierarchical actor structure, replication, FSM merging, and global coordinator algorithms have been implemented in the Mesa CASE tool. A comparison is made to other specification and modeling languages and their synthesis, such as SDL, LSCs, and statecharts. Another application of synthesis is to generate a model with support for the automated validation of safety and liveness properties. The Mobility Management services of the GSM digital mobile telecommunications system were specified in MSCs. A Promela model of the system was then synthesized. A number of optimizations have been proposed to reduce the complexity of the model in order to successfully perform a validation of it. Properties of the system were encoded in Linear Temporal Logic, and the Promela model was used to automatically validate a number of identified properties using the model checker Spin. A ROOM model was then synthesized from the validated MSC specification using the proposed refinement features.
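    As a purely illustrative aside (the propositions below are hypothetical and not taken from the thesis), a liveness property of the kind validated with Spin could be written in LTL as

        \[ \square\,(\mathit{attach\_requested} \rightarrow \lozenge\,\mathit{attach\_accepted}) \]

    that is, every attach request is eventually answered, while a safety property would take the form \( \square\,\neg\,\mathit{error\_state} \), asserting that a bad state is never reached.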

    Rigorous object-oriented analysis

    Object-oriented methods for analysis, design and programming are commonly used by software engineers. Formal description techniques, however, are mainly used in a research environment. We have investigated how rigour can be introduced into the analysis phase of the software development process by combining object-oriented analysis (OOA) methods with formal description techniques. The main topics of this investigation are a formal interpretation of the OOA constructs using LOTOS, a mathematical definition of the basic OOA concepts using a simple denotational semantics, and a new method for object-oriented analysis that we call the Rigorous Object-Oriented Analysis method (ROOA). The LOTOS interpretation of the OOA concepts is an intrinsic part of the ROOA method. It was designed in such a way that software engineers with no experience in LOTOS can still use ROOA. The denotational semantics of the concepts of object-oriented analysis illuminates the formal syntactic transformations within ROOA and guarantees that the basic object-oriented concepts can be understood independently of the specification language we use. The ROOA method starts from a set of informal requirements and an object model and produces a formal object-oriented analysis model that acts as a requirements specification. The resulting formal model integrates the static, dynamic and functional properties of a system, in contrast to existing OOA methods, which are informal and produce three separate models that are difficult to integrate and keep consistent. ROOA provides a systematic development process by proposing a set of rules to be followed during the analysis phase. During the application of these rules, auxiliary structures are created to help in tracing the requirements through to the final formal model. As LOTOS produces executable specifications, prototyping can be used to check the conformance of the specification against the original requirements and to detect inconsistencies, omissions and ambiguities early in the development process.

    Model-Based Systems Engineering Approach to Distributed and Hybrid Simulation Systems

    INCOSE defines Model-Based Systems Engineering (MBSE) as the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities, beginning in the conceptual design phase and continuing throughout development and later life cycle phases. One very important development is the use of MBSE to develop distributed and hybrid (discrete-continuous) simulation modeling systems. MBSE can help describe the systems to be modeled and help make the right decisions and partitions to tame complexity. The ability to embrace conceptual modeling and interoperability techniques during systems specification and design presents a great advantage in distributed and hybrid simulation systems development efforts. Our research is aimed at the definition of a methodological framework that uses MBSE languages, methods and tools for the development of these simulation systems. A model-based composition approach is defined at the initial steps to identify distributed systems interoperability requirements and hybrid simulation systems characteristics. Guidelines are developed to adopt simulation interoperability standards and conceptual modeling techniques using MBSE methods and tools. Domain-specific system complexity and behavior can be captured with model-based approaches during the definition of the system architecture and functional design requirements. MBSE allows simulation engineers to formally model different aspects of a problem, ranging from architectures to corresponding behavioral analysis, functional decompositions and user requirements (Jobe, 2008).

    Facilitating the modelling and automated analysis of cryptographic protocols

    Multi-dimensional security protocol engineering is effective for creating cryptographic protocols, since it encompasses a variety of design, analysis and deployment techniques, thereby providing a higher level of confidence than individual approaches. SPEAR II, the Security Protocol Engineering and Analysis Resource, is a protocol engineering tool built on the foundation of previous experience garnered during the SPEAR I project in 1997. The goal of the SPEAR II tool is to facilitate cryptographic protocol engineering and aid users in distilling the critical issues during an engineering session by presenting them with an appropriate level of detail and guiding them as much as possible. The SPEAR II tool currently consists of four components that have been created as part of this dissertation and integrated into one consistent and unified graphical interface: a protocol specification environment (GYPSIE), a GNY statement construction interface (Visual GNY), a Prolog-based GNY analysis engine (GYNGER) and a message rounds calculator.
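    For orientation, the statements constructed in Visual GNY and analysed by GYNGER are formulas of the GNY belief logic; a hedged, generic example (the principals, key and nonce are invented for this illustration, not drawn from the dissertation) is

        \[ A \lhd \{N_a\}_{K_{ab}}, \qquad A \ni K_{ab}, \qquad A \mid\!\equiv \#(N_a) \]

    stating, respectively, that principal A is told a message encrypted under the shared key K_ab, that A possesses that key, and that A believes the nonce N_a to be fresh.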

    MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis

    There is no doubt that advanced artificial intelligence models and high-quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue when it comes to a specific task, due to several reasons including privacy and ethical issues with patient data. In this work, we propose to exploit knowledge distillation, i.e., utilizing an existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student-teacher framework to learn a target model from a pre-trained teacher model without direct access to source data, and distill relevant knowledge via momentum contrastive learning with a multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios in which the teacher model was trained on the same, a relevant, or an irrelevant classification task with respect to the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology. Code is available at: https://github.com/trinhvg/MoMA.
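    As a rough, hypothetical PyTorch sketch of the general recipe the abstract describes, the fragment below combines a frozen teacher, a momentum-updated copy of the student that supplies contrastive keys, multi-head attention from student queries over teacher features, and an InfoNCE-style loss. The encoder shapes, hyperparameters and random inputs are assumptions made for illustration only and do not reproduce the authors' MoMA implementation; see the linked repository for the actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Small stand-in encoders; a real pipeline would use pathology image backbones.
    def make_encoder(dim_in=512, dim_out=128):
        return nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out))

    class MomentumDistiller(nn.Module):
        """Student-teacher distillation: a momentum-updated copy of the student
        supplies contrastive keys, and multi-head attention lets student queries
        attend to frozen teacher features before the InfoNCE loss is computed."""
        def __init__(self, dim=128, queue_size=1024, momentum=0.99, temperature=0.07):
            super().__init__()
            self.student = make_encoder(dim_out=dim)           # trained
            self.momentum_student = make_encoder(dim_out=dim)  # EMA copy, provides keys
            self.teacher = make_encoder(dim_out=dim)           # frozen, "pre-trained"
            self.momentum_student.load_state_dict(self.student.state_dict())
            for p in list(self.momentum_student.parameters()) + list(self.teacher.parameters()):
                p.requires_grad = False
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            self.m, self.t = momentum, temperature
            # Queue of negative keys, as in MoCo-style momentum contrastive learning.
            self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))

        @torch.no_grad()
        def _update_momentum_encoder(self):
            for p, mp in zip(self.student.parameters(), self.momentum_student.parameters()):
                mp.data.mul_(self.m).add_(p.data, alpha=1.0 - self.m)

        def forward(self, x_q, x_k):
            q = self.student(x_q)                               # student query features (B, dim)
            with torch.no_grad():
                t_feat = self.teacher(x_q)                      # frozen teacher features
            # Student queries attend to teacher features (context-aware refinement).
            q_ctx, _ = self.attn(q.unsqueeze(1), t_feat.unsqueeze(1), t_feat.unsqueeze(1))
            q = F.normalize(q + q_ctx.squeeze(1), dim=1)

            with torch.no_grad():
                self._update_momentum_encoder()
                k = F.normalize(self.momentum_student(x_k), dim=1)  # positive keys

            # InfoNCE: the matching key is the positive, queue entries are negatives.
            l_pos = (q * k).sum(dim=1, keepdim=True)            # (B, 1)
            l_neg = q @ self.queue.t()                          # (B, K)
            logits = torch.cat([l_pos, l_neg], dim=1) / self.t
            labels = torch.zeros(q.size(0), dtype=torch.long)   # positives sit in column 0
            loss = F.cross_entropy(logits, labels)

            # Enqueue the newest keys (simple FIFO approximation for the sketch).
            self.queue = torch.cat([k.detach(), self.queue])[: self.queue.size(0)]
            return loss

    # Toy usage: random vectors stand in for two augmented views of an image patch.
    model = MomentumDistiller()
    opt = torch.optim.SGD(list(model.student.parameters()) + list(model.attn.parameters()), lr=0.03)
    loss = model(torch.randn(8, 512), torch.randn(8, 512))
    loss.backward()
    opt.step()
    print(f"distillation loss: {loss.item():.4f}")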

    Assistance in Model Driven Development: Toward an Automated Transformation Design Process

    Model driven engineering aims to shorten the development cycle by focusing on abstractions and partially automating code generation. We have long lived with the myth of automatic Model Driven Development (MDD), supported by promising approaches, techniques, and tools. Describing models should be a main concern in software development, as should model verification and model transformation to obtain running applications from high-level models. We revisit the subject of MDD through the prism of experimentation and open-mindedness. In this article, we explore assistance for the stepwise transition from model to code in order to reduce the time between the analysis model and the implementation. The current state of practice requires methods and tools. We provide a general process and detailed transformation specifications in which reverse engineering may play its part. We advocate a model transformation approach in which the transformations themselves remain simple; the complexity lies in the transformation process, which is adaptable and configurable. We demonstrate the usefulness and scalability of our proposed MDD process through experiments on a simple case study in software automation systems that is both representative and scalable. The models are written in UML, the transformations are implemented mainly using ATL, and the programs are deployed on Android and Lego EV3. Lastly, we report the lessons learned from the experimentation for future community work.
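    The article implements its transformations mainly in ATL; purely as a language-neutral illustration of the stated division of labour (individual transformation rules stay simple, while the configurable process carries the complexity), the hypothetical Python sketch below maps an invented platform-independent class element to an invented Android-flavoured element through a configurable pipeline of rules. The element names and structures are assumptions, not the article's metamodels.

    from dataclasses import dataclass, field
    from typing import Callable, List

    # Hypothetical, simplified source (platform-independent) and target
    # (platform-specific) model elements; the article itself works with UML and ATL.
    @dataclass
    class UmlClass:
        name: str
        operations: List[str] = field(default_factory=list)

    @dataclass
    class AndroidActivity:
        class_name: str
        handlers: List[str] = field(default_factory=list)

    # One deliberately simple transformation rule: PIM class -> Android activity skeleton.
    def class_to_activity(cls: UmlClass) -> AndroidActivity:
        return AndroidActivity(
            class_name=f"{cls.name}Activity",
            handlers=[f"on{op.capitalize()}" for op in cls.operations],
        )

    Rule = Callable[[object], object]

    def run_process(model: object, rules: List[Rule]) -> object:
        # The adaptable part: which rules run, and in what order, is configuration.
        for rule in rules:
            model = rule(model)
        return model

    if __name__ == "__main__":
        pim = UmlClass("Conveyor", operations=["start", "stop"])
        psm = run_process(pim, [class_to_activity])
        print(psm)  # AndroidActivity(class_name='ConveyorActivity', handlers=['onStart', 'onStop'])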

    COMPONENT TECHNOLOGIES AND THEIR IMPACT UPON SOFTWARE DEVELOPMENT

    Software development is beset with problems relating to development productivity, resulting in projects delivered late and over budget. While the term software engineering was first introduced in the late sixties, its current state bears little resemblance to that of other engineering disciplines. Component-orientation has been proposed as a technique to address the problems of development productivity, and much industrial literature extols the benefits of a component-oriented approach to software development. This research programme assesses the use of component technologies within industrial software development. From this assessment, consideration is given to how organisations can best adopt such techniques. Initial work focuses upon the nature of component-orientation, drawing from the considerable body of industrial literature in the area. Conventional wisdom regarding component-orientation is identified from the review. Academic literature relevant to the research programme focuses upon knowledge regarding the assessment of software technologies and models for the adoption of emergent technologies. The method pays particular attention to literature concerning practitioner-focussed research, in particular case studies. The application of the case study method is demonstrated. The study of two industrial software development projects enables an examination of specific propositions related to the effect of using component technologies. Each case study is presented, and the impact of component-orientation in each case is demonstrated. Theories regarding the impact of component technologies upon software development are drawn from the case study results. These theories are validated through a survey of practitioners, which enabled further examination of experience in component-based development and of how developers learn about the techniques. A strategy for the transfer of research findings into organisational knowledge focuses upon packaging previous experience in the use of component-orientation in such a way that it is usable by other developers. This strategy returns to adoption theories in light of the research findings and identifies a pattern-based approach as the most suitable for the research aims. A pattern language, placed in the context of the research programme, is developed from this strategy. The research demonstrates that component-orientation undoubtedly affects the development process, and that it is necessary to challenge conventional wisdom regarding its use. While component-orientation provides the mechanisms for increased productivity in software development, these benefits cannot be exploited without a sound knowledge base around the domain.