
    Applying model-based systems engineering in search of quality by design

    2022 Spring. Includes bibliographical references.
    Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. The application of these techniques can be seen in the modeling of requirements, functions, behavior, and many other aspects. The modeled design provides a digital representation of a system, together with the supporting development data architecture and the functional requirements associated with that architecture. Various levels of system and data-architecture fidelity can be represented within MBSE environment tools. Typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that represent a system meeting cost and schedule targets with sufficient quality while maintaining the customer's performance needs. The most complex and elusive of these constraints is defining system requirements with a focus on quality: given a certain set of system-level requirements, quality here is the likelihood that those requirements will be correctly and accurately realized in the final system design. This research investigates specifically the Department of Defense Architecture Framework (DoDAF) in use today to establish and then assess the relationship between the system, the data architecture, and the requirements in terms of Quality by Design (QbD). The term QbD was first coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1].
This research investigates and proposes a means to: contextualize high-level quality terms within the MBSE functional area; outline a conceptual but functional quality framework as it pertains to the MBSE DoDAF; provide tailored quality metrics with improved definitions; and test this improved quality framework through two corresponding case-study analyses within the MBSE functional area that interrogate model architectures and assess the quality of system design. Developed in the early 2000s, the Department of Defense Architecture Framework (DoDAF) is still in use today, and its system description methodologies continue to influence subsequent system description approaches [2]. Two case studies were analyzed to demonstrate the proposed QbD evaluation of DoDAF CONOP architecture quality. The first case study analyzes the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second case study analyzes the DoDAF CONOP of a Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies help to demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architectures in depth, including the application of the DoDAF standard and the UML/SysML standards, requirement architecture instantiation, and modularity as a lens on architecture reusability and complexity. By providing a renewed focus on a quality-based systems engineering process when applying the DoDAF, improved trust in the system and data architecture of the completed models can be achieved.
The results of the case study analyses reveal how a quality-focused systems engineering process can be used during development to provide a product design that better meets the customer's intent and ultimately offers the potential for the best-quality product.

    A holistic model of emergency evacuations in large, complex, public occupancy buildings

    Evacuations are crucial for ensuring the safety of building occupants in the event of an emergency. In large, complex, public occupancy buildings (LCPOBs), these procedures are significantly more complex than the simple withdrawal of people from a building. This thesis develops a novel, holistic, theoretical model of emergency evacuations in LCPOBs inspired by systems safety theory. LCPOBs are integral components of complex socio-technical systems, and the model therefore describes emergency evacuations as control actions initiated to return the building from an unsafe state to a safe state in which occupants are not at risk of harm. The emergency evacuation process itself comprises four aspects - the movement of building occupants, planning and management, environmental features, and evacuee behaviour. To demonstrate its utility and applicability, the model has been employed to examine various aspects of evacuation procedures in two example LCPOBs - airport terminals and sports stadiums. The types of emergency events initiating evacuations in these buildings were identified through a novel hazard analysis procedure, which used online news articles to create databases of previous evacuation events. Security and terrorism events, false alarms, and fires were found to be the most common causes of evacuations in these buildings. The management of evacuations was explored through model-based systems engineering techniques, which identified the communication methods and responsibilities of staff members managing these events. Social media posts for an active-shooting event were analysed using qualitative and machine learning methods to determine their utility for situational awareness; this data source is likely not informative for that purpose, as few posts detail occupant behaviours.
Finally, an experimental study on pedestrian dynamics with movement devices was conducted, which determined that walking speeds during evacuations were unaffected by evacuees dragging luggage, but that those pushing pushchairs and wheelchairs walked significantly slower.

    A Trust-by-Design Framework for the Internet of Things

    The Internet of Things (IoT) is an environment where interconnected entities can interact and can be identifiable, usable, and controllable via the Internet. In order to interact, however, such IoT entities must trust each other. Trust is difficult to define because it concerns different aspects and is strongly dependent on the context. For this reason, a holistic approach allowing developers to consider and implement trust in the IoT is highly desirable. Nevertheless, trust is usually considered among IoT entities only at the moment they have to interact. Without considering it during the whole System Development Life Cycle (SDLC), security issues may arise: lacking a clear conception of the possible threats during the development of an IoT entity, planning can be insufficient to protect that entity. For this reason, we believe it is fundamental to consider trust during the whole SDLC in order to carefully plan how an IoT entity will make trust decisions and interact with other IoT entities. To fulfill this goal, this thesis proposes a trust-by-design framework for the IoT composed of a K-Model and several transversal activities. On the one hand, the K-Model covers the SDLC from the need phase to the utilization phase; on the other hand, the transversal activities are implemented differently depending on the phase. A fundamental aspect of this framework is the relationship that trust has with related domains such as security and privacy; we therefore also consider these domains and their characteristics in order to develop a trusted IoT entity.
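How an entity might carry out such a run-time trust decision can be illustrated with a minimal sketch, assuming a simple exponential-moving-average update and a fixed acceptance threshold. Both choices, and all the numbers, are illustrative assumptions, not the framework's actual trust model:

```python
def update_trust(prior: float, outcome: float, weight: float = 0.2) -> float:
    """Blend prior trust with the latest interaction outcome (0.0-1.0).

    `weight` controls how fast trust reacts to new evidence; 0.2 is an
    arbitrary illustrative value."""
    return (1 - weight) * prior + weight * outcome

def willing_to_interact(trust: float, threshold: float = 0.6) -> bool:
    """Interact only with peers whose trust meets a fixed threshold."""
    return trust >= threshold

# A peer starts at neutral trust and is observed over four interactions
# (1.0 = successful interaction, 0.0 = failed or malicious one).
trust = 0.5
for outcome in [1.0, 1.0, 0.0, 1.0]:
    trust = update_trust(trust, outcome)
```

Deciding these parameters (initial trust, update weight, acceptance threshold) early in the life cycle, rather than at integration time, is the kind of planning a whole-SDLC approach makes explicit.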

    Evolutionary Design Model for the design of complex engineered systems : Masdar City as a case study

    Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. "September 2009." Cataloged from PDF version of thesis. Includes bibliographical references (p. 150-157).
    This thesis develops a framework for constructing an Evolutionary Design Model (EDM) that enhances the design of complex systems through an efficient process. The framework proposed is generic and suggests a group of systematic methodologies that eventually lead to a fully realized and integrated design model. Within this model, complexities of the design are handled and the uncertainties of the design evolution are managed. Using the framework, vast design spaces can be searched while solutions are intelligently modified, their performance evaluated, and their results aggregated into a compatible set for design decisions. The EDM is composed of several design states as well as design evolving processes. A design state describes a design at a particular point in time, maps the system's object to the system's requirements, and identifies its relation to the context in which the system will operate. A design evolving process involves many sub-processes, including formulation, decomposition, modeling, and integration. These sub-processes are not always carried out in a sequential manner; rather, a continuous move back and forth between previous and subsequent stages is expected. The resulting design model is described as an evolutionary model that moves a system's design from simple abstract states to more complex and detailed states throughout its evolution. The framework utilizes system modeling methodologies that include both logical and mathematical modeling methods. The type of model used within the EDM's evolving processes is highly dependent on and driven by the design needs of each process. As the design progresses, a shift from logical models to mathematical models occurs within the EDM.
Finally, a partial EDM is implemented within the context of a computational design system for Masdar City to demonstrate the application of the proposed framework. By Anas Alfaris. S.M.
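The evolving process described above, in which solutions are modified, evaluated, and aggregated into the next design state, can be sketched as a minimal evolutionary loop. The vector encoding, the toy fitness function, and the mutation operator below are illustrative assumptions, not the EDM's actual formulation:

```python
import random

def fitness(design):
    # Toy objective: minimize squared distance to a target configuration.
    target = [3, 1, 4, 1, 5]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design, rate=0.5):
    # Intelligently modify a solution: small random tweaks per parameter.
    return [d + random.choice([-1, 0, 1]) if random.random() < rate else d
            for d in design]

def evolve(pop_size=20, generations=100, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 9) for _ in range(5)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)   # evaluate performance
        survivors = population[: pop_size // 2]      # select a compatible set
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring           # aggregate next state
    return max(population, key=fitness)

best = evolve()
```

Each pass through the loop corresponds to one design evolving process, carrying the population from a simple abstract state toward a more refined one.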

    Method of Information Security Risk Analysis for Virtualized System

    The growth of Information Technology (IT) usage in the daily operations of enterprises has put the value and the vulnerability of information at the peak of interest. Moreover, distributed computing has revolutionized the outsourcing of computing functions, thus allowing flexible IT solutions. Since the concept of information goes beyond traditional text documents, reaching manufacturing, machine control, and, to a certain extent, reasoning, maintaining appropriate information security is a great responsibility. Information Security (IS) risk analysis and maintenance require extensive knowledge about the possessed assets as well as the technologies behind them in order to recognize the threats and vulnerabilities the infrastructure is facing. A formal description of the infrastructure, the Enterprise Architecture (EA), offers a multiperspective view of the whole enterprise, linking together business processes as well as the infrastructure. Several IS risk analysis solutions based on the EA exist. However, the lack of IS risk analysis methods for virtualization technologies complicates the procedure, thus reducing the availability of such analysis. The dissertation consists of an introduction, three main chapters, and general conclusions. The first chapter introduces the problem of information security risk analysis and its automation, and discusses state-of-the-art methodologies and their implementations for automated information security risk analysis. The second chapter proposes a novel method for the risk analysis of virtualization components based on the most recent data, including threat classification and specification, control means, and metrics of the impact. The third chapter presents an experimental evaluation of the proposed method, implementing it in the Cyber Security Modeling Language (CySeMoL) and comparing the analysis results to well-calibrated expert knowledge.
It was concluded that the automation of virtualization-solution risk analysis provides sufficient data for the adjustment and implementation of security controls to maintain an optimum security level.
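The core bookkeeping of such an analysis, scoring each threat against a virtualization component and ranking components by aggregated risk, can be sketched as follows. The component names, the (likelihood, vulnerability, impact) tuples, and the worst-case aggregation rule are illustrative assumptions, not the dissertation's or CySeMoL's actual calculus:

```python
def risk_score(likelihood: float, vulnerability: float, impact: float) -> float:
    """Classic multiplicative risk estimate for a single threat."""
    return likelihood * vulnerability * impact

# Hypothetical threat data per virtualization component:
# (likelihood 0-1, vulnerability 0-1, impact 0-10).
components = {
    "hypervisor": [(0.3, 0.8, 9), (0.1, 0.5, 10)],
    "guest-vm": [(0.6, 0.4, 5)],
}

def component_risk(threats) -> float:
    """Aggregate a component's risk as its worst-case threat score."""
    return max(risk_score(l, v, i) for l, v, i in threats)

# Rank components so security controls can target the riskiest first.
ranked = sorted(components, key=lambda c: component_risk(components[c]),
                reverse=True)
```

The ranking is the output an analyst would use to prioritize which controls to adjust or implement first.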

    Towards a Model-Centric Software Testing Life Cycle for Early and Consistent Testing Activities

    The constant improvement of available computing power nowadays enables the accomplishment of more and more complex tasks. The resulting increase in the complexity of the hardware and software solutions realizing the desired functionality requires constant improvement of the development methods used. On the one hand, the share of agile development practices, as well as test-driven development, has increased over the last decades. On the other hand, this trend creates the need to reduce complexity with suitable methods. At this point, the concept of abstraction comes into play, which manifests itself in model-based approaches such as MDSD or MBT. The thesis is motivated by the fact that the earliest possible detection and elimination of faults has a significant influence on product costs. Therefore, a holistic approach is developed in the context of model-driven development that allows testing to be applied already in early phases and especially to the model artifacts, i.e. it provides a shift left of the testing activities. To comprehensively address the complexity problem, a model-centric software testing life cycle is developed that maps the process steps and artifacts of classical testing to the model level. The conceptual basis is first created by putting the available model artifacts of all domains into context. In particular, structural mappings are specified across the included domain-specific model artifacts to establish a sufficient basis for all the process steps of the life cycle. In addition, a flexible metamodel including operational semantics is developed, which enables experts to carry out an abstract test execution on the model level. Based on this, approaches for test case management, automated test case generation, evaluation of test cases, and quality verification of test cases are developed.
In the context of test case management, a mechanism is realized that enables the selection, prioritization, and reduction of Test Model artifacts usable for test case generation. That is, a targeted set of test cases is generated that satisfies quality criteria such as coverage at the model level. These quality requirements are verified using a mutation-based analysis of the identified test cases, which builds on the model basis. As the last step of the model-centric software testing life cycle, two approaches are presented that allow an abstract execution of the test cases in the model context through structural analysis and a form of model interpretation based on data-flow information. All the approaches are placed in the context of related work and examined for their feasibility by means of a prototypical implementation within the Architecture And Analysis Framework. Subsequently, the described approaches and their concepts are evaluated both qualitatively and quantitatively. Moreover, case studies show the practical applicability of the approach.
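The selection and reduction step described above can be viewed as a set-cover problem: choose few test cases whose combined coverage of model elements satisfies the coverage criterion. A standard greedy heuristic, shown here as an illustrative sketch rather than the thesis's actual mechanism, looks like this:

```python
def greedy_select(test_coverage: dict, required: set) -> list:
    """Greedily pick tests until all required model elements are covered.

    `test_coverage` maps a test-case name to the set of model elements
    it exercises; `required` is the coverage goal."""
    selected, covered = [], set()
    while not required <= covered:
        # Pick the test adding the most not-yet-covered required elements.
        best = max(test_coverage,
                   key=lambda t: len((test_coverage[t] & required) - covered))
        gain = (test_coverage[best] & required) - covered
        if not gain:
            break  # remaining required elements are uncoverable
        selected.append(best)
        covered |= gain
    return selected

# Hypothetical Test Model artifacts and the model elements they cover.
tests = {"t1": {"A", "B"}, "t2": {"B", "C", "D"}, "t3": {"D"}}
chosen = greedy_select(tests, required={"A", "B", "C", "D"})
```

Prioritization fits the same structure: replacing the pure coverage gain with a weighted score (e.g. gain per execution cost) orders the selected tests by value.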