
    Applying model-based systems engineering in search of quality by design

    Model-Based Systems Engineering (MBSE) and Model-Based Engineering (MBE) techniques have been successfully introduced into the design process of many different types of systems. These techniques can be applied to the modeling of requirements, functions, behavior, and many other system aspects. The modeled design provides a digital representation of a system, together with the supporting development data architecture and the functional requirements associated with that architecture. Various levels of system and data-architecture fidelity can be represented within MBSE environment tools; typically, the level of fidelity is driven by crucial systems engineering constraints such as cost, schedule, performance, and quality. Systems engineering uses many methods to develop system and data architectures that meet cost and schedule targets with sufficient quality while satisfying customer performance needs. The most complex and elusive of these constraints is quality: given a certain set of system-level requirements, the likelihood that those requirements will be correctly and accurately realized in the final system design.
This research investigates the Department of Defense Architecture Framework (DoDAF) in use today to establish, and then assess, the relationship between the system, the data architecture, and the requirements in terms of Quality by Design (QbD). The term QbD was coined in 1992 in Quality by Design: The New Steps for Planning Quality into Goods and Services [1]. The research proposes a means to contextualize high-level quality terms within the MBSE functional area, outline a conceptual but functional quality framework for the MBSE DoDAF, provide tailored quality metrics with improved definitions, and test the improved quality framework by assessing two case studies within the MBSE functional area, interrogating model architectures to evaluate the quality of system designs. Developed in the early 2000s, DoDAF is still in use today, and its system description methodologies continue to influence subsequent system description approaches [2].
Two case studies were analyzed to demonstrate the proposed QbD evaluation of DoDAF CONOP architecture quality. The first addresses the DoDAF CONOP of the National Aeronautics and Space Administration (NASA) Joint Polar Satellite System (JPSS) ground system for the National Oceanic and Atmospheric Administration (NOAA) satellite system, with particular focus on the Stored Mission Data (SMD) mission thread. The second addresses the DoDAF CONOP of a Search and Rescue (SAR) naval rescue operation network System of Systems (SoS), with particular focus on the Command and Control signaling mission thread. The case studies demonstrate a new DoDAF Quality Conceptual Framework (DQCF) as a means to investigate the quality of DoDAF architectures in depth, covering the application of the DoDAF standard and the UML/SysML standards, requirement architecture instantiation, and modularity as a measure of architecture reusability and complexity.
By bringing a renewed focus on a quality-based systems engineering process to the application of the DoDAF, improved trust in the system and data architecture of the completed models can be achieved. The results of the case study analyses show how a quality-focused systems engineering process can be used during development to produce a design that better meets the customer's intent and ultimately offers the potential for the best quality product.
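
The DQCF's use of modularity as a lens on architecture reusability and complexity suggests a simple, computable indicator. Below is a minimal Python sketch of one such indicator under stated assumptions: it scores each module of an architecture by the share of its dependency links that stay inside the module. The function name, the scoring rule, and the toy architecture are illustrative inventions, not part of the DQCF itself.

# Hypothetical sketch: scoring the modularity of a model architecture by
# comparing internal (intra-module) links against all links touching each
# module. Names and the scoring rule are illustrative, not from the DQCF.

from collections import defaultdict

def modularity_ratio(elements, links):
    """elements: {element_id: module_name}; links: [(src, dst), ...].
    Returns per-module ratio of internal links to total links touching it."""
    internal = defaultdict(int)
    total = defaultdict(int)
    for src, dst in links:
        same = elements[src] == elements[dst]
        for module in {elements[src], elements[dst]}:
            total[module] += 1
            if same:
                internal[module] += 1
    return {m: internal[m] / total[m] for m in total}

# Toy architecture: two modules with one cross-module dependency.
elements = {"sensor": "ground", "downlink": "ground", "planner": "ops"}
links = [("sensor", "downlink"), ("downlink", "planner")]
print(modularity_ratio(elements, links))  # ground: 0.5, ops: 0.0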

    Research routing and MAC based on LEACH and S-MAC for energy efficiency and QoS in wireless sensor network

    The wireless sensor is a micro-embedded device with weak data-processing capability and small storage space, yet its nodes must complete complex jobs, including data monitoring, acquisition and conversion, and data processing. Energy efficiency should therefore be considered one of the most important aspects of Wireless Sensor Network (WSN) architecture and protocol design. At the same time, supporting Quality of Service (QoS) in WSNs is an active research field, because time-sensitive and important information is expected to be transmitted to the sink node immediately. The thesis is supported by the projects entitled “The information and control system for preventing forest fires” and “The Erhai information management system”, funded by the Chinese Government; energy consumption and QoS are the two main objectives of these projects. The thesis addresses both aspects at the routing and Media Access Control (MAC) layers.
For energy efficiency, the research builds on the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol. LEACH is a benchmark clustering routing protocol that requires cluster heads to perform extensive aggregation and to relay messages to the base station. However, LEACH has limitations: its clustering strategy does not suit wide areas, and it lacks multi-hop routing. Moreover, existing routing protocols tend to focus on a single factor; combining a clustering strategy with a multi-hop routing mechanism to improve network performance had not been considered. QoS is supported by the MAC and routing protocols. Sensor MAC (S-MAC) uses a periodic listen/sleep mechanism, together with collision and crosstalk avoidance; this reduces energy costs while supporting good scalability and avoiding collisions. However, these protocols do not provide differentiated services. To support QoS, a new routing protocol needed to be designed and realized on an embedded platform with Wi-Fi and an embedded Linux operating system, so that it could be applied to the actual systems.
The research proceeded in the following steps. First, a new protocol called RBLEACH is proposed, based on LEACH, to solve clustering over a wide area: the area is divided into sub-areas, the cluster-head selection function is improved within each, and routes are then selected by a new algorithm to optimize network performance. Second, PS-ACO-LEACH is a new clustering method that selects cluster heads using several factors, including the residual energy of the candidate head and the Euclidean distances between cluster members and the head; it optimizes a fitness function and maintains load balance among the cluster-head nodes and between the cluster heads and the base station. On top of this, a new routing protocol based on the Ant Colony algorithm and its transition probability uses pheromone levels to find optimal paths from the cluster heads to the base station, reducing both the energy consumption of cluster heads and its imbalance. Simulations show that the improved protocols enhance network performance, including lifetime and energy conservation. Third, the Multi-Index Adaptive Routing Algorithm (MIA-QR) was designed around network delay, packet-loss rate, and signal strength to support QoS; it was implemented in VC on an embedded Linux system, and experiments verify that it supports QoS.
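
The ant-colony routing step lends itself to a small illustration. The sketch below shows the classic ACO transition rule the abstract alludes to, where a cluster head picks its next hop with probability proportional to pheromone**alpha * heuristic**beta. The parameter values, the 1/distance heuristic, and the toy topology are assumptions for illustration, not the thesis's actual design.

# A minimal sketch of ant-colony next-hop selection: a cluster head picks
# its relay with the classic ACO transition probability
# p(i,j) ∝ tau(i,j)**alpha * eta(i,j)**beta, where tau is the pheromone
# level and eta a heuristic (here 1/distance). Illustrative only.

import math
import random

def choose_next_hop(current, candidates, pheromone, pos, alpha=1.0, beta=2.0):
    """Pick the next cluster head en route to the base station.
    pheromone: {(i, j): tau}; pos: {node: (x, y)}."""
    def eta(i, j):
        dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
        return 1.0 / (math.hypot(dx, dy) + 1e-9)   # prefer closer hops
    weights = [pheromone[(current, j)] ** alpha * eta(current, j) ** beta
               for j in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Toy topology: head "A" chooses between relays "B" and "C".
pos = {"A": (0, 0), "B": (1, 0), "C": (3, 0)}
pheromone = {("A", "B"): 0.6, ("A", "C"): 0.4}
print(choose_next_hop("A", ["B", "C"], pheromone, pos))  # usually "B"
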
Finally, an improved protocol for wireless sensor networks, SMAC-SD, is proposed to address the fact that S-MAC considers neither service differentiation nor quality-of-service guarantees. For service differentiation, SMAC-SD adopts a priority-based access mechanism, including priority-dependent channel-access probabilities, channel multi-request mechanisms, waiting queues of different priorities, and per-service RTS backoff. Important services therefore receive a higher channel-access probability, which ensures their transmission quality. Simulation results show that the improved protocol increases the throughput of important services and shortens their delay, while effectively improving overall network performance.
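
The priority-dependent RTS backoff at the heart of SMAC-SD can be pictured with a toy model: higher-priority traffic draws its backoff from a smaller contention window, so it statistically wins channel access. The window sizes and priority classes below are illustrative assumptions, not the protocol's actual parameters.

# Toy illustration of priority-based channel access: a smaller contention
# window means a statistically earlier RTS attempt. Values are invented.

import random

CONTENTION_WINDOW = {"high": 8, "normal": 32, "low": 64}  # in slots

def rts_backoff(priority):
    """Draw a backoff delay (in slots) for an RTS attempt."""
    return random.randrange(CONTENTION_WINDOW[priority])

# Contending nodes: the high-priority node usually wins the channel first.
contenders = [("alarm", "high"), ("telemetry", "normal"), ("log", "low")]
winner = min(contenders, key=lambda n: rts_backoff(n[1]))
print("channel won by:", winner[0])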

    A framework for effective management of condition based maintenance programs in the context of industrial development of E-Maintenance strategies

    CBM (Condition Based Maintenance) solutions are increasingly present in industrial systems due to two main circumstances: a rapid, unprecedented evolution in the capture and analysis of data, and a significant cost reduction in the supporting technologies. CBM programs in industrial systems can become extremely complex, especially when considering the effective introduction of the new capabilities provided by the PHM (Prognostics and Health Management) and E-maintenance disciplines. In this scenario, any CBM solution involves numerous technical aspects that the maintenance manager needs to understand in order to implement the solution properly and effectively, in line with the company’s strategy. This paper provides a comprehensive representation of the key components of a generic CBM solution, presented as a framework, or supporting structure, for the effective management of CBM programs. The concept of a “symptom of failure”, its corresponding analysis techniques (introduced by ISO 13379-1 and linked with RCM/FMEA analysis), and other international standards for CBM open-software application development (for instance, ISO 13374 and OSA-CBM) are used to develop the framework. An original template, adopting the formal structure of RCM analysis templates, has been developed to integrate the information of the PHM techniques used to capture failure-mode behaviour and to manage maintenance. Finally, a case study describes the framework using this template. (Funding: Gobierno de Andalucía P11-TEP-7303.)
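
Since the framework leans on ISO 13374 and OSA-CBM, it may help to recall the six data-processing blocks those standards define: data acquisition, data manipulation, state detection, health assessment, prognostics assessment, and advisory generation. The Python sketch below chains them as a toy pipeline; the vibration feature and threshold are illustrative assumptions, not taken from the paper.

# A minimal sketch of the six ISO 13374 / OSA-CBM processing blocks,
# chained as a toy pipeline. Values and the diagnosis are invented.

def data_acquisition():            # DA: raw sensor readings
    return [0.8, 0.9, 1.4, 1.6]    # e.g. bearing vibration, in mm/s

def data_manipulation(raw):        # DM: filtering / feature extraction
    return sum(raw) / len(raw)     # toy feature: mean level

def state_detection(feature, limit=1.0):    # SD: compare with a limit
    return "alert" if feature > limit else "normal"

def health_assessment(state):      # HA: diagnose the failure mode
    return "possible bearing wear" if state == "alert" else "healthy"

def prognostics_assessment(health):  # PA: estimate remaining useful life
    return "inspect within 30 days" if health != "healthy" else "no action"

def advisory_generation(advice):   # AG: maintenance recommendation
    print("maintenance advisory:", advice)

advisory_generation(
    prognostics_assessment(
        health_assessment(
            state_detection(
                data_manipulation(data_acquisition())))))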

    Towards a generic platform for developing CSCL applications using Grid infrastructure

    The goal of this paper is to explore the possibility of using CSCL component-based software on a Grid infrastructure. The merger of these technologies is attractive, but probably quite laborious, once we consider not only the benefits but also the barriers to be overcome. This work presents a step in that direction: we develop a generic platform of CSCL components and discuss the advantages we could obtain by adapting it to the Grid. We then propose a means of making this adjustment possible, thanks to the high degree of genericity of our component library, which is based on the generic programming paradigm. Finally, an application of the library is proposed, both to validate the adequacy of the platform on which it is based and to indicate the possibilities gained by using it on the Grid.
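
The paper attributes the platform's adaptability to the generic programming paradigm. As a loose illustration only, the Python sketch below shows the core idea: a collaborative component parameterized over the type of event it shares, so the same component serves chat, whiteboards, or any other CSCL tool unchanged. The class names are hypothetical and do not reflect the paper's actual API.

# Hypothetical sketch of a generic CSCL component: the workspace is
# parameterized over the event type E, so one implementation covers many
# collaborative tools. Requires Python 3.9+.

from dataclasses import dataclass, field
from typing import Generic, TypeVar

E = TypeVar("E")  # the collaborative event type the component handles

@dataclass
class SharedWorkspace(Generic[E]):
    """A generic CSCL component: a log of events shared by collaborators."""
    events: list[E] = field(default_factory=list)

    def publish(self, event: E) -> None:
        self.events.append(event)    # in a Grid setting: replicate here

@dataclass
class ChatMessage:
    author: str
    text: str

chat: SharedWorkspace[ChatMessage] = SharedWorkspace()
chat.publish(ChatMessage("ana", "hello group"))
print(len(chat.events))  # 1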

    Complexity-based learning and teaching: a case study in higher education

    This paper presents a learning and teaching strategy based on complexity science and explores its impact on a higher-education game design course. The strategy aimed at generating conditions that foster individual and collective learning in educational complex adaptive systems, and it guided the design of the course through an iterative, adaptive process informed by evidence emerging from course dynamics. The data collected indicate that collaboration was initially challenging for students, but collective learning emerged as the course developed, positively affecting individual and team performance. Even when challenged, students felt highly motivated and enjoyed working on course activities. Their perceptions of progress and expertise remained high throughout, and academic performance was on average very good. The strategy fostered collaboration and allowed students and tutors to deal with complex situations requiring adaptation.

    Smart technologies for effective reconfiguration: the FASTER approach

    Current and future computing systems increasingly require that their functionality stay flexible after the system is operational, in order to cope with changing user requirements and improvements in system features: changing protocols and data-coding standards, evolving demands for support of different user applications, and newly emerging applications in communication, computing, and consumer electronics. Extending the functionality and lifetime of products therefore requires adding new functionality to track and satisfy customers’ needs and market and technology trends. Many contemporary products incorporate hardware accelerators alongside their software for reasons of performance and power efficiency. While adapting software is straightforward, adapting hardware to changing requirements is a challenging problem requiring delicate solutions. The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) project aims at introducing a complete methodology that allows designers to easily implement a system specification on a platform combining a general-purpose processor with multiple accelerators running on an FPGA, taking a high-level description as input and fully exploiting, both at design time and at run time, the capabilities of partial dynamic reconfiguration. The goal is that, for selected application domains, the FASTER toolchain will reduce the design and verification time of complex reconfigurable systems, providing novel verification features not available in existing tool flows.
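
To make the run-time side of partial dynamic reconfiguration concrete, here is a toy software-side sketch: a manager that loads an accelerator bitstream into a reconfigurable region only when the requested accelerator is not already resident. The names and the load step are illustrative; a real FASTER-style flow would drive the FPGA vendor's reconfiguration port and the project's own run-time system.

# Toy run-time reconfiguration manager: reconfigure a partial region only
# on a "miss". All names and the load step are illustrative assumptions.

class ReconfigRegion:
    def __init__(self, name):
        self.name = name
        self.loaded = None          # accelerator currently in the region

    def ensure(self, accel, bitstreams):
        if self.loaded != accel:    # reconfigure only when needed
            print(f"loading {bitstreams[accel]} into {self.name}")
            self.loaded = accel
        return self.loaded

bitstreams = {"fir": "fir_partial.bit", "aes": "aes_partial.bit"}
region = ReconfigRegion("pr0")
for task in ["fir", "fir", "aes"]:  # second 'fir' request avoids a reload
    region.ensure(task, bitstreams)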

    The EPICS Software Framework Moves from Controls to Physics

    The Experimental Physics and Industrial Control System (EPICS) is an open-source software framework for high-performance distributed control, and it is at the heart of many of the world’s large accelerators and telescopes. Recently, EPICS has undergone a major revision aimed at better supporting the computing needs of the next generation of machines and analytical tools. Many new data types, such as matrices, tables, images, and statistical descriptions, plus users’ own data types, now supplement the simple scalar and waveform types of earlier EPICS versions. New computational architectures for scientific computing have been added for high-performance data-processing services and pipelining. Python and Java bindings have enabled powerful new user interfaces. As a result, controls are now being integrated with modelling and simulation, machine learning, enterprise databases, and experiment DAQs. We introduce this new EPICS (version 7) from the perspective of accelerator physics and review early adoption cases at accelerators around the world.
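
As a taste of the Python bindings mentioned above, the sketch below reads and writes process variables over pvAccess using the community p4p client; the PV names are hypothetical and a reachable EPICS 7 IOC is assumed.

# Minimal pvAccess client sketch using p4p (one of the community Python
# bindings for EPICS 7). PV names are hypothetical.

from p4p.client.thread import Context

ctx = Context("pva")                   # pvAccess protocol context
value = ctx.get("DEMO:TEMPERATURE")    # read a (hypothetical) PV
print("current value:", value)
ctx.put("DEMO:SETPOINT", 21.5)         # write a new setpoint
ctx.close()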