
    Validation of highly reliable, real-time knowledge-based systems

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce the support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable knowledge-based systems. The use of a comprehensive methodology for building highly reliable knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

    Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study

    This article has been made available through the Brunel Open Access Publishing Fund. Copyright © 2013 Sivanesan Tulasidas et al. This paper presents a holistic methodology for the design of medical device software, which encompasses a new way of eliciting requirements, a system design process, a security design guideline, cloud architecture design, a combinatorial testing process, and agile project management. The paper uses point-of-care diagnostics as a case study, where the software and hardware must be robust and reliable to provide accurate diagnoses of diseases. As software and software-intensive systems become increasingly complex, the impact of failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain, such failures can result in misdiagnosis, leading to clinical complications and in some cases death. Software faults can arise from the interaction among the software, the hardware, third-party software, and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the significant effort usually expended on the design, verification, and validation of the software system. It is becoming increasingly apparent that one needs to adopt different approaches that will guarantee that a complex software system meets all safety, security, and reliability requirements, in addition to complying with standards such as IEC 62304. Many initiatives have been taken to develop safety- and security-critical systems, at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches are implemented to design error-free software for safety-critical systems. By adopting the strategies and processes presented in this paper, one can overcome the challenges in developing error-free software for medical devices and other safety-critical systems.
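
    The combinatorial testing process mentioned above lends itself to a compact illustration. Below is a minimal sketch of pairwise (2-way) test generation using a greedy heuristic; the device parameters and their values are hypothetical, and the paper's actual process may differ.

        # A minimal sketch of pairwise (2-way) combinatorial test generation,
        # using a greedy heuristic over hypothetical device parameters; not
        # the authors' exact process.
        from itertools import combinations, product

        def pairwise_suite(params):
            """Greedily cover every pair of parameter values at least once."""
            names = list(params)
            # All (param-index, value) pairs not yet covered by any test.
            uncovered = {((i, a), (j, b))
                         for i, j in combinations(range(len(names)), 2)
                         for a in params[names[i]] for b in params[names[j]]}
            suite = []
            while uncovered:
                best, best_gain = None, -1
                for candidate in product(*params.values()):
                    pairs = {((i, candidate[i]), (j, candidate[j]))
                             for i, j in combinations(range(len(names)), 2)}
                    gain = len(pairs & uncovered)
                    if gain > best_gain:
                        best, best_gain = candidate, gain
                suite.append(dict(zip(names, best)))
                uncovered -= {((i, best[i]), (j, best[j]))
                              for i, j in combinations(range(len(names)), 2)}
            return suite

        # Hypothetical point-of-care device configuration space.
        tests = pairwise_suite({
            "assay":   ["glucose", "lactate"],
            "sample":  ["blood", "saliva"],
            "network": ["offline", "cloud"],
        })
        print(len(tests), "tests cover all value pairs")

    For three binary parameters this typically needs only a handful of tests instead of the full eight-case product, which is the point of the technique: coverage of all pairwise interactions at a fraction of the exhaustive cost.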

    Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software

    Software engineering continuously faces the challenges of the growing complexity of software packages and increasing volumes of data on defects and drawbacks from the software production process. This makes a clarion call for inventions and methods that can enable more reusable, reliable, easily maintainable, high-quality software systems with deeper control over the software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process, and implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process, such as better planning, assessment of improvements, resource allocation, and reduction of unpredictability. Early detection of potential problems, productivity evaluation, and evaluation of external quality factors such as reusability, maintainability, defect proneness, and complexity are of utmost importance. Here we discuss the application of CK metrics and an estimation model to predict the external quality parameters, optimizing the design and production processes for desired levels of quality. Estimation of defect proneness in object-oriented systems at the design level is developed using a novel methodology in which models of the relationship between CK metrics and a defect-proneness index are derived. A multifunctional estimation approach captures the correlation between CK metrics and the defect-proneness level of software modules. Comment: 5 pages, 1 figure.
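
    As a rough illustration of how such a model might map CK metrics to a defect-proneness index, here is a minimal sketch with a logistic link; the weights and bias are invented placeholders, not the coefficients fitted in the paper.

        # A minimal sketch of scoring one class's defect proneness from its
        # Chidamber-Kemerer (CK) metrics; weights and bias are hypothetical
        # stand-ins for coefficients one would fit on historical defect data.
        import math

        # CK metrics for one class.
        ck = {"WMC": 21, "DIT": 3, "NOC": 2, "CBO": 9, "RFC": 34, "LCOM": 0.6}

        # Assumed weights (e.g. from a fitted regression) and intercept.
        weights = {"WMC": 0.04, "DIT": 0.10, "NOC": -0.05,
                   "CBO": 0.12, "RFC": 0.02, "LCOM": 0.80}
        bias = -3.0

        # Logistic link maps the weighted metric sum to a 0..1 proneness index.
        z = bias + sum(weights[m] * v for m, v in ck.items())
        proneness = 1.0 / (1.0 + math.exp(-z))
        print(f"defect-proneness index: {proneness:.2f}")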

    Reliability and maintainability assessment factors for reliable fault-tolerant systems

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. The analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. The simulative component is the Gate Logic Software Simulator capability, or GLOSS. This paper describes the numerous factors that potentially degrade system reliability, particularly those peculiar to highly reliable fault-tolerant systems, and the ways in which they are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
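
    To give a flavor of the analytic side of such assessments, here is a minimal sketch of a textbook reliability computation for a triple modular redundant (TMR) triad under an exponential failure law; this is a standard illustrative model, not the CARE III models themselves, and the failure rate is assumed.

        # A minimal sketch of an analytic fault-tolerant reliability model:
        # a 2-of-3 voted triad of identical modules, exponential failure law.
        import math

        def component_reliability(failure_rate, hours):
            """Exponential failure law: R(t) = exp(-lambda * t)."""
            return math.exp(-failure_rate * hours)

        def tmr_reliability(r):
            """A 2-of-3 voted triad survives if at least two modules survive."""
            return 3 * r**2 - 2 * r**3

        r = component_reliability(failure_rate=1e-4, hours=10.0)  # assumed rate
        print(f"single module: {r:.6f}, voted triad: {tmr_reliability(r):.9f}")

    Real assessments of ultrareliable systems must also model imperfect fault coverage, latent faults, and reconfiguration, which is precisely the intractability that tools like CARE III and GLOSS address.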

    A Pattern Language for High-Performance Computing Resilience

    High-performance computing (HPC) systems provide powerful capabilities for modeling, simulation, and data analytics for a broad class of computational problems. They achieve extreme performance, on the order of quadrillions of floating-point operations per second, by aggregating the power of millions of compute, memory, networking, and storage components. With the rapidly growing scale and complexity of HPC systems in pursuit of even greater performance, ensuring their reliable operation in the face of system degradations and failures is a critical challenge. System fault events often lead scientific applications to produce incorrect results, or may even cause their untimely termination. The sheer number of components in modern extreme-scale HPC systems, together with the complex interactions and dependencies among the hardware and software components, the applications, and the physical environment, makes the design of practical solutions that support fault resilience a complex undertaking. To manage this complexity, we developed a methodology for designing HPC resilience solutions using design patterns. We codified, in the form of design patterns, the well-known techniques for handling faults, errors, and failures that have been devised, applied, and improved upon over the past three decades. In this paper, we present a pattern language to enable a structured approach to the development of HPC resilience solutions. The pattern language reveals the relations among the resilience patterns and provides the means to explore alternative techniques for handling a specific fault model that may have different efficiency and complexity characteristics. Using the pattern language enables the design and implementation of comprehensive resilience solutions as a set of interconnected resilience patterns that can be instantiated across layers of the system stack. Comment: Proceedings of the 22nd European Conference on Pattern Languages of Programs.
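
    One of the best-known techniques such a pattern language codifies is checkpoint/restart. The sketch below shows the essence of that pattern; the file name, checkpoint interval, and stand-in workload are assumptions for illustration, not elements of the paper's catalog.

        # A minimal sketch of the checkpoint/restart resilience pattern:
        # periodically persist progress so a failed run can resume rather
        # than restart from scratch.
        import os, pickle

        CHECKPOINT = "state.ckpt"  # assumed checkpoint file name

        def run(steps, interval=100):
            # Recover the latest checkpointed state after a failure, if any.
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT, "rb") as f:
                    step, state = pickle.load(f)
            else:
                step, state = 0, 0.0
            while step < steps:
                state += 1.0  # stand-in for one unit of computation
                step += 1
                if step % interval == 0:  # periodically persist progress
                    with open(CHECKPOINT, "wb") as f:
                        pickle.dump((step, state), f)
            return state

        print(run(1000))

    The pattern language's value is in relating such patterns: checkpoint interval, storage tier, and scope all trade recovery time against steady-state overhead, and alternative patterns (replication, algorithm-based fault tolerance) occupy different points in that space.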

    A Case Study of the Application of the Systems Development Life Cycle (SDLC) in 21st Century Health Care: Something Old, Something New?

    The systems development life cycle (SDLC), while undergoing numerous changes to its name and related components over the years, has remained a steadfast and reliable approach to software development. Although there is some debate as to the appropriate number of steps, and the naming conventions thereof, it is nonetheless a tried-and-true methodology that has withstood the test of time. This paper discusses the application of the SDLC in a 21st-century health care environment. Specifically, it was utilized for the procurement of a software package designed particularly for the Home Health component of a regional hospital care facility. We found that the methodology is still as useful today as it ever was. By following the stages of the SDLC, an effective software product was identified, selected, and implemented in a real-world environment. Lessons learned from the project, and implications for practice, research, and pedagogy, are offered. Insights from this study can be applied as a pedagogical tool in a variety of classroom environments and curricula including, but not limited to, the systems analysis and design course as well as the core information systems (IS) class. It can also be used as a case study in an upper-division or graduate course describing the implementation of the SDLC in practice.

    Model of Load Balancing Using Reliable Algorithm with Multi-agent System

    Massive technology development tracks the growth of internet users, which increases network traffic activity and with it the load on systems. Using a reliable algorithm and mobile agents for distributed load balancing is a viable solution to handle the load on a large-scale system. Mobile agents collect resource information and can migrate according to their given tasks. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agents. The methodology consisted of defining the system identification, requirements specification, network topology, and infrastructure design. The simulation sent 1800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the simulation show that the LFB method with mobile agents can balance load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
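
    The core routing decision in LFB can be stated compactly: send the next request to the backend whose most recent time to first byte is lowest. Below is a minimal sketch under that reading; the backend addresses and the agent-reported timings are illustrative assumptions.

        # A minimal sketch of least-time-first-byte (LFB) backend selection,
        # assuming each mobile agent reports its server's recent time to
        # first byte (TTFB); addresses and timings are illustrative.
        import time, urllib.request

        def time_to_first_byte(url, timeout=2.0):
            """Measure seconds until the first response byte arrives."""
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read(1)
            return time.monotonic() - start

        def pick_backend(reported):
            """Route the next request to the backend with the lowest TTFB."""
            return min(reported, key=reported.get)

        # Agent-reported measurements (stubbed here rather than live-probed).
        reported = {"http://10.0.0.1": 0.042, "http://10.0.0.2": 0.031,
                    "http://10.0.0.3": 0.057}
        print("route to:", pick_backend(reported))

    Because TTFB reflects both network latency and server queueing delay, steering by it naturally drains traffic away from overloaded backends, which is consistent with the bottleneck-avoidance result reported above.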

    An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis

    Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. It is based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data flow specifications expressed as co-recursive stream equations. Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.
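
    To make the notion of a behavior table concrete, here is a minimal sketch that interprets a table of guarded signal updates over synchronous streams; the two-signal counter is an invented example, not one of the derivations above.

        # A minimal sketch of a behavior table: each row pairs a guard with
        # next-value functions for the signals, evaluated once per tick.
        TABLE = [
            (lambda s: s["go"] and s["count"] < 3,
             {"count": lambda s: s["count"] + 1, "go": lambda s: True}),
            (lambda s: s["count"] >= 3,
             {"count": lambda s: 0, "go": lambda s: False}),
        ]

        def step(state):
            """Fire the first row whose guard holds; otherwise hold state."""
            for guard, updates in TABLE:
                if guard(state):
                    return {sig: fn(state) for sig, fn in updates.items()}
            return dict(state)

        state = {"count": 0, "go": True}
        for _ in range(6):          # unfold the co-recursive streams 6 ticks
            print(state)
            state = step(state)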

    An Investigation into the testing and commissioning requirements of IEC 61850 Station Bus Substations

    The emergence of the new IEC 61850 standard creates the potential to deliver safe, reliable, and effective cost reductions in the way substations are designed and constructed. The IEC 61850 Station Bus architecture for a substation protection and automation system is based on a horizontal communication concept that replicates what conventional copper wiring performed between Intelligent Electronic Devices (IEDs). The protection and control signals that are traditionally sent and received across a network of copper cables within the substation are now communicated over Ethernet-based Local Area Networks (LANs) using Generic Object Oriented Substation Event (GOOSE) messages. Implementing a station bus system is a substantial change to existing design and construction practices. With this significant change, it is critical to develop a methodology for testing and commissioning protection systems that use GOOSE messaging. Analysing current design standards and philosophies established a connection between current conventional practices and future practices using GOOSE messaging at the station bus level. A potential design of the GOOSE messaging protection functions was implemented using the new hardware and software technology. Identification of potential deviations from the design intent, examination of their possible causes, and assessment of their consequences were achieved using a Hazard and Operability (HAZOP) study. This assessment identified the parts of the intended design that required validation or verification through the testing and commissioning process. A test coverage matrix was introduced to identify and optimise the relevant elements, settings, parameters, functions, systems, and characteristics that require validation or verification through inspection, testing, measurement, or simulation during the testing and commissioning process. The research also identified the hardware and software that would be utilised to validate or verify the IEC 61850 system through inspection, testing, measurement, or simulation. The HAZOP study proved to be an effective, structured, and systematic analysis process for identifying which hardware, configurations, and functions require testing and commissioning before a substation using IEC 61850 Station Bus GOOSE messaging is placed into service. This process enables power utilities to understand new challenges and develop testing and commissioning philosophies and quality assurance processes, while providing confidence that the IEC 61850 system will operate in a reliable, effective, and secure manner.
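
    A test coverage matrix of the kind described can be represented very simply in software. The sketch below maps protection functions to verification methods and tracks which checks remain outstanding; the functions, methods, and GOOSE dataset names are illustrative assumptions, not entries from the actual matrix.

        # A minimal sketch of a test coverage matrix for commissioning:
        # each protection function maps to its verification method and the
        # GOOSE dataset it depends on (all names are hypothetical).
        COVERAGE = {
            "busbar protection trip": ("simulation",          "GOOSE_TRIP"),
            "breaker fail backup":    ("secondary injection", "GOOSE_BF"),
            "interlocking check":     ("inspection",          "GOOSE_ILK"),
        }

        def outstanding(results):
            """List functions whose verification has not yet passed."""
            return [fn for fn in COVERAGE if not results.get(fn, False)]

        results = {"busbar protection trip": True}  # commissioning progress
        for fn in outstanding(results):
            method, dataset = COVERAGE[fn]
            print(f"verify '{fn}' via {method} (dataset {dataset})")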

    Multi-operated HIL Test Bench for Testing Underwater Robot’s Buoyancy Variation System

    Nowadays, underwater gliders play a vital role in ocean exploration and make it possible to obtain valuable information about the underwater environment. The traditional approach to the development of such vehicles requires a thorough design of each subsystem and a number of expensive full-scale tests to validate the accuracy of the connections between these subsystems. However, present requirements for the cost-effective development of underwater vehicles call for a reliable prototyping and testing platform that allows preliminary design of the vehicle's components and systems (hardware and software), their simulation, and finally the testing and verification of missions. This paper describes the development of a HIL test bench for underwater applications, discusses some advantages of the HIL methodology, and provides a brief overview of buoyancy variation systems. We focus on the hydraulic part of the developed test bench and its architecture, environment, and tools. Results obtained from testing several buoyancy variation systems are also described; they allowed us to determine the most efficient design of the buoyancy variation system. The main contribution of this work is to present a powerful tool for engineers to find hidden errors in the underwater glider development process and to improve the integration between the glider's subsystems by gaining insight into their operation and dynamics.
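
    As a rough picture of the kind of plant model a HIL bench exercises, here is a minimal sketch that integrates glider depth from a commanded ballast volume; the glider parameters, drag model, and commanded volume are assumptions for illustration only.

        # A minimal sketch of a buoyancy-variation plant model: net force
        # from a variable-volume ballast, integrated to vehicle depth.
        RHO, G = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)
        MASS, DRAG = 60.0, 25.0        # glider mass (kg), linear drag (N*s/m)

        def simulate(volume_cmd, dt=0.1, steps=600):
            """Integrate depth given a commanded ballast volume profile (m^3)."""
            depth, vel = 0.0, 0.0
            for k in range(steps):
                buoyancy = RHO * G * volume_cmd(k * dt)          # upward force
                weight = MASS * G                                # downward force
                accel = (weight - buoyancy - DRAG * vel) / MASS  # positive down
                vel += accel * dt
                depth = max(0.0, depth + vel * dt)
            return depth

        # Command a volume slightly below neutral so the glider descends.
        print(f"depth after 60 s: {simulate(lambda t: 0.0585):.1f} m")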