
    Development of simulation-based testing environment for safety-critical software

    Software has recently been used in nuclear power plants (NPPs) to digitalize many instrumentation and control systems. To guarantee NPP safety, the reliability of the software used in safety-critical instrumentation and control systems must be quantified and verified with proper test cases and a proper test environment. In this study, a software testing method using a simulation-based software test bed is proposed. The test bed is developed by emulating the microprocessor architecture of the programmable logic controller used in NPP safety-critical applications and capturing its behavior at each machine instruction. The effectiveness of the proposed method is demonstrated via a case study. To represent the possible states of the software inputs and of the internal variables that contribute to generating a dedicated safety signal, the test cases are developed in consideration of the digital characteristics of the target system and the plant dynamics. The method provides a practical way to conduct exhaustive software testing, which can show the software to be error-free and minimize the uncertainty in software reliability quantification. Compared with existing testing methods, it can effectively reduce the testing effort by emulating the programmable logic controller's behavior at the machine level.
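
    The core of the approach is an emulator that advances the controller one machine instruction at a time and snapshots its state after every step, so each test case can be checked against the expected safety signal. The Python sketch below illustrates this idea under an assumed, hypothetical three-instruction ISA; the actual PLC instruction set, register layout, and trip logic are not specified in the abstract.

        # Minimal sketch of instruction-level emulation for a hypothetical PLC ISA.
        # The instruction set, registers, and trip logic are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class PLCState:
            pc: int = 0
            regs: dict = field(default_factory=lambda: {"ACC": 0, "TRIP": 0})

        def step(state, program):
            """Execute one instruction; return a snapshot of the machine state."""
            op, *args = program[state.pc]
            if op == "LOAD":        # LOAD value -> ACC
                state.regs["ACC"] = args[0]
            elif op == "CMP_GE":    # set TRIP if ACC >= setpoint
                state.regs["TRIP"] = int(state.regs["ACC"] >= args[0])
            elif op == "HALT":
                return None
            state.pc += 1
            return (state.pc, dict(state.regs))  # behavior captured per instruction

        def run(program, trace):
            state = PLCState()
            while (snap := step(state, program)) is not None:
                trace.append(snap)
            return state.regs["TRIP"]

        # Test case: a sensor reading of 1250 against a trip setpoint of 1200.
        trace = []
        assert run([("LOAD", 1250), ("CMP_GE", 1200), ("HALT",)], trace) == 1

    Exhaustive testing then amounts to running such traced executions over the space of inputs and internal-variable states that the plant dynamics allow.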

    The Calibration and Testing of the G-BWTP Montgomerie Gyroplane Instrumentation. Aero Dept Int. Rep No. 9823

    This report gives a quantitative description of the calibration and testing of the instrumentation of the G-BWTP Montgomerie gyroplane. This light aircraft, together with its instrumentation package, has been acquired by the department in order to enhance research in the field of rotorcraft flight dynamics. The gyroplane is due to be flight tested within the coming months, providing the opportunity to acquire data unique in the rotorcraft field. The report illustrates how parameters relating to the sensor characteristics, such as the calibration constants, were derived and how the sensors themselves were tested using a well-established software package. It also presents the design of the full software program to be used for data acquisition and analysis. The key objective of the report is to provide a reference on the way in which instrumentation is set up for the flight testing of a light gyroplane.
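
    Calibration constants for a sensor with a linear response are typically derived by fitting raw readings against reference measurements. The following Python sketch shows one plausible form of that derivation using a least-squares fit; the numbers, and the assumption of a linear sensor model, are illustrative and not taken from the report.

        # Minimal sketch: derive linear calibration constants (gain, offset)
        # by least-squares fit of raw sensor counts against reference values.
        # The readings below are illustrative, not G-BWTP data.
        import numpy as np

        raw = np.array([102.0, 251.0, 498.0, 747.0, 1001.0])  # sensor counts
        ref = np.array([0.0, 15.0, 40.0, 65.0, 90.0])         # reference units

        # Fit ref = gain * raw + offset.
        A = np.vstack([raw, np.ones_like(raw)]).T
        (gain, offset), *_ = np.linalg.lstsq(A, ref, rcond=None)

        def calibrate(counts):
            """Convert raw sensor counts to engineering units."""
            return gain * counts + offset

        print(f"gain={gain:.5f}, offset={offset:.3f}")
        print(calibrate(600.0))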

    An automatic maintenance system for nuclear power plant instrumentation

    Maintenance and testing of reactor protection systems is an important cause of unplanned reactor trips because it is commonly carried out in manual mode. The execution of surveillance procedures in this mode entails a great number of manual operations. Automated testing is the answer because it minimises test times and reduces the risk of human error. GAMA-I is an automatic system for testing the reactor protection instrumentation, based on VXI instrumentation cards. This system has important advantages over previous ones in the ease with which software modifications related to configuration changes in the protection system can be carried out. The system uses visual programming, and the modifications can be implemented by ordinary instrumentation specialists without programming experience. The system has been developed at the Vandellos II Nuclear Power Plant by the I&C groups of Vandellos II, Tecnatom S.A. and ENWESA Servicios S.A. Representation for this project is held by the Spanish Association for Nuclear Technological Development (DTN). Financial support for this research was provided by the Electrical and Electronic Research Program (PIE-OCIDE) of the Spanish Ministry of Industry.

    Pulse Code Modulation (PCM) data storage and analysis using a microcomputer

    The current widespread use of microcomputers has led to the creation of some very low-cost instrumentation. A Pulse Code Modulation (PCM) storage device/data analyzer, a peripheral plug-in board especially constructed to enable a personal computer to store and analyze data from a PCM source, was designed and built for use on the NASA Sounding Rocket Program for PCM encoder configuration and testing. This board and its custom-written software turn a computer into a snapshot PCM decommutator that will accept and store many hundreds or thousands of PCM telemetry data frames, then sift through them repeatedly. These data can be converted to any number base and displayed, examined for bit dropouts or changes in particular words or frames, graphically plotted, or statistically analyzed.
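
    The described workflow, capturing many frames and then repeatedly sifting them for bit dropouts in particular words, maps onto a simple frame buffer with word-wise comparison. A minimal Python sketch of that sifting step follows; the frame size and data are assumed for illustration, whereas the real system reads frames from the plug-in board.

        # Minimal sketch of the snapshot-decommutator idea: store many PCM
        # frames, then sift them for bit changes in a given word.
        FRAME_WORDS = 8  # assumed words per minor frame

        def bit_changes(frames, word_index):
            """Return (frame#, XOR mask) wherever a word's bits change."""
            changes = []
            for i in range(1, len(frames)):
                diff = frames[i][word_index] ^ frames[i - 1][word_index]
                if diff:
                    changes.append((i, diff))
            return changes

        # Store a thousand synthetic frames with one dropped bit in word 3.
        frames = [[0xAAAA] * FRAME_WORDS for _ in range(1000)]
        frames[500][3] = 0xAAA8

        for frame_no, mask in bit_changes(frames, word_index=3):
            print(f"frame {frame_no}: word 3 changed, mask {mask:#06x}")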

    Time-Aware Dynamic Binary Instrumentation

    The complexity of modern software systems has been rapidly increasing. Program debugging and testing are essential to ensure the correctness of such systems. Program analysis is critical for understanding a system's behavior and analyzing performance. Many program analysis tools use instrumentation to extract required information at run time. Instrumentation naturally alters a program's timing properties and causes perturbation to the program under analysis. Soft real-time systems must fulfill timing constraints; missing deadlines in a soft real-time system causes performance degradation. Thus, time-sensitive systems require specialized program analysis tools. Time-aware instrumentation preserves the logical correctness of a program and respects its timing constraints. Current approaches to time-aware instrumentation rely on static source-code instrumentation techniques. While these approaches are sound and effective, the need to run worst-case execution time (WCET) analysis pre- and post-instrumentation limits their applicability to hard real-time systems, where WCET analysis is common. Beyond microcontroller code, they become impractical for instrumenting large programs along with all their library dependencies. In this thesis, we introduce theory, methods, and tools for time-aware dynamic instrumentation, realized in the DIME tool. DIME is a time-aware dynamic binary instrumentation framework that adds an adjustable bound on the timing overhead to the program under analysis. DIME also attempts to increase instrumentation coverage by ignoring redundant tracing information. We study parameter tuning of DIME to minimize runtime overhead and maximize instrumentation coverage. Finally, we propose a method and a tool to instrument software systems with quality-of-service (QoS) requirements; in this case, DIME collects QoS feedback from the system under analysis to respect user-defined performance constraints. As a tool for instrumenting soft real-time applications, DIME is practical, scalable, and supports multi-threaded applications. We present several case studies of DIME instrumenting large and complex applications such as web servers, media players, control applications, and database management systems. DIME limits the overhead of dynamic instrumentation while achieving high instrumentation coverage.
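
    DIME's defining feature is an adjustable bound on timing overhead: tracing actions run only while accumulated instrumentation time stays within a per-period budget. The Python sketch below illustrates that budgeting idea; the class, its names, and the token-bucket style accounting are illustrative assumptions, not DIME's actual API or implementation.

        # Minimal sketch of budget-bounded instrumentation in the spirit of
        # DIME: a tracing callback runs only while accumulated overhead stays
        # within a per-period budget. This scheme is illustrative only.
        import time

        class InstrumentationBudget:
            def __init__(self, budget_s, period_s):
                self.budget_s = budget_s   # max tracing time per period
                self.period_s = period_s
                self.spent = 0.0
                self.period_start = time.monotonic()

            def trace(self, callback, *args):
                now = time.monotonic()
                if now - self.period_start >= self.period_s:
                    self.period_start, self.spent = now, 0.0  # new period: refill
                if self.spent >= self.budget_s:
                    return                                    # budget spent: skip
                t0 = time.monotonic()
                callback(*args)                               # actual logging work
                self.spent += time.monotonic() - t0           # charge the overhead

        log = []
        budget = InstrumentationBudget(budget_s=0.001, period_s=0.010)
        for i in range(100_000):
            budget.trace(log.append, i)  # instrumentation point in the program
        print(f"logged {len(log)} of 100000 events within the overhead bound")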

    Developmental Flight Test Lessons Learned from Open Architecture Software in the Mission Computer of the U.S. Navy E-2C Group II Aircraft

    The Naval Air Systems Command commissioned the E-2C Hawkeye Group II Mission Computer Replacement Program and tasked Air Test and Evaluation Squadron Two-Zero and the E-2C Integrated Test Team to evaluate the integration of the form, fit, and function of the OL-698/ASQ Mission Computer Replacement (MCR) as a replacement for the Litton L-304 Mission Computer in E-2C Group II configured aircraft. As part of the life-cycle support of the E-2C aircraft, the MCR configuration fields a new, more reliable Commercial-off-the-Shelf (COTS) hardware system and preserves the original software investment by emulating the existing Litton Instructional Set Architecture (LISA) legacy code. By incorporating Northrop Grumman Space Technology's Reconfigurable Processor for Legacy Applications Code Execution (RePLACE) software re-hosting technique, the investment in the LISA software is maintained. Developmental testing of robust software systems, such as the MCR and its associated software, presented dramatically different challenges from traditional developmental testing. A series of lessons were learned through particular discrepancies and deficiencies discovered over the six-month flight test period. Specific deficiencies illustrate where proper planning could ease the difficulties encountered in software testing. Keys to successful developmental software tests include having the appropriate personnel on the test team, with the proper equipment and capability. Equally important, inadequate configuration management creates more problems than it fixes. Software re-programming can provide faster fixes than traditional developmental test. The flexibility of software programming makes configuration management a challenge as multiple versions become available in a short amount of time. Multiple versions of software heighten the risk of configuration management breakdown during the limited number of available flight tests. Each re-programmed version potentially fixes targeted deficiencies but can also cause new issues in functional areas already tested. Inherently, regression testing impacts the schedule. Software testing requires a realistic schedule that, the author believes, should compensate for anticipated problems. Data collection, reduction, and analysis always prove valuable in developmental testing. A solid instrumentation plan for data collection from all parties involved in flight tests, especially data-link network tests, is critical for troubleshooting discovered deficiencies. Software testing is relatively new to the developmental test world and can be seen as the way of the future. Software upgrades lure program managers into a potentially cost-effective option in the face of aging avionics systems. With realistic planning and configuration management, the cost and performance effectiveness of software upgrades and development is more likely to be realized.

    Deployment and Debugging of Real-Time Applications on Multicore Architectures

    It is essential to enable information extraction from software. Program tracing techniques are an example: program tracing extracts information from the program during execution. Tracing helps with the testing and validation of software to ensure that the software under test is correct. Information extraction is done by instrumenting the program. Logged information can be stored in dedicated logging memories or can be buffered and streamed off-chip to an external monitor. The designer inspects the trace after execution to identify potentially erroneous state information. In addition, the trace can provide the state information that serves as input to reproduce the erroneous output. Information extraction can be difficult and expensive due to the increasing size and complexity of modern software systems. For the sub-class of software systems known as real-time systems, these issues are further aggravated, because real-time systems demand timing guarantees in addition to functional correctness. Consequently, any instrumentation of the original program code for the purpose of information extraction may affect the temporal behavior of the program. This perturbation of temporal behavior can lead to the violation of timing constraints, which may bias the program execution and/or cause the program to miss its deadline. As a result, there is considerable interest in devising techniques that allow for information extraction without missing a program's deadline, a practice known as time-aware instrumentation. This thesis investigates time-aware instrumentation mechanisms to instrument programs while respecting their timing constraints and functional behavior. Knowledge of the underlying hardware on which the software runs enables the extraction of more information via the instrumentation process. Chip-multiprocessors offer a solution to the performance bottleneck of uni-processors. Providing timing guarantees for hard real-time systems on chip-multiprocessors, however, is difficult, because conventional communication interconnects are designed to optimize average-case performance. Therefore, researchers propose interconnects such as priority-aware networks to satisfy the requirements of hard real-time systems. Priority-aware interconnects, however, lack the analysis techniques needed to facilitate the deployment of real-time systems. This thesis also investigates latency and buffer-space analysis techniques for pipelined communication resource models, as well as algorithms for the proper deployment of real-time applications to these platforms. The analysis techniques proposed in this thesis provide guarantees on the schedulability of real-time systems on chip-multiprocessors. These guarantees are based on reducing contention in the interconnect while accurately computing the worst-case communication latencies. While these worst-case latencies provide bounds for computing the overall worst-case execution time of applications on chip-multiprocessors, they also provide a means of assigning the instrumentation budgets required by time-aware instrumentation. Leveraging these platform-specific analysis techniques for the assignment of instrumentation budgets allows more information to be extracted from the instrumentation process.
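
    A worst-case communication latency on a priority-based interconnect is commonly bounded by response-time iteration: a flow's latency is its no-contention latency plus the interference of all higher-priority flows, iterated to a fixed point. The Python sketch below shows this standard formulation with illustrative parameters; the pipelined resource model analyzed in the thesis is more detailed than this.

        # Minimal sketch of a worst-case latency bound for a flow on a
        # priority-aware interconnect, via classic response-time iteration.
        # Flow parameters are illustrative.
        import math

        def worst_case_latency(basic, higher_priority):
            """basic: no-contention latency (cycles);
            higher_priority: (latency, period) per interfering flow."""
            r = basic
            while True:
                r_next = basic + sum(math.ceil(r / p) * c
                                     for c, p in higher_priority)
                if r_next == r:
                    return r  # fixed point: the latency bound
                r = r_next

        # A 20-cycle flow with two higher-priority interfering flows.
        print(worst_case_latency(20, [(5, 50), (10, 100)]))  # -> 35

    Such bounds feed the overall WCET computation and, through the slack they expose, the instrumentation budgets used by time-aware instrumentation.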