13,788 research outputs found

    Measuring the performance cost of manual system call detections via Process Instrumentation Callback (PIC)

    This quasi-experimental before-and-after study measured the performance impact of using Process Instrumentation Callback (PIC) to detect the use of manual system calls on the Windows operating system. The Windows Application Programming Interface (WinAPI), the impact of system call monitoring, and the limitations and downsides of current detection mechanisms were reviewed in depth. Previous literature was evaluated that identified PIC as a unique solution for monitoring system calls entirely from user mode, relying on the Windows kernel to intercept a target process. Unlike previous monitoring techniques, PIC must handle every system call when performing analysis, which increases processing cost. The impact on a single process was evaluated by recording CPU time, memory utilization, and wall-clock time. Three iterations that performed additional analysis were developed and tested to determine the cost of increased detection fidelity. Results showed a statistically significant performance overhead when PIC was applied in each version. However, the impact was drastically reduced by restricting dynamic lookups to process initialization and by eliminating the Microsoft Debugging Engine. Future integration with existing detection mechanisms such as user-mode hooks and Event Tracing for Windows is encouraged and discussed.
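    As background on the mechanism being measured: a PIC is installed from user mode through the undocumented NtSetInformationProcess information class ProcessInstrumentationCallback (commonly reported as 0x28). Below is a minimal sketch of the registration plumbing only, assuming the commonly reported reverse-engineered x64 struct layout; InstrumentationStub is a hypothetical assembly routine, since the callback fires on every kernel-to-user transition and must preserve all registers, which a plain C function cannot guarantee.

```c
/* Sketch: installing a Process Instrumentation Callback (PIC) on x64 Windows.
 * Registration plumbing only; the info class and struct layout are
 * undocumented and reflect commonly reported reverse-engineered values. */
#include <windows.h>
#include <winternl.h>
#include <stdio.h>

/* Undocumented PROCESSINFOCLASS value for PIC (commonly reported as 0x28). */
#define ProcessInstrumentationCallbackClass 0x28

/* Commonly reported x64 layout of the registration structure. */
typedef struct _PROCESS_INSTRUMENTATION_CALLBACK_INFORMATION {
    ULONG Version;   /* 0 on x64 */
    ULONG Reserved;
    PVOID Callback;  /* entry invoked on each kernel-to-user transition */
} PROCESS_INSTRUMENTATION_CALLBACK_INFORMATION;

typedef NTSTATUS (NTAPI *NtSetInformationProcess_t)(
    HANDLE, ULONG, PVOID, ULONG);

/* Hypothetical assembly stub (defined elsewhere): it must preserve every
 * register and resume at the address the kernel leaves in R10, which is
 * why it cannot be written as an ordinary C function. */
extern void InstrumentationStub(void);

int main(void)
{
    NtSetInformationProcess_t pNtSetInformationProcess =
        (NtSetInformationProcess_t)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtSetInformationProcess");
    if (!pNtSetInformationProcess) return 1;

    PROCESS_INSTRUMENTATION_CALLBACK_INFORMATION info = {0};
    info.Version  = 0;                       /* x64 */
    info.Callback = (PVOID)InstrumentationStub;

    NTSTATUS status = pNtSetInformationProcess(
        GetCurrentProcess(), ProcessInstrumentationCallbackClass,
        &info, sizeof(info));
    printf("PIC registration status: 0x%08lX\n", (unsigned long)status);

    /* Passing Callback = NULL through the same call removes the hook. */
    return 0;
}
```

    Because the callback runs on every transition back to user mode, even a near-empty stub adds work to each system call, which is consistent with the per-call overhead this study quantifies.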

    Performance Measurement and Analysis of Large-Scale Parallel Applications on Leadership Computing Systems


    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
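    The importance-sampling idea mentioned above can be stated concretely: to estimate a rare-event probability, sample from a proposal density under which the event is common, and reweight each sample by the likelihood ratio of target to proposal. A minimal sketch (my own illustration, not a tool from the survey) estimating p = P(Z > 4) for a standard normal Z by sampling from the shifted proposal N(4, 1), where the weight works out to exp(8 − 4x):

```c
/* Sketch: importance sampling to accelerate a Monte Carlo rare-event estimate.
 * Target: p = P(Z > 4) for Z ~ N(0,1) (about 3.17e-5), estimated by sampling
 * from the shifted proposal N(4,1) and reweighting by f(x)/g(x) = exp(8 - 4x). */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* One standard-normal draw via the Box-Muller transform. */
static double std_normal(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

int main(void)
{
    const long n = 1000000;
    double naive = 0.0, is_est = 0.0;

    srand(42);
    for (long i = 0; i < n; i++) {
        /* Naive estimator: indicator of the rare event under N(0,1). */
        if (std_normal() > 4.0) naive += 1.0;

        /* Importance sampling: draw from N(4,1), weight by exp(8 - 4x). */
        double x = 4.0 + std_normal();
        if (x > 4.0) is_est += exp(8.0 - 4.0 * x);
    }
    printf("naive MC estimate:            %.3e\n", naive / n);
    printf("importance sampling estimate: %.3e (true ~ 3.167e-5)\n", is_est / n);
    return 0;
}
```

    With the same sample budget, the naive estimator sees only a handful of hits while nearly every importance-sampled draw contributes, which is the variance reduction that makes fast simulation of rare failures practical.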

    Integrated testing and verification system for research flight software design document

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced life-cycle costs as the foremost goals. Capabilities have been included in the design for static detection of data flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
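    To make the static data-flow-anomaly idea concrete: such checkers track, per variable, the sequence of define (d) and use (u) events along a path and flag the classic anomaly patterns, i.e. use before define (ur), define followed by define with no intervening use (dd), and define never followed by a use (du). A minimal single-path sketch of the general technique (my own illustration, not the MUST design, which additionally handles communicating concurrent processes):

```c
/* Sketch: static detection of classic data-flow anomalies on one path.
 * Events are 'd' (define) and 'u' (use) per variable; flagged patterns:
 *   ur: use before any define, dd: redefinition with no intervening use,
 *   du: defined but never used before the path ends. */
#include <stdio.h>

enum state { UNDEFINED, DEFINED, USED };

struct event { char var; char kind; /* 'd' or 'u' */ };

int main(void)
{
    /* Hypothetical event trace extracted from a straight-line path. */
    struct event trace[] = {
        {'a','d'}, {'a','u'}, {'b','u'},   /* b used before defined: ur  */
        {'c','d'}, {'c','d'},              /* c redefined without use: dd */
        {'c','u'}, {'a','d'}               /* a redefined, path ends: du  */
    };
    int n = sizeof trace / sizeof trace[0];
    enum state st[26];
    for (int v = 0; v < 26; v++) st[v] = UNDEFINED;

    for (int i = 0; i < n; i++) {
        int v = trace[i].var - 'a';
        if (trace[i].kind == 'u') {
            if (st[v] == UNDEFINED)
                printf("ur anomaly: '%c' used before define (event %d)\n",
                       trace[i].var, i);
            else
                st[v] = USED;
        } else { /* 'd' */
            if (st[v] == DEFINED)
                printf("dd anomaly: '%c' redefined without use (event %d)\n",
                       trace[i].var, i);
            st[v] = DEFINED;
        }
    }
    for (int v = 0; v < 26; v++)
        if (st[v] == DEFINED)
            printf("du anomaly: '%c' defined but never used at path end\n",
                   'a' + v);
    return 0;
}
```

    A full checker would run this per-variable state machine over every path of a control-flow graph and widen the event alphabet to cover inter-process communication, which is where the synchronization and deadlock checks come in.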

    The path to precision: comparison analysis of automated neural morphology reconstruction software

    The differences in the shape, form, and location of neurons are closely linked to their function. Being able to accurately and efficiently reconstruct neurons digitally in three-dimensional space is necessary for acquiring knowledge in this research field. Automation through software helps optimise efficiency, yet manual reconstructions are often preferred. This thesis therefore aims to help standardise the research field and facilitate communication and collaborative efforts by evaluating three software packages, Vaa3D, Neutube, and NCTracer, with regard to the reconstruction algorithms' accuracy, efficiency, and consistency, and the user experience with the user interface, in order to deduce their advantages and shortcomings. A downloadable, executable Java program that compares similarities between two reconstructions, along with supporting scripts, was written to measure these parameters. Vaa3D had higher accuracy and a significantly lower execution time, but Neutube and NCTracer showcased more stable and consistent results. Additionally, NCTracer proved to be more intuitive to use. All three packages exhibited drawbacks of their own, but the information presented can aid in improving them or in developing new software that surpasses prior tools.
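    The thesis's comparator is a Java program whose exact metric is not given here; as a generic illustration of one common approach, a reconstruction can be reduced to a set of 3D node coordinates (as in SWC files) and scored by the fraction of nodes in one reconstruction that lie within a distance tolerance of some node in the other. A minimal sketch, with hypothetical node data and threshold:

```c
/* Sketch: a simple node-overlap similarity between two neuron reconstructions,
 * each reduced to a set of 3D node coordinates (as in SWC files). The score is
 * the fraction of nodes in A with a node of B within a distance tolerance. */
#include <math.h>
#include <stdio.h>

struct node { double x, y, z; };

/* Fraction of nodes in a[] that have a neighbor in b[] within tol. */
static double overlap(const struct node *a, int na,
                      const struct node *b, int nb, double tol)
{
    int matched = 0;
    for (int i = 0; i < na; i++) {
        for (int j = 0; j < nb; j++) {
            double dx = a[i].x - b[j].x;
            double dy = a[i].y - b[j].y;
            double dz = a[i].z - b[j].z;
            if (sqrt(dx*dx + dy*dy + dz*dz) <= tol) { matched++; break; }
        }
    }
    return na ? (double)matched / na : 0.0;
}

int main(void)
{
    /* Hypothetical toy reconstructions (coordinates in micrometres). */
    struct node auto_rec[]   = {{0,0,0}, {1,0,0}, {2.2,0,0}, {9,9,9}};
    struct node manual_rec[] = {{0,0,0}, {1.1,0,0}, {2,0,0}};
    double tol = 0.5;

    /* Symmetric score: average the two one-directional overlaps. */
    double ab = overlap(auto_rec, 4, manual_rec, 3, tol);
    double ba = overlap(manual_rec, 3, auto_rec, 4, tol);
    printf("similarity = %.3f\n", (ab + ba) / 2.0);
    return 0;
}
```

    A score like this captures only geometric agreement; metrics used in practice also penalize topology differences such as missing or spurious branches.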