
    Computer Program Instrumentation Using Reservoir Sampling & Pin++

    Indiana University-Purdue University Indianapolis (IUPUI). This thesis investigates techniques for improving real-time software instrumentation of software systems. In particular, it examines two aspects of the problem. First, it investigates techniques for achieving different levels of visibility (i.e., ensuring all parts of a system are represented, or visible, in the final results) into a software system without compromising its performance. Second, it investigates how multi-core computing can be used to further reduce instrumentation overhead. The results show that reservoir sampling can reduce instrumentation overhead: sampling at a rate of 20%, combined with parallelized disk I/O, added 34.1% overhead on a four-core machine and only 9.9% overhead on a sixty-four-core machine, while still providing the desired system visibility. This work can also be used to further improve the performance of real-time distributed software instrumentation.
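The reservoir sampling named in this abstract is a standard streaming algorithm (Algorithm R) for keeping a uniform fixed-size sample of a stream of unknown length; a minimal sketch follows, with illustrative trace-event names not taken from the thesis:

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            # Item i survives with probability k / (i + 1)
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Sample 20 of 100 trace events (a 20% rate on this particular stream)
events = [f"event-{n}" for n in range(100)]
sample = reservoir_sample(events, 20)
print(len(sample))  # 20
```

Because the reservoir is bounded, instrumentation memory stays constant no matter how long the event stream runs, which is what makes the technique attractive for low-overhead tracing.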

    A real-time, portable, microcomputer-based jet engine simulator

    Modern piloted flight simulators require detailed models of many aircraft components, such as the airframe, propulsion system, flight deck controls and instrumentation, as well as motion drive and visual display systems. The amount of computing power necessary to implement these systems can exceed that offered by dedicated mainframe computers. One approach to this problem is distributed computing, where parts of the simulation are assigned to computing subsystems, such as microcomputers. One such subsystem, a real-time, portable, microcomputer-based jet engine simulator, is described in this paper. The simulator will be used at the NASA Ames Vertical Motion Simulator facility to perform calculations previously done on the facility's mainframe computer. The mainframe will continue to do all other system calculations and will interface to the engine simulator through analog I/O. The engine simulator hardware includes a 16-bit microcomputer and floating-point coprocessor. There is an 8-channel analog input board and an 8-channel analog output board. A model of a small turboshaft engine/control is coded in floating-point FORTRAN. The FORTRAN code and a data monitoring program run under the control of an assembly language real-time executive. The monitoring program allows the user to display and/or modify simulator variables on-line through a data terminal. A dual disk drive system is used for mass storage of programs and data. The CP/M-86 operating system provides file management and overall system control. The frame time for the simulator is 30 milliseconds, which includes all analog I/O operations.
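The fixed 30 ms frame time mentioned above is the defining constraint of a real-time executive: each frame runs the model step plus I/O, then idles out the remainder of the budget. A minimal sketch of such a loop, with a hypothetical engine-model step standing in for the FORTRAN model:

```python
import time

FRAME_TIME = 0.030  # 30 ms frame budget, as in the simulator described above

def run_frames(model_step, n_frames):
    """Run model_step once per fixed-length frame, sleeping out the remainder."""
    overruns = 0
    for _ in range(n_frames):
        start = time.perf_counter()
        model_step()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_TIME:
            time.sleep(FRAME_TIME - elapsed)  # idle until the frame boundary
        else:
            overruns += 1  # the step blew its frame budget
    return overruns

# Hypothetical stand-in for one engine-model integration step
state = {"rpm": 0.0}
def engine_step():
    state["rpm"] += 1.0

overruns = run_frames(engine_step, 10)
```

A real executive would also service the analog I/O channels inside each frame and typically treats an overrun as a fault rather than a counter increment.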

    Distributed network and service architecture for future digital healthcare

    According to the World Health Organization (WHO), the worldwide prevalence of chronic diseases is increasing fast and new threats, such as the COVID-19 pandemic, continue to emerge, while the aging population continues to worsen the dependency ratio. These challenges will put huge pressure on the efficacy and cost-efficiency of healthcare systems worldwide. Thanks to emerging technologies, such as novel medical imaging and monitoring instrumentation and the Internet of Medical Things (IoMT), more accurate and versatile patient data than ever is available for medical use. To transform these technology advancements into better outcomes and improved efficiency of healthcare, seamless interoperation of the underlying key technologies needs to be ensured. Novel IoT and communication technologies, edge computing and virtualization have a major role in this transformation. In this article, we explore the combined use of these technologies for managing the complex tasks of connecting patients, personnel, hospital systems, electronic health records and medical instrumentation. We summarize our joint effort of four recent scientific articles that together demonstrate the potential of the edge-cloud continuum as the base approach for providing efficient and secure distributed e-health and e-welfare services. Finally, we provide an outlook on future research needs.

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture as an integration and platform technology is a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is a process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.

    Hijacker: Efficient static software instrumentation with applications in high performance computing: Poster paper

    Static Binary Instrumentation is a technique that allows compile-time program manipulation. In particular, by relying on ad-hoc tools, the end user is able to alter the program's execution flow without affecting its overall semantics. This technique has been effectively used, e.g., to support code profiling, performance analysis, error detection, attack detection, and behavior monitoring. Nevertheless, efficiently relying on static instrumentation to produce executables that can be deployed without affecting the overall performance of the application still presents technical and methodological issues. In this paper, we present Hijacker, an open-source customizable static binary instrumentation tool which is able to alter a program's execution flow according to user-specified rules while limiting the execution overhead due to the code snippets inserted in the original program, thus enabling its exploitation in high-performance computing. The tool is highly modular and works on an internal representation of the program which allows it to perform complex instrumentation tasks efficiently, and it can be extended to support different instruction sets and executable formats without any need to modify the instrumentation engine. We additionally present an experimental assessment of the overhead induced by the injected code in real HPC applications. © 2013 IEEE
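Hijacker rewrites binaries, but the core idea of rule-driven static instrumentation — rewriting a program ahead of execution so that probe snippets run at chosen points — can be illustrated at the source level. The sketch below (not Hijacker's mechanism; the rule and names are illustrative) statically injects a call counter at every function entry using Python's `ast` module:

```python
import ast

# Program to be instrumented (illustrative)
SOURCE = """
def work(x):
    return x * 2
"""

class EntryCounter(ast.NodeTransformer):
    """Rule: statically insert a call-count probe at each function entry."""
    def visit_FunctionDef(self, node):
        probe = ast.parse(
            f"_calls['{node.name}'] = _calls.get('{node.name}', 0) + 1"
        ).body[0]
        node.body.insert(0, probe)       # probe becomes the first statement
        return ast.fix_missing_locations(node)

tree = EntryCounter().visit(ast.parse(SOURCE))
namespace = {"_calls": {}}               # probe state lives alongside the program
exec(compile(tree, "<instrumented>", "exec"), namespace)

namespace["work"](3)
namespace["work"](4)
print(namespace["_calls"])  # {'work': 2}
```

The instrumentation cost here is one dictionary update per call; keeping such snippets this small is exactly the overhead concern the paper targets for HPC deployments.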

    A deliberative model for self-adaptation middleware using architectural dependency

    A crucial prerequisite to externalized adaptation is an understanding of how components are interconnected, or more particularly how and why they depend on one another. Such dependencies can be used to provide an architectural model, which serves as a reference point for externalized adaptation. This paper describes how dependencies are used as a basis for a system's self-understanding and subsequent architectural reconfigurations. The approach is based on the combination of instrumentation services, a dependency meta-model and a system controller. In particular, the latter uses self-healing repair rules (or conflict resolution strategies), based on an extensible beliefs, desires and intentions (EBDI) model, to reflect reconfiguration changes back to a target application under examination.
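The idea of a dependency model driving repair rules can be sketched in a few lines. This is not the paper's meta-model or EBDI machinery; the component names and the single restart rule below are illustrative assumptions:

```python
# component -> the components it depends on (illustrative architecture)
dependencies = {
    "ui": ["service"],
    "service": ["database"],
    "database": [],
}
healthy = {"ui": True, "service": True, "database": False}

def affected_by(failed):
    """Components that transitively depend on a failed component."""
    out = set()
    changed = True
    while changed:
        changed = False
        for comp, deps in dependencies.items():
            if comp not in out and any(d == failed or d in out for d in deps):
                out.add(comp)
                changed = True
    return out

def repair_plan():
    """Toy repair rule: restart each failed component, then its dependents."""
    plan = []
    for comp, ok in healthy.items():
        if not ok:
            plan.append(("restart", comp))
            plan.extend(("restart", c) for c in sorted(affected_by(comp)))
    return plan

print(repair_plan())  # [('restart', 'database'), ('restart', 'service'), ('restart', 'ui')]
```

The point of the dependency model is visible even in this toy: the controller can order repairs so that a component is only restarted after everything it depends on is back up.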