
    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
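The importance sampling idea mentioned in the abstract can be sketched briefly: rather than waiting for rare failure events under the true distribution, one samples from a biased distribution that makes failures frequent, then corrects each hit with a likelihood ratio. The sketch below is illustrative only; the failure probability, biasing probability, and sample sizes are assumed values, not figures from the surveyed studies.

```python
import random

def naive_mc(p_fail=1e-4, n=100_000):
    """Naive Monte Carlo estimate of a rare failure probability:
    most samples are wasted on the non-failure region."""
    hits = sum(1 for _ in range(n) if random.random() < p_fail)
    return hits / n

def importance_sampling(p_fail=1e-4, q=0.05, n=100_000):
    """Sample failures from a biased distribution with probability q,
    then reweight each hit by the likelihood ratio p_fail / q.
    The estimator stays unbiased while failures occur ~q/p_fail
    times more often, sharply reducing the variance."""
    est = 0.0
    for _ in range(n):
        if random.random() < q:      # accelerated (biased) sampling
            est += p_fail / q        # likelihood-ratio correction
    return est / n
```

With these assumed numbers, the biased run observes roughly 5,000 failure events per 100,000 trials instead of about 10, which is the acceleration the abstract refers to.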

    An Expert Systems Approach to Realtime, Active Management of a Target Resource

    The application of expert systems techniques to process control domains represents a potential approach to managing the increasing complexity and dynamics that characterize many process control environments. This thesis reports on one such application in a complex, multi-agent environment, with an eye toward generalization to other process control domains. The application concerns the automation of large computing system operation. The requirement for high-availability, high-performance computing systems has created a demand for fast, consistent, expert-quality response to operational problems, and effective, flexible automation of computer operations would satisfy this demand while improving the productivity of operations. However, like many process control environments, the computer operations environment is characterized by high complexity and frequent change, rendering it difficult to automate operations in traditional procedural software. These are among the characteristics that motivate an expert systems approach to automation. JESQ, the focus of this thesis, is a realtime expert system which continuously monitors the level of operating system queue space in a large computing system and takes corrective action as queue space diminishes. JESQ is one of several expert systems that together constitute a system called Yorktown Expert System/MVS Manager (YES/MVS). YES/MVS automates many tasks in the domain of computer operations, and is among the first expert systems designed for continuous execution in realtime. The expert system is currently running at the IBM Thomas J. Watson Research Center, and has received a favorable response from operations staff. The thesis concentrates on several related issues. The requirements that distinguish continuous realtime expert systems exerting active control over their environments from more conventional session-oriented expert systems are identified, and strategies for meeting these requirements are described. An alternative methodology for managing large computing installations is presented. The problems of developing and testing a realtime expert system in an industrial environment are described.
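The escalating, rule-based control that JESQ performs can be illustrated with a minimal sketch: a monitor maps the current free queue space to a corrective action whose severity grows as space diminishes. The function name, thresholds, and action strings below are hypothetical illustrations, not the actual YES/MVS rules.

```python
def choose_action(free_pct: float) -> str:
    """Map the current free-queue-space percentage to a corrective
    action, escalating as space diminishes. Rule order matters: the
    most severe condition is tested first, mimicking the priority
    ordering of a forward-chaining rule base. All thresholds and
    action names are illustrative assumptions."""
    if free_pct < 5:
        return "cancel-low-priority-jobs"   # critical: reclaim space now
    if free_pct < 15:
        return "purge-old-output"           # serious: free stale output
    if free_pct < 30:
        return "warn-operator"              # early warning only
    return "no-op"                          # queue space is healthy
```

In a continuous realtime system this decision function would run inside a monitoring loop that re-samples queue space at a fixed interval, which is what distinguishes it from a session-oriented consultation system.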

    Introduction of the UNIX International Performance Management Work Group

    In this paper we present the planned direction of the UNIX International Performance Management Work Group. This group consists of concerned system developers and users who have organized to synthesize recommendations for standard UNIX performance management subsystem interfaces and architectures. The purpose of these recommendations is to provide a core set of performance management functions that hardware system developers, vertical application software developers, and performance application software developers can use to build tools.

    Stand-alone wearable system for ubiquitous real-time monitoring of muscle activation potentials

    Wearable technology is attracting considerable attention in healthcare for the acquisition of physiological signals. We propose a stand-alone wearable surface ElectroMyoGraphy (sEMG) system for monitoring muscle activity in real time. Unlike other wearable sEMG devices, the proposed system includes circuits for detecting muscle activation potentials and embeds the complete real-time data processing without relying on any external device. The system is optimized for power consumption, with a measured battery life that allows activity to be monitored throughout the day. Thanks to its compactness and energy autonomy, it can be used outdoors and provides a pathway to valuable diagnostic data sets gathered from patients during their daily lives. Our system performs comparably to state-of-the-art wired equipment in the detection of muscle contractions, with the advantage of being wearable, compact, and ubiquitous.
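Detection of muscle activation potentials of the kind described is commonly built on a moving-RMS envelope of the raw sEMG signal compared against a threshold. The sketch below illustrates that general scheme only; the window length and threshold are assumed values, and this is not the device's actual embedded algorithm.

```python
import math

def moving_rms(signal, window):
    """Moving RMS envelope of an sEMG signal: one envelope value per
    window position, smoothing the raw oscillating waveform."""
    out = []
    for i in range(len(signal) - window + 1):
        seg = signal[i:i + window]
        out.append(math.sqrt(sum(x * x for x in seg) / window))
    return out

def detect_activation(signal, window=4, threshold=0.5):
    """Flag muscle activation wherever the RMS envelope exceeds a
    fixed threshold. Real systems often use a double-threshold or
    adaptive scheme; a single fixed threshold is assumed here for
    clarity."""
    return [rms > threshold for rms in moving_rms(signal, window)]
```

On an embedded device the same envelope would be computed incrementally per sample rather than over a stored buffer, to keep memory and power within the budget the abstract emphasizes.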

    An Application of Large Scale Computing Facilities to the Processing of LANDSAT Digital Data in Australia

    An early issue in the Australian Wide Scale Wheat Monitoring Project, started in November 1978, was whether to use area sampling, as in the LACIE, or to be innovative and attempt whole-scene processing. The availability of a large computing system and acknowledgment of the trends in price and performance of computers influenced a decision towards whole-scene processing. The computing facilities used in this project are described. An interactive facility supported by software called ER-MAN II is installed on an IBM 3033 which simultaneously supports several hundred other interactive users. The pros and cons of using such a shared facility for this type of work are explored. The use of multi-temporal data has been the essence of the approach in this project. Reasons for its use, and its performance implications, are discussed from the computing viewpoint. Results to date indicate that shared use of a large facility is feasible and effective. In addition, some calculations may not be possible on small CPUs. While interactive processing of multi-temporal LANDSAT data over large areas is not yet common in Australia, it is probable that its use will increase as the cost of computing equipment decreases.

    Report on how EIONET and EEA can contribute to the urban in situ requirements of a future Copernicus anthropogenic CO2 observing system

    This report provides a technical review of CO2 and CH4 emissions monitoring methods based on surface mixing ratio measurements, total column mixing ratio measurements, and flux measurements. The review demonstrated that all these measurements would fulfil respective in situ requirements of the Copernicus CO2 MVS capacity, contributing to the validation of space observations in and around cities and/or the system’s city-scale emissions estimates. The review furthermore elaborated on the benefits to climate change mitigation monitoring in the respective cities and how these methods could be implemented to monitor local emissions. Negotiated procedure No EEA/IDM/R0/17/008: services supporting the European Environment Agency’s (EEA) cross-cutting coordination of the Copernicus In Situ Component.

    Combining thermal imaging with photogrammetry of an active volcano using UAV : an example from Stromboli, Italy

    The authors would like to thank the Istituto Nazionale di Geofisica e Vulcanologia – Sezione di Catania (INGV‐CT) for granting permission to conduct the UAV surveys over the Stromboli volcano. This work was supported by the School for Early Career Researchers at the University of Aberdeen, UK. Dougal Jerram is partly funded through a Norwegian Research Council Centres of Excellence project (project number 223272, CEED). The team would like to thank Angelo Cristaudo for logistical help during the fieldwork efforts on Stromboli. Peer reviewed. Postprint.

    Stability Analysis of a Landslide Scarp by Means of Virtual Outcrops: The Mt. Peron Niche Area (Masiere di Vedana Rock Avalanche, Eastern Southern Alps)

    We investigated the Mt. Peron niche area of the Masiere di Vedana rock avalanche (BL), one of the major mass movements that affected the Eastern Southern Alps in historical times. So far, a geomechanical characterization and a stability analysis of the niche area, where potential rockfall sources are present, have been lacking. The Mt. Peron niche area is a rocky cliff almost inaccessible to field-based measurements. To overcome this issue, we performed a geo-structural characterization of a sector of the cliff by means of a UAV-based photogrammetric survey. From the virtual outcrop, we extracted the orientation of 159 fractures, which were divided into sets using a K-means clustering algorithm and field-checked against measurements collected along a rappelling descent route down the cliff. Finally, with the aim of evaluating the stability of the volume under investigation, we performed a stability analysis of three rock pillars included in our survey by means of a distinct element numerical simulation. Our results indicate that two of the three pillars are in a stable state under the simulation assumptions, whereas the third is close to failure, and for this reason its condition needs further investigation.
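Dividing fracture orientations into sets with K-means, as described above, can be sketched in a simplified 2-D form: each fracture azimuth is mapped to a unit vector (so the 0°/360° wrap-around is handled implicitly) and plain K-means groups the vectors. Everything below is an illustrative assumption; the azimuth values, the deterministic initialization, and the two-set example are not the paper's data or processing pipeline, which clustered full 3-D orientations.

```python
import math

def azimuth_to_vec(deg):
    """Unit vector for an azimuth in degrees, so that 359° and 1°
    end up close together despite the numeric wrap-around."""
    rad = math.radians(deg)
    return (math.sin(rad), math.cos(rad))

def kmeans(points, k, iters=20):
    """Minimal K-means on 2-D points. Initial centers are the first k
    points, for determinism; each iteration assigns every point to its
    nearest center and recomputes centers as cluster means."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters
```

Two synthetic fracture families, e.g. azimuths near 10° and near 100°, separate cleanly into two sets under this scheme; real fracture data would also carry dip, requiring clustering on full orientation vectors.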