Implementing total productive maintenance in Nigerian manufacturing industries
Remarkable improvements have occurred recently in the maintenance management of physical assets and productive systems, reducing wastage of energy and resources. The requirement for optimal preventive maintenance using, for instance, just-in-time (JIT) and total quality management (TQM) techniques has given rise to what has been called the total productive maintenance (TPM) approach. This study explores the ways in which Nigerian manufacturing industries can implement TPM as a strategy and culture for improving their performance, and suggests self-auditing and benchmarking as desirable prerequisites before TPM implementation.
Automatic assembly design project 1968/9: report of economic planning committee
Investigations into automatic assembly systems have been carried out. The conclusions show the major features to be considered by a company operating the machine to assemble the contact block, with regard to machine output and financial aspects. The machine system has been shown to be economically viable for use under suitable conditions, but the contact block is considered to be unsuitable for automatic assembly. Data for machine specification, reliability and maintenance have been provided.
On-Orbit Compressor Technology Program
A synopsis of the On-Orbit Compressor Technology Program is presented. The objective is the exploration of compressor technology applicable for use by the Space Station Fluid Management System, Space Station Propulsion System, and related on-orbit fluid transfer systems. The approach is to extend the current state-of-the-art in natural gas compressor technology to the unique requirements of high-pressure, low-flow, small, light, and low-power devices for on-orbit applications. This technology is adapted to seven on-orbit conceptual designs, and one prototype is developed and tested.
On the use of testability measures for dependability assessment
Program "testability" is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to "ultra-high reliability" requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical "confidence level". We also show that a high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
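The Bayesian reasoning this abstract describes can be illustrated with a toy model (all names and numbers here are illustrative, not taken from the paper): treat testability as the probability that a faulty program fails a single test, assume a correct program never fails, and update a prior belief of correctness after n failure-free tests.

```python
def posterior_correct(prior_correct: float, testability: float, n_tests: int) -> float:
    """P(program correct | n failure-free tests), under the toy model where a
    faulty program fails each test independently with probability
    `testability` and a correct program never fails (Bayes' rule)."""
    p_survive_if_faulty = (1.0 - testability) ** n_tests
    p_correct = prior_correct
    p_faulty = 1.0 - prior_correct
    return p_correct / (p_correct + p_faulty * p_survive_if_faulty)


def p_next_failure(prior_correct: float, testability: float, n_tests: int) -> float:
    """The abstract's caveat in miniature: the chance the next test fails is
    P(still faulty | evidence) * testability, so a higher testability paired
    with a lower prior of correctness need not yield a more trustworthy program."""
    return (1.0 - posterior_correct(prior_correct, testability, n_tests)) * testability
```

For example, under this toy model a 1% prior of correctness combined with a testability of 0.1 is pushed above a 0.99 posterior of correctness by 100 failure-free tests.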
Big Data and Reliability Applications: The Complexity Dimension
Big data features not only large volumes of data but also data with
complicated structures. Complexity imposes unique challenges in big data
analytics. Meeker and Hong (2014, Quality Engineering, pp. 102-116) provided an
extensive discussion of the opportunities and challenges in big data and
reliability, and described engineering systems that can generate big data that
can be used in reliability analysis. Meeker and Hong (2014) focused on large
scale system operating and environment data (i.e., high-frequency multivariate
time series data), and provided examples on how to link such data as covariates
to traditional reliability responses such as time to failure, time to
recurrence of events, and degradation measurements. This paper intends to
extend that discussion by focusing on how to use data with complicated
structures to do reliability analysis. Such data types include high-dimensional
sensor data, functional curve data, and image streams. We first provide a
review of recent development in those directions, and then we provide a
discussion on how analytical methods can be developed to tackle the challenging
aspects that arise from the complexity feature of big data in reliability
applications. The use of modern statistical methods such as variable selection,
functional data analysis, scalar-on-image regression, spatio-temporal data
models, and machine learning techniques will also be discussed.
Comment: 28 pages, 7 figures
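The covariate-linking strategy the abstract describes can be sketched on synthetic data (everything below is an illustrative assumption, not a method from the paper): reduce each unit's high-frequency sensor stream to a scalar covariate, then fit a simple log-linear model of failure time on that covariate.

```python
import numpy as np

# Toy sketch: each unit has a simulated high-frequency sensor stream; its
# mean serves as a covariate x, and log failure time follows
# log(T) = b0 + b1 * x + noise (an accelerated-failure-time-style model).
# Real analyses would handle censoring, functional covariates, etc.
rng = np.random.default_rng(0)
n_units = 200
streams = [rng.normal(loc=mu, scale=0.2, size=500)
           for mu in rng.uniform(1.0, 3.0, n_units)]
x = np.array([s.mean() for s in streams])          # stream -> scalar covariate
log_t = 5.0 - 0.8 * x + rng.normal(scale=0.1, size=n_units)  # true b1 = -0.8

# ordinary least squares on log failure time
X = np.column_stack([np.ones(n_units), x])
b0, b1 = np.linalg.lstsq(X, log_t, rcond=None)[0]
```

The fitted slope b1 recovers the simulated effect of the sensor covariate on (log) time to failure, which is the kind of covariate-to-response link the survey discusses in far richer forms.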
Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results
Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be more simply described in terms of assumptions on the probability distribution of the failure intensity of the program. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios, one for software for which the reliability requirements are that the software must be completely fault-free, and another for requirements stated as an upper bound on the acceptable failure probability.
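The second scenario (an upper bound on the acceptable failure probability) can be caricatured with a two-point prior (a hedged sketch; the rule, names, and thresholds are assumptions for illustration, not the paper's criteria): the per-demand failure probability is either 0 (fault-free) or some known value if faulty, and the program is accepted when the posterior probability of exceeding the bound is small enough.

```python
def accept_on_bound(prior_fault_free: float, q_fault: float,
                    n_tests: int, bound: float, alpha: float) -> bool:
    """Toy acceptance rule: per-demand failure probability q is either 0
    (fault-free) or q_fault (faulty, i.e. the testability). After n
    failure-free tests, accept if the posterior probability that q
    exceeds `bound` is at most the risk threshold alpha."""
    like_faulty = (1.0 - q_fault) ** n_tests  # P(n failure-free | faulty)
    post_faulty = ((1.0 - prior_fault_free) * like_faulty
                   / (prior_fault_free + (1.0 - prior_fault_free) * like_faulty))
    p_exceed = post_faulty if q_fault > bound else 0.0
    return p_exceed <= alpha
```

With an even prior, q_fault = 0.01, and a bound of 0.001, zero tests give no acceptance, while on the order of a thousand failure-free tests drive the posterior weight on the faulty hypothesis low enough to accept at alpha = 0.05 in this toy model.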
Evaluating the resilience and security of boundaryless, evolving socio-technical Systems of Systems
- …