
    DECISION SUPPORT MODEL IN FAILURE-BASED COMPUTERIZED MAINTENANCE MANAGEMENT SYSTEM FOR SMALL AND MEDIUM INDUSTRIES

    A maintenance decision support system is crucial to ensuring the maintainability and reliability of equipment in production lines. This thesis investigates several decision support models to aid maintenance management activities in small and medium industries. To improve the reliability of resources in production lines, the study introduces a conceptual framework for failure-based maintenance. Maintenance strategies are identified using the Decision-Making Grid model, based on two important factors: the machines' downtimes and their frequency of failures. The machines are categorized into three levels of downtime and failure frequency, namely high, medium, and low. The research derives a formula based on maintenance cost to re-position the machines prior to the Decision-Making Grid analysis. Subsequently, the clustering-analysis formula in the Decision-Making Grid model is improved to solve the multiple-criteria problem. The work also introduces a formula to estimate contractors' response and repair times; the estimates are used as input parameters in the Analytical Hierarchy Process model. The decisions are synthesized using models based on the contractors' technical skills, such as experience in maintenance, skill in diagnosing machines, and the ability to take prompt action during troubleshooting. Another important criterion considered in the Analytical Hierarchy Process is the contractors' business principles, which include maintenance quality, tools and equipment, and enthusiasm in problem-solving. Raw data were collected through observation, interviews, and surveys in case studies to understand risk factors in small and medium food processing industries. The risk factors are analysed with the Ishikawa fishbone diagram to reveal delay times in machinery maintenance. The experimental studies are conducted using maintenance records from food processing industries. The Decision-Making Grid model can detect the ten worst production machines on the production lines, and the Analytical Hierarchy Process model is used to rank the contractors and their best maintenance practices. The research recommends displaying the results on the production indicator boards and implementing the strategies on the production shop floor. The proposed models can be used by decision makers to identify maintenance strategies and to enhance competitiveness among contractors in failure-based maintenance, and they can be programmed as decision support sub-procedures in computerized maintenance management systems.
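    The Decision-Making Grid categorization described above can be sketched in a few lines of code. The thresholds, machine records, and the strategy assigned to each grid cell below are illustrative assumptions rather than the values used in the thesis; the sketch only shows how machines are placed into high/medium/low downtime-by-frequency cells and mapped to a maintenance strategy.

```python
# Illustrative sketch of a Decision-Making Grid (DMG) categorization.
# Thresholds, machine records, and the strategy map are assumptions for demonstration only.

def level(value, low_cut, high_cut):
    """Map a numeric value to 'low', 'medium', or 'high' using two cutoffs."""
    if value < low_cut:
        return "low"
    if value < high_cut:
        return "medium"
    return "high"

# Hypothetical strategy map for the 3x3 grid, keyed by (downtime level, frequency level).
STRATEGY = {
    ("low", "low"): "operate to failure",
    ("low", "medium"): "fixed-time maintenance",
    ("low", "high"): "skill-level upgrade",
    ("medium", "low"): "fixed-time maintenance",
    ("medium", "medium"): "fixed-time maintenance",
    ("medium", "high"): "condition-based maintenance",
    ("high", "low"): "fixed-time maintenance",
    ("high", "medium"): "condition-based maintenance",
    ("high", "high"): "design-out maintenance",
}

# Hypothetical maintenance records: machine -> (total downtime in hours, failure count).
machines = {
    "M01": (120, 14),
    "M02": (35, 3),
    "M03": (80, 9),
}

for name, (downtime, failures) in machines.items():
    cell = (level(downtime, 50, 100), level(failures, 5, 10))
    print(name, cell, "->", STRATEGY[cell])
```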

    Dynamic temporary blood facility location-allocation during and post-disaster periods

    The key objective of this study is to develop a tool (a hybridization of different techniques) for locating temporary blood banks during and post-disaster conditions that can serve hospitals with minimum response time. Temporary blood centres must be located in such a way that they can serve the demand of hospitals in the nearby region within a short duration. We locate the temporary blood centres by minimizing the maximum distance to the hospitals. We use a Tabu search heuristic to determine the optimal number of temporary blood centres, taking cost components into account. In addition, we employ a Bayesian belief network to prioritize the factors for locating the temporary blood facilities. The workability of our model and methodology is illustrated using a case study of blood centres and hospitals surrounding Jamshedpur city. Our results show that at least six temporary blood facilities are required to satisfy the demand for blood during and post-disaster periods in Jamshedpur. The results also show that past disaster conditions, response time, and convenience of access are the most important factors for locating the temporary blood facilities during and post-disaster periods.
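    The location objective described above is essentially a minimax (p-center) problem. The sketch below illustrates that objective with randomly generated coordinates and a simple improvement-by-swap search standing in for the full Tabu search with cost components; the number of facilities, the coordinates, and the search rule are assumptions made for demonstration.

```python
# Sketch of the minimax (p-center) objective: choose p candidate sites so that
# the maximum distance from any hospital to its nearest chosen site is minimal.
# Coordinates, p, and the simple swap search are illustrative assumptions.
import itertools
import math
import random

random.seed(0)
hospitals = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
candidates = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10)]
p = 6  # number of temporary blood centres to open

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def max_response_distance(open_sites):
    """Worst-case distance from a hospital to its nearest open centre."""
    return max(min(dist(h, candidates[s]) for s in open_sites) for h in hospitals)

# Start from a random selection and repeatedly try single-site swaps (a crude
# stand-in for Tabu search moves), keeping any swap that improves the objective.
current = set(random.sample(range(len(candidates)), p))
best_val = max_response_distance(current)
improved = True
while improved:
    improved = False
    for out_site, in_site in itertools.product(list(current), range(len(candidates))):
        if in_site in current:
            continue
        trial = (current - {out_site}) | {in_site}
        val = max_response_distance(trial)
        if val < best_val:
            current, best_val = trial, val
            improved = True
            break

print("chosen centres:", sorted(current), "max distance:", round(best_val, 2))
```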

    Study on New Sampling Plans and Optimal Integration with Proactive Maintenance in Production Systems

    Sampling plans are statistical process control (SPC) tools used mainly in production processes. They are employed to control processes by monitoring the quality of produced products and alerting when adjustments or maintenance are necessary. Sampling is used when an undesirable change (shift) in a process is unobservable and takes time to discover. The shift occurs when an assignable cause affects the process; wrong setups, defective raw materials, and degraded components are examples of assignable causes. An assignable cause shifts a variable (or attribute) quality characteristic from the desired state to an undesired state. The main concern of sampling is to detect a process shift quickly by signaling a true alarm, at which point maintenance is performed to restore the process to its normal operating conditions. While responsive maintenance is performed when a shift is detected, proactive maintenance such as age replacement is integrated with the design of the sampling plan. A sampling plan is designed either economically or economic-statistically: an economic design does not assess system performance, whereas an economic-statistical design includes constraints on system performance, such as the average outgoing quality and the effective production rate. The objective of this dissertation is to study sampling plans by attributes. Two studies are conducted. In the first, a sampling model is developed for attribute inspection in a multistage system with multiple assignable causes that can propagate downstream. In the second, an integrated model of sampling and maintenance, with maintenance performed at the time of a false alarm, is proposed. Most sampling plans are designed based on the occurrence of a single assignable cause; therefore, a sampling plan that allows two assignable causes to occur is developed in the first study. A multistage serial system of two unreliable machines, with one assignable cause that can occur on each machine, is assumed, where the joint occurrence of assignable causes propagates the process's shift to a higher value. As a result, the system state at any time is described by one in-control and three out-of-control states, where the evolution from one state to another depends on the competition between shifts. A stochastic methodology to model all competing scenarios is developed; this methodology forms a base that can be used if the number of machines and/or states increases. In the second study, an integrated model of sampling and scheduled maintenance is proposed. In addition to the two opportunities for maintenance at a true alarm and at scheduled maintenance, an additional opportunity for preventive maintenance at the time of a false alarm is suggested. Since a false alarm can occur at any sampling time, the level of preventive maintenance is assumed to increase with time. The effectiveness of the proposed model is compared with that of separate models of scheduled maintenance and sampling. Inspired by these studies, different topics in sampling and maintenance are proposed for future research: two topics are suggested for integrating sampling with selective maintenance, and the third is an extension of the first study in which more than two shifts can occur simultaneously.
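    As a rough illustration of attribute sampling under an assignable cause, the sketch below simulates a process whose defect rate shifts at a random time and applies a fixed-sample-size rule that raises an alarm when the number of defectives exceeds a control limit. The sampling interval, sample size, control limit, and shift distribution are illustrative assumptions and do not reproduce the dissertation's economic or economic-statistical designs.

```python
# Illustrative attribute-sampling simulation: a process shifts from an in-control
# defect rate p0 to an out-of-control rate p1 after an exponentially distributed
# time, and samples of size n signal an alarm when defectives exceed limit c.
# All parameter values are assumptions chosen for demonstration.
import random

random.seed(1)
p0, p1 = 0.02, 0.10          # in-control / out-of-control defect rates
n, c = 50, 3                 # sample size and control limit (signal if defectives > c)
sampling_interval = 2.0      # hours between samples
mean_time_to_shift = 40.0    # mean of the exponential shift time

def simulate_once():
    """Run one cycle and return (alarm time, shift time, whether the alarm is true)."""
    shift_time = random.expovariate(1.0 / mean_time_to_shift)
    t = 0.0
    while True:
        t += sampling_interval
        p = p1 if t >= shift_time else p0
        defectives = sum(random.random() < p for _ in range(n))
        if defectives > c:
            return t, shift_time, t >= shift_time

true_alarms = 0
delays = []
runs = 1000
for _ in range(runs):
    alarm_t, shift_t, is_true = simulate_once()
    if is_true:
        true_alarms += 1
        delays.append(alarm_t - shift_t)

print("true-alarm fraction:", true_alarms / runs)
print("mean detection delay (h):", sum(delays) / len(delays))
```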

    Topics in the Design of Life History Studies

    Substantial investments are being made in health research to support the conduct of large cohort studies with the objective of improving understanding of the relationships between diverse features (e.g. exposure to toxins, genetic biomarkers, demographic variables) and disease incidence, progression, and mortality. Longitudinal cohort studies are commonly used to study life history processes, that is, patterns of disease onset, progression, and death in a population. While primary interest often lies in estimating the effect of some factor on a simple time-to-event outcome, multistate modelling offers a convenient and powerful framework for the joint consideration of disease onset, progression, and mortality, as well as the effect of one or more covariates on these transitions. Longitudinal studies are typically very costly, and the complexity of the follow-up scheme is often not fully considered at the design stage, which may lead to inefficient allocation of study resources and/or underpowered studies. In this thesis, several aspects of study design are considered to guide the design of complex longitudinal studies, with the general aim being to obtain efficient estimates of parameters of interest subject to cost constraints. Attention is focused on a general K-state model in which states 1, ..., K-1 represent different stages of a chronic disease and state K is an absorbing state representing death. In Chapter 2, we propose an approach to designing efficient tracing studies to mitigate the loss of information stemming from attrition, a common feature of prospective cohort studies. Our approach exploits observed information on state occupancy prior to loss to follow-up, covariates, and the time of loss to follow-up to inform the selection of individuals to be traced, leading to more judicious allocation of resources. Two settings are considered: in the first, there are only constraints on the expected number of individuals to be traced, and in the second, the constraints are imposed on the expected cost of tracing. The latter accommodates the fact that some types of data may be more costly to obtain via tracing than others. In Chapter 3, we focus on two key aspects of longitudinal cohort studies with intermittent assessments: the sample size and the frequency of assessments. We derive the Fisher information as the basis for studying the interplay between these factors and for identifying features of minimum-cost designs that achieve the desired power. Extensions which accommodate the possibility of misclassification of disease status at the intermittent assessment times are developed; these are useful for assessing the impact of imperfect screening or diagnostic tests in the longitudinal setting. In Chapter 4, attention is turned to state-dependent sampling designs for prevalent cohort studies. While incident cohorts involve recruiting individuals before they experience some event of interest (e.g. onset of a particular disease) and prospectively following them to observe this event, prevalent cohorts are obtained by recruiting individuals who have already experienced this event at some point in the past. Prevalent cohort sampling yields length-biased data, which has been studied extensively in the survival setting; we demonstrate the impact of this in the multistate setting. We start with observation schemes in which data are subject to left- or right-truncation in the failure-time setting, and then generalize these findings to more complex multistate models. While the distribution of state occupancy at recruitment in a prevalent cohort sample may be driven by the prevalences in the population, we propose approaches for state-dependent sampling at the design stage to improve efficiency and/or minimize expected study cost. Finally, Chapter 5 features an overview of the key contributions of this research and outlines directions for future work.
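    To make the K-state progressive model concrete, the sketch below computes state occupancy probabilities for a minimal illness-death model (K = 3, with state 3 absorbing) from a time-homogeneous transition intensity matrix via the matrix exponential, P(t) = exp(Qt). The intensity values are illustrative assumptions, and the time-homogeneous Markov structure is a simplification of the models considered in the thesis.

```python
# Occupancy probabilities for a 3-state illness-death model (1 = healthy,
# 2 = diseased, 3 = dead, absorbing) under a time-homogeneous Markov assumption:
# P(t) = expm(Q * t). The intensity values below are illustrative only.
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-0.15,  0.10,  0.05],   # healthy -> diseased, healthy -> dead
    [ 0.00, -0.25,  0.25],   # diseased -> dead (no recovery in this sketch)
    [ 0.00,  0.00,  0.00],   # dead is absorbing
])

for t in (1.0, 5.0, 10.0):
    P = expm(Q * t)
    # Row 0 gives occupancy probabilities at time t for someone healthy at time 0.
    print(f"t = {t:4.1f}  P(healthy) = {P[0, 0]:.3f}  "
          f"P(diseased) = {P[0, 1]:.3f}  P(dead) = {P[0, 2]:.3f}")
```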

    Developing Methods of Obtaining Quality Failure Information from Complex Systems

    The complexity of most engineering systems is constantly growing due to ever-increasing technological advancement. This results in a corresponding need for methods that adequately account for the reliability of such systems based on failure information from the components that make them up. This dissertation presents an approach to validating qualitative function-failure results against model abstraction details. The impact of the level of detail available to a system designer during the conceptual stages of design is considered for failure-space exploration in a complex system. Specifically, the study develops an efficient approach to the detailed function and behavior modeling required for complex system analyses. In addition, a comprehensive review of existing function failure analysis methodologies is synthesized into identified structural groupings. Using simulations, known governing equations are evaluated for component and system models to study responses to faults, accounting for detailed failure scenarios, component behaviors, fault propagation paths, and overall system performance. The components are simulated at nominal states and at varying degrees of fault representing actual modes of operation. Information on product design and on the expected working conditions of components is used in the simulations to address areas normally overlooked during installation. The results of the system model simulations are investigated using clustering analysis to develop an efficient grouping method and a measure of confidence for the obtained results. The intellectual merit of this work is the use of a simulation-based approach to study how generated failure scenarios reveal component fault interactions, leading to a better understanding of fault propagation within design models. The insight gained from using varying-fidelity models for system analysis helps identify models that are sufficient at the conceptual design stages to highlight potential faults, reducing resources such as cost, manpower, and time spent during system design. A broader impact of the project is to help design engineers identify critical components, quantify the risks associated with using particular components in their prototypes early in the design process, and improve fault-tolerant system designs. This research ultimately aims to establish a baseline for validating and comparing theories of complex systems analysis.
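    The clustering of simulated system responses described above might look roughly like the sketch below, which simulates a simple first-order component model at a nominal state and two assumed fault states, extracts two response features per run, and groups the runs with k-means. The component model, fault magnitudes, and features are illustrative assumptions, not the dissertation's actual simulation models.

```python
# Sketch: simulate responses of a simple first-order system under nominal and
# degraded component parameters, then cluster the response features to see
# whether fault severities separate. Model and fault levels are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)

def step_response(gain, time_constant):
    """First-order step response with small measurement noise."""
    return gain * (1.0 - np.exp(-t / time_constant)) + rng.normal(0.0, 0.02, t.size)

# Nominal behaviour plus two assumed fault modes (reduced gain, slowed dynamics).
scenarios = (
    [(1.0, 1.0)] * 20 +      # nominal
    [(0.6, 1.0)] * 20 +      # degraded gain
    [(1.0, 3.0)] * 20        # degraded time constant
)

# Simple features per simulated run: steady-state value and time to reach 63% of it.
features = []
for gain, tau in scenarios:
    y = step_response(gain, tau)
    final = y[-50:].mean()
    idx = np.argmax(y >= 0.63 * final)
    features.append([final, t[idx]])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(features))
print("cluster sizes:", np.bincount(labels))
```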

    Structural System Reliability: Overview of Theories and Applications to Optimization

    This paper provides an overview of theories and applications of structural system reliability (SSR). The paper defines SSR problems and discusses the growing need for SSR analysis and the associated technical challenges. Detailed literature reviews are provided for three subtopics: SSR methods for Boolean system events, SSR methods for sequential failures, and SSR-based design/topology optimization. The discussion of each subtopic defines the target problem using mathematical formulations and categorizes existing SSR methods in terms of the characteristics of the problems and approaches. The paper summarizes SSR methods that have been historically influential or have introduced notable technological developments in recent years. In each subtopic or category, the reviewed methods are compared with each other in terms of accuracy, computational efficiency, and implementation issues, to help identify apposite methods for SSR applications. The paper concludes with remarks on future research needs and opportunities.
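    For the first subtopic, SSR methods for Boolean system events, a minimal example of the target problem is the failure probability of a small series-parallel system with independent components, shown below both in closed form and by crude Monte Carlo. The system layout and the component failure probabilities are illustrative assumptions, not values taken from the paper.

```python
# Minimal Boolean system-event example: the system fails if component 1 fails,
# or if both components 2 and 3 fail. Failure probabilities are assumptions.
import random

random.seed(42)
p_fail = {1: 0.02, 2: 0.10, 3: 0.10}  # independent component failure probabilities

def system_fails(state):
    """state[i] is True if component i has failed."""
    return state[1] or (state[2] and state[3])

# Exact value for this simple structure: P(F1) + P(F2)P(F3) - P(F1)P(F2)P(F3).
exact = p_fail[1] + p_fail[2] * p_fail[3] - p_fail[1] * p_fail[2] * p_fail[3]

# Crude Monte Carlo estimate of the same system failure probability.
n = 200_000
count = sum(
    system_fails({i: random.random() < p for i, p in p_fail.items()})
    for _ in range(n)
)

print(f"exact: {exact:.5f}  Monte Carlo: {count / n:.5f}")
```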

    Efficient resilience analysis and decision-making for complex engineering systems

    Modern societies around the world are increasingly dependent on the smooth functionality of progressively more complex systems, such as infrastructure systems, digital systems like the internet, and sophisticated machinery. They form the cornerstones of our technologically advanced world, and their efficiency is directly related to our well-being and the progress of society. However, these important systems are constantly exposed to a wide range of threats of natural, technological, and anthropogenic origin. The emergence of global crises such as the COVID-19 pandemic and the ongoing threat of climate change have starkly illustrated the vulnerability of these widely ramified and interdependent systems, as well as the impossibility of predicting threats entirely. The pandemic, with its widespread and unexpected impacts, demonstrated how an external shock can bring even the most advanced systems to a standstill, while ongoing climate change continues to produce unprecedented risks to system stability and performance. These global crises underscore the need for systems that can not only withstand disruptions but also recover from them efficiently and rapidly. The concept of resilience and related developments encompass these requirements: analyzing, balancing, and optimizing the reliability, robustness, redundancy, adaptability, and recoverability of systems -- from both technical and economic perspectives. This cumulative dissertation therefore focuses on developing comprehensive and efficient tools for resilience-based analysis and decision-making for complex engineering systems. The newly developed resilience decision-making procedure is at the core of these developments. It is based on an adapted systemic risk measure, a time-dependent probabilistic resilience metric, and a grid search algorithm, and it represents a significant innovation, as it enables decision-makers to identify an optimal balance between different types of resilience-enhancing measures while taking monetary aspects into account. Increasingly, system components have significant inherent complexity, requiring them to be modeled as systems themselves; this leads to systems-of-systems with a high degree of complexity. To address this challenge, a novel methodology is derived by extending the previously introduced resilience framework to multidimensional use cases and synergistically merging it with an established concept from reliability theory, the survival signature. The new approach combines the advantages of both original components: a direct comparison of different resilience-enhancing measures from a multidimensional search space, leading to an optimal trade-off in terms of system resilience, and a significant reduction in computational effort due to the separation property of the survival signature. Once a subsystem structure has been computed -- a typically computationally expensive process -- any characterization of the probabilistic failure behavior of components can be validated without having to recompute the structure. In reality, measurements, expert knowledge, and other sources of information are subject to multiple uncertainties. For this purpose, an efficient method based on the combination of the survival signature, fuzzy probability theory, and non-intrusive stochastic simulation (NISS) is proposed. This results in an efficient approach for quantifying the reliability of complex systems while taking the entire uncertainty spectrum into account.
The new approach, which synergizes the advantageous properties of its original components, achieves a significant decrease in computational effort due to the separation property of the survival signature. In addition, it attains a dramatic reduction in sample size due to the adapted NISS method: only a single stochastic simulation is required to account for uncertainties. The novel methodology not only represents an innovation in the field of reliability analysis, but can also be integrated into the resilience framework. For a resilience analysis of existing systems, the consideration of continuous component functionality is essential; this is addressed in a further novel development. By introducing the continuous survival function and the concept of the Diagonal Approximated Signature as a corresponding surrogate model, the existing resilience framework can be usefully extended without compromising its fundamental advantages. In the context of the regeneration of complex capital goods, a comprehensive analytical framework is presented to demonstrate the transferability and applicability of all the developed methods to complex systems of any type. The framework integrates the previously developed resilience, reliability, and uncertainty analysis methods. It provides decision-makers with a basis for identifying resilient regeneration paths in two ways: first, in terms of regeneration paths with inherent resilience, and second, in terms of regeneration paths that lead to maximum system resilience, taking into account the technical and monetary factors affecting the complex capital good under analysis. In summary, this dissertation offers innovative contributions to efficient resilience analysis and decision-making for complex engineering systems. It presents universally applicable methods and frameworks that are flexible enough to accommodate system types and performance measures of any kind. This is demonstrated in numerous case studies ranging from arbitrary flow networks and functional models of axial compressors to substructured infrastructure systems with several thousand individual components.
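    The separation property of the survival signature mentioned above can be illustrated with a small example: once the signature of the system structure is known, the system survival probability for any assumed component lifetime model follows directly, without re-evaluating the structure. The two-out-of-three structure and the exponential component lifetimes below are assumptions chosen for demonstration.

```python
# Illustration of the survival-signature separation property for one component
# type: P(system works at t) = sum_l Phi(l) * P(exactly l of m components work).
# The 2-out-of-3 structure and exponential lifetimes are assumptions for demo.
import numpy as np
from scipy.stats import binom

m = 3
# Survival signature of a 2-out-of-3 system: Phi(l) = P(system works | l components work).
phi = {0: 0.0, 1: 0.0, 2: 1.0, 3: 1.0}

def component_reliability(t, rate=0.1):
    """Exponential component survival function (assumed failure rate)."""
    return np.exp(-rate * t)

def system_survival(t):
    r = component_reliability(t)
    # The structure (phi) is separated from the failure model (r): changing the
    # component lifetime distribution requires no recomputation of phi.
    return sum(phi[l] * binom.pmf(l, m, r) for l in range(m + 1))

for t in (1.0, 5.0, 10.0):
    print(f"t = {t:4.1f}  system survival = {system_survival(t):.4f}")
```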