
    Maintenance strategy optimization for complex power systems susceptible to maintenance delays and operational dynamics

    Maintenance is a necessity for most multicomponent systems, but its benefits are often accompanied by considerable costs. However, with the appropriate number of maintenance teams and a sufficiently tuned maintenance strategy, optimal system performance is attainable. Given system complexities and operational uncertainties, identifying the optimal maintenance strategy is a challenge. A robust computational framework, therefore, is proposed to alleviate these difficulties. The framework is particularly suited to systems with uncertainties in the use of spares during maintenance interventions, and where these spares are characterized by delayed availability. It provides a series of generally applicable multistate models that adequately define component behavior under various maintenance strategies. System operation is reconstructed from these models using an efficient hybrid load-flow and event-driven Monte Carlo simulation. The simulation's novelty stems from its ability to intuitively implement complex strategies involving multiple contrasting maintenance regimes. The framework is used to identify the optimal maintenance strategies for a hydroelectric power plant and the IEEE-24 RTS. In each case, the sensitivity of the optimal solution to cost-level variations is investigated via a procedure requiring a single reliability evaluation, thereby reducing the computational costs significantly. The results show the usefulness of the framework as a rational decision-support tool in the maintenance of multicomponent multistate systems.
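As a rough, self-contained illustration of the event-driven Monte Carlo idea invoked above (a sketch under simplifying assumptions, not code from the paper; the single-component model, function name, and parameters are hypothetical):

```python
import random

def simulate_availability(mttf, mttr, horizon, runs=2000, seed=42):
    """Event-driven Monte Carlo estimate of a component's mean availability.

    The component alternates between exponentially distributed uptimes
    (mean mttf) and repair times (mean mttr); each run advances time
    event by event instead of in fixed steps.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, up_time, is_up = 0.0, 0.0, True
        while t < horizon:
            dwell = rng.expovariate(1.0 / (mttf if is_up else mttr))
            dwell = min(dwell, horizon - t)  # truncate at mission end
            if is_up:
                up_time += dwell
            t += dwell
            is_up = not is_up  # next event: failure or repair completion
        total += up_time / horizon
    return total / runs
```

For mttf = 100 and mttr = 10 the estimate approaches the steady-state value mttf / (mttf + mttr), about 0.909; the multicomponent, multistate version with maintenance teams and delayed spares is substantially more involved.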

    Efficient Reliability Modelling & Analysis of Complex Systems with Application to Nuclear Power Plant Safety

    Nuclear power may be our best chance at a permanent solution to the world's energy challenges, owing to its sustainability and environmental friendliness. However, it also poses a great risk to life, property, and the economy, given the possibility of severe accidents during its generation. These accidents result from the susceptibility of the generating plants to component failure, human error, extreme environmental events, targeted attacks, and natural disasters. Given the complexity and high interconnectivity of the systems in question, a small glitch, otherwise known as an initiating event, could cascade into catastrophic consequences. It is, therefore, vital that the vulnerability of a plant to these glitches and their ensuing consequences be ascertained, to ensure that the appropriate mitigating actions are taken. The reliability of a system is the likelihood that it survives a defined period, and its availability is the likelihood of it being capable of performing its required functions on demand. These quantities are important to a nuclear power plant's safety because a nuclear power plant is, by default, equipped with safety systems to inhibit the propagation of an initiating event. An accident ensues if the safety systems required to mitigate some initiating event are unavailable or incapacitated by the initiating event. It is, therefore, easy to see that the reliability, as well as the availability, of these systems shapes the safety of the plant. These crucial quantities are currently estimated using legacy techniques such as static fault and event tree analyses or their derivatives. Despite their popularity and widely acclaimed success, these legacy techniques lack the flexibility to fully implement the operational dynamics of the majority of systems. Most importantly, their ease of application deteriorates with increasing system size and complexity, such that the analyst is often forced to make unrealistic assumptions.
These unrealistic assumptions sometimes compromise the accuracy of the results obtained and, subsequently, the quality of the risk management decisions reached. Their inadequacy is often amplified if the system is composed of multi-state components or characterised by epistemic uncertainties induced by vague or imprecise data. The ideal approach, therefore, should be sufficiently robust not to necessitate unrealistic assumptions, yet flexible enough to accommodate realistic system attributes, while guaranteeing accuracy. This dissertation provides a detailed account of a series of computationally efficient system reliability analysis techniques proposed to address the limitations of the existing probabilistic risk assessment approaches. The proposed techniques are based mainly on an advanced hybrid event-driven Monte Carlo simulation technique that invokes load-flow principles to resolve, intuitively, the difficulties associated with the topological complexity of systems and the multi-state attributes of their components. In addition to their intuitiveness and relative completeness, a key advantage of the proposed techniques is their general applicability. They have been applied, for instance, to a variety of problems, ranging from the production availability of an offshore oil installation and the maintenance strategy optimization of the IEEE-24 bus test system to the probabilistic risk assessment of station blackout accidents at the Maanshan nuclear power plant in Taiwan. The proposed techniques, therefore, should inform robust decisions in the risk management of not only nuclear power plants but other critical systems as well. They have been incorporated into the open-source uncertainty quantification tool, OpenCossan, to render them readily available to industry and other researchers.
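For contrast, the static legacy calculations mentioned above reduce to enumerating component states. A minimal sketch for the unavailability of a hypothetical k-out-of-n safety system (names and numbers are illustrative, not from the dissertation):

```python
from itertools import product

def system_unavailability(p_fail, k, n):
    """Probability that fewer than k of n identical, independent
    components are working: the static, fault-tree-style calculation."""
    total = 0.0
    for states in product([0, 1], repeat=n):  # 1 = component failed
        if n - sum(states) < k:  # fewer than k survivors: system fails
            prob = 1.0
            for s in states:
                prob *= p_fail if s else (1.0 - p_fail)
            total += prob
    return total
```

For p_fail = 0.1 and a 2-out-of-3 system this gives 3(0.1)^2(0.9) + (0.1)^3 = 0.028; what such closed-form enumeration cannot capture is precisely the dynamic, multi-state behaviour the simulation techniques target.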

    Developing a diagnostic heuristic for integrated sugarcane supply and processing systems.

    Doctoral Degrees. University of KwaZulu-Natal, Pietermaritzburg. Innovation is a valuable asset that gives supply chains a competitive edge. However, the adoption of innovative research recommendations in agricultural value chains, and in integrated sugarcane supply and processing systems (ISSPS) in particular, has been relatively slow when compared with other industries such as electronics and automotive. The slow adoption is attributed to the complex, multidimensional nature of ISSPS and the perceived lack of a holistic approach when dealing with certain issues. Most interventions into ISSPS view the system as characterised by tame problems; hence the widespread application of traditional operations research approaches. Integrated sugarcane supply and processing systems are, nonetheless, also characterised by wicked problems. Interventions into such contexts should, therefore, embrace tame and/or wicked issues. Systemic approaches are important and have in the past identified several system-scale opportunities within ISSPS. Such interventions are multidisciplinary and employ a range of methodologies spanning across paradigms. The large number of methodologies available, however, makes choosing the right method, or a combination thereof, difficult. In this context, a novel overarching diagnostic heuristic for ISSPS was developed in this research. The heuristic will be used to diagnose relatively small but pertinent ISSPS constraints and opportunities. The heuristic includes a causal model that determines and ranks linkages between the many domains that govern integrated agricultural supply and processing systems (IASPS), viz. biophysical, collaboration, culture, economics, environment, future strategy, information sharing, political forces, and structures. Furthermore, a diagnostic toolkit based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was developed.
The toolkit comprises diagnostic criteria and a suite of systemic tools. The toolkit, in addition, determines the suitability of each tool for diagnosing any of the IASPS domains. Overall, the diagnostic criteria include accessibility, interactiveness, transparency, iterativeness, feedback, cause-and-effect logic, and time delays. The tools considered for the toolkit were current reality trees, fuzzy cognitive maps (FCMs), network analysis approaches, rich pictures (RP), stock and flow diagrams, cause and effect diagrams (CEDs), and causal loop diagrams (CLDs). Results from the causal model indicate that collaboration, structure, and information sharing had high direct leverage over the other domains, as these were associated with a larger number of linkages. Collaboration and structure further provided dynamic leverage, as these were also part of feedback loops. Political forces and the culture domain, in contrast, provided low leverage, as these domains were only directly linked to collaboration. It was further revealed that each tool provides a different facet to complexity; hence the need for methodological pluralism. All the tools except RP could be applied, to a certain extent, across both appreciation and analysis criteria. Rich pictures do not have causal analysis capabilities, viz. cause-and-effect logic, time delays, and feedback. Stock and flow diagrams and CLDs, conversely, met all criteria. All the diagnostic tools in the toolkit could be used across all the system domains except for FCMs. Fuzzy cognitive maps are explicitly subjective and their contribution lies outside the objective world. Caution should therefore be exercised when FCMs are applied within the biophysical domain. The heuristic is only an aid to decision making; the decision to select a tool, or a combination thereof, remains with the user(s). Even though the heuristic was demonstrated at the Mhlume sugarcane milling area, it is recommended that other areas be considered for future research.
The heuristic itself should continuously be updated with criteria, tools, and other domain dimensions.
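The TOPSIS technique on which the toolkit is based ranks alternatives by their relative closeness to an ideal solution. A minimal, self-contained sketch (the decision matrix, weights, and function names are illustrative, not values from the thesis):

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives (rows) against criteria (columns):
    vector-normalize, weight, then measure Euclidean distance to the
    ideal and anti-ideal solutions; a higher score is better."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    best = [max(col) if benefit[j] else min(col)
            for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # relative closeness
    return scores
```

The `benefit` flags mark criteria where larger values are better; cost-type criteria invert the ideal and anti-ideal directions.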

    Mean-Field-Type Games in Engineering

    A mean-field-type game is a game in which the instantaneous payoffs and/or the state dynamics functions involve not only the state and the action profile but also the joint distributions of state-action pairs. This article presents some engineering applications of mean-field-type games, including road traffic networks, multi-level building evacuation, millimeter wave wireless communications, distributed power networks, virus spread over networks, virtual machine resource management in cloud networks, synchronization of oscillators, energy-efficient buildings, online meetings, and mobile crowdsensing.
    Comment: 84 pages, 24 figures, 183 references. To appear in AIMS 201
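In the notation typical of this literature (a stylized sketch, not a formula from the article), the distinguishing feature, namely payoffs and dynamics that depend on the joint law of the state-action pair, can be written as:

```latex
J(u) = \mathbb{E}\!\left[\int_0^T r\big(x_t,\, u_t,\, \mathcal{L}(x_t, u_t)\big)\,dt
       + g\big(x_T,\, \mathcal{L}(x_T)\big)\right],
\qquad
dx_t = b\big(x_t,\, u_t,\, \mathcal{L}(x_t, u_t)\big)\,dt + \sigma\, dW_t ,
```

where \(\mathcal{L}(x_t, u_t)\) denotes the joint distribution of the state-action pair; dropping this dependence recovers a classical stochastic control problem.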

    Railway operations in icing conditions: a review of issues and mitigation methods

    This article reviews the current literature on railway operations in icing conditions, identifies icing effects on railway infrastructure, rolling stock, and operations, and summarizes the existing solutions for addressing these issues. Even though various studies have been conducted in the past on the impact of winter, climate change, and low temperatures on railway operations, little work has been done on optimizing railway operations under icing conditions. This study demonstrates that further research is needed to better understand ice accretion and its effects on different parts of railways. Railway infrastructure appears to face serious problems during icing conditions, and additional research in this field is required to precisely identify the problems and suggest solutions. It is therefore important to enhance the knowledge in this area and to develop suitable, optimal, and cost-effective ice mitigation methods that minimize icing effects on railway operations and safety.

    GPT Models in Construction Industry: Opportunities, Limitations, and a Use Case Validation

    Large Language Models (LLMs) trained on large data sets came into prominence in 2018 after Google introduced BERT. Subsequently, different LLMs, such as the GPT models from OpenAI, have been released. These models perform well on diverse tasks and have been gaining widespread application in fields such as business and education. However, little is known about the opportunities and challenges of using LLMs in the construction industry. This study therefore aims to assess GPT models in the construction industry. A critical review, expert discussion, and case study validation are employed to achieve the study objectives. The findings reveal opportunities for GPT models throughout the project lifecycle. The challenges of leveraging GPT models are highlighted, and a use case prototype is developed for materials selection and optimization. The findings of the study would benefit researchers, practitioners, and stakeholders, as they present research vistas for LLMs in the construction industry.
    Comment: 58 pages, 20 figure

    Distributed Predictive Control for MVDC Shipboard Power System Management

    A Shipboard Power System (SPS) is an independently controlled small electric network powered by a distributed onboard generation system. Since many electric components are tightly coupled in a small space and the system is not supported by a relatively strong grid, an SPS is more susceptible to unexpected disturbances and physical damage than conventional terrestrial power systems. Among different distribution configurations, power-electronics-based DC distribution is considered the trending technology for the next-generation U.S. Navy fleet design, replacing the conventional AC-based distribution. This research presents appropriate control management frameworks to improve Medium-Voltage DC (MVDC) shipboard power system performance. Model Predictive Control (MPC) is an advanced model-based approach that uses the system model to predict future output states and generates an optimal control sequence over the prediction horizon. In this research, a centralized MPC is first developed for a nonlinear MVDC SPS when a high-power pulsed load exists in the system. Closed-loop stability analysis is incorporated into the MPC optimization problem. A comparison is presented for different cases of load prediction for MPC, namely, no prediction, perfect prediction, and Autoregressive Integrated Moving Average (ARIMA) prediction. Another centralized MPC controller is also designed to address the reconfiguration problem of the MVDC system in abnormal conditions. The reconfiguration goal is to maximize the power delivered to the loads subject to power balance, generation limits, and load priorities. Moreover, a distributed control structure is proposed for a nonlinear MVDC SPS to develop a scalable power management architecture. In this framework, each subsystem is controlled by a local MPC using its own state variables, parameters, and interaction variables from other subsystems, communicated through a coordinator.
The Goal Coordination principle is used to manage interactions between subsystems. The developed distributed control structure offers several significant advantages, including lower computational overhead, higher flexibility, good error tolerance, and good overall system performance. To demonstrate the efficiency of the proposed approach, a performance analysis is carried out by comparing centralized and distributed control of global and partitioned MVDC models for two cases of continuous and discretized control inputs.
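At its core, the receding-horizon mechanism described above can be sketched for a scalar linear model by enumerating candidate input sequences over the prediction horizon and applying only the first input of the best one (an illustrative simplification; the model, cost weights, and enumeration approach are hypothetical, not the dissertation's MVDC formulation):

```python
from itertools import product

def mpc_control(x0, a, b, horizon, q, r, candidates):
    """Score every candidate control sequence over the horizon using the
    model x+ = a*x + b*u and stage cost q*x^2 + r*u^2; return the first
    input of the cheapest sequence (receding-horizon principle)."""
    best_u0, best_cost = None, float("inf")
    for seq in product(candidates, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = a * x + b * u              # model-based prediction
            cost += q * x * x + r * u * u
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

def closed_loop(x0, a, b, steps, **mpc_kwargs):
    """Apply only the first input each step, then re-solve at the new state."""
    x, trajectory = x0, []
    for _ in range(steps):
        u = mpc_control(x, a, b, **mpc_kwargs)
        x = a * x + b * u
        trajectory.append(x)
    return trajectory
```

Starting from x0 = 5 with an open-loop-unstable model (a = 1.1), the loop steers the state toward zero; a real MPC replaces the enumeration with a constrained optimization, and the distributed variant solves coupled local problems coordinated as described above.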

    GTTC Future of Ground Testing Meta-Analysis of 20 Documents

    National research, development, test, and evaluation ground testing capabilities in the United States are at risk. There is a lack of vision and consensus on what is and will be needed, contributing to a significant threat that ground test capabilities may not be able to meet the national security and industrial needs of the future. To support future decisions, the AIAA Ground Testing Technical Committee's (GTTC) Future of Ground Test (FoGT) Working Group selected and reviewed 20 seminal documents related to the application and direction of ground testing. Each document was reviewed, with its main points collected and organized into gap-analysis sections: current state, future state, major challenges/gaps, and recommendations. This paper includes key findings and selected commentary by an editing team.