
    Enhancing remanufacturing automation using deep learning approach

    In recent years, remanufacturing has attracted significant interest from researchers and practitioners as a route to maximum value recovery from products at end-of-life (EoL). Remanufacturing returns used products, known as EoL products, to as-new condition with a warranty matching or exceeding that of new products. However, remanufacturing processes are complex and time-consuming to perform manually, reducing productivity and posing dangers to personnel. These challenges call for automating the various remanufacturing process stages to achieve higher throughput and lower lead time, cost, and environmental impact while maximising economic gains. Moreover, as various research groups have highlighted, there is currently a shortage of adequate remanufacturing-specific technologies to achieve full automation.
    This research explores automating remanufacturing processes to improve competitiveness by analysing and developing deep learning (DL) based models for different stages of the remanufacturing process. Deep learning involves using artificial neural networks to learn high-level abstractions in data; DL models are inspired by the human brain and have produced state-of-the-art results in pattern recognition, object detection, and other applications, making them a viable option for developing technologies capable of overcoming the outlined challenges. The research investigates empirical data on torque converter components recorded at a remanufacturing facility in Glasgow, UK, using in-case and cross-case analysis to evaluate the remanufacturing inspection, sorting, and process control applications. The developed algorithms capture, pre-process, train, deploy, and evaluate the performance of the respective processes.
    The experimental evaluation of the in-case and cross-case analyses, using model prediction accuracy, misclassification rate, and model loss, shows that the developed models achieved a prediction accuracy above 99.9% across the sorting, inspection, and process control applications. Furthermore, a low model loss between 3×10⁻³ and 1.3×10⁻⁵ was obtained, alongside a misclassification rate between 0.01% and 0.08%, across the three applications investigated, highlighting the capability of the developed DL algorithms to perform sorting, process control, and inspection in remanufacturing. The results demonstrate the viability of adopting DL-based algorithms to automate remanufacturing processes, achieving safer and more efficient remanufacturing.
    Finally, this research is unique because it is the first to investigate using deep learning and qualitative torque-converter image data to model remanufacturing sorting, inspection, and process control applications. It delivers a custom computational model with the potential to enhance remanufacturing automation. The findings and publications benefit both academics and industrial practitioners, and the model is easily adaptable to other remanufacturing applications with minor modifications to enhance process efficiency in today's workplaces.
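    The three evaluation metrics reported above (prediction accuracy, misclassification rate, and model loss) can be sketched in a few lines. The labels and predicted probabilities below are purely illustrative, not data from the study.

```python
import math

def evaluate(y_true, y_prob, threshold=0.5):
    """Compute accuracy, misclassification rate, and mean binary
    cross-entropy loss for a batch of probabilistic predictions."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    misclassification = 1.0 - accuracy
    eps = 1e-12  # guard against log(0)
    loss = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_prob)) / len(y_true)
    return accuracy, misclassification, loss

# Hypothetical predictions for four parts (1 = part passes inspection).
acc, mis, loss = evaluate([1, 0, 1, 1], [0.98, 0.03, 0.91, 0.88])
```

    Accuracy and misclassification rate are complementary, which is why the abstract can report 99.9%+ accuracy alongside a 0.01-0.08% misclassification rate.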

    Bayesian-Based Predictive Analytics for Manufacturing Performance Metrics in the Era of Industry 4.0

    The research in this dissertation proposes Bayesian-based predictive analytics for modeling and predicting manufacturing metrics such as cutting force, tool life, and reliability in the technological era of Industry 4.0. Bayesian statistics is a probabilistic method that can quantify and reduce manufacturing process uncertainties. The Bayesian method combines prior knowledge about the manufacturing models with experimental data to predict the manufacturing metrics.
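    As a minimal sketch of combining prior knowledge with experimental data, the conjugate normal-normal update below fuses a hypothetical prior belief about mean tool life with a few invented measurements of known noise variance; the numbers are illustrative and not taken from the dissertation.

```python
def posterior_normal(prior_mean, prior_var, data, noise_var):
    """Conjugate normal-normal update: combine a prior belief about a
    metric (e.g. mean tool life in minutes) with noisy measurements.
    Returns the posterior mean and variance."""
    n = len(data)
    sample_mean = sum(data) / n
    # Precisions (inverse variances) add; the posterior mean is a
    # precision-weighted blend of prior mean and sample mean.
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / noise_var)
    return post_mean, post_var

# Hypothetical: prior belief of 30 min mean tool life (variance 25),
# three measurements on worn tooling with noise variance 4.
mean, var = posterior_normal(30.0, 25.0, [26.0, 27.5, 25.5], 4.0)
```

    The posterior mean lands between the prior and the data, and the posterior variance shrinks below both, which is the sense in which the method "quantifies and reduces" uncertainty.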

    Advanced system engineering approaches to dynamic modelling of human factors and system safety in sociotechnical systems

    Sociotechnical systems (STSs) are complex operational processes composed of interactive, interdependent social elements and organizational and human activities. This research seeks to fill important knowledge gaps in system safety performance and human factors analysis in STSs. First, an in-depth critical analysis explores state-of-the-art findings, needs, gaps, key challenges, and research opportunities in human reliability and factors analysis (HR&FA). Accordingly, a risk model is developed to capture the dynamic nature of different system failures and integrate them into system safety barriers under uncertainty, as per the Safety-I paradigm. This is followed by a novel dynamic human-factor risk model tailored to assessing system safety in STSs based on Safety-II concepts. The work is extended to further explore system safety using Performance Shaping Factors (PSFs), proposing a systematic approach to identify PSFs and quantify their importance and their influence on the performance of sociotechnical systems' functions. Finally, a systematic review provides a holistic profile of HR&FA in complex STSs, with a deep focus on the contribution of artificial intelligence and expert systems to HR&FA in complex systems. The findings reveal that the proposed models can effectively address critical challenges associated with system safety and human factors quantification; the same holds for uncertainty characterization. Furthermore, the proposed advanced probabilistic model better captures evolving dependencies among system safety performance factors and reveals the critical safety investment factors among different sociotechnical elements and contributing factors, which helps to allocate safety countermeasures effectively to improve resilience and system safety performance.
    This research work helps to better understand, analyze, and improve system safety and human factors performance in complex sociotechnical systems.

    Approximate model composition for explanation generation

    This thesis presents a framework for the formulation of knowledge models to support the generation of explanations for engineering systems that are represented by the resulting models. Such models are automatically assembled from instantiated generic component descriptions, known as model fragments. The model fragments are of sufficient detail that generally satisfies the requirements of information content as identified by the user asking for explanations. Through a combination of fuzzy logic based evidence preparation, which exploits the history of prior user preferences, and an approximate reasoning inference engine, with a Bayesian evidence propagation mechanism, different uncertainty sources can be handled. Model fragments, each representing structural or behavioural aspects of a component of the domain system of interest, are organised in a library. Those fragments that represent the same domain system component, albeit with different representation detail, form parts of the same assumption class in the library. Selected fragments are assembled to form an overall system model, prior to extraction of any textual information upon which to base the explanations. The thesis proposes and examines the techniques that support the fragment selection mechanism and the assembly of these fragments into models. In particular, a Bayesian network-based model fragment selection mechanism is described that forms the core of the work. The network structure is manually determined prior to any inference, based on schematic information regarding the connectivity of the components present in the domain system under consideration. The elicitation of network probabilities, on the other hand, is completely automated using probability elicitation heuristics. These heuristics aim to provide the information required to select fragments which are maximally compatible with the given evidence of the fragments preferred by the user.
    Given such initial evidence, an existing evidence propagation algorithm is employed. The preparation of the evidence for the selection of certain fragments, based on user preference, is performed by a fuzzy reasoning evidence fabrication engine. This engine uses a set of fuzzy rules and standard fuzzy reasoning mechanisms, attempting to guess the information needs of the user and suggesting the selection of fragments of sufficient detail to satisfy such needs. Once the evidence is propagated, a single fragment is selected for each of the domain system components and hence the final model of the entire system is constructed. Finally, a highly configurable XML-based mechanism is employed to extract explanation content from the newly formulated model and to structure the explanatory sentences for the final explanation that will be communicated to the user. The framework is illustratively applied to a number of domain systems and is compared qualitatively to existing compositional modelling methodologies. A further empirical assessment of the performance of the evidence propagation algorithm is carried out to determine its performance limits. Performance is measured against the number of fragments that represent each of the components of a large domain system, and the amount of connectivity permitted in the Bayesian network between the nodes that stand for the selection or rejection of these fragments. Based on this assessment, recommendations are made as to how the framework may be optimised to cope with real world applications.
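    The thesis propagates evidence through a full Bayesian network; as a deliberately simplified, single-component sketch of the underlying idea, the snippet below scores the fragments of one assumption class by posterior probability given user-preference evidence and selects the most compatible one. All fragment names and probabilities here are hypothetical.

```python
def select_fragment(priors, likelihoods):
    """Pick the model fragment with the highest posterior probability.
    priors:      fragment name -> P(fragment)
    likelihoods: fragment name -> P(user-preference evidence | fragment)
    Returns the chosen fragment and the full normalised posterior."""
    unnorm = {f: priors[f] * likelihoods[f] for f in priors}
    z = sum(unnorm.values())
    posterior = {f: p / z for f, p in unnorm.items()}
    return max(posterior, key=posterior.get), posterior

# Hypothetical assumption class for one pump component: three fragments
# at increasing representation detail, with evidence favouring detail.
best, post = select_fragment(
    {"qualitative": 0.5, "lumped": 0.3, "detailed": 0.2},
    {"qualitative": 0.1, "lumped": 0.6, "detailed": 0.7},
)
```

    In the full framework this selection is made jointly for every component, with the network structure encoding inter-component connectivity rather than treating each assumption class independently.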

    Concepts of Model Verification and Validation


    A queuing location-allocation model for a capacitated health care system

    The aim of the present paper is to propose a location-allocation model for a capacitated health care system. The paper develops a discrete modeling framework to determine the optimal number of facilities among candidates and the optimal allocation of existing customers to operating health centers within a coverage distance, minimizing the total sum of customer and operating-facility costs. Our goal is to create a model that is more practical in the real world; therefore, setup costs of hospitals are based on the costs of customers, the fixed costs of establishing health centers, and costs based on the available resources at each level of hospital. The paper uses the idea of a hierarchical structure: there are two levels of service in hospitals, low and high, with sections at different levels providing different types of services, and patients refer to different sections of the hospital according to their requirements. To solve the model, two meta-heuristic algorithms, genetic and simulated annealing algorithms, and their combination are proposed. To evaluate the performance of the three algorithms, numerical examples are produced and analyzed using a statistical test to determine which algorithm works better.
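    A minimal sketch of simulated annealing for a facility location problem of this shape might look as follows; the instance, costs, and cooling schedule are invented for illustration and are not taken from the paper.

```python
import math
import random

def total_cost(open_set, fixed, dist):
    """Fixed cost of open facilities plus each customer's distance
    to its nearest open facility (infeasible if none are open)."""
    if not open_set:
        return float("inf")
    assign = sum(min(row[f] for f in open_set) for row in dist)
    return sum(fixed[f] for f in open_set) + assign

def anneal(fixed, dist, steps=5000, t0=30.0, seed=1):
    """Simulated annealing over subsets of candidate facilities:
    toggle one facility per step, accept worse moves with
    probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    current = set(range(len(fixed)))          # start with all open
    cur_c = total_cost(current, fixed, dist)
    best, best_c = set(current), cur_c
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9       # linear cooling
        cand = set(current)
        cand ^= {rng.randrange(len(fixed))}   # toggle one facility
        c = total_cost(cand, fixed, dist)
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / t):
            current, cur_c = cand, c
            if c < best_c:
                best, best_c = set(cand), c
    return best, best_c

# Hypothetical instance: 3 candidate centres, 4 customers,
# dist[i][f] = travel cost from customer i to candidate f.
fixed = [50.0, 40.0, 60.0]
dist = [[2, 9, 7], [8, 3, 6], [7, 4, 2], [3, 8, 9]]
sites, cost = anneal(fixed, dist)
```

    The acceptance of occasional worse moves at high temperature is what lets the search escape local minima such as opening a single poorly placed centre.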

    Generative AI in the Construction Industry: Opportunities & Challenges

    In the last decade, despite rapid advancements in artificial intelligence (AI) transforming many industry practices, construction has largely lagged in adoption. Recently, the emergence and rapid adoption of advanced large language models (LLMs) such as OpenAI's GPT, Google's PaLM, and Meta's Llama have shown great potential and sparked considerable global interest. However, the current surge lacks a study investigating the opportunities and challenges of implementing Generative AI (GenAI) in the construction sector, creating a critical knowledge gap for researchers and practitioners. This underlines the necessity to explore the prospects and complexities of GenAI integration; bridging this gap is fundamental to optimizing GenAI's early-stage adoption within the construction sector. Given GenAI's unprecedented capability to generate human-like content by learning from existing content, we reflect on two guiding questions: What will the future bring for GenAI in the construction industry? What are the potential opportunities and challenges in implementing GenAI in the construction industry? This study examines perceptions reflected in the literature, analyzes industry perception using programming-based word cloud and frequency analysis, and integrates the authors' opinions to answer these questions. The paper recommends a conceptual GenAI implementation framework, provides practical recommendations, summarizes future research questions, and builds foundational literature to foster subsequent research expansion in GenAI within construction and its allied architecture and engineering domains.
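    The programming-based frequency analysis behind a word cloud can be sketched with a simple token counter; the survey snippets and stopword list below are invented placeholders, not the study's data.

```python
import re
from collections import Counter

# Illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "and", "of", "in", "to", "a", "is", "for"}

def term_frequencies(texts, top=5):
    """Tokenize free-text responses, drop stopwords, and count term
    frequencies -- the counts a word cloud renderer scales words by."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(top)

# Hypothetical survey snippets on GenAI in construction.
responses = [
    "GenAI could automate design documentation",
    "Data privacy and liability are the main challenges",
    "Design review with GenAI needs human oversight",
]
top_terms = term_frequencies(responses, top=3)
```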

    Capturing Risk in Capital Budgeting

    NPS NRP Technical Report. This research proposes a novel, reusable, extensible, adaptable, and comprehensive advanced analytical process and Integrated Risk Management approach to help the Department of Defense (DOD) with risk-based capital budgeting, Monte Carlo risk simulation, predictive analytics, and stochastic optimization of acquisition and program portfolios with multiple competing stakeholders, subject to budgetary, risk, schedule, and strategic constraints. The research covers traditional capital budgeting methodologies used in industry, including the market, cost, and income approaches, and explains how some of these traditional methods can be applied in the DOD using DOD-centric non-economic, logistic, readiness, capabilities, and requirements variables. Stochastic portfolio optimization with dynamic simulations and investment efficient frontiers will be run to select the best combination of programs and capabilities; alternative methods such as average ranking, risk metrics, lexicographic methods, PROMETHEE, ELECTRE, and others are also addressed. The results include actionable intelligence, developed from an analytically robust case study, that senior leadership at the DOD may utilize to make optimal decisions. The main deliverables will be a detailed written research report and a presentation brief on capturing risk and uncertainty in capital budgeting analysis, detailing the proposed methodology and applications as well as a summary case study and examples of how the methodology can be applied.
    N8 - Integration of Capabilities & Resources. This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098), https://nps.edu/nrp. Sponsor: Chief of Naval Operations (CNO). Approved for public release; distribution is unlimited.
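    A Monte Carlo risk simulation of a capital budgeting decision, in the spirit described above, can be sketched as follows; the cash flows, discount rate, and benefit distribution are hypothetical and not from the report.

```python
import random

def npv(rate, cashflows):
    """Net present value of cashflows[0..T] discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(trials=10_000, seed=7):
    """Monte Carlo risk simulation: draw uncertain annual benefits and
    collect the NPV distribution of a hypothetical acquisition."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        # Uncertain benefits: ~N(30, 8) $M per year for five years.
        benefits = [rng.gauss(30.0, 8.0) for _ in range(5)]
        results.append(npv(0.07, [-100.0] + benefits))  # $100M outlay
    results.sort()
    mean = sum(results) / trials
    p5 = results[int(0.05 * trials)]  # 5th percentile, a downside-risk metric
    return mean, p5

mean_npv, p5_npv = simulate_npv()
```

    Looking at the whole distribution, rather than a single deterministic NPV, is what lets downside risk enter the budgeting decision.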

    Comparative analysis of optimal power flow in renewable energy sources based microgrids

    Adaptation of renewable energy is inevitable. The microgrid concept offers integration of renewable energy sources with conventional power generation sources. In this research, an operative approach is proposed for a microgrid comprising four different power generation sources. A microgrid mixes energy locally and empowers end-users to add useful power to the network. An IEEE 14-bus system-based microgrid was developed in MATLAB/Simulink to demonstrate optimal power flow, and two cases of battery charging and discharging were simulated to evaluate its realization. The power flow solution was obtained with the Newton-Raphson method and the particle swarm optimization method, and a comparison was drawn between the two for the proposed microgrid model on the basis of transmission line losses and voltage profile. Transmission line losses are reduced by about 17% in the battery charging case and 19-20% in the battery discharging case when the system is analyzed with particle swarm optimization. Particle swarm optimization was found to be more promising for delivering optimal power flow in the renewable energy sources-based microgrid.
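    A minimal particle swarm optimization loop can be sketched as below; the stand-in loss function (voltage deviation on two buses) and all parameters are illustrative only, not the paper's actual power-flow objective on the IEEE 14-bus system.

```python
import random

def pso(loss, dim, n=20, iters=100, seed=3):
    """Minimal particle swarm optimization: each particle tracks its
    personal best and is pulled toward it and the global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Stand-in objective: deviation of two bus voltages from 1.0 p.u.
line_loss = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 1.0) ** 2
best_v, best_loss = pso(line_loss, dim=2)
```

    PSO needs only objective evaluations, no Jacobian, which is why it is attractive next to Newton-Raphson when the power-flow objective is awkward to differentiate.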

    Effective planning of end-of-life scenarios for offshore wind farms

    Many offshore wind turbines (OWTs) are approaching the end of their estimated operational life. It is challenging to develop a general decommissioning procedure for all offshore wind (OW) farms; therefore, this research aims to characterise the available end-of-life (EoL) scenarios for OWTs, decide on their application procedures, and propose an innovative systematic framework for selecting an EoL scenario. The first part of the research critically reviews the various end-of-life strategies for offshore wind farms, the available technological options, and the influencing factors that can inform such decisions. The study proposes a multi-attribute framework for supporting optimum choices under the main constraints, such as the feasibility of each end-of-life strategy given a farm's unique characteristics and influencing factors. In the techno-economic selection, the primary parameters influencing the three major end-of-life strategies, i.e. life extension, repowering, and decommissioning, are discussed, and the benefits and issues related to the influencing variables are identified. Next, an initial comparative assessment between two of these scenarios, repowering and decommissioning, is carried out through a purpose-developed techno-economic analysis model that calculates relevant key performance indicators. With numerous OW farms approaching the end of service life, planning the most appropriate EoL scenario has become a prominent topic. Planning and scheduling the main activities of EoL scenarios depends on forecasting leading environmental indicators such as significant wave height; this research therefore proposes a novel probabilistic methodology based on multivariate and univariate time-series forecasting with machine learning (ML) models, including LSTM, BiLSTM, and GRU. Finally, the role of optimum selection of end-of-life scenarios in achieving the highest profitability of offshore wind farms is investigated: the various end-of-life scenarios are evaluated with the TOPSIS multi-criteria decision-making technique to determine an appropriate option according to environmental, financial, safety, schedule-impact, and legislation-and-guidelines criteria.
    Keywords: Offshore Wind Turbine; Decommissioning; End-of-life scenarios; Decision making; Levelized Cost of Energy; Machine learning; Forecasting
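    The TOPSIS procedure for ranking EoL scenarios can be sketched directly; the decision matrix, weights, and criteria below are invented for illustration and do not reproduce the study's assessment.

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]:   True if higher is better on criterion j."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply criterion weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    r = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(r[i][j] for i in range(m)) if benefit[j]
             else min(r[i][j] for i in range(m)) for j in range(n)]
    worst = [min(r[i][j] for i in range(m)) if benefit[j]
             else max(r[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = sum((r[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((r[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores

# Hypothetical scores for three EoL scenarios on
# [profit, safety, schedule impact]; schedule impact is a cost criterion.
scores = topsis(
    [[7.0, 8.0, 3.0],   # life extension
     [9.0, 6.0, 6.0],   # repowering
     [4.0, 9.0, 2.0]],  # decommissioning
    weights=[0.5, 0.3, 0.2],
    benefit=[True, True, False],
)
best = max(range(3), key=lambda i: scores[i])
```

    The closeness coefficient lies in [0, 1], so scenarios can be ranked directly; here the invented numbers happen to favour life extension.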