
    Improving the efficacy of the lean index through the quantification of qualitative lean metrics

    Multiple lean metrics, each representing performance for a different aspect of lean, can be consolidated into one holistic measure called the lean index, of which there are two types. This article establishes that qualitative lean indices are subjective, while quantitative ones lack scope. It then appraises techniques for quantifying qualitative lean metrics so that the lean index becomes a hybrid of both, increasing confidence in the information derived from it. This ensures that every detail of lean within a system is quantified, allowing daily tracking of lean performance. The techniques are demonstrated in a print packaging manufacturing case study.
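The article's own quantification techniques are not reproduced here. As a rough illustration of the idea only, the sketch below assumes Likert-scale qualitative audits rescaled to [0, 1] and a weighted average over hypothetical metrics, showing one simple way a hybrid lean index could combine quantitative and quantified-qualitative measures for daily tracking.

```python
# Illustrative sketch only: one simple way to fold quantified qualitative metrics
# into a hybrid lean index. Weights, scales, and metric names are hypothetical,
# not taken from the article.

def normalize_likert(score, low=1, high=5):
    """Map a Likert-scale rating (e.g. 1-5) onto [0, 1]."""
    return (score - low) / (high - low)

def lean_index(quantitative, qualitative, weights):
    """Weighted average of normalized metric values in [0, 1]."""
    metrics = {**quantitative, **{k: normalize_likert(v) for k, v in qualitative.items()}}
    total_weight = sum(weights[k] for k in metrics)
    return sum(weights[k] * metrics[k] for k in metrics) / total_weight

# Hypothetical daily snapshot for a print-packaging line.
quantitative = {"oee": 0.82, "first_pass_yield": 0.91}             # already in [0, 1]
qualitative = {"workplace_5s_audit": 4, "operator_engagement": 3}  # Likert 1-5
weights = {"oee": 0.4, "first_pass_yield": 0.3,
           "workplace_5s_audit": 0.2, "operator_engagement": 0.1}

print(f"lean index: {lean_index(quantitative, qualitative, weights):.3f}")
```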

    Efficient resilience analysis and decision-making for complex engineering systems

    Modern societies around the world are increasingly dependent on the smooth functionality of progressively more complex systems, such as infrastructure systems, digital systems like the internet, and sophisticated machinery. They form the cornerstones of our technologically advanced world, and their efficiency is directly related to our well-being and the progress of society. However, these important systems are constantly exposed to a wide range of threats of natural, technological, and anthropogenic origin. The emergence of global crises such as the COVID-19 pandemic and the ongoing threat of climate change has starkly illustrated the vulnerability of these widely ramified and interdependent systems, as well as the impossibility of predicting threats entirely. The pandemic, with its widespread and unexpected impacts, demonstrated how an external shock can bring even the most advanced systems to a standstill, while ongoing climate change continues to produce unprecedented risks to system stability and performance. These global crises underscore the need for systems that can not only withstand disruptions but also recover from them efficiently and rapidly. The concept of resilience and related developments encompass these requirements: analyzing, balancing, and optimizing the reliability, robustness, redundancy, adaptability, and recoverability of systems -- from both technical and economic perspectives. This cumulative dissertation therefore focuses on developing comprehensive and efficient tools for resilience-based analysis and decision-making for complex engineering systems. The newly developed resilience decision-making procedure is at the core of these developments. It is based on an adapted systemic risk measure, a time-dependent probabilistic resilience metric, and a grid search algorithm, and it represents a significant innovation, as it enables decision-makers to identify an optimal balance between different types of resilience-enhancing measures while taking monetary aspects into account. Increasingly, system components have significant inherent complexity, requiring them to be modeled as systems themselves. This leads to systems-of-systems with a high degree of complexity. To address this challenge, a novel methodology is derived by extending the previously introduced resilience framework to multidimensional use cases and synergistically merging it with an established concept from reliability theory, the survival signature. The new approach combines the advantages of both original components: a direct comparison of different resilience-enhancing measures from a multidimensional search space leading to an optimal trade-off in terms of system resilience, and a significant reduction in computational effort due to the separation property of the survival signature. Once a subsystem structure has been computed -- a typically computationally expensive process -- any characterization of the probabilistic failure behavior of components can be validated without having to recompute the structure. In reality, measurements, expert knowledge, and other sources of information are affected by multiple uncertainties. For this purpose, an efficient method based on the combination of the survival signature, fuzzy probability theory, and non-intrusive stochastic simulation (NISS) is proposed, resulting in an approach that quantifies the reliability of complex systems while taking the entire uncertainty spectrum into account.
The new approach, which synergizes the advantageous properties of its original components, achieves a significant decrease in computational effort due to the separation property of the survival signature. In addition, it attains a dramatic reduction in sample size due to the adapted NISS method: only a single stochastic simulation is required to account for uncertainties. The novel methodology not only represents an innovation in the field of reliability analysis but can also be integrated into the resilience framework. For a resilience analysis of existing systems, the consideration of continuous component functionality is essential. This is addressed in a further novel development: by introducing the continuous survival function and the concept of the Diagonal Approximated Signature as a corresponding surrogate model, the existing resilience framework can be usefully extended without compromising its fundamental advantages. In the context of the regeneration of complex capital goods, a comprehensive analytical framework is presented to demonstrate the transferability and applicability of all developed methods to complex systems of any type. The framework integrates the previously developed resilience, reliability, and uncertainty analysis methods. It provides decision-makers with a basis for identifying resilient regeneration paths in two ways: first, regeneration paths with inherent resilience, and second, regeneration paths that lead to maximum system resilience, taking into account the technical and monetary factors affecting the complex capital good under analysis. In summary, this dissertation offers innovative contributions to efficient resilience analysis and decision-making for complex engineering systems. It presents universally applicable methods and frameworks that are flexible enough to accommodate system types and performance measures of any kind. This is demonstrated in numerous case studies ranging from arbitrary flow networks and functional models of axial compressors to substructured infrastructure systems with several thousand individual components.
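The separation property referred to above is a standard feature of the survival signature: the structural function is computed once, and component failure models can then be exchanged freely. A minimal sketch of that textbook formula follows, with a hypothetical 2-out-of-3 example system; it is not code from the dissertation.

```python
# Minimal sketch of the survival-signature separation property: the structural
# part Phi is computed once, and any component failure model F_k(t) can be
# swapped in afterwards without recomputing it. The 2-of-3 example system and
# exponential lifetimes are hypothetical, not taken from the dissertation.
from math import comb, exp
from itertools import product

def system_survival(phi, m, F, t):
    """P(system survives past t) from survival signature phi.
    phi: dict mapping (l_1, ..., l_K) -> probability the system works
    m:   list of component counts per type
    F:   list of component failure CDFs F_k(t)
    """
    total = 0.0
    for l in product(*(range(mk + 1) for mk in m)):
        weight = 1.0
        for lk, mk, Fk in zip(l, m, F):
            p_fail = Fk(t)
            weight *= comb(mk, lk) * (1 - p_fail) ** lk * p_fail ** (mk - lk)
        total += phi[l] * weight
    return total

# Hypothetical single-type 2-out-of-3 system: it works if at least 2 components work.
phi = {(0,): 0.0, (1,): 0.0, (2,): 1.0, (3,): 1.0}
m = [3]
F = [lambda t: 1 - exp(-0.1 * t)]          # exponential lifetime, rate 0.1
print(system_survival(phi, m, F, t=5.0))   # the structure phi is reused for any F
```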

    The Application of Driver Models in the Safety Assessment of Autonomous Vehicles: A Survey

    Driver models play a vital role in developing and verifying autonomous vehicles (AVs). Previously, they were mainly applied in traffic flow simulation to model realistic driver behavior. With the development of AVs, driver models have attracted renewed attention due to their potential contribution to AV certification. Simulation-based testing is considered an effective means of accelerating AV testing because it is safe and efficient. Nonetheless, realistic driver models are a prerequisite for valid simulation results. Additionally, an AV is assumed to be at least as safe as a careful and competent driver, so driver models are indispensable for AV safety assessment. However, despite their necessity for the release of AVs, no comparison or discussion of driver models with regard to their utility for AVs has appeared in the last five years. This motivates us to present a comprehensive survey of driver models in this paper and to compare their applicability. Requirements for driver models in terms of their application to AV safety assessment are discussed. A summary of driver models for simulation-based testing and AV certification is provided. Evaluation metrics are defined to compare their strengths and weaknesses. Finally, an architecture for a careful and competent driver model is proposed. Challenges and future work are elaborated. This study gives researchers, and especially regulators, an overview and helps them define appropriate driver models for AVs.
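As a concrete illustration of what a microscopic driver model is (not a model proposed by this survey), the sketch below implements the well-known Intelligent Driver Model, a car-following model commonly used in traffic simulation; the parameter values are textbook defaults and should be read as assumptions.

```python
# Illustrative sketch of a classic microscopic driver model (the Intelligent
# Driver Model), the kind of model the survey discusses for simulation-based
# testing. Parameter values are textbook defaults, not taken from the paper.
from math import sqrt

def idm_acceleration(v, gap, dv,
                     v0=33.3,   # desired speed [m/s]
                     T=1.6,     # desired time headway [s]
                     a_max=1.5, # maximum acceleration [m/s^2]
                     b=2.0,     # comfortable deceleration [m/s^2]
                     s0=2.0,    # minimum gap [m]
                     delta=4.0):
    """Follower acceleration given speed v, bumper-to-bumper gap, and
    approach rate dv = v_follower - v_leader."""
    s_star = s0 + v * T + v * dv / (2 * sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** delta - (max(s_star, 0.0) / gap) ** 2)

# Follower at 25 m/s, 50 m behind a leader that is 2 m/s slower.
print(idm_acceleration(v=25.0, gap=50.0, dv=2.0))
```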

    Surrogate-Assisted Unified Optimization Framework for Investigating Marine Structural Design Under Information Uncertainty.

    Structural decisions made in the early stages of marine systems design can have a large impact on future acquisition, maintenance, and life-cycle costs. However, owing to the unique nature of early stage marine system design, these critical structural decisions are often made on the basis of incomplete information or knowledge about the design. When coupled with design optimization analysis, the complex, uncertain early stage design environment makes it very difficult to deliver a quantified trade-off analysis for decision making. This work presents a novel decision support method that integrates design optimization, high-fidelity analysis, and modeling of information uncertainty for early stage design and analysis. To support this method, the dissertation improves design optimization methods for marine structures by proposing several novel surrogate modeling techniques and strategies. The proposed work treats the uncertainties that arise from limited information in a non-statistical, interval form. This interval uncertainty is treated as an objective function in an optimization framework in order to explore the impact of information uncertainty on structural design performance. In this examination, the potential structural weight penalty associated with information uncertainty can be identified quickly in the early stage, avoiding costly redesign later in the design process. The dissertation then explores a computational structure that balances fidelity and efficiency: a novel variable-fidelity approach is proposed to allocate expensive high-fidelity simulations judiciously. To achieve the proposed design optimization capabilities, several surrogate modeling methods are developed for worst-case estimation, clustered multiple meta-modeling, and mixed-variable modeling. These surrogate methods have been demonstrated to significantly improve the efficiency of the optimizer in dealing with the challenges of early stage marine structural design.
    PhD, Naval Architecture and Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133365/1/yanliuch_1.pd
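As a rough sketch of the general surrogate-plus-interval idea (not the dissertation's specific formulation), the example below fits a Gaussian-process surrogate to a handful of stand-in "high-fidelity" evaluations and then reads off a worst-case response over an interval-bounded parameter; the response function and interval bounds are hypothetical.

```python
# Minimal sketch of the general idea, not the dissertation's formulation:
# fit a cheap surrogate to a handful of expensive evaluations, then estimate
# a worst-case (interval) response without further high-fidelity runs.
# The toy response function and interval bounds are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_response(x):
    """Stand-in for a high-fidelity structural analysis."""
    return np.sin(3 * x) + 0.5 * x

# A few "expensive" samples over the design/uncertainty range.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_response(X_train).ravel()

surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Interval-bounded uncertain parameter: x in [0.5, 1.5]; take the surrogate's
# worst-case prediction over a dense grid as a cheap interval bound.
grid = np.linspace(0.5, 1.5, 200).reshape(-1, 1)
pred = surrogate.predict(grid)
print("surrogate worst case on [0.5, 1.5]:", pred.max())
```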

    Developing a Computer Vision-Based Decision Support System for Intersection Safety Monitoring and Assessment of Vulnerable Road Users

    Vision-based trajectory analysis of road users enables identification of near-crash situations and proactive safety monitoring. The two most widely used surrogate safety measures (SSMs), time-to-collision (TTC) and post-encroachment time (PET), together with a recent variant of TTC, relative time-to-collision (RTTC), were investigated using real-world video data collected at ten signalized intersections in the city of San Diego, California. The performance of these SSMs was compared for the purpose of evaluating pedestrian and bicyclist safety. Potential trajectory intersection points were predicted to calculate TTC for every interacting object, and the average TTC for every pair of objects in critical situations was calculated. PET values were estimated by observing potential intersection points, and event frequencies were estimated at three critical levels. Although RTTC provided useful information regarding the relative distance between objects in time, it was found that in certain conditions where objects are far from each other, the interaction between them was incorrectly flagged as critical based on a small RTTC. Comparison of PET, TTC, and RTTC for different critical classes also showed that several interactions were identified as critical by one SSM but not by another. These findings suggest that safety evaluations should not rely on a single SSM; instead, a combination of different SSMs should be considered to ensure the reliability of evaluations. Video data analysis was conducted to develop object detection and tracking models for automatic identification of vehicles, bicycles, and pedestrians. Outcomes of the machine vision models were employed along with SSMs to build a decision support system for safety assessment of vulnerable road users at signalized intersections. Promising results from the decision support system showed that automated safety evaluations can proactively identify critical events. They also revealed challenges as well as future directions for enhancing the performance of the system.
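For readers unfamiliar with the two measures, the sketch below gives minimal, simplified definitions of TTC and PET (constant speeds, a single shared conflict point); the thresholds and numbers are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the two surrogate safety measures compared in the study,
# under strong simplifications (constant speeds, one shared conflict point).
# Thresholds and numbers are illustrative, not taken from the paper.

def time_to_collision(gap_m, speed_follower, speed_leader):
    """TTC: remaining gap divided by closing speed; undefined if not closing."""
    closing = speed_follower - speed_leader
    return gap_m / closing if closing > 0 else float("inf")

def post_encroachment_time(t_first_leaves_conflict, t_second_enters_conflict):
    """PET: time between the first road user clearing the conflict point
    and the second one reaching it."""
    return t_second_enters_conflict - t_first_leaves_conflict

ttc = time_to_collision(gap_m=12.0, speed_follower=10.0, speed_leader=4.0)   # 2.0 s
pet = post_encroachment_time(t_first_leaves_conflict=14.2,
                             t_second_enters_conflict=15.1)                  # 0.9 s
print(f"TTC={ttc:.1f}s critical={ttc < 1.5}; PET={pet:.1f}s critical={pet < 1.0}")
```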

    Multi-objective Optimization in Traffic Signal Control

    Traffic Signal Control systems are among the most popular Intelligent Transport Systems and are widely used around the world to regulate traffic flow. Recently, complex optimization techniques have been applied to traffic signal control systems to improve their performance. Traffic simulators are one of the most popular tools for evaluating the performance of a candidate solution in traffic signal optimization, so researchers commonly optimize traffic signal timing using simulation-based approaches. Although evaluating solutions with microscopic traffic simulators has several advantages, the simulation is very time-consuming. Multi-objective Evolutionary Algorithms (MOEAs) are in many ways superior to traditional search methods and have been widely utilized in traffic signal optimization problems. However, running MOEAs on traffic optimization problems while using microscopic traffic simulators to estimate the effectiveness of solutions is time-consuming. Thus, MOEAs that can produce good solutions within a reasonable processing time, especially at an early stage, are required. The anytime behaviour of an algorithm indicates its ability to provide as good a solution as possible at any time during its execution, so optimization approaches with good anytime behaviour are desirable in traffic signal optimization. Moreover, small population sizes are inevitable in scenarios where processing capabilities are limited but quick response times are required. In this work, two novel optimization algorithms are introduced that improve anytime behaviour and can work effectively with various population sizes. NS-LS is a hybrid of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and a local search that can predict a potential search direction. NS-LS is able to produce good solutions at any running time and therefore has good anytime behaviour. Utilizing a local search helps accelerate convergence; however, computational cost is not considered in NS-LS. A surrogate-assisted approach based on local search (SA-LS), an enhancement of NS-LS, is also introduced. SA-LS uses a surrogate model constructed from solutions that have already been evaluated by the traffic simulator in previous generations. NS-LS and SA-LS are evaluated on the well-known benchmark test functions ZDT1 and ZDT2 and on two real-world traffic scenarios, Andrea Costa and Pasubio. The proposed algorithms are also compared to NSGA-II and the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D). The results show that NS-LS and SA-LS can effectively optimize the traffic signal timings of the studied scenarios. The results also confirm that NS-LS and SA-LS have good anytime behaviour and can work well with different population sizes. Furthermore, SA-LS mostly produced superior results compared to NS-LS, NSGA-II, and MOEA/D.
    Ministry of Education and Training - Vietnam
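A minimal sketch of the surrogate-assisted evaluation idea behind SA-LS follows (not the authors' implementation): offspring are pre-screened with a surrogate trained on previously simulated solutions, and only the most promising candidates are passed to the expensive traffic simulator.

```python
# Minimal sketch of the surrogate-assisted idea described above, not the
# authors' implementation: pre-screen offspring with a cheap surrogate built
# from already-simulated solutions, and spend the expensive traffic-simulator
# evaluations only on the most promising candidates.
import random
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_traffic_simulator(timings):
    """Stand-in for a microscopic simulation returning total delay (minimize)."""
    return sum((t - 30.0) ** 2 for t in timings) + random.gauss(0, 1)

archive_X, archive_y = [], []          # solutions already evaluated by the simulator

def evaluate_generation(offspring, budget):
    """Screen offspring with the surrogate; simulate only the top `budget`."""
    if len(archive_X) >= 5:
        surrogate = GaussianProcessRegressor().fit(np.array(archive_X), np.array(archive_y))
        predicted = surrogate.predict(np.array(offspring))
        ranked = [x for _, x in sorted(zip(predicted, offspring))]
    else:
        ranked = offspring                      # not enough data yet: simulate everything
    for x in ranked[:budget]:
        y = run_traffic_simulator(x)            # expensive evaluation
        archive_X.append(x)
        archive_y.append(y)

for _ in range(10):                             # toy loop with random "offspring"
    offspring = [[random.uniform(10, 60) for _ in range(4)] for _ in range(20)]
    evaluate_generation(offspring, budget=5)
print("best simulated delay:", min(archive_y))
```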

    A Driving Risk Surrogate and Its Application in Car-Following Scenario at Expressway

    Traffic safety is important for reducing deaths and building a harmonious society. In addition to studies of accident incidence, the perception of driving risk is significant for guiding the implementation of appropriate driving countermeasures. Thanks to the rapid development of communication technology and computing capability, risk assessment can be conducted in real time for traffic safety. This paper addresses the problems of difficult calibration and inconsistent thresholds in existing risk assessment methods and proposes a risk assessment model based on the potential field to quantify the driving risk of vehicles. First, virtual energy is proposed as an attribute that accounts for vehicle size and velocity. Second, the driving risk surrogate (DRS) is proposed, based on potential field theory, to describe the degree of risk to vehicles. Risk factors are quantified by establishing submodels, including an interactive vehicle risk surrogate, a restrictions risk surrogate, and a speed risk surrogate. To unify the risk threshold, an acceleration for implementation guidance is derived from the risk field strength. Finally, a naturalistic driving dataset from Nanjing, China, is selected, and 3,063 pairs of naturalistic car-following trajectories are screened out. Based on these, the proposed model and the models used for comparison are calibrated with an improved particle optimization algorithm. Simulations show that the proposed model outperforms the other algorithms in risk perception and response, car-following trajectory, and velocity estimation. In addition, the proposed model exhibits better car-following ability than existing car-following models.
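The paper's exact field formulation is not reproduced here; as a generic illustration of a potential-field-style risk surrogate, the sketch below uses assumed functional forms (virtual energy growing with size and speed, risk decaying with squared distance) and made-up constants.

```python
# Generic illustration of a potential-field-style risk surrogate, with assumed
# functional forms and constants; this is not the paper's DRS formulation.
def virtual_energy(mass_equivalent, speed, k=0.5):
    """Toy 'virtual energy': grows with vehicle size (mass equivalent) and speed."""
    return k * mass_equivalent * speed ** 2

def pairwise_risk(energy_other, distance, relative_speed, eps=1e-6):
    """Toy field strength felt by the ego vehicle from another vehicle:
    higher when the other vehicle is energetic, close, and closing in."""
    closing = max(relative_speed, 0.0)          # only approaching traffic adds risk
    return energy_other * (1.0 + closing) / (distance ** 2 + eps)

# Ego following a truck 20 m ahead while closing at 3 m/s (illustrative numbers).
truck_energy = virtual_energy(mass_equivalent=4.0, speed=22.0)
print("risk field strength:", pairwise_risk(truck_energy, distance=20.0, relative_speed=3.0))
```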

    FuzzTheREST - Intelligent Automated Blackbox RESTful API Fuzzer

    In recent years, the pervasive influence of technology has become deeply intertwined with human life, impacting diverse fields. This relationship has evolved into a dependency, with software systems playing a pivotal role and therefore requiring a high level of trust. Today, a substantial portion of software is accessed through Application Programming Interfaces, particularly web APIs, which predominantly adhere to the Representational State Transfer architecture. However, this architectural choice introduces a wide range of potential vulnerabilities, which are exposed and accessible at the network level. The significance of software testing becomes evident when considering the widespread use of software in daily tasks that affect personal safety and security, making the identification and assessment of faulty software of paramount importance. In this thesis, FuzzTheREST, a black-box RESTful API fuzz testing framework, is introduced with the primary aim of addressing the challenges of understanding the context of each system under test and conducting comprehensive automated testing with diverse inputs. Operating from a black-box perspective, this fuzzer leverages Reinforcement Learning to efficiently uncover vulnerabilities in RESTful APIs by optimizing input values and combinations, relying on mutation methods for input exploration. The system's value is further enhanced by providing the user with a thoroughly documented vulnerability discovery process. This proposal stands out for its emphasis on explainability and its application of RL to learn the context of each API, eliminating the need for source code knowledge and expediting the testing process. The developed solution adheres rigorously to software engineering best practices and incorporates a novel Reinforcement Learning algorithm comprising a customized environment for API fuzz testing and a Multi-table Q-Learning Agent. The quality and applicability of the developed tool are assessed based on the results of two case studies involving the Petstore API and an Emotion Detection module that was part of the CyberFactory#1 European research project. The results demonstrate the tool's effectiveness in discovering vulnerabilities, with 7 distinct vulnerabilities found, and the agents' ability to learn different API contexts from API responses while maintaining reasonable code coverage levels.
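A hedged sketch of the kind of multi-table Q-learning loop the abstract describes follows: one Q-table per API operation, states derived from response status classes, and actions drawn from a set of mutation operators. The reward scheme, state encoding, and mutators are assumptions, not the thesis implementation.

```python
# Hedged sketch of a multi-table Q-learning loop for API fuzzing: one Q-table
# per API operation, states derived from the response status class, actions
# chosen among mutation operators. The reward scheme, state encoding, and
# mutators below are assumptions, not the thesis code.
import random
from collections import defaultdict

MUTATORS = ["flip_type", "boundary_value", "drop_field", "inject_string"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# One Q-table per API operation: q_tables[operation][state][action] -> value
q_tables = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))

def choose_action(operation, state):
    if random.random() < EPSILON:
        return random.choice(MUTATORS)          # explore
    q = q_tables[operation][state]
    return max(MUTATORS, key=lambda a: q[a])    # exploit

def update(operation, state, action, reward, next_state):
    q = q_tables[operation][state]
    best_next = max(q_tables[operation][next_state][a] for a in MUTATORS)
    q[action] += ALPHA * (reward + GAMMA * best_next - q[action])

def reward_from_status(status_code):
    if status_code >= 500:
        return 10.0        # server error: likely a bug worth reporting
    if status_code >= 400:
        return 1.0         # rejected input: still explores validation paths
    return 0.1             # accepted request: mild reward for coverage

def send_fuzzed_request(operation, mutator):
    return random.choice([200, 400, 500])       # stand-in for the real HTTP call

# One illustrative fuzzing step against a hypothetical operation.
op, state = "POST /pet", "2xx"
action = choose_action(op, state)
status = send_fuzzed_request(op, action)
next_state = f"{status // 100}xx"
update(op, state, action, reward_from_status(status), next_state)
```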

    A Data Driven Sequential Learning Framework to Accelerate and Optimize Multi-Objective Manufacturing Decisions

    Manufacturing advanced materials and products with a specific property or combination of properties is often required. Achieving this depends on finding the optimum recipe or processing conditions that generate the ideal combination of these properties. Most of the time, a sufficient number of experiments is needed to generate a Pareto front. However, manufacturing experiments are usually costly, and even a single experiment can be time-consuming. It is therefore critical to determine the optimal locations for data collection in order to gain the most comprehensive understanding of the process. Sequential learning is a promising approach for actively learning from ongoing experiments, iteratively updating the underlying optimization routine, and adapting the data collection process on the go. This paper presents a novel data-driven Bayesian optimization framework that utilizes sequential learning to efficiently optimize complex systems with multiple conflicting objectives. Additionally, it proposes a novel metric for evaluating multi-objective data-driven optimization approaches that considers both the quality of the Pareto front and the amount of data used to generate it. The proposed framework is particularly beneficial in practical applications where acquiring data can be expensive and resource-intensive. To demonstrate the effectiveness of the proposed algorithm and metric, the algorithm is evaluated on a manufacturing dataset. The results indicate that the proposed algorithm can recover the actual Pareto front while processing significantly less data, implying that the proposed data-driven framework can lead to similar manufacturing decisions at reduced cost and time.
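As a minimal sketch of the general sequential-learning loop such a framework builds on (not the paper's specific acquisition function or its proposed metric), the example below fits one Gaussian-process surrogate per objective, selects the next experiment via a random scalarization, and keeps the Pareto front of all measured outcomes; the toy objectives are made up.

```python
# Minimal sketch of a sequential multi-objective optimization loop, not the
# paper's specific acquisition function or evaluation metric: fit a surrogate
# per objective, pick the next experiment via a random scalarization, and keep
# the Pareto front of everything measured so far. Toy objectives are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_experiment(x):
    """Stand-in for a costly manufacturing trial with two conflicting objectives."""
    return np.array([x[0] ** 2, (x[0] - 1.0) ** 2])     # both minimized

def pareto_front(Y):
    keep = [not any(np.all(o <= y) and np.any(o < y) for o in Y) for y in Y]
    return Y[keep]

X = rng.uniform(0, 1, size=(4, 1))                       # initial designs
Y = np.array([run_experiment(x) for x in X])

for _ in range(10):                                      # sequential learning loop
    models = [GaussianProcessRegressor().fit(X, Y[:, j]) for j in range(Y.shape[1])]
    candidates = rng.uniform(0, 1, size=(200, 1))
    w = rng.dirichlet(np.ones(Y.shape[1]))               # random scalarization weights
    scores = sum(w[j] * models[j].predict(candidates) for j in range(Y.shape[1]))
    x_next = candidates[np.argmin(scores)]               # most promising next trial
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, run_experiment(x_next)])

print("Pareto-optimal outcomes found:", pareto_front(Y))
```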