92 research outputs found

    Machine Learning in Tribology

    Tribology has been, and continues to be, one of the most relevant fields, being present in almost all aspects of our lives. The understanding of tribology provides us with solutions for future technical challenges. At the root of all advances made so far are multitudes of precise experiments and an increasing number of advanced computer simulations across different scales and multiple physical disciplines. Building on this sound, data-rich foundation, advanced data handling, analysis and learning methods can be developed and employed to expand existing knowledge. Modern machine learning (ML) and artificial intelligence (AI) methods therefore provide opportunities to explore the complex processes in tribological systems and to classify or quantify their behavior efficiently, even in real time. Their potential thus also extends beyond purely academic aspects into actual industrial applications. To help pave the way, this article collection aimed to present the latest research on ML and AI approaches for solving tribology-related issues, generating true added value beyond mere buzzwords. In this sense, this Special Issue can support researchers in identifying initial selections and best-practice solutions for ML in tribology.
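    The kind of data-driven quantification of tribological behaviour mentioned above can be illustrated with a minimal sketch: a k-nearest-neighbour regressor estimating a friction coefficient from operating conditions. The feature choices and all numbers below are hypothetical, purely to illustrate the idea; the studies in this collection use far richer data and models.

```python
# Minimal sketch: k-nearest-neighbour regression of a friction coefficient
# from operating conditions. All data here is hypothetical/illustrative.
from math import dist

def knn_predict(train, query, k=3):
    """Predict a target as the mean over the k nearest training samples.
    train: list of ((feature, ...), target) pairs."""
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return sum(t for _, t in nearest) / k

# Hypothetical measurements: (load [N], sliding speed [m/s]) -> friction coeff.
samples = [
    ((10.0, 0.1), 0.42),
    ((20.0, 0.1), 0.38),
    ((10.0, 0.5), 0.35),
    ((20.0, 0.5), 0.30),
]
print(round(knn_predict(samples, (12.0, 0.2)), 3))  # → 0.383
```

    In practice the features would be scaled and the model validated against held-out experiments; this sketch only shows the shape of such a data-driven predictor.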

    A Methodological Approach to Knowledge-Based Engineering Systems for Manufacturing

    A survey of implementations of the knowledge-based engineering approach in different technological sectors is presented. The main objectives and techniques of the examined applications are pointed out to illustrate the trends and peculiarities for a number of manufacturing fields. Existing methods for the development of these engineering systems are then examined in order to identify critical aspects when applied to manufacturing. A new methodological approach is proposed to overcome some specific limitations that emerged from the above-mentioned survey. The aim is to provide an innovative method for the implementation of knowledge-based engineering applications in the field of industrial production. As a starting point, the field of application of the system is defined using a spatial representation. The conceptual design phase is carried out with the aid of a matrix structure containing the most relevant elements of the system and their relations. In particular, objectives, descriptors, inputs and actions are defined and qualified using categorical attributes. The proposed method is then applied to three case studies with different locations in the applicability space. All the relevant elements of the detailed implementation of these systems are described. The relations with assumptions made during the design are highlighted to validate the effectiveness of the proposed method. The adoption of case studies with notably different applications also demonstrates the versatility of the method.
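    The matrix-style structure described above (objectives, descriptors, inputs and actions qualified by categorical attributes, plus the relations between them) can be sketched as a simple data structure. All element names, attribute values and relation types below are hypothetical illustrations, not taken from the paper.

```python
# Sketch of a conceptual-design structure: typed elements with categorical
# attributes, linked by a relation matrix. Names/values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    kind: str                                       # "objective" | "descriptor" | "input" | "action"
    attributes: dict = field(default_factory=dict)  # categorical qualifiers

elements = [
    Element("minimise scrap rate", "objective", {"priority": "high"}),
    Element("material grade", "input", {"source": "ERP"}),
    Element("suggest cutting speed", "action", {"automation": "full"}),
]

# Relation matrix: (element name, element name) -> relation type.
relations = {
    ("material grade", "suggest cutting speed"): "constrains",
    ("suggest cutting speed", "minimise scrap rate"): "supports",
}

def related_to(name):
    """All elements that a given element directly relates to."""
    return [b for (a, b) in relations if a == name]

print(related_to("material grade"))  # → ['suggest cutting speed']
```

    A real knowledge-based engineering system would persist such a structure and attach executable rules to the actions; this only sketches how the matrix of elements and relations might be represented.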

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of robotics, automation and control. Through its 24 chapters, it presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. The book also covers navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of production systems.

    Efficient Decision Support Systems

    This series is directed at diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. The book series is composed of three volumes: Volume 1 covers general concepts and methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The series is shaped around decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    Adaptive Simulation Modelling Using The Digital Twin Paradigm

    Structural Health Monitoring (SHM) involves the application of qualified standards, by competent people, using appropriate processes and procedures throughout the structure’s life cycle, from design to decommissioning. The main goal is to ensure that, through an ongoing process of risk management, the structure’s continued fitness-for-purpose (FFP) is maintained, allowing for optimal use of the structure with a minimal chance of downtime and catastrophic failure. While undertaking the SHM task, engineers use models to predict the risk to the structure from degradation mechanisms such as corrosion and cracking. These predictive models are physics-based, data-driven or hybrid. Building these predictive models tends to involve processing input parameters related to the material properties (e.g., mass density, modulus of elasticity, polarisation current curve) and/or the environment, calibrating the model and then using it for predictive simulation. The accuracy of the predictions is therefore very much dependent upon the input data describing the properties of the materials and/or the environmental conditions the structure experiences. For structures with non-uniform and complex degradation behaviour, this process is repeated over the structure’s lifetime, i.e., whenever a new survey is performed (or new data becomes available), and the survey data are then used to infer changes in the material or environmental properties. This conventional parameter tuning and updating approach is computationally expensive and time-consuming, as multiple simulations are needed and manual intervention is required to determine the optimal model parameters. There is therefore a need for a fundamental paradigm shift to address the shortcomings of conventional approaches.
    The Digital Twin (DT) offers such a paradigm shift in that it integrates ultra-high-fidelity simulation models with other related structural data to mirror the structural behaviour of its corresponding physical twin. The DT’s inherent ability to handle large data allows for the inclusion of an evolving set of data relating to the structure over time, as well as the adaptation of the simulation model with very little need for human intervention. This research project investigated the DT as an alternative to the existing model calibration and adaptation approach. It developed a design-of-experiments platform for online model validation and adaptation (i.e., parameter updating) solvers within the Digital Twin paradigm. This platform provided the basis on which an approach based on the creation of surrogates and a reduced-order-model (ROM)-assisted parameter search was developed to improve the efficiency of model calibration and adaptation. Furthermore, the developed approach formed a basis for developing solvers that provide the self-calibration and self-adaptation capability required for the prediction and analysis of an asset’s structural behaviour over time. The research successfully demonstrated that such solvers can be used to efficiently calibrate ultra-high-fidelity simulation models within a DT environment for the accurate prediction of the status of a real-world engineering structure.
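    The surrogate-assisted calibration idea described above can be sketched in miniature: a cheap surrogate is tabulated from an assumed degradation model, and each new survey automatically re-selects the best-fitting parameter, with no manual tuning loop. The model form and all numbers are illustrative assumptions, not the thesis’s actual solvers.

```python
# Toy sketch of surrogate-assisted self-calibration in a digital-twin setting.
# The "physics" model and all values are hypothetical illustrations.

def physics_model(rate, t):
    """Hypothetical degradation model: corrosion depth grows linearly with time."""
    return rate * t

# Offline: precompute a surrogate (here, a simple lookup over candidate rates).
candidates = [0.05, 0.10, 0.15, 0.20]  # candidate corrosion rates [mm/year]
surrogate = {r: [physics_model(r, t) for t in range(11)] for r in candidates}

def recalibrate(surveys):
    """Select the candidate rate minimising squared misfit to survey data.
    surveys: list of (year, measured depth [mm]) pairs."""
    def misfit(r):
        return sum((surrogate[r][t] - d) ** 2 for t, d in surveys)
    return min(candidates, key=misfit)

# Online: each new survey updates the calibrated parameter automatically.
print(recalibrate([(2, 0.22), (5, 0.48)]))  # → 0.1
```

    A real DT would use a high-dimensional surrogate or ROM instead of a lookup table, but the adaptation loop (new data in, updated parameters out, no manual intervention) has the same shape.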

    A new knowledge sourcing framework to support knowledge-based engineering development

    New trends in Knowledge-Based Engineering (KBE) highlight the need for decoupling the automation aspect from the knowledge management side of KBE. In this direction, some authors argue that KBE is capable of effectively capturing, retaining and reusing engineering knowledge. However, there are limitations associated with some aspects of KBE that present a barrier to delivering the knowledge sourcing process requested by industry. To overcome some of these limitations, this research proposes a new methodology for efficient knowledge capture and effective management of the complete knowledge life cycle. Current knowledge capture procedures represent one of the main constraints limiting the wide use of KBE in industry, owing to the extraction of knowledge from experts in high-cost knowledge capture sessions. To reduce the amount of time required from experts to extract relevant knowledge, this research uses Artificial Intelligence (AI) techniques capable of generating new knowledge from company assets. Moreover, the research reported here proposes the integration of AI methods and experts, increasing as a result the accuracy of the predictions and the reliability of using advanced reasoning tools. The proposed knowledge sourcing framework integrates two features: (i) the use of advanced data mining tools and expert knowledge to create new knowledge from raw data; (ii) the adoption of a well-established and reliable methodology to systematically capture, transfer and reuse engineering knowledge. The methodology proposed in this research is validated through the development and implementation of two case studies aimed at the optimisation of wing design concepts. The results obtained in both use cases proved the extended KBE capability for fast and effective knowledge sourcing. This evidence was provided by the experts working on the development of each of the case studies through structured quantitative and qualitative analyses.
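    The first feature of the framework, generating candidate knowledge from company assets while keeping experts in the loop, can be sketched as simple co-occurrence mining followed by an expert-approval filter. The records, the rule format and the support threshold below are all hypothetical.

```python
# Sketch: mine frequently co-occurring features from past design records,
# then admit only expert-approved pairs into the knowledge base.
# Records and rules are hypothetical illustrations.
from itertools import combinations
from collections import Counter

records = [  # past design records as sets of features
    {"composite skin", "bonded joint", "autoclave cure"},
    {"composite skin", "bonded joint", "oven cure"},
    {"aluminium skin", "riveted joint"},
    {"composite skin", "bonded joint", "autoclave cure"},
]

def mine_pairs(records, min_support=0.5):
    """Candidate rules: feature pairs present in >= min_support of records."""
    counts = Counter(p for r in records for p in combinations(sorted(r), 2))
    return [p for p, c in counts.items() if c / len(records) >= min_support]

candidates = mine_pairs(records)
expert_approved = {("bonded joint", "composite skin")}   # expert review step
knowledge_base = [p for p in candidates if p in expert_approved]
print(knowledge_base)  # → [('bonded joint', 'composite skin')]
```

    Real systems would mine far richer rules (association rules with confidence, learned models), but the division of labour is the same: the algorithm proposes, the expert validates.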

    Big Data in Management Research. Exploring New Avenues


    Managing computational complexity through using partitioning, approximation and coordination

    Problem: Complex systems are composed of many interdependent subsystems, with a level of complexity that exceeds the ability of a single designer. One way to address this problem is to partition the complex design problem into smaller, more manageable design tasks that can be handled by multiple design teams. Partitioning-based design methods are decision support tools that provide mathematical foundations and computational methods to create such design processes. Managing the interdependency among these subsystems is crucial: a successful design process must meet the requirements of the whole system, which ultimately requires coordinating the solutions of all the partitions. Approach: Partitioning and coordination break the system down into subproblems, solve them, and put the solutions together to arrive at the final system design. These two tasks, partitioning and coordination, are computationally demanding. Most of the proposed approaches are either computationally very expensive or applicable only to a narrow class of problems. These approaches also use exact methods and eliminate uncertainty. To manage the computational complexity and uncertainty, we approximate each subproblem after partitioning the whole system. In engineering design, one way to approximate reality is to use surrogate models (SM) to replace functions that are computationally expensive to solve. This task is also added to the proposed computational framework. In addition, to automate the whole process, a knowledge-based reusable template is created for each of these three steps. Therefore, in this dissertation, we first partition/decompose the complex system; then we approximate the subproblem of each partition; afterwards, we apply coordination methods to guide the solutions of the partitions toward the ultimate integrated system design.
    Validation: The partitioning-approximation-coordination design approach is validated using the validation square approach, which consists of theoretical and empirical validation. Empirical validation of the design architecture is carried out using four industry-driven problems, namely ‘a hot rod rolling problem’, ‘a dam network design problem’, ‘a crime prediction problem’ and ‘a green supply chain design problem’. Specific sub-problems are formulated within these problem domains to address the various research questions identified in this dissertation. Contributions: The contributions of the dissertation are categorized as new knowledge in five research domains:
    • Creating an approach to building an ensemble of surrogate models when data is limited – when data is limited, replacing computationally expensive simulations with accurate, low-dimensional and rapid surrogates is very important but non-trivial; therefore, a cross-validation-based ensemble modeling approach is proposed.
    • Using temporal and spatial analysis to manage uncertainty – when the data is time-based (for example, in meteorological data analysis) or geographical (for example, in geographical information systems), time series analysis and spatial statistics, respectively, are required instead of feature-based data analysis. When the simulations concern time- and space-based data, the surrogate models also need to be time- and space-based. There is a gap in time- and space-based surrogate modeling, which we address in this dissertation: we created, applied and evaluated the effectiveness of these models for a dam network planning problem and a crime prediction problem.
    • Removing assumptions regarding the demand distributions in green supply chain networks – in the existing literature on supply chain network design, there are always assumptions about the distribution of demand; we remove this assumption in the partition-approximate-compose treatment of the green supply chain design problem.
    • Creating new knowledge by proposing a coordination approach for a partitioned and approximated network design – a green supply chain under online (pull economy) and in-person (push economy) shopping channels is designed to demonstrate the utility of the proposed approach.
    • …
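    The cross-validation-based ensemble idea in the first contribution can be sketched as follows: with few samples, several simple surrogate families are fitted, each family's leave-one-out error is measured, and predictions are blended with weights inversely proportional to that error. The surrogate families and data below are illustrative assumptions, not the dissertation's models.

```python
# Sketch: cross-validation-weighted ensemble of two simple surrogates
# (constant-mean and 1-D least-squares line) on a tiny hypothetical dataset.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) samples

def fit_mean(train):
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):  # closed-form 1-D least squares
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

def loo_error(fit, data):
    """Leave-one-out mean squared error of a surrogate family."""
    errs = [(fit(data[:i] + data[i + 1:])(x) - y) ** 2
            for i, (x, y) in enumerate(data)]
    return sum(errs) / len(errs)

families = [fit_mean, fit_linear]
weights = [1.0 / (loo_error(f, data) + 1e-9) for f in families]
total = sum(weights)
models = [f(data) for f in families]

def ensemble(x):
    """Blend predictions, weighting each surrogate by its inverse CV error."""
    return sum(w / total * m(x) for w, m in zip(weights, models))
```

    With this data the linear surrogate has a much lower leave-one-out error, so the ensemble leans almost entirely on it; with noisier or flatter data the weights would shift accordingly.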