
    Ergonomic Chair Design by Fusing Qualitative and Quantitative Criteria using Interactive Genetic Algorithms

    This paper emphasizes the necessity of formally bringing the qualitative and quantitative criteria of ergonomic design together, and provides a novel complementary design framework with this aim. Within this framework, different design criteria are viewed as optimization objectives, and design solutions are iteratively improved through the cooperative efforts of the computer and the user. The framework is rooted in multi-objective optimization, genetic algorithms, and interactive user evaluation. Three different algorithms based on the framework are developed and tested on an ergonomic chair design problem. The parallel and multi-objective approaches show promising results in fitness convergence, design diversity, and user satisfaction metrics.
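    A minimal, self-contained sketch of how such an interactive genetic algorithm might blend a quantitative ergonomic score with interactive user ratings is shown below. The design variables, anthropometric targets, equal weighting, and the simulated rating stub are illustrative assumptions, not the formulation used in the paper.

```python
# Illustrative sketch of an interactive genetic algorithm (IGA) for chair design.
# The design variables, ergonomic targets, and 50/50 weighting are hypothetical.
import random

TARGETS = {"seat_height_cm": 45.0, "seat_depth_cm": 42.0, "backrest_angle_deg": 105.0}
BOUNDS  = {"seat_height_cm": (38, 55), "seat_depth_cm": (35, 50), "backrest_angle_deg": (90, 120)}

def random_design():
    return {k: random.uniform(*BOUNDS[k]) for k in BOUNDS}

def quantitative_fitness(design):
    # Higher is better: penalise deviation from the anthropometric target values.
    error = sum(abs(design[k] - TARGETS[k]) / (BOUNDS[k][1] - BOUNDS[k][0]) for k in TARGETS)
    return 1.0 - error / len(TARGETS)

def qualitative_fitness(design):
    # Placeholder for interactive evaluation: in a real IGA the user rates the
    # rendered chair; here the rating in [0, 1] is simulated.
    return random.random()

def combined_fitness(design):
    return 0.5 * quantitative_fitness(design) + 0.5 * qualitative_fitness(design)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(design, rate=0.2):
    for k in design:
        if random.random() < rate:
            design[k] = random.uniform(*BOUNDS[k])
    return design

population = [random_design() for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=combined_fitness, reverse=True)
    parents = scored[:10]  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print("Best design found:", max(population, key=combined_fitness))
```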

    Multi-objective optimisation of reliable product-plant network configuration.

    Ensuring manufacturing reliability is key to satisfying product orders when production plants are subject to disruptions. The reliability of a supply network is closely related to the redundancy of products, as production in disrupted plants can be replaced by alternative plants. However, the benefits of incorporating redundancy must be balanced against the costs of doing so. Models in the literature are highly case specific and do not consider the complex network structures and redundant distributions of products over suppliers that are evident in the empirical literature. In this paper, we first develop a simple generic measure for evaluating the reliability of a network of plants in a given product-plant configuration. Second, we frame the problem as a multi-objective evolutionary optimisation model to show that such a measure can be used to optimise the cost-reliability trade-off. The model has been applied to a producer’s automotive light and lamp production network using three popular genetic algorithms designed for multi-objective problems, namely NSGA2, SPEA2 and PAES. Using the model in conjunction with these genetic algorithms, we were able to find trade-off solutions successfully. NSGA2 achieved the best results in terms of Pareto front spread. The algorithms differed considerably in their performance, meaning that the choice of algorithm has a significant impact on the resulting search space exploration.
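    Below is a toy, library-free sketch of the cost-reliability trade-off idea: a redundancy-based reliability proxy and a per-plant cost model are evaluated over all product-plant assignments of a tiny instance, and non-dominated configurations are kept. The failure probability, plant costs, and brute-force Pareto filter are illustrative stand-ins for the paper's generic measure and its NSGA2/SPEA2/PAES optimisation runs.

```python
# Toy cost vs. reliability trade-off for product-plant configurations.
# All numbers below are assumed for illustration only.
from itertools import combinations, product
from math import prod

PRODUCTS   = ["headlamp", "tail_lamp"]
PLANT_COST = {"plant_A": 8.0, "plant_B": 10.0, "plant_C": 14.0}  # assumed setup costs
FAIL_PROB  = 0.1                                                 # assumed plant disruption probability

def reliability(config):
    # Probability that every product can still be produced when each assigned
    # plant fails independently: redundancy raises per-product survival odds.
    return prod(1.0 - FAIL_PROB ** len(plants) for plants in config.values())

def cost(config):
    return sum(PLANT_COST[p] for plants in config.values() for p in plants)

def dominates(a, b):
    # a, b are (cost, reliability) pairs; lower cost and higher reliability are better.
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def non_empty_subsets(items):
    return [set(c) for r in range(1, len(items) + 1) for c in combinations(items, r)]

# Enumerate every configuration of this tiny instance (a GA would search instead).
configs = [dict(zip(PRODUCTS, assignment))
           for assignment in product(non_empty_subsets(PLANT_COST), repeat=len(PRODUCTS))]
points = [((cost(c), reliability(c)), c) for c in configs]
pareto = [(p, c) for p, c in points if not any(dominates(q, p) for q, _ in points)]

for (c_cost, c_rel), config in sorted(pareto, key=lambda t: t[0]):
    print(f"cost={c_cost:5.1f}  reliability={c_rel:.4f}  {config}")
```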

    Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening

    Machine unlearning, the ability for a machine learning model to forget, is becoming increasingly important for complying with data privacy regulations and for removing harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while protecting model performance on the remaining data. While current state-of-the-art methods perform well, they typically require some level of retraining over the retained data in order to protect or restore model performance. This adds computational overhead and mandates that the training data remain available and accessible, which may not be feasible. In contrast, other methods employ a retrain-free paradigm; however, these approaches are prohibitively computationally expensive and do not perform on par with their retrain-based counterparts. We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning which is fast, performant, and does not require long-term storage of the training data. First, SSD uses the Fisher information matrix of the training and forgetting data to select parameters that are disproportionately important to the forget set. Second, SSD induces forgetting by dampening these parameters proportionally to their relative importance to the forget set with respect to the wider training data. We evaluate our method against several existing unlearning methods in a range of experiments using ResNet18 and Vision Transformer. Results show that the performance of SSD is competitive with retrain-based post hoc methods, demonstrating the viability of retrain-free post hoc unlearning approaches.
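    The following PyTorch sketch condenses the two SSD steps as described in the abstract: a diagonal Fisher approximation estimated on the full training data and on the forget set, selection of parameters disproportionately important to the forget set, and proportional dampening. The threshold `alpha`, dampening constant `lam`, and the diagonal-approximation details are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of Selective Synaptic Dampening (SSD) as summarised in the abstract.
# `alpha` and `lam` are assumed hyperparameters, not values from the paper.
import torch
import torch.nn.functional as F

def diagonal_fisher(model, loader, device="cpu"):
    """Mean squared gradient per parameter: a diagonal Fisher approximation."""
    importances = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importances[n] += p.grad.detach() ** 2
    return {n: imp / max(len(loader), 1) for n, imp in importances.items()}

def selective_synaptic_dampening(model, train_loader, forget_loader,
                                 alpha=10.0, lam=1.0):
    fisher_train  = diagonal_fisher(model, train_loader)
    fisher_forget = diagonal_fisher(model, forget_loader)
    with torch.no_grad():
        for n, p in model.named_parameters():
            d_train, d_forget = fisher_train[n], fisher_forget[n]
            # Step 1: select weights disproportionately important to the forget set.
            mask = d_forget > alpha * d_train
            # Step 2: dampen them in proportion to their relative importance,
            # capped at 1 so no parameter is amplified.
            scale = torch.clamp(lam * d_train / (d_forget + 1e-12), max=1.0)
            p[mask] *= scale[mask]
    return model
```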

    Identifying contributors to supply chain outcomes in a multi-echelon setting: a decentralised approach

    Organisations often struggle to identify the causes of change in metrics such as product quality and delivery duration. This task becomes increasingly challenging when the cause lies outside of company borders, in multi-echelon supply chains that are only partially observable. Although traditional supply chain management has advocated for data sharing to gain better insights, this does not take place in practice due to data privacy concerns. We propose the use of explainable artificial intelligence for the decentralised computation of estimated contributions to a metric of interest in a multi-stage production process. This approach mitigates the need to convince supply chain actors to share data, as all computations occur in a decentralised manner. Our method is empirically validated using data collected from a real multi-stage manufacturing process. The results demonstrate the effectiveness of our approach in detecting the source of quality variations, compared to a centralised approach using Shapley additive explanations.
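    A minimal sketch of the decentralised idea follows: each stage fits a model on its own process features, computes Shapley additive explanations locally, and shares only aggregated per-feature contributions rather than raw data. The stage names, features, models, and random data are hypothetical placeholders, not the validated pipeline from the paper.

```python
# Decentralised contribution estimation sketch: raw data never leaves a stage;
# only aggregated SHAP contributions are shared. All names and data are illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def local_contributions(X, y, feature_names):
    """Runs entirely inside one stage: fit a local model, return mean |SHAP| per feature."""
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    return dict(zip(feature_names, np.abs(shap_values).mean(axis=0)))

# Hypothetical stages: each observes only its own process variables plus the
# downstream quality metric it is asked to explain.
stages = {
    "casting":   (["melt_temp", "cooling_rate"],  rng.normal(size=(200, 2))),
    "machining": (["spindle_speed", "tool_wear"], rng.normal(size=(200, 2))),
}
quality = rng.normal(size=200)  # shared outcome metric (e.g. a defect score)

report = {stage: local_contributions(X, quality, features)
          for stage, (features, X) in stages.items()}

# Only these aggregated contributions leave each company's premises.
for stage, contribs in report.items():
    print(stage, {f: round(v, 3) for f, v in contribs.items()})
```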

    Topological robustness of the global automotive industry

    The manufacturing industry is characterized by large-scale interdependent networks: companies buy goods from one another but do not control or design the overall flow of materials. The result is a complex emergent structure through which companies connect to one another. The topology of this structure impacts the industry’s robustness to disruptions in companies, countries, and regions. In this work, we propose an analysis framework for examining robustness in the manufacturing industry and validate it using an empirical dataset. Focusing on two key angles, suppliers and products, we highlight macroscopic and microscopic characteristics of the network and shed light on vulnerabilities of the system. It is shown that large-scale data on structural interdependencies can be examined with measures based on network science.
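    The sketch below illustrates the kind of topological robustness check described above, using networkx: a small directed buyer-supplier graph is attacked by removing nodes at random versus by degree, while tracking the size of the largest weakly connected component. The toy edge list is an assumption standing in for the empirical automotive dataset.

```python
# Topological robustness sketch on a toy buyer-supplier network (networkx).
import random
import networkx as nx

# Toy directed edges; the real analysis would use the empirical dataset.
edges = [("supplier_A", "tier1_X"), ("supplier_B", "tier1_X"),
         ("supplier_C", "tier1_Y"), ("supplier_A", "tier1_Y"),
         ("tier1_X", "OEM"), ("tier1_Y", "OEM")]
G = nx.DiGraph(edges)
TOTAL_NODES = G.number_of_nodes()

def largest_component_fraction(graph):
    """Largest weakly connected component as a fraction of the original network."""
    if graph.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.weakly_connected_components(graph), key=len)
    return len(largest) / TOTAL_NODES

def attack(graph, order):
    """Remove nodes in the given order and record how the network fragments."""
    g = graph.copy()
    fractions = [largest_component_fraction(g)]
    for node in order:
        g.remove_node(node)
        fractions.append(largest_component_fraction(g))
    return fractions

nodes = list(G.nodes)
random_order   = random.sample(nodes, len(nodes))          # random failures
targeted_order = sorted(nodes, key=G.degree, reverse=True) # highest-degree first

print("random removal:  ", [round(f, 2) for f in attack(G, random_order)])
print("targeted removal:", [round(f, 2) for f in attack(G, targeted_order)])
```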