470 research outputs found

    Ergonomic Chair Design by Fusing Qualitative and Quantitative Criteria using Interactive Genetic Algorithms

    Get PDF
    This paper emphasizes the necessity of formally bringing qualitative and quantitative criteria of ergonomic design together, and provides a novel complementary design framework to that end. Within this framework, different design criteria are viewed as optimization objectives, and design solutions are iteratively improved through the cooperative efforts of computer and user. The framework is rooted in multi-objective optimization, genetic algorithms, and interactive user evaluation. Three different algorithms based on the framework are developed and tested on an ergonomic chair design problem. The parallel and multi-objective approaches show promising results on fitness convergence, design diversity, and user satisfaction metrics.
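    The interactive loop described above can be summarised in a short sketch. The following is a minimal, self-contained illustration rather than the paper's implementation: the chair encoding (seat height, seat depth, backrest angle), the quantitative fitness targets, and the rate_design() stand-in for interactive user evaluation are all assumptions made for the example.

```python
# Minimal interactive-GA sketch. The chair parameters, targets, and the
# rate_design() placeholder for the human-in-the-loop rating are hypothetical.
import random

BOUNDS = {"seat_height_cm": (38, 56), "seat_depth_cm": (38, 48), "back_angle_deg": (95, 115)}
TARGETS = {"seat_height_cm": 45, "seat_depth_cm": 43, "back_angle_deg": 105}

def random_chair():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def quantitative_fitness(chair):
    # Quantitative criterion: negative deviation from (assumed) ergonomic targets.
    return -sum(abs(chair[k] - TARGETS[k]) for k in TARGETS)

def rate_design(chair):
    # Qualitative criterion: stands in for an interactive user rating (e.g. 1-5).
    return random.uniform(1, 5)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in BOUNDS}

def mutate(chair, rate=0.2):
    child = dict(chair)
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
    return child

def evolve(pop_size=12, generations=10, w_user=0.5):
    score = lambda c: quantitative_fitness(c) + w_user * rate_design(c)  # fuse both criteria
    pop = [random_chair() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=score)

print(evolve())
```

    In a real interactive run, rate_design() would pause and collect a score from the user for each candidate; the paper's parallel and multi-objective variants differ in how those evaluations are scheduled and combined.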

    Multi-objective optimisation of reliable product-plant network configuration.

    Get PDF
    Ensuring manufacturing reliability is key to satisfying product orders when production plants are subject to disruptions. The reliability of a supply network is closely related to product redundancy, since production at disrupted plants can be taken over by alternative plants. However, the benefits of incorporating redundancy must be balanced against the costs of doing so. Models in the literature are highly case-specific and do not consider the complex network structures and redundant distributions of products over suppliers that are evident in empirical work. In this paper we first develop a simple, generic measure for evaluating the reliability of a network of plants under a given product-plant configuration. Second, we frame the problem as a multi-objective evolutionary optimisation model to show that such a measure can be used to optimise the cost-reliability trade-off. The model is applied to a producer's automotive light and lamp production network using three popular genetic algorithms designed for multi-objective problems, namely NSGA2, SPEA2, and PAES. Using the model in conjunction with these genetic algorithms, we were able to find trade-off solutions successfully. NSGA2 achieved the best results in terms of Pareto front spread. The algorithms differed considerably in their performance, meaning that the choice of algorithm has a significant impact on the resulting exploration of the search space.
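    To make the trade-off concrete, here is a small, self-contained sketch. It is not the paper's measure or model: the plant disruption probabilities, the per-assignment cost, and the brute-force Pareto enumeration (in place of running NSGA2/SPEA2/PAES on a realistic instance) are assumptions for illustration only.

```python
# Toy cost-reliability trade-off for product-plant configurations. Disruption
# probabilities and costs are hypothetical; a realistic instance would be
# searched with a multi-objective GA (NSGA2, SPEA2, PAES) rather than enumerated.
from itertools import combinations, product as cartesian

PLANTS = {"P1": 0.10, "P2": 0.05, "P3": 0.20}   # assumed disruption probabilities
PRODUCTS = ["lamp_A", "lamp_B"]
ASSIGN_COST = 1.0                               # assumed cost per product-plant assignment

def reliability(config):
    # A product can be delivered if at least one assigned plant is available;
    # network reliability here is the mean delivery probability over products.
    probs = []
    for prod in PRODUCTS:
        p_all_fail = 1.0
        for plant in config[prod]:
            p_all_fail *= PLANTS[plant]
        probs.append(1.0 - p_all_fail if config[prod] else 0.0)
    return sum(probs) / len(probs)

def cost(config):
    return ASSIGN_COST * sum(len(plants) for plants in config.values())

def nonempty_subsets(items):
    return [set(c) for r in range(1, len(items) + 1) for c in combinations(items, r)]

def pareto_front():
    options = nonempty_subsets(PLANTS)
    evaluated = []
    for choice in cartesian(options, repeat=len(PRODUCTS)):
        config = dict(zip(PRODUCTS, choice))
        evaluated.append((cost(config), reliability(config), config))
    # Keep configurations no other configuration dominates (cheaper AND more reliable).
    return sorted(
        (a for a in evaluated
         if not any(b[0] <= a[0] and b[1] >= a[1] and (b[0] < a[0] or b[1] > a[1]) for b in evaluated)),
        key=lambda t: t[:2],
    )

for c, r, cfg in pareto_front():
    print(f"cost={c:.0f}  reliability={r:.3f}  {cfg}")
```

    The printed front shows the familiar pattern: adding redundant plant assignments raises reliability at increasing cost, which is exactly the trade-off the evolutionary search explores.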

    Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening

    Full text link
    Machine unlearning, the ability of a machine learning model to forget, is becoming increasingly important for complying with data privacy regulations and for removing harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while protecting model performance on the remaining data. While current state-of-the-art methods perform well, they typically require some level of retraining over the retained data in order to protect or restore model performance. This adds computational overhead and mandates that the training data remain available and accessible, which may not be feasible. In contrast, other methods employ a retrain-free paradigm; however, these approaches are prohibitively expensive computationally and do not perform on par with their retrain-based counterparts. We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data. First, SSD uses the Fisher information matrix of the training and forgetting data to select parameters that are disproportionately important to the forget set. Second, SSD induces forgetting by dampening these parameters in proportion to their importance to the forget set relative to the wider training data. We evaluate our method against several existing unlearning methods in a range of experiments using ResNet18 and Vision Transformer. The results show that SSD's performance is competitive with retrain-based post hoc methods, demonstrating the viability of retrain-free post hoc unlearning approaches.
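    The two steps translate naturally into a short parameter-level sketch. The PyTorch-style code below only illustrates the selection-and-dampening idea as summarised above: the mean-squared-gradient approximation of the diagonal Fisher information and the alpha/lambda constants are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of selective dampening (assumes PyTorch). Fisher
# information is approximated by mean squared gradients; alpha and lam are
# placeholder hyperparameters.
import torch

def diagonal_fisher(model, loader, loss_fn, device="cpu"):
    """Per-parameter importance approximated as the mean squared gradient over a dataset."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

@torch.no_grad()
def selective_synaptic_dampening(model, fisher_full, fisher_forget, alpha=10.0, lam=1.0):
    for n, p in model.named_parameters():
        if n not in fisher_full:
            continue
        # Step 1: select parameters disproportionately important to the forget set.
        selected = fisher_forget[n] > alpha * fisher_full[n]
        # Step 2: dampen them in proportion to their relative importance.
        scale = torch.clamp(lam * fisher_full[n] / (fisher_forget[n] + 1e-12), max=1.0)
        p.mul_(torch.where(selected, scale, torch.ones_like(scale)))
```

    In use, fisher_full would be estimated on (a sample of) the training data and fisher_forget on the forget set; the dampened model is then deployed directly, with no retraining pass over the retained data.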

    Identifying contributors to supply chain outcomes in a multi-echelon setting: a decentralised approach

    Full text link
    Organisations often struggle to identify the causes of change in metrics such as product quality and delivery duration. This task becomes increasingly challenging when the cause lies outside of company borders in multi-echelon supply chains that are only partially observable. Although traditional supply chain management has advocated for data sharing to gain better insights, this does not take place in practice due to data privacy concerns. We propose the use of explainable artificial intelligence for decentralised computing of estimated contributions to a metric of interest in a multi-stage production process. This approach mitigates the need to convince supply chain actors to share data, as all computations occur in a decentralised manner. Our method is empirically validated using data collected from a real multi-stage manufacturing process. The results demonstrate the effectiveness of our approach in detecting the source of quality variations compared to a centralised approach using Shapley additive explanations.
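    A compact sketch of the decentralised idea is shown below. It assumes the numpy, scikit-learn, and shap packages; the two synthetic stages, their features, and the summary that gets exchanged are hypothetical stand-ins rather than the paper's protocol.

```python
# Illustrative decentralised sketch: each actor explains its own stage locally
# and shares only aggregated contribution scores, never its raw process data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Stage 1 (supplier): local process parameters -> quality of the intermediate part.
X1 = rng.normal(size=(n, 3))
intermediate_quality = 2.0 * X1[:, 0] + rng.normal(scale=0.1, size=n)

# Stage 2 (OEM): incoming intermediate quality + its own parameters -> final metric.
X2_local = rng.normal(size=(n, 2))
final_metric = intermediate_quality + 0.5 * X2_local[:, 1] + rng.normal(scale=0.1, size=n)

def local_contributions(X, y):
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    shap_values = shap.TreeExplainer(model).shap_values(X)
    return np.abs(shap_values).mean(axis=0)        # mean |SHAP| per local feature

# Each actor runs this on its own premises; only these summary vectors are exchanged.
stage1_scores = local_contributions(X1, intermediate_quality)
stage2_scores = local_contributions(np.column_stack([intermediate_quality, X2_local]), final_metric)

print("Stage 1 contributions (local parameters):", np.round(stage1_scores, 3))
print("Stage 2 contributions (incoming quality, local parameters):", np.round(stage2_scores, 3))
```

    If stage 2's largest contribution is the incoming quality score, the source of variation is traced upstream without stage 1 ever revealing its raw process parameters.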

    Supply Networks as Complex Systems: A Network-Science-Based Characterization

    Get PDF
    Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied, owing to a lack of empirical data and of suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet leaving the network vulnerable to disruptions at these hubs. We also show how network science can be used to identify firms that are operationally critical and key to disseminating information.
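    The style of analysis described above can be reproduced on a toy graph with standard network-science tooling. The sketch below assumes the networkx package and uses made-up firm names and ties rather than the Airbus data.

```python
# Toy supply-network analysis: community detection plus centrality measures to
# flag hub firms (assumes networkx; firms and ties are hypothetical).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [
    ("OEM", "Tier1_A"), ("OEM", "Tier1_B"), ("OEM", "Tier1_C"),
    ("Tier1_A", "Sub_A1"), ("Tier1_A", "Sub_A2"),
    ("Tier1_B", "Sub_B1"), ("Tier1_B", "Sub_B2"),
    ("Tier1_C", "Sub_C1"),
    ("Tier1_A", "Tier1_B"),   # hub firms connecting to each other
    ("Sub_A1", "Sub_B1"),     # cross-community tie
]
G = nx.Graph(edges)

# Communities connected by hub firms.
communities = greedy_modularity_communities(G)
print("Communities:", [sorted(c) for c in communities])

# High betweenness flags firms that are operationally critical and central to
# disseminating information; their disruption fragments the network.
betweenness = nx.betweenness_centrality(G)
for firm, score in sorted(betweenness.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{firm}: betweenness={score:.2f}, degree={G.degree(firm)}")
```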

    Bayesian Autoencoders for Drift Detection in Industrial Environments

    Get PDF
    Autoencoders are unsupervised models that have been used for detecting anomalies in multi-sensor environments. A typical use involves training a predictive model on data from sensors operating under normal conditions and then using the model to detect anomalies. Anomalies can come either from real changes in the environment (real drift) or from faulty sensory devices (virtual drift); however, the use of autoencoders to distinguish between these kinds of anomaly has not yet been considered. To this end, we first propose the development of Bayesian Autoencoders to quantify epistemic and aleatoric uncertainties. We then test the Bayesian Autoencoder on a real-world industrial dataset for hydraulic condition monitoring. The system is injected with noise and drifts, and we find the epistemic uncertainty to be less sensitive to sensor perturbations than the reconstruction loss. By observing the reconstructed signals together with the uncertainties, we gain interpretable insights, and these uncertainties offer a potential avenue for distinguishing real and virtual drifts.
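    As a rough illustration of separating the reconstruction error from epistemic uncertainty, the sketch below trains a small ensemble of autoencoders on synthetic "healthy" sensor data (an ensemble is used here as a simple approximation of a Bayesian treatment; the paper's formulation may differ) and assumes PyTorch. The architecture and the injected perturbation are made up for the example.

```python
# Ensemble-of-autoencoders stand-in for a Bayesian autoencoder: reconstruction
# error and ensemble disagreement (epistemic uncertainty) are reported separately.
import torch
import torch.nn as nn

def make_autoencoder(n_features=8, hidden=4):
    return nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, n_features))

def train(model, x, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
x_normal = torch.randn(256, 8) * 0.1                      # "healthy" sensor readings
x_drifted = x_normal + torch.tensor([1.0] + [0.0] * 7)    # one perturbed sensor

ensemble = [train(make_autoencoder(), x_normal) for _ in range(5)]

@torch.no_grad()
def diagnose(x):
    recons = torch.stack([m(x) for m in ensemble])         # (members, samples, features)
    recon_loss = ((recons.mean(0) - x) ** 2).mean().item()
    epistemic = recons.var(0).mean().item()                 # disagreement across members
    return recon_loss, epistemic

print("normal  (loss, epistemic):", diagnose(x_normal))
print("drifted (loss, epistemic):", diagnose(x_drifted))
```

    Comparing how the two quantities react to the perturbed sensor is the kind of diagnostic the abstract describes for telling real drift apart from a faulty sensor.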