
    Case Based Reasoning and TRIZ : a coupling for Innovative conception in Chemical Engineering

    With the evolution of the world market, researchers and engineers must propose technical innovations. Nevertheless, the Chemical Engineering community shows little interest in innovation compared to other engineering fields. In this paper, an approach to accelerate inventive preliminary design in Chemical Engineering is presented. This approach uses the Case-Based Reasoning (CBR) method to model, capture, store, and make available the knowledge deployed during design. CBR is a method from Artificial Intelligence well suited to routine design. Indeed, the main assumption of CBR is that a new design problem can be solved with the help of past successful ones. Consequently, the problem-solving process is based on past successful solutions; design is therefore accelerated, but creativity is limited and not stimulated. Our approach extends the CBR method from routine design to inventive design. One of the main drawbacks of CBR is that it is restricted to one particular domain of application. To propose inventive solutions, the level of abstraction for problem resolution must be increased. For this reason, CBR is coupled with the TRIZ theory (Russian acronym for the Theory of Inventive Problem Solving). TRIZ is a problem-solving method that increases the ability to solve creative problems thanks to its capacity to give access to best practices across all technical domains. The proposed synergy between CBR and TRIZ combines the main advantages of CBR (the ability to store and rapidly reuse knowledge) with those of TRIZ (no trade-offs during resolution, inventive solutions). Based on this synergy, a tool is developed and a simple example is treated.
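The core CBR assumption stated in the abstract, that a new design problem is solved by retrieving the most similar past case and reusing its solution, can be sketched as follows. The similarity measure, the case representation, and the chemical-engineering examples are illustrative assumptions, not taken from the paper.

```python
def similarity(a, b):
    """Fraction of shared problem features (a simple Jaccard measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(case_base, new_problem):
    """Return the stored (problem, solution) case most similar to new_problem."""
    return max(case_base, key=lambda case: similarity(case[0], new_problem))

# A toy case base of past design problems and the solutions that worked.
case_base = [
    ({"exothermic", "liquid-liquid"}, "jacketed stirred-tank reactor"),
    ({"endothermic", "gas-solid"}, "fluidized-bed reactor"),
]

# A new problem matching the first past case is solved by reusing its solution.
problem = {"exothermic", "liquid-liquid"}
_, reused_solution = retrieve(case_base, problem)
```

This retrieve-and-reuse loop is what makes plain CBR fast but domain-bound: it can only return solutions already in the case base, which is the limitation the TRIZ coupling is meant to overcome.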

    Compact Personalized Models for Neural Machine Translation

    We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models. We demonstrate that a large proportion of model parameters can be frozen during adaptation with minimal or no reduction in translation quality by encouraging structured sparsity in the set of offset tensors during learning via group lasso regularization. We evaluate this technique for both batch and incremental adaptation across multiple data sets and language pairs. Our system architecture, combining a state-of-the-art self-attentive model with compact domain adaptation, provides high-quality personalized machine translation that is both space- and time-efficient.

    Comment: Published at the 2018 Conference on Empirical Methods in Natural Language Processing
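The group-lasso idea in the abstract can be sketched numerically: adaptation learns per-tensor offsets on top of frozen pretrained weights, and an L2 penalty on each whole offset tensor (the "group") drives entire tensors to zero so they can be dropped from the compact personalized model. The shapes, names, and the proximal update below are illustrative assumptions, not the paper's actual training code.

```python
import numpy as np

def group_lasso_penalty(offsets, lam):
    """lam times the sum of L2 norms, one norm per offset tensor (group)."""
    return lam * sum(np.linalg.norm(d) for d in offsets)

def prox_group(d, step, lam):
    """Proximal step for one group: shrink the tensor's norm toward zero,
    zeroing the whole group when its norm falls below step * lam."""
    norm = np.linalg.norm(d)
    if norm <= step * lam:
        return np.zeros_like(d)
    return (1 - step * lam / norm) * d

# Two hypothetical offset tensors: one nearly unused, one important.
offsets = [np.full((2, 2), 0.01), np.full((2, 2), 1.0)]
sparse = [prox_group(d, step=1.0, lam=0.05) for d in offsets]
# The small offset tensor is zeroed out entirely (it can be pruned,
# i.e. that pretrained tensor stays frozen); the large one survives.
```

Because the penalty acts on whole tensors rather than individual weights, sparsity arrives in structured blocks, which is what allows entire parameter groups to be omitted from the adapted model.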