
    Assessment of information-driven decision-making in the SME

    The use of analytics in decision-making processes is a key element for organizations to be competitive. However, experience indicates that many organizations still have not managed to fully understand how to properly use the available data for diagnosing, improving and controlling processes, or for modelling, predicting and discovering business opportunities. This situation is even more pronounced among small and medium enterprises (SMEs). An essential first step for SMEs to start using analytics is a correct assessment of their decision-making processes and use of data. This will help them understand their current situation, see the potential of adopting analytical practices and decide their approach to analytics. The assessment we propose is therefore managerial and strategic; it is not aimed at detecting problems such as errors in the data used to issue an invoice, not having the correct version of a drawing in the shop, or a wrong date in a project plan. These issues are undoubtedly very important, but they are not the objective. The results from applying the proposed assessment tool in several pilot SMEs are expected to serve as the basis for improving the tool and for developing a maturity model and a roadmap for improving their proficiency in information-driven decision-making.

    Diseños factoriales fraccionales aplicación al control de calidad mediante el diseño de productos y procesos

    Industrial processes have three phases (product design, process design and production) in which carefully designed experiments can increase product quality and process productivity. Chapter 2 proposes an algorithm for assigning the physical variables of the experiment to the design factors so that the design obtained by projection is as informative as possible. Chapter 3 proposes a new method, more efficient and precise than the one known until now, for estimating the effects of the variables on the dispersion of the response.
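
    A minimal sketch of the idea behind projecting a fractional factorial design (not the thesis algorithm): a 2^(7-4) design is built from a 2^3 full factorial, and different assignments of physical variables to columns give projections with very different information content. The generators below are a standard textbook choice used only for illustration.

    import itertools
    import numpy as np

    # Full 2^3 design in basic factors A, B, C (levels coded -1/+1)
    base = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = base[:, 0], base[:, 1], base[:, 2]

    # Generators D=AB, E=AC, F=BC, G=ABC (an assumption for illustration)
    design = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])
    names = ["A", "B", "C", "D", "E", "F", "G"]

    def projection(design, names, keep):
        """Return the distinct runs of the design projected onto the kept factors."""
        idx = [names.index(k) for k in keep]
        return np.unique(design[:, idx], axis=0)

    # Projecting onto {A, B, D} collapses to 4 distinct points (since D = AB),
    # while {A, B, C} recovers a full 2^3: the variable-to-column assignment
    # determines how informative the projected design is.
    print(projection(design, names, ["A", "B", "D"]))
    print(projection(design, names, ["A", "B", "C"]))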

    Application of Kansei Engineering to Design an Industrial Enclosure

    Kansei Engineering (KE) is a technique used to incorporate emotions in the product design process. Its basic purpose is discovering in which way some properties of a product convey certain emotions to its users. It is a quantitative method, and data is typically collected using questionnaires. Japanese researcher Mitsuo Nagamachi is the founder of Kansei Engineering. Products where KE has been successfully applied include cars, phones, packaging, house appliances, clothes and websites, among others. Kansei Engineering studies typically follow a model with three main steps: (1) spanning the semantic space: defining the responses, the emotions that will be studied; (2) spanning the space of properties: deciding on the technical properties of the products that can be freely changed and that might affect the responses (the factors in a DOE factorial design); and (3) the synthesis phase, where both spaces are linked (that is, discovering how each factor affects each response). We claim that KE is a good example of what Roger W. Hoerl and Ron Snee call statistical engineering: focusing not on advancing statistics by developing new techniques or fine-tuning existing ones, but on how current techniques can best be used in a new area. This presentation is a practical application of those ideas to the design of electrical enclosures. The paper shows how well-known statistical methods (DOE, principal component analysis and regression analysis) are used in conjunction with other non-statistical techniques, and in the presence of practical real-world restrictions, to discover how different technical characteristics of the enclosures affect the selected “emotions”.
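
    An illustrative sketch of the DOE + PCA + regression workflow the abstract describes, on synthetic data (not the enclosure study): PCA summarizes the semantic-differential ratings, and the first component is regressed on coded design factors. Factor names and numbers are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_products, n_words = 8, 6           # 8 prototypes rated on 6 Kansei words
    X = np.column_stack([                 # coded technical factors of each prototype
        np.repeat([-1, 1], 4),            # e.g. size
        np.tile([-1, -1, 1, 1], 2),       # e.g. material finish
        np.tile([-1, 1], 4),              # e.g. colour scheme
    ])
    scores = rng.normal(size=(n_products, n_words))   # mean questionnaire ratings

    # PCA via SVD: principal component scores of the centred semantic data
    centred = scores - scores.mean(axis=0)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    pc1 = U[:, 0] * s[0]                  # first "emotional" component per prototype

    # Regression of the component on the design factors (ordinary least squares)
    design = np.column_stack([np.ones(n_products), X])
    coefs, *_ = np.linalg.lstsq(design, pc1, rcond=None)
    print("intercept and factor effects on PC1:", coefs.round(3))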

    Statistical methods in kansei engineering: a case of statistical engineering

    Kansei Engineering (KE) is a technique used to incorporate emotions in the product design process. Its basic purpose is discovering in which way some properties of a product convey certain emotions to its users. It is a quantitative method, and data is typically collected using questionnaires. Japanese researcher Mitsuo Nagamachi is the founder of Kansei Engineering. Products where KE has been successfully applied include cars, phones, packaging, house appliances, clothes and websites, among others. Kansei Engineering studies typically follow a model with three main steps: (1) spanning the semantic space: defining the responses, the emotions that will be studied; (2) spanning the space of properties: deciding on the technical properties of the products that can be freely changed and that might affect the responses (the factors in a factorial design); and (3) the synthesis phase, where both spaces are linked (that is, discovering how each factor affects each response). The procedure resembles that of an experimental design in an industrial context. However, practitioners of KE are hardly ever statisticians. Many well-known statistical methods are commonly used in KE, such as principal component analysis and regression analysis, but the techniques are sometimes misused. Furthermore, the discipline could benefit from a more extensive use of statistical methods (some of them of higher complexity, but easily implemented with existing statistical software). Statistics is thus essential in Kansei Engineering. But if statisticians do not enter this arena, others will, as there is a real need and interest in the topic. Kansei Engineering is a good area of application of what Roger W. Hoerl and Ron Snee call statistical engineering: focusing not on advancing statistics by developing new techniques or fine-tuning existing ones, but on how current techniques can best be used in a new area. The aim of this paper is to present the fundamentals of Kansei Engineering while giving a practical example of statistical engineering in a promising field.
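
    A minimal sketch of the synthesis step on made-up data: one Kansei response is regressed on dummy-coded product properties, a regression-based approach commonly used in the KE literature. The properties, products and ratings below are hypothetical, not taken from the paper.

    import numpy as np

    # 6 products described by two categorical properties, dummy-coded:
    # shape (round = 1 vs square = 0) and colour (red/blue vs grey baseline)
    properties = np.array([
        # round, red, blue
        [1, 1, 0],
        [1, 0, 1],
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
        [0, 0, 0],
    ])
    kansei_score = np.array([4.2, 3.8, 3.1, 2.9, 2.4, 2.0])  # mean rating for one word

    X = np.column_stack([np.ones(len(properties)), properties])
    coefs, *_ = np.linalg.lstsq(X, kansei_score, rcond=None)
    print("intercept, round, red, blue:", coefs.round(2))
    # Each coefficient estimates how much a property category shifts the emotion score.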

    Efficient heuristics for the parallel blocking flow shop scheduling problem

    We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, under a blocking constraint. The criterion applied is to minimize the makespan, i.e., the maximum completion time of all the jobs in the F flow shops (lines). The Parallel Flow Shop Scheduling Problem (PFSP) is conceptually similar to another problem known in the literature as the Distributed Permutation Flow Shop Scheduling Problem (DPFSP), which allows modeling the scheduling process in companies with more than one factory, each with a flow shop configuration. Therefore, the proposed methods can solve the scheduling problem under the blocking constraint in both situations, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with some constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that are totally different from those proposed in the literature. These methods are used as initial solution procedures for an iterated local search (ILS) and an iterated greedy algorithm (IGA), both of which are combined with a variable neighborhood search (VNS). The proposed constructive procedure and the improvement methods take into account the characteristics of the problem. The computational evaluation demonstrates that both of them, especially the IGA, perform considerably better than the algorithms adapted from the DPFSP literature.
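
    A minimal sketch (not the authors' heuristics) of the core evaluation such methods rely on: computing the makespan of one line under the blocking constraint, with the parallel-line objective being the maximum over the F lines. The job data and line assignment below are toy values.

    def blocking_makespan(seq, p):
        """Makespan of one line of a blocking flow shop (no intermediate buffers).
        seq: ordered list of job indices; p[j][k]: time of job j on machine k."""
        m = len(p[seq[0]])
        depart = [0.0] * m      # departure times of the previous job from each machine
        for j in seq:
            d = [0.0] * m
            for k in range(m):
                finish = (depart[0] if k == 0 else d[k - 1]) + p[j][k]
                # the job stays on machine k (blocking) until machine k+1 is free
                d[k] = finish if k == m - 1 else max(finish, depart[k + 1])
            depart = d
        return depart[-1]

    # Parallel-line objective: the makespan is the maximum over the lines
    p = [[2, 3], [4, 1], [1, 5], [3, 2]]               # 4 jobs, 2 machines (toy data)
    lines = [[0, 3], [1, 2]]                            # one possible assignment/sequencing
    print(max(blocking_makespan(s, p) for s in lines))  # objective value of this solution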

    Reducing variability of a critical dimension


    BBVA: la innovación abierta en empresas de servicios

    Open innovation is an ideal way to find and develop new ideas. A service company, such as a financial institution, can also benefit from it since, for example, by externalizing innovation developed internally it can generate flows of added value in the market. This article shows why and how the bank BBVA has promoted the concept of 'open innovation', what new tools it has created as a result of its efforts in this area, and what strategy it has followed to satisfactorily protect all the resulting innovations.

    An efficient discrete artificial bee colony algorithm for the blocking flow shop problem with total flowtime minimization

    This paper presents a high-performing Discrete Artificial Bee Colony (DABC) algorithm for the blocking flow shop problem with the total flowtime criterion. To develop the proposed algorithm, we considered four strategies for the food source phase and two strategies for each of the three remaining phases (employed bees, onlookers and scouts). One of the strategies tested in the food source phase and one implemented in the employed bees phase are new, and both have proved to be very effective for the problem at hand. In particular, the initialization scheme named HPF2(¿, µ), which is used to construct the initial food sources, is shown in the computational evaluation to be one of the main procedures that allow the DABC_RCT to obtain good solutions for this problem. To find the best configuration of the algorithm, we used design of experiments (DOE); this technique has been used extensively in the literature to calibrate the parameters of algorithms, but not to select their configuration. Comparison with other algorithms proposed for this problem in the literature demonstrates the effectiveness and superiority of the DABC_RCT.
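
    A simplified, generic skeleton of a discrete artificial bee colony loop on job permutations, only to illustrate the phase structure the abstract mentions; it is not the paper's DABC_RCT or its HPF2 initialization, and the move, selection rule and parameters are assumptions. The cost argument is whatever objective evaluation is plugged in (for example, total flowtime under blocking).

    import random

    def insertion_neighbor(perm):
        """Remove one job and reinsert it at another position (a common discrete move)."""
        p = perm[:]
        job = p.pop(random.randrange(len(p)))
        p.insert(random.randrange(len(p) + 1), job)
        return p

    def dabc(cost, n_jobs, n_food=10, limit=20, iters=500, seed=1):
        random.seed(seed)
        foods = [random.sample(range(n_jobs), n_jobs) for _ in range(n_food)]
        trials = [0] * n_food
        best = min(foods, key=cost)
        for _ in range(iters):
            # Employed bees: one neighbor per food source, greedy acceptance
            for i, f in enumerate(foods):
                cand = insertion_neighbor(f)
                if cost(cand) < cost(f):
                    foods[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
            # Onlooker bees: bias extra search toward the better food sources
            ranked = sorted(range(n_food), key=lambda i: cost(foods[i]))
            for i in ranked[: n_food // 2]:
                cand = insertion_neighbor(foods[i])
                if cost(cand) < cost(foods[i]):
                    foods[i], trials[i] = cand, 0
            # Scout bees: abandon sources that stopped improving
            for i in range(n_food):
                if trials[i] > limit:
                    foods[i], trials[i] = random.sample(range(n_jobs), n_jobs), 0
            best = min(foods + [best], key=cost)
        return best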

    Selecting significant effects in factorial designs: Lenth’s method versus the Box-Meyer approach

    The Lenth method is conceptually simple and probably the most common approach to analyzing the significance of the effects in factorial designs. Here, we compare it with a Bayesian approach proposed by Box and Meyer, which does not appear in the usual software packages. The comparison is made by simulating the results of 4-, 8- and 16-run designs in a set of scenarios that mirror practical situations and analyzing the results provided by both methods. Although the results depend on the number of runs and the scenario considered, the use of the Box and Meyer method generally produces better results.
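
    A compact sketch of Lenth's margin-of-error rule for unreplicated factorials, one of the two methods compared above: the pseudo standard error is computed from the effect estimates and the critical value is taken from a t distribution with m/3 degrees of freedom (Lenth's approximation). The effect values below are made-up numbers for illustration.

    import numpy as np
    from scipy import stats

    def lenth(effects, alpha=0.05):
        e = np.abs(np.asarray(effects, dtype=float))
        s0 = 1.5 * np.median(e)
        pse = 1.5 * np.median(e[e < 2.5 * s0])      # pseudo standard error
        df = len(e) / 3.0
        me = stats.t.ppf(1 - alpha / 2, df) * pse   # margin of error
        return pse, me

    effects = [21.6, -9.4, 0.8, 1.2, -0.5, 2.1, -1.7]   # 7 contrasts from a 2^3 design
    pse, me = lenth(effects)
    for v in effects:
        print(v, "significant" if abs(v) > me else "not significant")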

    Consequences of using estimated response values from negligible interactions in factorial designs

    This article analyzes the increase in the probability of committing type I and type II errors when assessing the significance of the effects in cases where some properly selected runs have not been carried out and their responses have been estimated by assuming from the outset that certain interactions are null. This is done by simulating the responses from known models that represent a wide variety of practical situations the experimenter may encounter; the responses considered to be missing are then estimated and the significance of the effects is assessed. The errors are then identified through comparison with the parameters of the model. To assess the significance of the effects when there are missing values, the Box-Meyer method has been used. The conclusions are that 1 missing value in 8-run designs and up to 3 missing values in 16-run designs can be estimated with hardly any notable increase in the probability of error when assessing the significance of the effects.
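
    A sketch of the basic idea behind estimating a missing run from a negligible interaction, with toy numbers rather than the article's simulations: in a 2^3 design, the three-factor interaction ABC is assumed null, and the missing response is solved for so that the ABC contrast equals zero.

    import itertools
    import numpy as np

    runs = np.array(list(itertools.product([-1, 1], repeat=3)))      # A, B, C columns
    abc = runs[:, 0] * runs[:, 1] * runs[:, 2]                        # ABC contrast signs
    y = np.array([60.0, 72.0, 54.0, 68.0, 52.0, 83.0, 45.0, np.nan])  # last run missing

    missing = np.isnan(y)
    # ABC contrast = sum(sign * y) = 0  ->  y_missing = -(sum over observed runs) / sign_missing
    y_est = -(abc[~missing] @ y[~missing]) / abc[missing][0]
    y[missing] = y_est
    print("estimated response for the missing run:", y_est)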