21 research outputs found

    Multiple proportion case-basing driven CBRE and its application in the evaluation of possible failure of firms

    Case-based reasoning (CBR) is a unique tool for the evaluation of possible failure of firms (EOPFOF) for its ease of interpretation and implementation. Ensemble computing, a variation of group decision in society, provides a potential means of improving the predictive performance of CBR-based EOPFOF. This research integrates bagging and proportion case-basing with CBR to generate a method of proportion bagging CBR for EOPFOF. Diverse multiple case bases are first produced by multiple case-basing, in which a volume parameter is introduced to control the size of each case base. Then, the classic case retrieval algorithm is implemented to generate diverse member CBR predictors. Majority voting, the most frequently used mechanism in ensemble computing, is finally used to aggregate the outputs of the member CBR predictors into the final prediction of the CBR ensemble. In an empirical experiment, we statistically validated the results of the CBR ensemble from multiple case bases by comparing them with those of multivariate discriminant analysis, logistic regression, classic CBR, the best member CBR predictor and a bagging CBR ensemble. The results on Chinese EOPFOF data from up to three years prior to failure indicate that the new CBR ensemble, which significantly improved CBR's predictive ability, outperformed all the comparative methods.
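    The pipeline the abstract describes (multiple bootstrap case bases whose size is controlled by a volume parameter, classic nearest-case retrieval per member, majority voting over the members) can be sketched roughly as follows. The function names, the Euclidean distance measure, and the default parameter values are illustrative assumptions, not the paper's exact settings:

    ```python
    import numpy as np
    from collections import Counter

    def retrieve_predict(case_base, labels, query, k=1):
        # Classic CBR retrieval: find the k nearest stored cases
        # (Euclidean distance) and return their majority label.
        d = np.linalg.norm(case_base - query, axis=1)
        idx = np.argsort(d)[:k]
        return Counter(labels[idx]).most_common(1)[0][0]

    def proportion_bagging_cbr(X, y, query, n_members=11, volume=0.8, seed=0):
        # The volume parameter controls the size of each sampled case base,
        # as a proportion of the full case base.
        rng = np.random.default_rng(seed)
        m = int(volume * len(X))
        votes = []
        for _ in range(n_members):
            idx = rng.choice(len(X), size=m, replace=True)  # bootstrap sample
            votes.append(retrieve_predict(X[idx], y[idx], query))
        # Majority voting aggregates the member CBR predictors' outputs.
        return Counter(votes).most_common(1)[0][0]
    ```

    Each bootstrap draw yields a different case base, so the member predictors disagree on borderline queries; the vote smooths out those disagreements, which is the usual rationale for bagging.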

    Prediction of industrial equipment Remaining Useful Life by fuzzy similarity and belief function theory

    We develop a novel prognostic method for estimating the Remaining Useful Life (RUL) of industrial equipment and its uncertainty. The novelty of the work is the combined use of a fuzzy similarity method for the RUL prediction and of Belief Function Theory for uncertainty treatment. The latter allows estimating the uncertainty affecting the RUL predictions even in cases characterized by few available data, in which traditional uncertainty estimation methods tend to fail. From the practical point of view, the maintenance planner can define the maximum acceptable failure probability for the equipment of interest and is informed by the proposed prognostic method of the time at which this probability is exceeded, allowing the adoption of a predictive maintenance approach which takes RUL uncertainty into account. The method is applied to simulated data of creep growth in ferritic steel and to real data of filter clogging taken from a Boiling Water Reactor (BWR) condenser. The obtained results show the effectiveness of the proposed method for uncertainty treatment and its superiority over the Kernel Density Estimation (KDE) and Mean-Variance Estimation (MVE) methods in terms of reliability and precision of the RUL prediction intervals.
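    A minimal sketch of the similarity-weighted RUL estimate underlying fuzzy-similarity prognostics of this kind, assuming a Gaussian (bell-shaped) membership function over the Euclidean distance between degradation trajectories; the belief-function uncertainty treatment that is the work's main contribution is not reproduced here, and the function names and the sigma parameter are illustrative:

    ```python
    import numpy as np

    def fuzzy_similarity_rul(test_traj, ref_trajs, ref_ruls, sigma=1.0):
        """Estimate RUL as a similarity-weighted average of the RULs of
        reference degradation trajectories from run-to-failure equipment."""
        n = len(test_traj)
        weights = []
        for ref in ref_trajs:
            # Distance between the observed test segment and the matching
            # initial segment of the reference trajectory.
            d = np.linalg.norm(np.asarray(ref[:n]) - np.asarray(test_traj))
            # Bell-shaped fuzzy membership: close trajectories get weight ~1,
            # dissimilar ones a weight that decays rapidly toward 0.
            weights.append(np.exp(-((d / sigma) ** 2)))
        weights = np.array(weights)
        return float(np.sum(weights * np.asarray(ref_ruls)) / np.sum(weights))
    ```

    With this weighting, a reference trajectory that closely tracks the test equipment's degradation history dominates the estimate, while unrelated trajectories contribute almost nothing.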

    UNCERTAINTY IN MACHINE LEARNING: A SAFETY PERSPECTIVE ON BIOMEDICAL APPLICATIONS

    Uncertainty is an inevitable and essential aspect of the world we live in and a fundamental aspect of human decision-making. It is no different in the realm of machine learning. Just as humans seek out additional information and perspectives when faced with uncertainty, machine learning models must also be able to account for and quantify the uncertainty in their predictions. However, uncertainty quantification in machine learning models is often neglected. By acknowledging and incorporating uncertainty quantification into machine learning models, we can build more reliable and trustworthy systems that are better equipped to handle the complexity of the world and support clinical decision-making. This thesis addresses the broad issue of uncertainty quantification in machine learning, covering the development and adaptation of uncertainty quantification methods, their integration in the machine learning development pipeline, and their practical application in clinical decision-making. Original contributions include the development of methods to support practitioners in building more robust and interpretable models, which account for different sources of uncertainty across the core components of the machine learning pipeline: the data, the machine learning model, and its outputs. Moreover, these machine learning models are designed with abstaining capabilities, enabling them to accept or reject predictions based on the level of uncertainty present. This emphasizes the importance of using classification with a rejection option in clinical decision support systems. The effectiveness of the proposed methods was evaluated across databases with physiological signals from medical diagnosis and human activity recognition. The results support the importance of uncertainty quantification for more reliable and robust model predictions.
    By addressing these topics, this thesis aims to improve the reliability and trustworthiness of machine learning models and to foster the adoption of machine-assisted clinical decision-making. The ultimate goal is to enhance the trust and accuracy of models' predictions and to increase transparency and interpretability, ultimately leading to better decision-making across a range of applications.
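    The abstaining behaviour described in the abstract can be illustrated with a minimal confidence-threshold rule: the model defers whenever its uncertainty is too high. The threshold value, the `REJECT` sentinel, and the use of the maximum class probability as the confidence measure are illustrative assumptions; real clinical systems may use richer uncertainty estimates:

    ```python
    import numpy as np

    REJECT = -1  # sentinel meaning "abstain and defer to the clinician"

    def predict_with_rejection(probs, threshold=0.8):
        """Return the predicted class index only when the model's confidence
        (here, the maximum class probability) reaches the threshold;
        otherwise abstain."""
        probs = np.asarray(probs)
        if probs.max() < threshold:
            return REJECT
        return int(probs.argmax())
    ```

    Raising the threshold trades coverage for reliability: the model answers fewer cases, but the cases it does answer carry lower predictive uncertainty, which is the core idea behind classification with a rejection option.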