12 research outputs found

    A Review on the Development of Fuzzy Classifiers with Improved Interpretability and Accuracy Parameters

    This review of fuzzy classifiers with improved interpretability and accuracy parameters discusses the most fundamental aspects of a very effective and powerful tool in the form of probabilistic reasoning. The fuzzy logic concept allows the effective realization of approximate, vague, uncertain, dynamic, and more realistic conditions, which is closer to the actual physical world and to human thinking. Fuzzy theory has the ability to capture the imprecision of linguistic terms in natural-language speech, and it provides a significant capacity to model human-like common-sense reasoning and decision making through fuzzy sets, rules, and well-chosen membership functions. The paper also reviews the evolution of the fuzzy set and of type-1, type-2, and interval type-2 fuzzy systems from traditional Boolean crisp-set logic, along with interpretability and accuracy issues in fuzzy systems.
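
    As a rough illustration of the membership-function concept mentioned above (not taken from the reviewed paper), the sketch below defines a type-1 triangular membership function and a hypothetical interval type-2 variant built from two type-1 bounds; all breakpoints are made-up values.

```python
# Illustrative sketch only: a type-1 triangular membership function and an
# interval type-2 variant formed from two type-1 bounds, as commonly used
# in fuzzy classifiers. Breakpoint values are hypothetical.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Type-1 triangular membership: rises from a to the peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def interval_type2(x: float) -> tuple[float, float]:
    """Interval type-2 membership: a lower and an upper type-1 bound whose gap
    (the footprint of uncertainty) models imprecise linguistic terms."""
    lower = triangular(x, 2.0, 5.0, 8.0)   # hypothetical narrower bound
    upper = triangular(x, 1.0, 5.0, 9.0)   # hypothetical wider bound
    return lower, upper

if __name__ == "__main__":
    print(triangular(4.0, 2.0, 5.0, 8.0))   # membership of 4.0 in a "medium" set
    print(interval_type2(4.0))              # (lower, upper) membership pair
```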

    External clustering validity index based on chi-squared statistical test

    Clustering is one of the most commonly used techniques in data mining. Its main goal is to group objects into clusters so that each cluster contains objects that are more similar to each other than to objects in other clusters. A clustering solution is evaluated by applying validity indices. These indices measure the quality of the solution and can be classified as internal indices, which assess quality using only the data within the clusters, or external indices, which measure quality by means of external information such as class labels. Indices from the literature generally determine their optimal result through graphical representation, whose results can be imprecisely interpreted. The aim of this paper is to present a new external validity index based on the chi-squared statistical test, named Chi Index, which produces accurate results that require no further interpretation. Chi Index was analyzed using the clustering results of 3 clustering methods on 47 public datasets. Results indicate a better hit rate and a lower percentage of error compared with 15 external validity indices from the literature. Funding: Ministerio de Economía y Competitividad TIN2014-55894-C2-R; Ministerio de Economía y Competitividad TIN2017-88209-C2-2-.
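
    The abstract does not give the Chi Index formula, so the sketch below only illustrates the general idea of an external validity measure based on the chi-squared test: cross-tabulate cluster labels against known class labels and test their association. The function name and example data are hypothetical.

```python
# Illustrative sketch only: a chi-squared-based external validity measure.
# The exact Chi Index formula is not given in the abstract; this just shows
# the underlying idea of testing the association between clusters and classes.
import numpy as np
from scipy.stats import chi2_contingency

def chi_squared_external_index(cluster_labels, class_labels):
    """Cross-tabulate clusters against classes and test their association."""
    clusters = np.unique(cluster_labels)
    classes = np.unique(class_labels)
    table = np.zeros((len(clusters), len(classes)), dtype=int)
    for i, k in enumerate(clusters):
        for j, c in enumerate(classes):
            table[i, j] = np.sum((cluster_labels == k) & (class_labels == c))
    stat, p_value, dof, _ = chi2_contingency(table)
    return stat, p_value

# Example: three clusters evaluated against two ground-truth classes.
clusters = np.array([0, 0, 1, 1, 2, 2, 2, 0])
classes  = np.array([0, 0, 1, 1, 1, 1, 0, 0])
print(chi_squared_external_index(clusters, classes))
```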

    Polytrauma: Definition of the Term and Patient Management Tactics (A Review)

    Polytrauma is a highly relevant problem from both scientific and clinical perspectives due to its high mortality rate (>20% in young and middle-aged individuals and >45% in the elderly). The lack of consensus on the definition of polytrauma complicates data collection and the comparison of available datasets. In addition, selecting the most appropriate management strategy, which determines the quality of medical care and the magnitude of invested resources, can be challenging.
    Aim of the review. To revisit the current definition of polytrauma and define promising directions for the diagnosis and management of patients with polytrauma.
    Material and methods. Based on the data of 93 selected publications, we studied the mortality trends in trauma and the main causes of lethal outcomes, analyzed polytrauma severity scales and determined their potential flaws, and examined the guidelines for choosing the orthosurgical strategy according to the severity of the patient's condition.
    Results. The pattern of mortality trends in trauma directly depends on the adequacy of severity assessment and the quality of medical care. The Berlin definition of polytrauma in combination with the mCGS/PTGS scale most accurately classifies polytrauma into four severity groups. For "stable" patients, primary definitive osteosynthesis with internal fixation (early total care, or ETC) is the gold standard of treatment. For the "borderline" and "unstable" groups, no definitive unified strategy has been adopted. Meanwhile, in "critical" patients, priority is given to stabilization of the general condition followed by delayed major surgery (damage control orthopaedics, or DCO), which increases survival.
    Conclusion. The use of artificial intelligence and machine learning, which have already been employed for narrower goals (predicting mortality and several common complications), appears reasonable for planning the management strategy in the "controversial" groups. A clinical decision support system based on a unified patient registry could improve the quality of care for polytrauma, even when that care is provided by less experienced physicians.

    A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?

    Artificial intelligence (AI) models are increasingly finding applications in the field of medicine, and concerns have been raised about the explainability of the decisions they make. In this article, we give a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models currently being used in the field of healthcare. The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards for relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which research is headed. We investigate the why, how, and when of the use of these XAI models and their implications. We present a comprehensive examination of XAI methodologies, as well as an explanation of how trustworthy AI can be derived by making AI models for healthcare explainable. The discussion of this work will contribute to the formalization of the XAI field. Comment: 15 pages, 3 figures, accepted for publication in IEEE Transactions on Artificial Intelligence.
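
    As a generic illustration of the kind of post-hoc, model-agnostic explanation methods surveyed in XAI work (not a method proposed in this review), the sketch below computes permutation feature importance for a classifier trained on synthetic data; the feature names and data are hypothetical.

```python
# Illustrative sketch only: post-hoc, model-agnostic explanation via
# permutation feature importance. Model, features, and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # synthetic "patient" features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # outcome driven by features 0 and 2

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["age", "lab_a", "lab_b", "lab_c"], result.importances_mean):
    print(f"{name}: {score:.3f}")                   # higher = more influential feature
```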

    Heterogeneous Ensemble Models for In-Hospital Mortality Prediction

    The use of Electronic Health Records data has grown extensively as they become more accessible. In machine learning, they are used as input for a large array of problems, as the records are rich and contain different types of variables, including structured data (e.g., demographics), free text (e.g., medical notes), and time series data (e.g., vital-sign measurements). In this work, we explore the use of these different types of data for the task of in-hospital mortality prediction, which seeks to predict the outcome of death for patients admitted to the hospital, using only the window of the first 48 hours of the patient's stay. We built several machine learning models, such as LSTM, TCN, and Logistic Regression, for each data type, and combined them into a heterogeneous ensemble model using the stacking strategy. By applying state-of-the-art deep learning algorithms for classification tasks and using their predictions as a new representation of our data, we could assess whether the classifier ensemble can leverage information extracted from models trained on different data types. Our experiments on a set of 20K ICU stays from the MIMIC-III dataset show that the ensemble method brings an increase of three percentage points, achieving an AUROC of 0.853 (95% CI [0.846, 0.861]), a TP Rate of 0.800, and a weighted F-Score of 0.795.
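
    A minimal sketch of the stacking idea described above, assuming each base model has already produced out-of-fold probability estimates from its own data type; the base-model outputs here are simulated, so the numbers are not comparable to the reported results.

```python
# Minimal stacking sketch: each column holds the probability predicted by one
# hypothetical base model (e.g., LSTM on vitals, TCN on notes, LogReg on
# demographics); a logistic regression meta-learner combines them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=2000)                       # in-hospital mortality label
base_preds = np.column_stack([
    np.clip(y + rng.normal(0, 0.6, size=2000), 0, 1),   # simulated time-series model output
    np.clip(y + rng.normal(0, 0.8, size=2000), 0, 1),   # simulated text model output
    np.clip(y + rng.normal(0, 1.0, size=2000), 0, 1),   # simulated demographics model output
])

X_tr, X_te, y_tr, y_te = train_test_split(base_preds, y, test_size=0.25, random_state=0)
meta = LogisticRegression().fit(X_tr, y_tr)              # stacking meta-learner
print("ensemble AUROC:", roc_auc_score(y_te, meta.predict_proba(X_te)[:, 1]))
```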