588 research outputs found
The severity of stages estimation during hemorrhage using error correcting output codes method
As beneficial components with critical impact, computer-aided decision-making systems have spread into many fields, such as economics, medicine, architecture and agriculture. Their latent capacity to facilitate human work propels the rapid development of such systems, and the effective decisions they provide greatly reduce the expense of labor, energy, budget, etc. A computer-aided decision-making system for traumatic injuries is one such system, supplying suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may include judging the type of illness, triaging the wounded according to battle injuries, grading the severity of symptoms of illness or injury, and managing resources in the context of traumatic events. The proposed computer-aided decision-making system aims at estimating the severity of blood volume loss. Specifically, severe hemorrhage, which accompanies many traumatic injuries, is a potentially life-threatening loss of blood volume that requires immediate treatment because it decreases blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels, such as mild, moderate, or severe. The proposed system will assist physicians by estimating information such as the severity of blood volume loss and hemorrhage, so that timely measures can be taken not only to save lives but also to reduce long-term complications and the cost caused by mismatched operations and treatments. The general framework of the proposed research contains three tasks, into which several novel and transformative concepts are integrated. The first is preprocessing of the raw signals: adaptive filtering is adopted and customized to remove noise, and two detection algorithms (QRS-complex detection and systolic/diastolic wave detection) are designed. The second is feature extraction.
The proposed system combines features from the time domain, the frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that arise when hemorrhage occurs. Third, a machine learning algorithm is designed for classification of these patterns. A novel machine learning algorithm, a new version of error-correcting output codes (ECOC), is designed and investigated for high accuracy and real-time decision making. The features and characteristics of this machine learning method are essential for the proposed computer-aided trauma decision-making system. The proposed system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate its accuracy and reliability.
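The abstract's classifier builds on error-correcting output codes: each class is assigned a codeword, binary learners predict one bit each, and decoding picks the class whose codeword is nearest in Hamming distance, so a few flipped bits can be tolerated. The following minimal sketch illustrates only the generic ECOC decoding step, not the authors' novel variant; the code matrix and class names are illustrative assumptions.

```python
import numpy as np

# Toy ECOC decoder: 3 hemorrhage-severity classes, 5 binary dichotomies.
# Rows are class codewords over {-1, +1}; in a real system each column
# would be the output of a trained binary classifier.
CODE = np.array([
    [ 1,  1,  1, -1, -1],   # class 0: mild
    [-1,  1, -1,  1, -1],   # class 1: moderate
    [-1, -1,  1,  1,  1],   # class 2: severe
])

def ecoc_decode(bit_predictions):
    """Return the class whose codeword is closest in Hamming distance."""
    dists = np.sum(CODE != np.sign(bit_predictions), axis=1)
    return int(np.argmin(dists))

# Even with one flipped bit, decoding still recovers the intended class.
noisy = np.array([1, 1, -1, -1, -1])   # 'mild' codeword with one bit flipped
print(ecoc_decode(noisy))  # -> 0 (mild)
```

The error-correcting margin grows with the minimum Hamming distance between codewords, which is why longer, well-separated codes are preferred when real-time constraints allow.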
Conformal Predictions Enhanced Expert-guided Meshing with Graph Neural Networks
Computational Fluid Dynamics (CFD) is widely used in different engineering
fields, but accurate simulations are dependent upon proper meshing of the
simulation domain. While highly refined meshes may ensure precision, they come
with high computational costs. Similarly, adaptive remeshing techniques require
multiple simulations and come at a great computational cost. This means that
the meshing process is reliant upon expert knowledge and years of experience.
Automating mesh generation can save significant time and effort and lead to a
faster and more efficient design process. This paper presents a machine
learning-based scheme that utilizes Graph Neural Networks (GNN) and expert
guidance to automatically generate CFD meshes for aircraft models. In this
work, we introduce a new 3D segmentation algorithm that outperforms two
state-of-the-art models, PointNet++ and PointMLP, for surface classification.
We also present a novel approach to project predictions from 3D mesh
segmentation models to CAD surfaces using the conformal predictions method,
which provides marginal statistical guarantees and robust uncertainty
quantification and handling. We demonstrate that the addition of conformal
predictions effectively enables the model to avoid under-refinement, hence
failure, in CFD meshing even for weak and less accurate models. Finally, we
demonstrate the efficacy of our approach through a real-world case study that
demonstrates that our automatically generated mesh is comparable in quality to
expert-generated meshes and enables the solver to converge and produce accurate
results. Furthermore, we compare our approach to the alternative of adaptive
remeshing in the same case study and find that our method is five times faster
in the overall simulation process. The code and data for this project are made
publicly available at https://github.com/ahnobari/AutoSurf
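The marginal statistical guarantee mentioned above comes from conformal prediction: nonconformity scores on a held-out calibration set yield a quantile that turns any point predictor, however weak, into prediction sets with coverage at least 1 − α. The sketch below shows the generic split-conformal recipe on synthetic data, not the paper's GNN pipeline; the Gaussian toy model is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Split conformal prediction on a toy regression task. The "model" is
# deliberately weak: it always predicts the constant 2.0.
y_cal = rng.normal(loc=2.0, scale=0.5, size=500)   # calibration targets
y_hat_cal = np.full_like(y_cal, 2.0)               # crude point predictions

alpha = 0.1                                        # target 90% coverage
scores = np.abs(y_cal - y_hat_cal)                 # nonconformity scores
n = len(scores)
# Finite-sample-corrected quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# The interval [y_hat - q, y_hat + q] has marginal coverage >= 1 - alpha
# under exchangeability, regardless of how bad the point predictor is.
y_test = rng.normal(2.0, 0.5, size=2000)
covered = np.mean(np.abs(y_test - 2.0) <= q)
print(round(float(covered), 3))   # close to (and on average at least) 0.90
```

This is exactly the property the paper exploits: even a weak surface-classification model, wrapped conformally, can be steered away from under-refinement by acting on the guaranteed prediction sets rather than on bare point labels.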
Optimization techniques in respiratory control system models
One of the most complex physiological systems whose modeling remains an open problem is the respiratory control system, for which different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system that set the breathing pattern by quantifying respiratory work, and to assess the influence of direct-search versus evolutionary optimization algorithms on the adjustment of model parameters. The study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation, which were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. The breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory durations, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. The algorithms were used for a double optimization: first to minimize the WOB and second to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy and precision. Results showed strong differences in the performance of the optimization algorithms depending on the constraints and topological features of the function to be optimized. In breathing-pattern optimization, the sequential quadratic programming (SQP) technique showed the best performance and convergence speed when respiratory work was low. In addition, SQP made it easiest to implement multiple non-linear constraints through mathematical expressions. Regarding adjustment of model parameters to experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best-quality solutions, with fast convergence and the best accuracy and precision in both models.
CMA-ES reached the best adjustment because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, the results showed that an alternative model has a cost function that is more physiologically appropriate for minimizing WOB according to the experimental data.
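The evolutionary side of the comparison above can be conveyed with a far simpler relative of CMA-ES: a (1+1) evolution strategy with 1/5th-success-rule step adaptation, here minimizing a toy quadratic "work of breathing" surrogate in (tidal volume, breathing frequency). Both the cost function and the optimum (0.5 L, 15 breaths/min) are illustrative assumptions, not the paper's models.

```python
import numpy as np

def wob(x):
    """Toy WOB surrogate: quadratic bowl with minimum at VT=0.5, f=15."""
    vt, f = x
    return (vt - 0.5) ** 2 + 0.2 * (f - 15.0) ** 2

rng = np.random.default_rng(1)
x = np.array([1.5, 25.0])        # initial breathing pattern (far from optimum)
sigma = 1.0                      # mutation step size

for _ in range(400):
    cand = x + sigma * rng.standard_normal(2)
    if wob(cand) < wob(x):
        x, sigma = cand, sigma * 1.1     # success: widen the search
    else:
        sigma *= 0.98                    # failure: shrink the step

print(np.round(x, 2))   # should approach (0.5, 15.0)
```

CMA-ES extends this idea by adapting a full covariance matrix rather than a single scalar step size, which is what gives it the robustness on noisy, multi-peaked fitness landscapes credited in the abstract.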
Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
The full acceptance of Deep Learning (DL) models in the clinical field remains
low relative to the quantity of high-performing solutions reported in the
literature. In particular, end users are reluctant to rely on the bare point
predictions of DL models. Uncertainty quantification methods have been proposed
in the literature as a potential way to temper the raw decisions provided by
the DL black box and thus increase the interpretability and acceptability of
the results for the final user. In this review, we give an overview of existing
methods to quantify the uncertainty associated with DL predictions. We focus on
applications to medical image analysis, which present specific challenges due
to the high dimensionality of images and their variable quality, as well as
constraints imposed by real-life clinical routine. We then discuss evaluation
protocols for validating the relevance of uncertainty estimates. Finally, we
highlight the open challenges of uncertainty quantification in the medical
field.
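One widely used family of methods covered by such reviews is the deep ensemble: several independently trained models are queried, and their disagreement serves as an epistemic-uncertainty estimate. The numpy sketch below stands in for the DL case with bootstrapped linear fits; the data, model, and query points are illustrative assumptions, not drawn from the review.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training data: y = 3x plus observation noise on [0, 1].
x_train = rng.uniform(0, 1, size=40)
y_train = 3.0 * x_train + rng.normal(0, 0.1, size=40)

# Ensemble of 10 "models": bootstrapped least-squares slopes through origin.
members = []
for _ in range(10):
    idx = rng.integers(0, len(x_train), len(x_train))
    slope = np.sum(x_train[idx] * y_train[idx]) / np.sum(x_train[idx] ** 2)
    members.append(slope)

# Query inside the training range (0.5) and far outside it (2.0).
x_query = np.array([0.5, 2.0])
preds = np.outer(members, x_query)          # each row: one member's predictions
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(std[1] > std[0])   # disagreement grows off-distribution -> True
```

The same pattern, ensemble mean as the prediction and ensemble spread as the uncertainty, is what lets a clinical system flag out-of-distribution images for human review instead of returning an unqualified point prediction.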
A review of probabilistic forecasting and prediction with machine learning
Predictions and forecasts of machine learning models should take the form of
probability distributions, aiming to increase the quantity of information
communicated to end users. Although applications of probabilistic prediction
and forecasting with machine learning models in academia and industry are
becoming more frequent, related concepts and methods have not been formalized
and structured under a holistic view of the entire field. Here, we review the
topic of predictive uncertainty estimation with machine learning algorithms, as
well as the related metrics (consistent scoring functions and proper scoring
rules) for assessing probabilistic predictions. The review covers a time period
spanning from the introduction of early statistical approaches (linear
regression and time series models, based on Bayesian statistics or quantile
regression) to recent machine learning algorithms (including generalized
additive models for location, scale and shape, random forests, boosting and
deep learning algorithms) that are more flexible by nature. Reviewing the
progress in the field expedites our understanding of how to develop new
algorithms tailored to users' needs, since the latest advancements build on
fundamental concepts applied to more complex algorithms. We conclude by
classifying the material and discussing challenges that are becoming hot
topics of research.
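A central proper scoring rule in this literature is the Continuous Ranked Probability Score (CRPS), which rewards forecasts that are both calibrated and sharp. The sketch below estimates the CRPS of an ensemble forecast via the standard energy form, CRPS = E|X − y| − ½·E|X − X′|; the Gaussian forecast distributions are illustrative assumptions.

```python
import numpy as np

def crps_ensemble(samples, y):
    """Estimate CRPS of an ensemble forecast `samples` for observation y."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))                      # E|X - y|
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2                                      # - 0.5*E|X - X'|

rng = np.random.default_rng(0)
y_obs = 1.0
sharp = rng.normal(1.0, 0.2, 1000)    # well-centred, sharp forecast
vague = rng.normal(1.0, 2.0, 1000)    # well-centred but diffuse forecast

# Propriety in action: among centred forecasts, the sharper one scores lower.
print(crps_ensemble(sharp, y_obs) < crps_ensemble(vague, y_obs))  # True
```

Because CRPS collapses to the absolute error for a point forecast, it gives a single scale on which deterministic and probabilistic predictors can be compared, one reason reviews of the field lean on it so heavily.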
Joint learning from multiple information sources for biological problems
Thanks to technological advancements, more and more biological data have been generated in recent years. Data availability offers unprecedented opportunities to look at the same problem from multiple aspects. It also unveils a more global view of the problem, one that takes into account the intricate interplay between the involved molecules and entities. Nevertheless, biological datasets are biased, limited in quantity, and contain many false-positive samples. Such challenges often drastically degrade the performance of a predictive model on unseen data and thus limit its applicability in real biological studies.
Human learning is a multi-stage process in which we usually start with simple things. Through knowledge accumulated over time, our cognitive ability extends to more complex concepts. Children learn to speak simple words before being able to formulate sentences. Similarly, being able to speak correct sentences supports learning to speak correct and meaningful paragraphs, and so on. Generally, knowledge acquired from related learning tasks helps boost our learning capability in the current task. Motivated by this phenomenon, in this thesis we study supervised machine learning models for bioinformatics problems that can improve their performance by exploiting multiple related knowledge sources. More specifically, we are concerned with ways to enrich the supervised models' knowledge base with publicly available related data to enhance the computational models' prediction performance.
Our work shares commonality with existing work in multimodal learning, multi-task learning, and transfer learning, though there are certain differences in some cases. Besides the proposed architectures, we present large-scale experimental setups with consensus evaluation metrics, along with the creation and release of large datasets, to showcase our approaches' superiority. Moreover, we add case studies with detailed analyses in which we make no simplifying assumptions, to demonstrate the systems' utility in realistic application scenarios. Finally, we develop and make available an easy-to-use website that lets non-expert users query the model's generated predictions, facilitating field experts' assessment and adoption. We believe that our work serves as one of the first steps in bridging the gap between "Computer Science" and "Biology", opening a new era of fruitful collaboration between computer scientists and biological field experts.