Self learning strategies for experimental design and response surface optimization
Most preset RSM designs offer ease of implementation and good performance over a wide range of process and design optimization applications. However, these designs often cannot adapt to the characteristics of the application and the experimental space so as to reduce the number of experiments required. Hence, they are not cost effective for applications where the cost of experimentation is high or experimentation resources are limited. In this dissertation, we present a number of self-learning strategies for optimizing different types of response surfaces in industrial experiments with noise, high experimentation cost, and demanding design optimization performance requirements. The proposed approach is a sequential adaptive experimentation approach that combines concepts from nonlinear optimization, non-parametric regression, statistical analysis, and response surface optimization. The proposed strategies use the information gained from previous experiments to design the subsequent experiment, simultaneously reducing the region of interest and identifying factor combinations for new experiments. Their major advantage is experimentation efficiency: for a given response target, they identify the input factor combination (or a region containing it) in fewer experiments than classical designs. Through extensive simulated experiments and real-world case studies, we show that the proposed ASRSM method clearly outperforms the classical CCD and BBD designs, performs better than A-, D-, and V-optimal designs on average, and compares favorably with global optimization methods including Gaussian process and radial basis function (RBF) approaches.
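To make the idea of sequential adaptive experimentation concrete, the following is a minimal Python sketch of a region-shrinking loop in the same spirit: fit a local quadratic surrogate to a small design in the current region of interest, recenter on the predicted optimum, and contract the region. The toy response `run_experiment`, the face-centered design, and the shrink factor are illustrative assumptions, not the ASRSM algorithm itself.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def run_experiment(x):
    """Hypothetical noisy response; stands in for a real industrial experiment."""
    return -((x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2) + rng.normal(scale=0.05)

center = np.zeros(2)          # current center of the region of interest
half_width = 1.0              # current region half-width (coded units)
shrink = 0.6                  # how aggressively the region contracts each round

for round_ in range(5):
    # Small face-centered design inside the current region of interest.
    offsets = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                        [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]])
    X = center + half_width * offsets
    y = np.array([run_experiment(x) for x in X])

    # Local quadratic surrogate fitted to this round's observations.
    model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
    model.fit(X, y)

    # Recenter on the best predicted point over a grid spanning the region.
    grid = center + half_width * np.stack(
        np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21)), axis=-1
    ).reshape(-1, 2)
    center = grid[np.argmax(model.predict(grid))]
    half_width *= shrink      # contract the region of interest for the next round

    print(f"round {round_}: center={center.round(3)}, half_width={half_width:.3f}")
```

Each round spends only a handful of runs, which is the property the abstract emphasizes: information from earlier experiments shapes where, and over how small a region, the next experiments are placed.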
Probabilistic models for patient scheduling
In spite of the success of theoretical appointment scheduling methods, there have been significant failures in practice, primarily due to the rapid increase in the number of no-shows and cancellations in recent times. These disruptions not only cause inconvenience to management but also have a significant impact on revenue, cost, and resource utilization. In this research, we develop a hybrid probabilistic model based on logistic regression and Bayesian inference to predict the probability of no-shows in real time. We also develop two novel non-sequential and sequential optimization models which can effectively use no-show probabilities for scheduling patients. Our integrated prediction and optimization model can be used to enable a precise overbooking strategy that reduces the negative effect of no-shows and fills appointment slots while maintaining short wait times. Using both simulated and real-world data, we demonstrate the effectiveness of the proposed hybrid predictive model and scheduling strategy compared to some of the well-studied approaches available in the literature.
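As a rough illustration of how a logistic regression estimate might be blended with per-patient Bayesian updating (the related entry's keywords mention logistic regression, a Beta distribution, and Bayesian inference), here is a hedged Python sketch; the features, prior parameters, and blending rule are assumptions rather than the paper's hybrid model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical historical appointment records: [lead_time_days, prior_no_show_rate, age]
X_hist = rng.normal(size=(500, 3))
y_hist = rng.binomial(1, 0.2, size=500)      # 1 = no-show

# Population-level model: covariates -> no-show probability.
clf = LogisticRegression().fit(X_hist, y_hist)

def no_show_probability(x_patient, shows, no_shows, prior_a=1.0, prior_b=4.0):
    """Blend the regression estimate with a Beta-Bernoulli posterior built from
    the patient's own attendance history (an illustrative combination rule,
    not the paper's exact hybrid model)."""
    p_reg = clf.predict_proba(x_patient.reshape(1, -1))[0, 1]
    # Beta posterior mean over the patient's personal history.
    p_bayes = (prior_a + no_shows) / (prior_a + prior_b + shows + no_shows)
    n = shows + no_shows
    w = n / (n + 5.0)                         # trust the history more as it accumulates
    return (1 - w) * p_reg + w * p_bayes

# Example: a patient with 8 kept appointments and 2 no-shows.
p = no_show_probability(np.array([0.5, -0.1, 0.3]), shows=8, no_shows=2)
print(f"predicted no-show probability: {p:.2f}")
```

A real-time estimate of this kind is what the scheduling and overbooking models described above would consume for each appointment slot.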
ASRSM: A Sequential Experimental Design for Response Surface Optimization
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/96749/1/qre1306.pd
Machine learning tools to improve nonlinear modeling parameters of RC columns
Modeling parameters are essential to the fidelity of nonlinear models of
concrete structures subjected to earthquake ground motions, especially when
simulating seismic events strong enough to cause collapse. This paper addresses
two of the most significant barriers to improving nonlinear modeling provisions
in seismic evaluation standards using experimental data sets: identifying the
most likely mode of failure of structural components, and implementing data
fitting techniques capable of recognizing interdependencies between input
parameters and nonlinear relationships between input parameters and model
outputs. Machine learning tools in the Scikit-learn and PyTorch libraries were
used to calibrate equations and black-box numerical models for nonlinear
modeling parameters (MP) a and b of reinforced concrete columns defined in the
ASCE 41 and ACI 369.1 standards, and to estimate their most likely mode of
failure. It was found that machine learning regression models and machine
learning black-box models were more accurate than the current provisions in the ACI
369.1/ASCE 41 Standards. Among the regression models, Regularized Linear
Regression was the most accurate for estimating MP a, and Polynomial Regression
was the most accurate for estimating MP b. The two black-box models evaluated,
namely the Gaussian Process Regression and the Neural Network (NN), provided
the most accurate estimates of MPs a and b. The NN model was the most accurate
machine learning tool of all evaluated. A multi-class classification tool from
the Scikit-learn machine learning library correctly identified column mode of
failure with 79% accuracy for rectangular columns and with 81% accuracy for
circular columns, a substantial improvement over the classification rules in
ASCE 41-13.
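The following Python sketch illustrates the kind of Scikit-learn workflow the abstract describes: a regularized linear regression for MP a, a polynomial regression for MP b, and a multi-class classifier for failure mode. The feature set, the synthetic data, and the choice of a random forest classifier are placeholders, not the calibrated models from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Placeholder column features: axial load ratio, transverse reinforcement ratio,
# shear span ratio, concrete strength -- stand-ins for a real column test database.
X = rng.normal(size=(300, 4))
mp_a = 0.02 + 0.005 * X[:, 1] - 0.004 * X[:, 0] + rng.normal(scale=0.002, size=300)
mp_b = mp_a + np.abs(rng.normal(scale=0.01, size=300))
failure_mode = rng.integers(0, 3, size=300)   # 0=flexure, 1=flexure-shear, 2=shear

X_tr, X_te, a_tr, a_te, b_tr, b_te, f_tr, f_te = train_test_split(
    X, mp_a, mp_b, failure_mode, random_state=0)

# Regularized linear regression for MP a (reported above as most accurate for a).
model_a = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X_tr, a_tr)

# Polynomial regression for MP b.
model_b = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                        Ridge(alpha=1.0)).fit(X_tr, b_tr)

# Multi-class classifier for the most likely mode of failure.
clf_mode = RandomForestClassifier(random_state=0).fit(X_tr, f_tr)

print("R^2 for MP a:", round(model_a.score(X_te, a_te), 3))
print("R^2 for MP b:", round(model_b.score(X_te, b_te), 3))
print("failure-mode accuracy:", round(clf_mode.score(X_te, f_te), 3))
```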
Nurse-in-the-Loop Artificial Intelligence for Precision Management of Type 2 Diabetes in a Clinical Trial Utilizing Transfer-Learned Predictive Digital Twin
Background: Type 2 diabetes (T2D) is a prevalent chronic disease with a
significant risk of serious health complications and negative impacts on the
quality of life. Given the impact of individual characteristics and lifestyle
on the treatment plan and patient outcomes, it is crucial to develop precise
and personalized management strategies. Artificial intelligence (AI) provides
great promise in combining patterns from various data sources with nurses'
expertise to achieve optimal care. Methods: This is a 6-month ancillary study
among T2D patients (n = 20, age = 57 ± 10). Participants were randomly
assigned to an intervention group (AI, n = 10) that received daily AI-generated
individualized feedback or a control group (non-AI, n = 10) that did not receive
the daily feedback during the last three months. The study developed an online
nurse-in-the-loop predictive control (ONLC) model that utilizes a predictive
digital twin (PDT). The PDT was developed using a transfer-learning-based
Artificial Neural Network. The PDT was trained on participants' self-monitoring
data (weight, food logs, physical activity, glucose) from the first three
months, and the online control algorithm applied particle swarm optimization to
identify impactful behavioral changes for maintaining the patient's glucose and
weight levels for the next three months. The ONLC provided the intervention
group with individualized feedback and recommendations via text messages. The
PDT was re-trained weekly to improve its performance. Findings: The trained
ONLC model achieved >=80% prediction accuracy across all patients while the
model was tuned online. Participants in the intervention group exhibited a
trend of improved daily steps and stable or improved total caloric and total
carb intake as recommended.
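The control loop described above can be sketched, under heavy simplification, as a predictor of next-day outcomes searched by particle swarm optimization over candidate daily behaviors. In the Python sketch below, `predict_outcome`, the behavior bounds, and the PSO hyperparameters are all illustrative stand-ins for the transfer-learned PDT and the ONLC model.

```python
import numpy as np

rng = np.random.default_rng(3)

def predict_outcome(behavior):
    """Stand-in for the predictive digital twin: maps [steps (thousands),
    calories (kcal), carbs (g)] to a penalty on predicted next-day glucose
    and weight deviation."""
    steps, calories, carbs = behavior
    glucose_dev = 0.02 * carbs + 0.01 * calories - 0.4 * steps
    weight_dev = 0.001 * calories - 0.05 * steps
    return glucose_dev ** 2 + weight_dev ** 2

# Particle swarm optimization over feasible daily behavior targets.
lo = np.array([2.0, 1200.0, 80.0])      # lower bounds: k-steps, kcal, g carbs
hi = np.array([15.0, 2500.0, 300.0])
n_particles, n_iter = 30, 100

pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
best_pos = pos.copy()
best_val = np.array([predict_outcome(p) for p in pos])
g_best = best_pos[np.argmin(best_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([predict_outcome(p) for p in pos])
    improved = val < best_val
    best_pos[improved], best_val[improved] = pos[improved], val[improved]
    g_best = best_pos[np.argmin(best_val)]

print("suggested daily targets [k-steps, kcal, g carbs]:", g_best.round(1))
```

In the study's setting, the suggested targets would be translated into the individualized text-message feedback sent to the intervention group, with the twin re-trained weekly on new self-monitoring data.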
Incorporation of Eye-Tracking and Gaze Feedback to Characterize and Improve Radiologist Search Patterns of Chest X-rays: A Randomized Controlled Clinical Trial
Diagnostic errors in radiology often occur due to incomplete visual
assessments by radiologists, despite their knowledge of predicting disease
classes. This insufficiency is possibly linked to the absence of required
training in search patterns. Additionally, radiologists lack consistent
feedback on their visual search patterns, relying on ad-hoc strategies and peer
input to minimize errors and enhance efficiency, leading to suboptimal patterns
and potential false negatives. This study aimed to use eye-tracking technology
to analyze radiologist search patterns, quantify performance using established
metrics, and assess the impact of an automated feedback-driven educational
framework on detection accuracy. Ten residents participated in a controlled
trial focused on detecting suspicious pulmonary nodules. They were divided into
an intervention group (received automated feedback) and a control group.
Results showed that the intervention group exhibited a 38.89% absolute
improvement in detecting suspicious-for-cancer nodules, surpassing the control
group's improvement (5.56%, p-value=0.006). Improvement was more rapid over the
four training sessions (p-value=0.0001). However, other metrics such as speed,
search pattern heterogeneity, distractions, and coverage did not show
significant changes. In conclusion, implementing an automated feedback-driven
educational framework improved radiologist accuracy in detecting suspicious
nodules. The study underscores the potential of such systems in enhancing
diagnostic performance and reducing errors. Further research and broader
implementation are needed to consolidate these promising results and develop
effective training strategies for radiologists, ultimately benefiting patient
outcomes.
Using Bayesian networks for root cause analysis in statistical process control
Despite their popularity and capability in detecting out-of-control conditions, control charts are not effective tools for fault diagnosis. Other techniques in the literature, based mainly on process information and control chart patterns, help supplement control charts for root cause analysis. However, these methods are limited in practice due to their dependence on the expertise of practitioners. In this study, we develop a network for capturing the cause-and-effect relationships among chart patterns, process information, and possible root causes/assignable causes. This network is then trained under the framework of Bayesian networks with a suggested data structure using process information and chart patterns. The proposed method provides real-time identification of single and multiple assignable causes of failure as well as false alarms, while improving its own performance by learning from mistakes. It also has acceptable performance on missing data. This is demonstrated by comparing the performance of the proposed method with methods such as neural networks and K-Nearest Neighbors in extensive simulation studies.
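As a rough illustration of the inference step, the sketch below computes a posterior over candidate assignable causes from observed chart patterns using a naive-Bayes simplification (conditionally independent patterns) rather than the trained network structure in the paper; the causes, patterns, and probability values are invented for illustration, and missing evidence is simply skipped.

```python
import numpy as np

# Illustrative root causes and chart-pattern evidence variables (not from the paper).
causes = ["tool_wear", "raw_material_shift", "operator_error", "false_alarm"]
patterns = ["trend_up", "sudden_shift", "cyclic"]

prior = np.array([0.30, 0.25, 0.15, 0.30])            # P(cause), illustrative

# P(pattern observed | cause): rows = causes, columns = patterns.
likelihood = np.array([
    [0.80, 0.10, 0.05],   # tool_wear mostly produces upward trends
    [0.15, 0.85, 0.10],   # raw_material_shift mostly produces sudden shifts
    [0.30, 0.40, 0.20],   # operator_error is less pattern-specific
    [0.05, 0.10, 0.05],   # false_alarm rarely matches any pattern strongly
])

def posterior_over_causes(evidence):
    """Posterior P(cause | observed patterns). `evidence` maps a pattern name to
    0/1; patterns not present are treated as missing and skipped, mirroring the
    tolerance to missing data described in the abstract."""
    post = prior.copy()
    for j, p in enumerate(patterns):
        if p not in evidence:
            continue
        post *= likelihood[:, j] if evidence[p] else (1.0 - likelihood[:, j])
    return post / post.sum()

# Example: a sudden shift was flagged, trend information is missing.
post = posterior_over_causes({"sudden_shift": 1})
for c, p in zip(causes, post):
    print(f"P({c} | evidence) = {p:.2f}")
```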
One-Step Deadbeat Control of a 5-Link Biped Using Data-Driven Nonlinear Approximation of the Step-to-Step Dynamics
For bipedal robots to walk over complex and constrained environments (e.g., narrow walkways, stepping stones), they have to meet precise control objectives of speed and foot placement at every single step. Control that achieves these objectives exactly at every step is known as one-step deadbeat control. The high dimensionality of bipedal systems and their under-actuation (the number of joints exceeds the number of actuators) present a formidable computational challenge to achieving real-time control. In this paper, we present a computationally efficient method for one-step deadbeat control and demonstrate it on a 5-link planar bipedal model with 1 degree of under-actuation. Our method applies computed torque control to the 4 actuated degrees of freedom to decouple and reduce the dimensionality of the stance phase dynamics to a single degree of freedom. This simplification ensures that the step-to-step dynamics reduce to a single equation. Then, using Monte Carlo sampling, we generate data for approximating the step-to-step dynamics, followed by curve fitting with a control-affine model and a Gaussian process error model. We use the control-affine model to compute control inputs via feedback linearization and fine-tune them with iterative learning control based on the Gaussian process error model, enabling one-step deadbeat control. We demonstrate the approach in simulation in scenarios involving stabilization against perturbations, following a changing velocity reference, and precise foot placement. We conclude that computed torque control-based model reduction and sampling-based approximation of the step-to-step dynamics provide a computationally efficient approach for real-time one-step deadbeat control of complex bipedal systems.
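A stripped-down version of this pipeline, on a stand-in one-dimensional step-to-step map rather than the 5-link biped, might look like the following: Monte Carlo sampling of the map, a least-squares fit of a control-affine model, a Gaussian process model of the residual, and inversion of the affine model for a one-step deadbeat input. The dynamics, basis functions, and single-pass residual correction are assumptions standing in for the paper's iterative learning refinement.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(4)

def true_step_map(x, u):
    """Stand-in for the reduced step-to-step dynamics (one state, one input)."""
    return 0.8 * x + 0.3 * np.sin(x) + (1.0 + 0.2 * x) * u

# Monte Carlo sampling of the step-to-step map over the operating range.
x_s = rng.uniform(-1.0, 1.0, size=400)
u_s = rng.uniform(-0.5, 0.5, size=400)
x_next = true_step_map(x_s, u_s)

# Fit a control-affine model x_{k+1} ~= f(x_k) + g(x_k) u_k with polynomial f, g
# via least squares: regressor columns are [1, x, x^2, u, x*u].
A = np.column_stack([np.ones_like(x_s), x_s, x_s**2, u_s, x_s * u_s])
theta, *_ = np.linalg.lstsq(A, x_next, rcond=None)

def f_hat(x): return theta[0] + theta[1] * x + theta[2] * x**2
def g_hat(x): return theta[3] + theta[4] * x

# Gaussian process model of the residual error of the control-affine fit.
resid = x_next - (f_hat(x_s) + g_hat(x_s) * u_s)
gp = GaussianProcessRegressor().fit(np.column_stack([x_s, u_s]), resid)

def deadbeat_input(x, x_target):
    """Invert the affine model so the predicted next step hits the target, then
    correct once with the GP residual (a simplified stand-in for the iterative
    learning refinement described in the abstract)."""
    u = (x_target - f_hat(x)) / g_hat(x)
    err = gp.predict(np.array([[x, u]]))[0]
    return (x_target - f_hat(x) - err) / g_hat(x)

x, target = 0.6, 0.0
u = deadbeat_input(x, target)
print(f"u = {u:.3f}, achieved next state = {true_step_map(x, u):.3f}")
```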
A probabilistic model for predicting the probability of no-show in hospital appointments
Logistic regression, Beta distribution, Bayesian inference, Healthcare, Scheduling
An integrated framework for reducing hospital readmissions using risk trajectories characterization and discharge timing optimization
When patients leave the hospital for lower levels of care, they face a risk of adverse events on a daily basis. The advent of value-based purchasing, among other major initiatives, has led to an increasing emphasis on reducing the occurrence of these post-discharge adverse events. This has spurred the development of new prediction technologies to identify which patients are at risk for an adverse event, as well as actions to mitigate those risks. Those actions include pre-discharge and post-discharge interventions to reduce risk. However, traditional prediction models have been developed to support only post-discharge actions, predicting the risk of adverse events only at the time of discharge. In this paper we develop an integrated framework of risk prediction and discharge optimization that supports both types of interventions: discharge timing and post-discharge monitoring. Our method combines a kernel approach for capturing the non-linear relationship between length of stay and risk of an adverse event with a Principal Component Analysis method that makes the resulting estimation tractable. We then demonstrate how this prediction model can be used to support both types of interventions by developing a simple and easily implementable discharge timing optimization.
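To give a flavor of such a framework, the Python sketch below builds kernel features of length of stay, compresses them with PCA before a logistic risk model, and picks a discharge day by trading estimated post-discharge risk against an assumed daily inpatient cost. The data, kernel centers, and cost weights are placeholders and do not reproduce the paper's kernel/PCA estimator or its optimization model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)

# Hypothetical training data: length of stay (days) and post-discharge adverse event.
los = rng.uniform(1, 14, size=(600, 1))
adverse = rng.binomial(1, np.clip(0.5 - 0.03 * los[:, 0], 0.05, 0.95))

# Kernel features of length of stay, made tractable with PCA, feeding a risk model.
anchors = np.linspace(1, 14, 20).reshape(-1, 1)     # kernel centers (an assumption)
K = rbf_kernel(los, anchors, gamma=0.3)
risk_model = make_pipeline(PCA(n_components=5), LogisticRegression()).fit(K, adverse)

def risk_at(day):
    """Estimated probability of a post-discharge adverse event if discharged on `day`."""
    k = rbf_kernel(np.array([[day]]), anchors, gamma=0.3)
    return risk_model.predict_proba(k)[0, 1]

def choose_discharge_day(max_day=14, daily_cost=1.0, event_cost=20.0):
    """Pick the day minimizing an illustrative total of inpatient cost plus
    expected post-discharge adverse-event cost."""
    days = np.arange(1, max_day + 1)
    totals = [d * daily_cost + risk_at(d) * event_cost for d in days]
    return days[int(np.argmin(totals))]

print("suggested discharge day:", choose_discharge_day())
```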