5 research outputs found
Mathematical models of drug delivery via a contact lens during wear
In this work we develop and investigate mathematical and computational models
that describe drug delivery from a contact lens during wear. Our models are
designed to predict the dynamics of drug release from the contact lens and
subsequent transport into the adjacent pre-lens tear film and post-lens tear
film as well as into the ocular tissue (e.g. cornea), into the eyelid, and out
of these regions. These processes are modeled by one-dimensional diffusion out
of the lens coupled to compartment-type models for drug concentrations in the
various accompanying regions. In addition to numerical solutions that are
compared with experimental data on drug release in an in vitro eye model, we
also identify a large diffusion limit model for which analytical solutions can
be written down for all quantities of interest, such as cumulative release of
the drug from the contact lens. We use our models to make assessments about
possible mechanisms and drug transport pathways through the pre-lens and
post-lens tear films and provide interpretation of experimental observations.
We discuss successes and limitations of our models as well as their potential
to guide further research to help understand the dynamics of ophthalmic drug
delivery via drug-eluting contact lenses.
Comment: 44 pages, 20 figures, 4 tables
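The coupling of lens release to compartment-type models described above can be sketched as a small linear ODE system. The rate constants, the equal split between pre- and post-lens films, and the Euler time step below are illustrative assumptions, not the paper's calibrated values; in the large-diffusion limit the release from the lens reduces to a single exponential, which is what the first equation encodes.

```python
# Minimal sketch of a compartment-type drug delivery model (all parameter
# values and the pre/post split fraction are hypothetical, for illustration):
#   dM_lens/dt = -k_lens * M_lens                       (release from lens)
#   dM_pre/dt  =  f*k_lens*M_lens - k_pre*M_pre          (loss via tear drainage)
#   dM_post/dt = (1-f)*k_lens*M_lens - (k_post + k_cornea)*M_post
def simulate_release(k_lens=0.5, k_pre=2.0, k_post=1.0, k_cornea=0.3,
                     m0=1.0, dt=1e-3, t_end=24.0):
    f = 0.5  # assumed fraction of released drug entering the pre-lens film
    m_lens, m_pre, m_post, absorbed = m0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        dm = k_lens * m_lens * dt          # drug leaving the lens this step
        m_lens -= dm
        m_pre += f * dm - k_pre * m_pre * dt
        uptake = k_cornea * m_post * dt    # corneal absorption this step
        m_post += (1 - f) * dm - k_post * m_post * dt - uptake
        absorbed += uptake
        t += dt
    # Cumulative release tracks 1 - exp(-k_lens * t) in this limit.
    return m0 - m_lens, absorbed

released, absorbed = simulate_release()
```

A more faithful version would replace the first equation with the one-dimensional diffusion problem in the lens; the exponential form here corresponds to the large-diffusion limit for which the paper reports analytical solutions.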
Mechanistic determination of tear film thinning via fitting simplified models to tear breakup
Purpose: To determine whether evaporation, tangential flow, or a combination
of the two causes tear film breakup in a variety of instances; to estimate
related breakup parameters that cannot be measured in breakup during subject
trials; and to validate our procedure against previous work. Methods: Five
ordinary differential equation models for tear film thinning were designed that
model evaporation, osmosis, and various types of flow. Eight tear film breakup
instances from five healthy subjects, identified in fluorescence images in
previous work, were fit with these five models. The fitting procedure used a
nonlinear least squares optimization that minimized the difference of the
computed theoretical fluorescent intensity from the models and the experimental
fluorescent intensity from the images. The optimization was conducted over the
evaporation rate and up to three flow rate parameters. The smallest norm of the
difference was determined to correspond to the model that best explained the
tear film dynamics. Results: All of the breakup instances were best fit by
models with time-dependent flow. Our optimal parameter values and thinning rate
and fluid flow profiles compare well with previous partial differential
equation model results in most instances. Conclusion: Our fitting procedure
suggests that the combination of the Marangoni effect and evaporation causes
most of the breakup instances. Comparison with results from previous work
suggests that the simplified models can capture the essential tear film
dynamics in most cases, thereby validating this procedure as one that could be
used on many other instances.
Comment: 28 pages, 11 figures, 6 tables
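The fitting step can be illustrated with a toy version of the procedure: an evaporation-only thinning model, a fluorescent intensity taken proportional to film thickness (a dilute-limit assumption), and a coarse grid search standing in for the full nonlinear least squares optimization. All names and numerical values here are invented for illustration, not the study's models or fitted rates.

```python
# Toy version of the fitting procedure: one thinning parameter (the
# evaporation rate v) is recovered by minimizing the sum-of-squares misfit
# between modeled and "observed" intensity. Values are illustrative only.
def model_intensity(t, v, h0=3.0, i0=1.0):
    # Evaporation-only thinning h(t) = h0 - v*t; intensity ~ thickness
    # in the assumed dilute limit.
    h = max(h0 - v * t, 0.0)
    return i0 * h / h0

def fit_evaporation_rate(times, data, v_grid):
    """Return the rate v on the grid minimizing the squared misfit."""
    def sse(v):
        return sum((model_intensity(t, v) - d) ** 2 for t, d in zip(times, data))
    return min(v_grid, key=sse)

# Synthetic "experiment" generated with a known rate, then recovered.
v_true = 0.2
times = [0.1 * k for k in range(50)]
data = [model_intensity(t, v_true) for t in times]
v_fit = fit_evaporation_rate(times, data, [0.01 * k for k in range(1, 100)])
```

The actual procedure compares five candidate ODE models with up to three additional flow parameters and selects the model with the smallest misfit norm; the grid search above is only a stand-in for that nonlinear least squares optimization.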
Fitting ODE models of tear film breakup
Several elements are developed to quantitatively determine the contribution
of different physical and chemical effects to tear breakup (TBU) in normal
subjects. Fluorescence (FL) imaging is employed to visualize the tear film and
to determine tear film (TF) thinning and potential TBU. An automated system
using a convolutional neural network was trained and deployed to identify
multiple TBU instances in each trial. Once identified, extracted FL intensity
data was fit by mathematical models that included tangential flow along the
eye, evaporation, osmosis and FL intensity of emission from the tear film.
Optimizing the fit of the models to the FL intensity data determined the
mechanism(s) driving each instance of TBU and produced an estimate of the
osmolarity within TBU. Initial estimates for FL concentration and initial TF
thickness agree well with prior results. Fits were produced for
instances of potential TBU from 15 normal subjects. The results showed a
distribution of causes of TBU in these normal subjects, as reflected by
estimated flow and evaporation rates, which appear to agree well with
previously published data. Final osmolarity depended strongly on the TBU
mechanism, generally increasing with evaporation rate but complicated by the
dependence on flow. The method has the potential to classify TBU instances
based on the mechanism and dynamics and to estimate the final osmolarity at the
TBU locus. The results suggest that it might be possible to classify individual
subjects and provide a baseline for comparison and potential classification of
dry eye disease subjects.
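The osmolarity estimate within TBU can be sketched under a simplifying assumption of no tangential flow, so that solute conservation gives c = c0*h0/h; thinning then stalls where osmotic influx balances evaporation, at c_eq = c0 + v_e/p_osm. The parameter values below are illustrative, not fitted values from the study.

```python
# Sketch of evaporation-plus-osmosis thinning with osmolarity from solute
# conservation (no tangential flow assumed; parameter values hypothetical):
#   dh/dt = -v_e + p_osm * (c - c0),  with  c = c0 * h0 / h.
# Thinning stops where osmotic influx balances evaporation:
#   c_eq = c0 + v_e / p_osm.
def thin_with_osmosis(h0=3.0, c0=300.0, v_e=0.5, p_osm=0.01,
                      dt=1e-3, t_end=60.0):
    h, t = h0, 0.0
    while t < t_end:
        c = c0 * h0 / h                      # osmolarity from conservation
        h += (-v_e + p_osm * (c - c0)) * dt  # Euler step for thickness
        t += dt
    return h, c0 * h0 / h

h_final, c_final = thin_with_osmosis()
c_eq = 300.0 + 0.5 / 0.01  # equilibrium osmolarity for these toy values
```

The fitted models in the study also include tangential flow and the FL emission physics, which couple into both the thinning and the inferred osmolarity; this sketch only isolates the evaporation–osmosis balance behind the reported dependence of final osmolarity on evaporation rate.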
Identifying Long COVID Definitions, Predictors, and Risk Factors in the United States: A Scoping Review of Data Sources Utilizing Electronic Health Records
This scoping review explores the potential of electronic health records (EHR)-based studies to characterize long COVID. We screened all peer-reviewed English-language publications in the PubMed/MEDLINE, Scopus, and Web of Science databases through 14 September 2023 to identify studies that defined or characterized long COVID based on data sources that utilized EHR in the United States, regardless of study design. We identified only 17 articles meeting the inclusion criteria. Respiratory conditions were consistently significant in all studies, followed by poor well-being features (n = 14, 82%) and cardiovascular conditions (n = 12, 71%). Some articles (n = 7, 41%) used a long COVID-specific marker to define the study population, relying mainly on ICD-10 codes and clinical visits for post-COVID-19 conditions. Among studies exploring plausible long COVID (n = 10, 59%), the most common methods were RT-PCR and antigen tests. The time delay for EHR data extraction post-test varied, ranging from four weeks to more than three months; however, most studies considering plausible long COVID used a waiting period of 28 to 31 days. Our findings suggest a limited utilization of EHR-derived data sources in defining long COVID, with only 59% of these studies incorporating a validation step.
Modeling in higher dimensions to improve diagnostic testing accuracy: Theory and examples for multiplex saliva-based SARS-CoV-2 antibody assays.
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has emphasized the importance and challenges of correctly interpreting antibody test results. Identification of positive and negative samples requires a classification strategy with low error rates, which is hard to achieve when the corresponding measurement values overlap. Additional uncertainty arises when classification schemes fail to account for complicated structure in data. We address these problems through a mathematical framework that combines high-dimensional data modeling and optimal decision theory. Specifically, we show that appropriately increasing the dimension of data better separates positive and negative populations and reveals nuanced structure that can be described in terms of mathematical models. We combine these models with optimal decision theory to yield a classification scheme that better separates positive and negative samples relative to traditional methods such as confidence intervals (CIs) and receiver operating characteristics. We validate the usefulness of this approach in the context of a multiplex salivary SARS-CoV-2 immunoglobulin G assay dataset. This example illustrates how our analysis: (i) improves assay accuracy (e.g., lowering classification errors by up to 42% compared to CI methods); (ii) reduces the number of indeterminate samples when an inconclusive class is permissible (e.g., by 40% compared to the original analysis of the example multiplex dataset); and (iii) decreases the number of antigens needed to classify samples. Our work showcases the power of mathematical modeling in diagnostic classification and highlights a method that can be adopted broadly in public health and clinical settings.
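The benefit of classifying in higher dimensions can be illustrated with a toy two-population example: a likelihood-ratio rule applied to two measurements jointly versus a single-measurement cutoff. The Gaussian population parameters and the 1D threshold below are invented for illustration and are not the assay's actual statistics or the paper's fitted models.

```python
import math
import random

def gauss2(x, y, mx, my, s):
    # Unnormalized isotropic Gaussian density; the shared normalization
    # cancels in the likelihood ratio below.
    return math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2 * s * s))

def classify_2d(x, y):
    """Likelihood-ratio rule with equal priors: pick the denser population."""
    pos_density = gauss2(x, y, 1.0, 1.0, 0.5)  # hypothetical positive population
    neg_density = gauss2(x, y, 0.0, 0.0, 0.5)  # hypothetical negative population
    return "positive" if pos_density > neg_density else "negative"

def classify_1d(x, cutoff=0.5):
    """Single-measurement threshold, standing in for a CI-style rule."""
    return "positive" if x > cutoff else "negative"

def draw(mx, my, n, s=0.5):
    return [(random.gauss(mx, s), random.gauss(my, s)) for _ in range(n)]

random.seed(0)
pos = draw(1.0, 1.0, 1000)
neg = draw(0.0, 0.0, 1000)

# Count misclassifications under each rule; the joint rule should separate
# the overlapping populations better than any single-measurement cutoff.
err_2d = sum(classify_2d(x, y) != "positive" for x, y in pos) \
       + sum(classify_2d(x, y) != "negative" for x, y in neg)
err_1d = sum(classify_1d(x) != "positive" for x, _ in pos) \
       + sum(classify_1d(x) != "negative" for x, _ in neg)
```

With equal isotropic covariances the 2D rule reduces to a linear boundary between the means, whose perpendicular margin exceeds the 1D overlap; this is the geometric intuition behind the paper's observation that added dimensions better separate positive and negative populations.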