Language Model Training Paradigms for Clinical Feature Embeddings
In research areas with scarce data, representation learning plays a
significant role. This work aims to enhance representation learning for
clinical time series by deriving universal embeddings for clinical features,
such as heart rate and blood pressure. We use self-supervised training
paradigms for language models to learn high-quality clinical feature
embeddings, achieving a finer granularity than existing time-step and
patient-level representation learning. We visualize the learnt embeddings via
unsupervised dimension reduction techniques and observe a high degree of
consistency with prior clinical knowledge. We also evaluate the model
performance on the MIMIC-III benchmark and demonstrate the effectiveness of
using clinical feature embeddings. We publish our code online for replication.
Comment: Poster at the "NeurIPS 2023 Workshop: Self-Supervised Learning - Theory and Practice".
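As a rough illustration of what such a paradigm can look like, the sketch below (not the authors' released code) trains per-feature embeddings with a masked-prediction objective analogous to masked language modelling: every clinical feature gets its own learnable embedding, a fraction of observed values is hidden, and the model learns to reconstruct them. PyTorch, the feature count, and all architecture choices are assumptions made for illustration.

import torch
import torch.nn as nn

N_FEATURES = 16   # e.g. heart rate, blood pressure, ... (illustrative count)
EMB_DIM = 32

class FeatureEmbedder(nn.Module):
    def __init__(self, n_features: int, emb_dim: int):
        super().__init__()
        # one learnable embedding per clinical feature (finer-grained than
        # time-step or patient-level representations)
        self.feature_emb = nn.Embedding(n_features, emb_dim)
        self.value_proj = nn.Linear(1, emb_dim)   # project the measured value
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=emb_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(emb_dim, 1)         # predict the masked value

    def forward(self, feature_ids, values, mask):
        # values: (batch, n_obs, 1); mask: True where the value is hidden
        x = self.feature_emb(feature_ids) + self.value_proj(values * ~mask.unsqueeze(-1))
        return self.head(self.encoder(x)).squeeze(-1)

model = FeatureEmbedder(N_FEATURES, EMB_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# one toy training step on random data standing in for a clinical time-series batch
ids = torch.randint(0, N_FEATURES, (8, 20))
vals = torch.randn(8, 20, 1)
mask = torch.rand(8, 20) < 0.15                   # hide ~15% of observations
loss = ((model(ids, vals, mask) - vals.squeeze(-1)) ** 2)[mask].mean()
loss.backward()
opt.step()

After such training, the rows of model.feature_emb.weight serve as the per-feature embeddings, which can then be projected with an unsupervised dimension-reduction method (e.g. PCA or t-SNE) for the kind of visual inspection described in the abstract.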
Quantum mechanics/molecular mechanics minimum free-energy path for accurate reaction energetics in solution and enzymes: Sequential sampling and optimization on the potential of mean force surface
To accurately determine the reaction path and its energetics for enzymatic and solution-phase reactions, we present a sequential sampling and optimization approach that greatly enhances the efficiency of the ab initio quantum mechanics/molecular mechanics minimum free-energy path (QM/MM-MFEP) method. In the QM/MM-MFEP method, the thermodynamics of a complex reaction system is described by the potential of mean force (PMF) surface of the quantum mechanical (QM) subsystem with a small number of degrees of freedom, somewhat like describing a reaction process in the gas phase. The main computational cost of the QM/MM-MFEP method comes from the statistical sampling of conformations of the molecular mechanical (MM) subsystem required for the calculation of the QM PMF and its gradient. In our new sequential sampling and optimization approach, we aim to reduce the amount of MM sampling while still retaining the accuracy of the results by first carrying out MM phase-space sampling and then optimizing the QM subsystem in the fixed-size ensemble of MM conformations. The resulting QM optimized structures are then used to obtain more accurate sampling of the MM subsystem. This process of sequential MM sampling and QM optimization is iterated until convergence. The use of a fixed-size, finite MM conformational ensemble enables the precise evaluation of the QM potential of mean force and its gradient within the ensemble, thus circumventing the challenges associated with statistical averaging and significantly speeding up the convergence of the optimization process. To further improve the accuracy of the QM/MM-MFEP method, the reaction path potential method developed by Lu and Yang [Z. Lu and W. Yang, J. Chem. Phys. 121, 89 (2004)] is employed to describe the QM/MM electrostatic interactions in an approximate yet accurate way with a computational cost that is comparable to classical MM simulations. The new method was successfully applied to two example reaction processes, the classical SN2 reaction of Cl- + CH3Cl in solution and the second proton transfer step of the reaction catalyzed by the enzyme 4-oxalocrotonate tautomerase. The activation free energies calculated with this new sequential sampling and optimization approach to the QM/MM-MFEP method agree well with results from other simulation approaches such as the umbrella sampling technique with direct QM/MM dynamics sampling, demonstrating the accuracy of the iterative QM/MM-MFEP method. © 2008 American Institute of Physics.
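As a purely schematic illustration of the sequential cycle described above (MM sampling around the current QM structure, QM optimization on the PMF of that fixed finite ensemble, then resampling until the two agree), the toy one-dimensional Python example below mimics the iteration on an artificial coupled energy. It is not a QM/MM code; every function, constant, and the "energy" itself are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def coupled_energy(x, s):
    # toy "QM/MM" energy: a QM double-well coupled linearly to an MM variable s
    return (x**2 - 1.0)**2 + 0.3 * x * s

def pmf_gradient(x, ensemble):
    # gradient of the potential of mean force, averaged over the fixed ensemble
    eps = 1e-5
    return np.mean([(coupled_energy(x + eps, s) - coupled_energy(x - eps, s)) / (2 * eps)
                    for s in ensemble])

x = 0.8                                                  # initial "QM coordinate"
for cycle in range(20):                                  # sequential sampling / optimization
    ensemble = rng.normal(loc=x, scale=0.5, size=200)    # "MM sampling" around current x
    x_old = x
    for _ in range(200):                                 # optimize x within the FIXED ensemble
        x -= 0.01 * pmf_gradient(x, ensemble)
    if abs(x - x_old) < 1e-4:                            # converged: sampling and optimum agree
        break

print(f"converged after {cycle + 1} cycles: x = {x:.3f}")

The point the toy makes is the same as in the abstract: because the ensemble is fixed and finite during each optimization stage, the PMF gradient can be evaluated exactly over it, and statistical noise enters only when the ensemble is refreshed.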
Debugging Woven Code
The ability to debug woven programs is critical to the adoption of Aspect-Oriented Programming (AOP). Nevertheless, many AOP systems lack adequate support for debugging, making it difficult to diagnose faults and understand the program's structure and control flow. We discuss why debugging aspect behavior is hard and how harvesting results from related research on debugging optimized code can make the problem more tractable. We also specify general debugging criteria that we feel all AOP systems should support. We present a novel solution to the problem of debugging aspect-enabled programs. Our Wicca system is the first dynamic AOP system to support full source-level debugging of woven code. It introduces a new weaving strategy that combines source weaving with online bytecode patching. Changes to the aspect rules, or to the base or aspect source code, are rewoven and recompiled on the fly. We present the results of an experiment that show how these features provide the programmer with a powerful interactive debugging experience with relatively little overhead.
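The toy Python sketch below is only a loose analogue of the re-weaving idea (Wicca itself works on .NET source and bytecode, not Python): advice is woven over matching methods at runtime, and changing the aspect rule simply re-weaves over the preserved original method without restarting the program. All names in it are invented for illustration.

import functools

def weave(cls, pointcut, advice):
    """Wrap every method of cls whose name matches pointcut with before-advice."""
    for name, fn in list(vars(cls).items()):
        if callable(fn) and pointcut(name):
            original = getattr(fn, "__wicca_original__", fn)   # unwrap any previous weave
            @functools.wraps(original)
            def woven(self, *args, _orig=original, _name=name, **kwargs):
                advice(_name, args)                             # run the advice first
                return _orig(self, *args, **kwargs)
            woven.__wicca_original__ = original                 # remember the unwoven body
            setattr(cls, name, woven)

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

# initial aspect rule: trace all deposits
weave(Account, pointcut=lambda n: n == "deposit",
      advice=lambda n, a: print(f"[trace] {n}{a}"))
acct = Account()
acct.deposit(10)

# "re-weave on the fly": swap in new advice without restarting the program
weave(Account, pointcut=lambda n: n == "deposit",
      advice=lambda n, a: print(f"[audit] call to {n} with {a}"))
acct.deposit(5)
print(acct.balance)   # 15 -- base behavior unchanged by re-weaving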
Analysis of the Brinkman-Forchheimer equations with slip boundary conditions
In this work, we study the Brinkman-Forchheimer equations under slip
boundary conditions of friction type. We prove the existence and uniqueness of
weak solutions by means of regularization combined with the Faedo-Galerkin
approach. Next we discuss the continuity of the solution with respect to
Brinkman's and Forchheimer's coefficients. Finally, we show that the weak
solution of the corresponding stationary problem is stable.
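For orientation, a commonly studied form of the unsteady Brinkman-Forchheimer system is

\[
\partial_t u - \nu\,\Delta u + a\,u + b\,|u|^{\alpha-1}u + \nabla p = f,
\qquad \nabla\cdot u = 0,
\]

where \(\nu\) is the Brinkman (effective viscosity) coefficient, \(a\) the Darcy coefficient, and \(b\) the Forchheimer coefficient. The exact coefficients, the exponent \(\alpha\), and the precise statement of the friction-type slip condition vary between papers, so this should be read as a generic reference form rather than the specific system analyzed above.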
Optical Microscopy and Atomic Force Microscopy Imaging of 2,4,6-Trinitrotoluene Droplets and Clusters on Mica
Optical microscopy and atomic force microscopy (AFM) were used to image 2,4,6-trinitrotoluene (TNT) on a cleaved mica (001) surface. Vapor deposition of TNT resulted in ellipsoidal drop formation on the mica surface. The growth rate of the drop diameter was found to be linear with vapor dosing time, while the drop density followed a 1/r^2 dependence for increasing dosing times, where r is the length of the major axis of the ellipsoid. TNT platelets surrounded by a region depleted of drops were observed after 8 hours of dosing. The depleted region is attributed to the 10% shrinkage accompanying the liquid-solid transition of TNT and to the enthalpy of fusion, which causes the vaporization of small drops and clusters of TNT. Residues of TNT located in the depleted regions were characterized by AFM lift-off forces and were attributed either to different morphologies of TNT that nucleated at different sites on the mica surface or to dinitro- and trinitrobenzene derivatives, which are common impurities in 2,4,6-trinitrotoluene.
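Restated compactly (symbols are ours, not the paper's), and reading "growth rate linear with dosing time" as the diameter scaling linearly in time, the reported trends are

\[
d(t) \propto t, \qquad n(r) \propto \frac{1}{r^{2}},
\]

where d is the major-axis drop diameter, t the vapor dosing time, n the areal drop density, and r the major-axis length.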
Interaction of Stress, Lead Burden, and Age on Cognition in Older Men: The VA Normative Aging Study
BACKGROUND. Low-level exposure to lead and to chronic stress may independently influence cognition. However, the modifying potential of psychosocial stress on the neurotoxicity of lead and their combined relationship to aging-associated decline have not been fully examined. OBJECTIVES. We examined the cross-sectional interaction between stress and lead exposure on Mini-Mental State Examination (MMSE) scores among 811 participants in the Normative Aging Study, a cohort of older U.S. men. METHODS. We used two self-reported measures of stress appraisal: a self-report of stress related to the participant's most severe problem, and the Perceived Stress Scale (PSS). Indices of lead exposure were blood lead and bone (tibia and patella) lead. RESULTS. Participants with higher self-reported stress had lower MMSE scores after adjustment for age, education, computer experience, English as a first language, smoking, and alcohol intake. In multivariable-adjusted tests for interaction, those with higher PSS scores had a 0.57-point lower MMSE score (95% confidence interval, -0.90 to 0.24) for a 2-fold increase in blood lead than did those with lower PSS scores. In addition, being in the high category on one or both of PSS score and blood lead was associated with a 0.05-0.08 point reduction on the MMSE for each year of age compared with those with both low PSS scores and low blood lead levels (p < 0.05). CONCLUSIONS. Psychological stress had an independent inverse association with cognition and also modified the relationship between lead exposure and cognitive performance among older men. Furthermore, high stress and lead together modified the association between age and cognition.
Funding: National Institutes of Health (R01ES07821, R01HL080674, R01HL080674-02S1, R01ES013744, ES05257-06A1, P20MD000501, P42ES05947, ES03918-02); National Center for Research Resources General Clinical Research Center (M01RR02635); Leaves of Grass Foundation; United States Department of Veterans Affairs.
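A minimal sketch of the kind of effect-modification (interaction) model described in the abstract, assuming a statsmodels formula interface; the variable names, covariates, and simulated data below are hypothetical stand-ins, not the study's analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 811
df = pd.DataFrame({
    "mmse": rng.normal(27, 2, n),
    "log2_blood_lead": np.log2(rng.lognormal(1.5, 0.5, n)),  # +1 unit = 2-fold increase
    "high_pss": rng.integers(0, 2, n),                        # high vs. low perceived stress
    "age": rng.normal(68, 7, n),
    "education_years": rng.normal(14, 3, n),
})

# MMSE regressed on blood lead, stress, and their interaction, plus covariates;
# the interaction coefficient asks whether the lead slope differs by stress level.
fit = smf.ols("mmse ~ log2_blood_lead * high_pss + age + education_years", data=df).fit()
print(fit.params["log2_blood_lead:high_pss"])   # effect modification of lead by stress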
Best practice for analysis of shared clinical trial data
BACKGROUND:
Greater transparency, including sharing of patient-level data for further research, is an increasingly important topic for organisations who sponsor, fund and conduct clinical trials. This is a major paradigm shift with the aim of maximising the value of patient-level data from clinical trials for the benefit of future patients and society. We consider the analysis of shared clinical trial data in three broad categories: (1) reanalysis - further investigation of the efficacy and safety of the randomized
intervention, (2) meta-analysis, and (3) supplemental analysis for a research question that is not directly assessing the randomized intervention.
DISCUSSION:
In order to support appropriate interpretation and limit the risk of misleading findings, analysis of shared clinical trial data should have a pre-specified analysis plan. However, it is not generally possible to limit bias and control multiplicity to the extent that is possible in the original trial design, conduct and analysis, and this should be acknowledged and taken into account when interpreting results. We highlight a number of areas where specific considerations arise in planning, conducting, interpreting and reporting analyses of shared clinical trial data. A key issue is that these analyses essentially share many of the limitations of any post hoc analyses beyond the originally specified analyses. The use of individual patient data in meta-analysis can provide increased precision and reduce bias. Supplemental analyses are subject to many of the same issues that arise in broader epidemiological analyses. Specific discussion topics are addressed within each of these areas.
SUMMARY:
Increased provision of patient-level data from industry- and academic-led clinical trials for secondary research can benefit future patients and society. Responsible data sharing, including transparency of the research objectives, analysis plans and results, will support appropriate interpretation and help to address the risk of misleading results and avoid unfounded health scares.
Determinants of Bone and Blood Lead Levels among Minorities Living in the Boston Area
We measured blood and bone lead levels among minority individuals who live in some of Boston’s neighborhoods with high minority representation. Compared with samples of predominantly white subjects we had studied before, the 84 volunteers in this study (33:67 male:female ratio; 31–72 years of age) had similar educational, occupational, and smoking profiles, and their mean blood, tibia, and patella lead levels (3 μg/dL, 11.9 μg/g, and 14.2 μg/g, respectively) were also similar. The slopes of the univariate regressions of blood, tibia, and patella lead versus age were 0.10 μg/dL/year (p < 0.001), 0.45 μg/g/year (p < 0.001), and 0.73 μg/g/year (p < 0.001), respectively. Analyses of smoothing curves and regression lines for tibia and patella lead suggested an inflection point at 55 years of age, with slopes for subjects ≥ 55 years of age that were not only steeper than those of younger subjects but also substantially steeper than those observed for individuals > 55 years of age in studies of predominantly white participants. This apparent racial disparity at older ages may be related to differences in historic occupational and/or environmental exposures, or possibly the lower rates of bone turnover that are known to occur in postmenopausal black women. The higher levels of lead accumulation seen in this age group are of concern because such levels have been shown in other studies to predict elevated risks of chronic diseases such as hypertension and cognitive dysfunction. Additional research on bone lead levels in minorities and their socioeconomic and racial determinants is needed.
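A minimal broken-stick (piecewise-linear) regression sketch of the bone-lead-versus-age relationship with a knee at 55 years, the inflection suggested by the smoothing curves; the data are simulated and the setup is illustrative, not the study's analysis.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
age = rng.uniform(31, 72, 84)                              # 84 volunteers, ages 31-72
hinge = np.clip(age - 55, 0, None)                         # extra slope only past age 55
tibia_lead = 0.45 * age + 1.5 * hinge + rng.normal(0, 4, 84)

X = sm.add_constant(np.column_stack([age, hinge]))
fit = sm.OLS(tibia_lead, X).fit()
print(fit.params)   # [intercept, slope before 55, additional slope after 55]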
Iron Metabolism Genes, Low-Level Lead Exposure, and QT Interval
Background: Cumulative exposure to lead has been shown to be associated with depression of electrocardiographic conduction, such as QT interval (time from start of the Q wave to end of the T wave). Because iron can enhance the oxidative effects of lead, we examined whether polymorphisms in iron metabolism genes [hemochromatosis (HFE), transferrin (TF) C2, and heme oxygenase-1 (HMOX1)] increase susceptibility to the effects of lead on QT interval in 613 community-dwelling older men. Methods: We used standard 12-lead electrocardiograms, K-shell X-ray fluorescence, and graphite furnace atomic absorption spectrometry to measure QT interval, bone lead, and blood lead levels, respectively. Results: A one-interquartile-range increase in tibia lead level (13 μg/g) was associated with an 11.35-msec [95% confidence interval (CI), 4.05–18.65 msec] and a 6.81-msec (95% CI, 1.67–11.95 msec) increase in the heart-rate–corrected QT interval among persons carrying long HMOX1 alleles and at least one copy of an HFE variant, respectively, but had no effect in persons with short and middle HMOX1 alleles and the wild-type HFE genotype. The lengthening of the heart-rate–corrected QT interval with higher tibia lead and blood lead became more pronounced as the total number (0 vs. 1 vs. ≥2) of gene variants increased (tibia, p-trend = 0.01; blood, p-trend = 0.04). This synergy seems to be driven by a joint effect between the HFE variant and HMOX1 long (L) alleles. Conclusion: We found evidence that gene variants related to iron metabolism increase the impact of low-level lead exposure on QT-interval prolongation. This is the first such report, so these results should be interpreted cautiously and need to be independently verified.