SARS-CoV-2 Transmission and Epidemic Characteristics in Jining City, China
Background: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of severe acute respiratory syndrome, has spread to hundreds of countries and infected millions of people, causing more than a hundred thousand deaths. This study aimed to describe the epidemic characteristics of SARS-CoV-2 and its transmission in a city in China.
Methods: This was a descriptive study of retrospective data collected from January to February 2020 from reports issued by the authorities of Jining City, China, including data on travel history, transmission, gender, and age of infected persons.
Results: Between January and February 2020, 52 cases were confirmed as SARS-CoV-2 infections; more than half were male (n=32, 61.5%) and 53.8% were in the age group of 31–50 years. The modes of transmission were mostly primary infection (n=23) and a history of travel to and from outside Shandong Province (n=14). Interestingly, the infection chain reached at most a fourth-generation transmission, and most primary infected persons did not transmit the virus to others.
Conclusions: Excluding imported cases, the key characteristics of infected people in Jining City during the early epidemic period were male gender, urban residence, and middle age (31–50 years old). Transmission in Jining City, China was restricted during the early phase of the SARS-CoV-2 epidemic, indicating that the strategy against SARS-CoV-2 was effective to some extent and is worth learning from for the members of the global village. This strategy includes actions such as home isolation, collective centralized quarantine, social distancing, and face mask use.
Identify treatment effect patterns for personalised decisions
In personalised decision making, evidence is required to determine suitable
actions for individuals. Such evidence can be obtained by identifying treatment
effect heterogeneity in different subgroups of the population. In this paper,
we design a new type of pattern, treatment effect pattern to represent and
discover treatment effect heterogeneity from data for determining whether a
treatment will work for an individual or not. Our purpose is to use the
computational power to find the most specific and relevant conditions for
individuals with respect to a treatment or an action to assist with
personalised decision making. Most existing work on identifying treatment
effect heterogeneity takes a top-down or partitioning-based approach to search
for subgroups with heterogeneous treatment effects. We propose a bottom-up
generalisation algorithm to obtain the most specific patterns that best fit
individual circumstances for personalised decision making. For the
generalisation, we follow a consistency-driven strategy to maintain inner-group
homogeneity and inter-group heterogeneity of treatment effects. We also employ
a graphical causal modelling technique to identify adjustment variables for
reliable treatment effect pattern discovery. Our method can find the treatment
effect patterns reliably as validated by the experiments. The method is faster
than the two existing machine learning methods for heterogeneous treatment
effect identification, and it produces subgroups with higher inner-group
treatment effect homogeneity.
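As a rough, self-contained sketch of the bottom-up, consistency-driven generalisation idea described above (not the authors' implementation; the pattern representation, the naive effect estimator, and the threshold `epsilon` are assumptions introduced here, and confounding adjustment via the causal graph is omitted):

```python
import numpy as np

def naive_effect(data, pattern):
    """Difference of outcome means (treated vs. control) among units
    matching every condition in `pattern` (a dict of attribute: value)."""
    rows = [r for r in data if all(r[k] == v for k, v in pattern.items())]
    treated = [r["y"] for r in rows if r["t"] == 1]
    control = [r["y"] for r in rows if r["t"] == 0]
    if not treated or not control:
        return None
    return np.mean(treated) - np.mean(control)

def generalise(data, pattern, epsilon=0.1):
    """Bottom-up generalisation: drop one condition at a time as long as the
    group's estimated effect stays within `epsilon` of the original pattern's
    effect, i.e. the group stays homogeneous (consistency-driven)."""
    base = naive_effect(data, pattern)
    current = dict(pattern)
    changed = base is not None
    while changed and len(current) > 1:
        changed = False
        for key in list(current):
            candidate = {k: v for k, v in current.items() if k != key}
            eff = naive_effect(data, candidate)
            if eff is not None and abs(eff - base) <= epsilon:
                current = candidate  # keep the more general, still-consistent pattern
                changed = True
                break
    return current
```

For example, starting from an individual's full profile such as {"age_band": "31-50", "smoker": 0, "region": "north"}, the sketch would return the least specific subset of those conditions whose group-level effect still matches the starting pattern.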
Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems
With the growing privacy concerns in recommender systems, recommendation
unlearning, i.e., forgetting the impact of specific learned targets, is getting
increasing attention. Existing studies predominantly use training data, i.e.,
model inputs, as the unlearning target. However, we find that attackers can
extract private information, e.g., gender, race, and age, from a trained model
even if this information was never explicitly encountered during training. We
term this unseen information the attribute and treat it as the unlearning target. To
protect the sensitive attribute of users, Attribute Unlearning (AU) aims to
degrade attacking performance and make target attributes indistinguishable. In
this paper, we focus on a strict but practical setting of AU, namely
Post-Training Attribute Unlearning (PoT-AU), where unlearning can only be
performed after the training of the recommendation model is completed. To
address the PoT-AU problem in recommender systems, we design a two-component
loss function that consists of i) distinguishability loss: making attribute
labels indistinguishable to attackers, and ii) regularization loss:
preventing drastic changes in the model that result in a negative impact on
recommendation performance. Specifically, we investigate two types of
distinguishability measurements, i.e., user-to-user and
distribution-to-distribution. We use the stochastic gradient descent algorithm
to optimize our proposed loss. Extensive experiments on three real-world
datasets demonstrate the effectiveness of our proposed methods.
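To make the two-component idea concrete, here is a minimal PyTorch sketch under our own assumptions about the interface (user embeddings and attribute labels as tensors); the user-to-user penalty, the embedding-space regularizer, and the weight `lam` are illustrative choices, not the paper's exact formulation:

```python
import torch

def distinguishability_loss_u2u(user_emb, attr_labels):
    """User-to-user distinguishability (illustrative): penalise large distances
    between users with different attribute labels so an attacker cannot
    separate the groups from the embeddings."""
    dists = torch.cdist(user_emb, user_emb)                        # (n, n) pairwise distances
    diff = (attr_labels[:, None] != attr_labels[None, :]).float()  # 1 where labels differ
    return (dists * diff).sum() / diff.sum().clamp(min=1)

def regularization_loss(user_emb, original_emb):
    """Keep unlearned embeddings close to the originally trained ones to
    limit the impact on recommendation performance."""
    return ((user_emb - original_emb) ** 2).mean()

def pot_au_loss(user_emb, original_emb, attr_labels, lam=0.1):
    # Combined two-component objective; `lam` balances the two terms.
    return distinguishability_loss_u2u(user_emb, attr_labels) + \
           lam * regularization_loss(user_emb, original_emb)
```

In this sketch the loss would be minimised with torch.optim.SGD over the user embeddings (with requires_grad=True), mirroring the stochastic gradient descent optimisation mentioned in the abstract.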
Post-Training Attribute Unlearning in Recommender Systems
With the growing privacy concerns in recommender systems, recommendation
unlearning is getting increasing attention. Existing studies predominantly use
training data, i.e., model inputs, as the unlearning target. However, attackers can
extract private information from the model even if this information was never
explicitly encountered during training. We term this unseen information the
attribute and treat it as the unlearning target. To protect the sensitive
attribute of users, Attribute Unlearning (AU) aims to make target attributes
indistinguishable. In this paper, we focus on a strict but practical setting of
AU, namely Post-Training Attribute Unlearning (PoT-AU), where unlearning can
only be performed after the training of the recommendation model is completed.
To address the PoT-AU problem in recommender systems, we propose a
two-component loss function. The first component is distinguishability loss,
where we design a distribution-based measurement to make attribute labels
indistinguishable to attackers. We further extend this measurement to handle
multi-class attribute cases with low computational overhead. The second
component is regularization loss, where we explore a function-space measurement
that effectively maintains recommendation performance compared to
parameter-space regularization. We use the stochastic gradient descent algorithm to
optimize our proposed loss. Extensive experiments on four real-world datasets
demonstrate the effectiveness of our proposed methods.
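The abstract does not spell out the distribution-based measurement; one plausible instantiation is an MMD-style penalty between the embedding distributions of the attribute classes, sketched below in PyTorch (the kernel choice, bandwidth `sigma`, and pairwise averaging over classes are all assumptions):

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel between two samples
    of user embeddings (one sample per attribute class)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def multiclass_distribution_loss(user_emb, attr_labels):
    """Average pairwise MMD^2 over all attribute classes; minimising it pushes
    the per-class embedding distributions together (an illustrative multi-class
    extension, not necessarily the paper's)."""
    classes = attr_labels.unique()
    terms = []
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            xi = user_emb[attr_labels == classes[i]]
            xj = user_emb[attr_labels == classes[j]]
            terms.append(rbf_mmd2(xi, xj))
    return torch.stack(terms).mean() if terms else user_emb.new_zeros(())
```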
Developing and evaluating a machine learning based algorithm to predict the need of pediatric intensive care unit transfer for newly hospitalized children
Background: Early warning scores (EWS) are designed to identify early clinical deterioration by combining physiologic and/or laboratory measures to generate a quantified score. Current EWS leverage only a small fraction of Electronic Health Record (EHR) content. The planned widespread implementation of EHRs brings the promise of abundant data resources for prediction purposes. The three specific aims of our research are: (1) to develop an EHR-based automated algorithm to predict the need for Pediatric Intensive Care Unit (PICU) transfer in the first 24 h of admission; (2) to evaluate the performance of the new algorithm on a held-out test data set; and (3) to compare the effectiveness of the new algorithm with that of two published Pediatric Early Warning Scores (PEWS).
Methods: The cases comprised 526 encounters with PICU transfer within 24 h of admission. In addition to the cases, we randomly selected 6772 control encounters from 62516 inpatient admissions that were never transferred to the PICU. We used 29 variables in a logistic regression and compared our algorithm against two published PEWS on a held-out test data set.
Results: The logistic regression algorithm achieved 0.849 (95% CI 0.753–0.945) sensitivity, 0.859 (95% CI 0.850–0.868) specificity, and 0.912 (95% CI 0.905–0.919) area under the curve (AUC) in the test set. Our algorithm's AUC was significantly higher than those of the two published PEWS, by 11.8% and 22.6%, in the test set.
Conclusion: The novel algorithm achieved higher sensitivity, specificity, and AUC than the two PEWS reported in the literature.
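For orientation, a minimal scikit-learn sketch of the kind of model and evaluation described (the construction of the 29 EHR features, any class-imbalance handling, and the decision threshold are not specified in the abstract and are assumptions here):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

def train_and_evaluate(X_train, y_train, X_test, y_test, threshold=0.5):
    """Fit a logistic regression on EHR-derived features and report AUC,
    sensitivity, and specificity on a held-out test set."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]      # P(PICU transfer within 24 h)
    preds = (probs >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    return {
        "auc": roc_auc_score(y_test, probs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```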
Focal cerebral ischemia in the TNFalpha-transgenic rat
Background: To determine if chronic elevation of the inflammatory cytokine, tumor necrosis factor-α (TNFα), will affect infarct volume or cortical perfusion after focal cerebral ischemia.
Methods: Transgenic (TNFα-Tg) rats overexpressing the murine TNFα gene in brain were prepared by injection of mouse DNA into rat oocytes. Brain levels of TNFα mRNA and protein were measured and compared between TNFα-Tg and non-transgenic (non-Tg) littermates. Mean infarct volume was calculated 24 hours or 7 days after one hour of reversible middle cerebral artery occlusion (MCAO). Cortical perfusion was monitored by laser-Doppler flowmetry (LDF) during MCAO. Cortical vascular density was quantified by stereology. Post-ischemic cell death was assessed by immunohistochemistry and regional measurement of caspase-3 activity or DNA fragmentation. Unpaired t tests or analysis of variance with post hoc tests were used for comparison of group means.
Results: In TNFα-Tg rat brain, the aggregate mouse and rat TNFα mRNA level was fourfold higher than in non-Tg littermates and the corresponding TNFα protein level was increased fivefold (p ≤ 0.01). Infarct volume was greater in TNFα-Tg rats than in non-Tg controls at 24 hours (p ≤ 0.05) and 7 days (p ≤ 0.01). Within the first 10 minutes of MCAO, cortical perfusion measured by LDF was reduced in TNFα-Tg rats (p ≤ 0.05). However, regional vascular density was equivalent between TNFα-Tg and non-Tg animals (p = NS). Neural cellular apoptosis was increased in transgenic animals as shown by elevated caspase-3 activity (p ≤ 0.05) and DNA fragmentation (p ≤ 0.001) at 24 hours.
Conclusion: Chronic elevation of TNFα protein in brain increases susceptibility to ischemic injury but has no effect on vascular density. TNFα-Tg animals are more susceptible to apoptotic cell death after MCAO than are non-Tg animals. We conclude that the TNFα-Tg rat is a valuable new tool for the study of cytokine-mediated ischemic brain injury.
Automated detection of medication administration errors in neonatal intensive care
Objective: To improve neonatal patient safety through automated detection of medication administration errors (MAEs) in high-alert medications including narcotics, vasoactive medications, intravenous fluids, parenteral nutrition, and insulin using the electronic health record (EHR); to evaluate rates of MAEs in neonatal care; and to compare the performance of computerized algorithms to traditional incident reporting for error detection.
Methods: We developed novel computerized algorithms to identify MAEs within the EHR of all neonatal patients treated in a level four neonatal intensive care unit (NICU) in 2011 and 2012. We evaluated the rates and types of MAEs identified by the automated algorithms and compared their performance to incident reporting. Performance was evaluated by physician chart review.
Results: In the combined 2011 and 2012 NICU data sets, the automated algorithms identified MAEs at the following rates: fentanyl, 0.4% (4 errors/1005 fentanyl administration records); morphine, 0.3% (11/4009); dobutamine, 0 (0/10); and milrinone, 0.3% (5/1925). We found higher MAE rates for other vasoactive medications, including dopamine, 11.6% (5/43); epinephrine, 10.0% (289/2890); and vasopressin, 12.8% (54/421). Fluid administration error rates were similar: intravenous fluids, 3.2% (273/8567); parenteral nutrition, 3.2% (649/20124); and lipid administration, 1.3% (203/15227). We also found 13 insulin administration errors, a rate of 2.9% (13/456). MAE rates were higher for medications that were adjusted frequently and for fluids administered concurrently. The algorithms identified many previously unidentified errors, demonstrating significantly better sensitivity (82% vs. 5%) and precision (70% vs. 50%) than incident reporting for error recognition.
Conclusions: Automated detection of medication administration errors through the EHR is feasible and performs better than currently used incident reporting systems. Automated algorithms may be useful for real-time error identification and mitigation.
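The detection algorithms themselves are not detailed in the abstract; as a hypothetical illustration of rule-based error detection over EHR administration records, a simple dose-discrepancy check might look like the sketch below (the record fields and the 10% tolerance are invented for the sketch; the paper's actual rules per medication class would be richer, covering timing, route, order status, and concurrently running fluids):

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class AdministrationRecord:
    medication: str
    ordered_dose: float   # dose on the active order (same units as given_dose)
    given_dose: float     # dose documented as administered

TOLERANCE = 0.10          # hypothetical 10% discrepancy threshold

def is_error(rec: AdministrationRecord) -> bool:
    """Flag a record when the documented dose deviates from the active order
    by more than the tolerance (one simple rule for illustration only)."""
    if rec.ordered_dose == 0:
        return rec.given_dose != 0
    return abs(rec.given_dose - rec.ordered_dose) / rec.ordered_dose > TOLERANCE

def error_rate(records: Iterable[AdministrationRecord]) -> float:
    """Fraction of administration records flagged as errors."""
    records = list(records)
    flagged = sum(is_error(r) for r in records)
    return flagged / len(records) if records else 0.0
```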
Direct Conversion of Mouse Astrocytes Into Neural Progenitor Cells and Specific Lineages of Neurons
Background: Cell replacement therapy has been envisioned as a promising treatment for neurodegenerative diseases. Due to the ethical concerns associated with embryonic stem cell (ESC)-derived neural progenitor cells (NPCs) and the tumorigenic potential of induced pluripotent stem cells (iPSCs), reprogramming of somatic cells directly into multipotent NPCs has emerged as a preferred approach for cell transplantation.
Methods: Mouse astrocytes were reprogrammed into NPCs by the overexpression of the transcription factors (TFs) Foxg1, Sox2, and Brn2. The generation of specific neuronal subtypes was directed by the forced expression of the cell-type-specific TFs Lhx8 or Foxa2/Lmx1a.
Results: Astrocyte-derived induced NPCs (AiNPCs) share high similarity with wild-type NPCs, including the expression of NPC-specific genes, DNA methylation patterns, and the ability to proliferate and differentiate. The AiNPCs are committed to a forebrain identity and predominantly differentiated into glutamatergic and GABAergic neuronal subtypes. Interestingly, additional overexpression of the TFs Lhx8 and Foxa2/Lmx1a in AiNPCs promoted cholinergic and dopaminergic neuronal differentiation, respectively.
Conclusions: Our studies suggest that astrocytes can be converted into AiNPCs and that lineage-committed AiNPCs can acquire the differentiation potential of other lineages through forced expression of specific TFs. Understanding the impact of the TF sets on reprogramming and differentiation into specific neuronal lineages will provide valuable strategies for astrocyte-based cell therapy in neurodegenerative diseases.