74 research outputs found
Implementing a Global Termination Condition and Collecting Output Measures in Parallel Simulation
This paper investigates how to implement arbitrary global termination conditions and collect statistics in a parallel simulation. The problem is first discussed using Chandy and Sherman's space-time framework. Then termination conditions are categorized, and termination algorithms are given for several categories. The chief problem is that if the termination condition is evaluated asynchronously with respect to the simulation, then by the time termination is detected the simulator may already have modified the old attribute values needed to compute output measures. The major conclusion is that a minor modification of Time Warp permits the use of any termination condition. In contrast, conservative protocols permit only limited termination conditions unless they are modified to incorporate mechanisms present in optimistic protocols.
UNITY Algorithms for Detecting Stable and Non-Stable Termination Conditions in Time Warp Parallel Simulations
This paper extends work done by Abrams and Richardson on the topic of implementing global termination conditions and collecting output measures in parallel simulation. Concentrating on the Time Warp method for parallel simulation, an improved categorization scheme for termination conditions is presented, as well as algorithms written in UNITY notation to implement each category.
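The two abstracts above hinge on one safety property of Time Warp: state at or before Global Virtual Time (GVT) is committed and can never be rolled back, so a *stable* termination condition (one that, once true, remains true) can be evaluated there reliably. The sketch below illustrates that idea only; `LogicalProcess`, its history format, and the all-idle predicate are invented for illustration and are not algorithms from either paper.

```python
# Hypothetical sketch (not from either paper): evaluating a *stable*
# termination condition against committed state at GVT in a Time Warp
# simulation. A stable condition, once true, stays true, and rollbacks
# can never reach state at or before GVT, so this check is safe.

class LogicalProcess:
    def __init__(self, history):
        # history: list of (virtual_time, state) pairs for this LP
        self.history = sorted(history)

    def state_at(self, t):
        """Latest committed state with virtual time <= t."""
        state = None
        for vt, s in self.history:
            if vt <= t:
                state = s
        return state

def stable_termination_reached(processes, gvt, predicate):
    # Snapshot each LP at GVT; the snapshot is committed, so the
    # (stable) predicate evaluated on it cannot be invalidated later.
    committed = [lp.state_at(gvt) for lp in processes]
    return predicate(committed)
```

For example, with an "all processes idle" predicate, the check returns true only once every process's committed state at GVT is idle.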
Simulated Anthrax Attacks and Syndromic Surveillance
Bioterrorism surveillance systems can be assessed using modeling to simulate real-world attacks.
Clinically Actionable Hypercholesterolemia and Hypertriglyceridemia in Children with Nonalcoholic Fatty Liver Disease
OBJECTIVE:
To determine the percentage of children with nonalcoholic fatty liver disease (NAFLD) in whom intervention for low-density lipoprotein cholesterol or triglycerides was indicated based on National Heart, Lung, and Blood Institute guidelines.
STUDY DESIGN:
This multicenter, longitudinal cohort study included children with NAFLD enrolled in the National Institute of Diabetes and Digestive and Kidney Diseases Nonalcoholic Steatohepatitis Clinical Research Network. Fasting lipid profiles were obtained at diagnosis. Standardized dietary recommendations were provided. After 1 year, lipid profiles were repeated and interpreted according to the National Heart, Lung, and Blood Institute Expert Panel on Integrated Guidelines for Cardiovascular Health and Risk Reduction. Main outcomes were meeting criteria for clinically actionable dyslipidemia at baseline, and either achieving lipid goal at follow-up or meeting criteria for ongoing intervention.
RESULTS:
There were 585 participants, with a mean age of 12.8 years. The prevalence of children warranting intervention for low-density lipoprotein cholesterol at baseline was 14%. After 1 year of recommended dietary changes, 51% achieved goal low-density lipoprotein cholesterol, 27% qualified for enhanced dietary and lifestyle modifications, and 22% met criteria for pharmacologic intervention. Elevated triglycerides were more prevalent, with 51% meeting criteria for intervention. At 1 year, 25% achieved goal triglycerides with diet and lifestyle changes, 38% met criteria for advanced dietary modifications, and 37% qualified for antihyperlipidemic medications.
CONCLUSIONS:
More than one-half of children with NAFLD met intervention thresholds for dyslipidemia. Based on the burden of clinically relevant dyslipidemia, lipid screening in children with NAFLD is warranted. Clinicians caring for children with NAFLD should be familiar with lipid management.
Inferring Parametric Energy Consumption Functions at Different Software Levels: ISA vs. LLVM IR
The static estimation of the energy consumed by program executions is an important challenge, which has applications in program optimization and verification, and is instrumental in energy-aware software development. Our objective is to estimate such energy consumption in the form of functions on the input data sizes of programs. We have developed a tool for experimentation with static analysis which infers such energy functions at two levels, the instruction set architecture (ISA) and the intermediate code (LLVM IR) levels, and reflects it upwards to the higher source code level. This required the development of a translation from LLVM IR to an intermediate representation and its integration with existing components, a translation from ISA to the same representation, a resource analyzer, an ISA-level energy model, and a mapping from this model to LLVM IR. The approach has been applied to programs written in the XC language running on XCore architectures, but is general enough to be applied to other languages. Experimental results show that our LLVM IR level analysis is reasonably accurate (less than 6.4% average error vs. hardware measurements) and more powerful than analysis at the ISA level. This paper provides insights into the trade-off of precision versus analyzability at these levels.
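The analysis above produces energy estimates as closed-form functions of input data sizes, which are then compared against hardware measurements. The sketch below shows only the *shape* of such an output for a simple linear-cost loop; the coefficient names and values are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the kind of result a parametric energy
# analysis yields: a closed-form energy function of input size n.
# The coefficients below are invented, not the paper's energy model.

def energy_nanojoules(n, per_iteration_nj=3.2, overhead_nj=41.0):
    """Energy estimate for a loop over n elements: linear in n."""
    return per_iteration_nj * n + overhead_nj

def relative_error(estimate, measured):
    """Relative error of a static estimate vs. a hardware measurement,
    the metric behind the reported average error figure."""
    return abs(estimate - measured) / measured
```

A concrete evaluation of the inferred function (e.g. `energy_nanojoules(10)`) can then be checked against a measured run via `relative_error`.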
Electronic Health Record Based Algorithm to Identify Patients with Autism Spectrum Disorder
Objective: Cohort selection is challenging for large-scale electronic health record (EHR) analyses, as International Classification of Diseases 9th edition (ICD-9) diagnostic codes are notoriously unreliable disease predictors. Our objective was to develop, evaluate, and validate an automated algorithm for determining an Autism Spectrum Disorder (ASD) patient cohort from EHRs. We demonstrate its utility via the largest investigation to date of the co-occurrence patterns of medical comorbidities in ASD. Methods: We extracted ICD-9 codes and concepts derived from the clinical notes. A gold standard patient set was labeled by clinicians at Boston Children's Hospital (BCH) (N = 150) and Cincinnati Children's Hospital Medical Center (CCHMC) (N = 152). Two algorithms were created: (1) a rule-based algorithm implementing the ASD criteria from the Diagnostic and Statistical Manual of Mental Disorders, 4th edition; (2) a predictive classifier. The positive predictive values (PPV) achieved by these algorithms were compared to an ICD-9 code baseline. We clustered the patients based on grouped ICD-9 codes and evaluated subgroups. Results: The rule-based algorithm produced the best PPV: (a) BCH: 0.885 vs. 0.273 (baseline); (b) CCHMC: 0.840 vs. 0.645 (baseline); (c) combined: 0.864 vs. 0.460 (baseline). A validation at Children's Hospital of Philadelphia yielded 0.848 (PPV). Clustering analyses of comorbidities on the three-site large cohort (N = 20,658 ASD patients) identified psychiatric, developmental, and seizure disorder clusters. Conclusions: In a large cross-institutional cohort, co-occurrence patterns of comorbidities in ASDs provide further hypothetical evidence for distinct courses in ASD. The proposed automated algorithms for cohort selection open avenues for other large-scale EHR studies and individualized treatment of ASD.
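The headline metric in the abstract above is positive predictive value: of the patients an algorithm flags as ASD, the fraction who are ASD cases per the clinician-labeled gold standard. A minimal sketch of that computation, on invented toy labels rather than the study's data:

```python
# Minimal sketch of the PPV metric reported above, on toy data.
# `predicted` stands in for an algorithm's flags (rule-based, classifier,
# or ICD-9 baseline); `gold` for the clinician-labeled gold standard.

def positive_predictive_value(predicted, gold):
    """PPV = true positives / all predicted positives."""
    flagged = [g for p, g in zip(predicted, gold) if p]
    return sum(flagged) / len(flagged)
```

Comparing algorithms then reduces to computing this value for each against the same gold-standard labels, as in the BCH/CCHMC comparisons reported.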