
    Causes of prehospital misinterpretations of ST elevation myocardial infarction

    Objectives: To determine the causes of software misinterpretation of ST elevation myocardial infarction (STEMI) compared to clinically identified STEMI, in order to identify opportunities to improve prehospital STEMI identification. Methods: We compared ECGs acquired from July 2011 through June 2012 using the LIFEPAK 15 on adult patients transported by the Los Angeles Fire Department. Cases included patients ≥18 years who received a prehospital ECG. Software interpretation of the ECG (STEMI or not) was compared with data in the regional EMS registry to classify the interpretation as true positive (TP), true negative (TN), false positive (FP), or false negative (FN). For cases where classification was not possible using registry data, 3 blinded cardiologists interpreted the ECG. Each discordance was subsequently reviewed to determine the likely cause of misclassification. The cardiologists independently reviewed a sample of these discordant ECGs, and the causes of misclassification were updated in an iterative fashion. Results: Of 44,611 cases, 50% were male (median age 65; interquartile range 52–80). Cases were classified as 482 (1.1%) TP, 711 (1.6%) FP, 43,371 (97.2%) TN, and 47 (0.11%) FN. Of the 711 classified as FP, 126 (18%) were considered appropriate for, though did not undergo, emergent coronary angiography because the ECG showed definite (52 cases) or borderline (65 cases) ischemic ST elevation, a STEMI equivalent (5 cases), or ST elevation due to vasospasm (4 cases). The sensitivity was 92.8% [95% CI 90.6, 94.7%] and the specificity 98.7% [95% CI 98.6, 98.8%]. The leading causes of FP were ECG artifact (20%), early repolarization (16%), probable pericarditis/myocarditis (13%), indeterminate (12%), left ventricular hypertrophy (8%), and right bundle branch block (5%). There were 18 additional reasons for FP interpretation (<4% each). The leading causes of FN were borderline ST-segment elevations less than the algorithm threshold (40%) and tall T waves reducing the ST/T ratio below threshold (15%). There were 11 additional reasons for FN interpretation, each occurring ≤3 times. Conclusion: The leading causes of FP automated interpretation of STEMI were ECG artifact and non-ischemic causes of ST-segment elevation. FN were rare and were related to ST-segment elevation or ST/T ratios that did not meet the software algorithm threshold.
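    The reported accuracy figures follow from the confusion-matrix counts above. The sketch below shows that arithmetic under the assumption that the 126 false positives judged clinically appropriate for angiography are reclassified as true positives before the calculation (which reproduces the reported 92.8% and 98.7%); the Wilson interval is an illustrative choice of confidence-interval method, not one stated in the abstract.

```python
# Sketch: reproduce the reported sensitivity/specificity from the counts in
# the abstract. Assumption: the 126 false positives judged appropriate for
# emergent angiography are reclassified as true positives (this matches the
# reported 92.8% / 98.7%). The Wilson interval is an illustrative CI choice.
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fp, tn, fn = 482, 711, 43_371, 47
tp, fp = tp + 126, fp - 126      # reclassify the 126 clinically appropriate FPs

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.1%}, 95% CI {tuple(round(x, 3) for x in wilson_ci(tp, tp + fn))}")
print(f"specificity {spec:.1%}, 95% CI {tuple(round(x, 3) for x in wilson_ci(tn, tn + fp))}")
```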

    Mining Bodily Cues to Deception

    A significant body of research has investigated potential bodily correlates of deception. The vast majority of these studies consider discrete, subjectively coded bodily movements such as specific hand or head gestures. Such studies fail to consider quantitative aspects of body movement such as the precise movement direction, magnitude, and timing. In this paper, we employ an innovative data mining approach to systematically study bodily correlates of deception. We re-analyze motion capture data from a previously published deception study and experiment with different data coding options. We report how deception detection rates are affected by variables such as body part, the coding of the pose and movement, the length of the observation, and the amount of measurement noise. Our results demonstrate the feasibility of a data mining approach, with detection rates above 65%, significantly outperforming human judgement (52.80%). Owing to the systematic nature of the analysis, our results offer insight into the relative importance of the various coding factors. Moreover, we can reconcile seemingly discrepant findings in previous research. Our approach highlights the merits of data-driven research in supporting the validation and development of deception theory.
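    The abstract does not name the specific classifier or feature coding used, so the following is only a hypothetical sketch of the kind of pipeline it describes: coded motion-capture features per observation window, a standard classifier, and a cross-validated detection rate compared against chance. The feature layout, classifier choice, and toy data are all assumptions.

```python
# Hypothetical sketch of a data-mining pipeline for deception detection from
# coded motion-capture features. Feature layout, classifier, and data are
# illustrative assumptions, not the study's actual coding options.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in: one row per observation window, columns such as per-body-part
# movement magnitude, direction, and timing.
n_windows, n_features = 200, 30
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 2, size=n_windows)        # 1 = deceptive, 0 = truthful

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # detection rate per fold
print(f"mean cross-validated accuracy: {scores.mean():.1%}")
```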

    Non-Traditional Agriculture: Path to Future Food Production?

    The world population is growing rapidly, and the amount of arable land is decreasing. This raises the issue of how to feed the projected population of nine billion people in 2050. Another issue is the presence of “food deserts”: urban neighborhoods and rural towns without ready access to fresh, healthy, and affordable food. The purpose of this report is to examine possible alternatives for food production that are also located in close proximity to demand. Non-traditional production agriculture encompasses several concepts currently in use, including greenhouses (covered agriculture), backyard gardens (called Victory Gardens during WW II), agriculture-designated land within urban areas, underground facilities (bomb shelters), urban agriculture in housing developments, and converted warehouses. Other production concepts are presented to demonstrate the breadth of discussion regarding meeting future food demand. Each of these unique, non-traditional forms of agriculture is a new industry with few players in the market, suggesting that time will be the final arbiter of agronomic and economic viability. Greenhouses have been a part of production agriculture for centuries, and their technology is well defined. But retrofitting abandoned warehouses, constructing high-rise facilities, and producing in high-rise housing units will take time to perfect the systems involved, including water supply and reuse, fertilization, pest and disease control, harvesting, overall quality control, and logistics. The necessary resources for non-traditional food production facilities are land, water, equipment, and finances. Highly fertile land has for the most part been allocated to crop cultivation, and the quantity of high-quality water for irrigation is declining. Non-traditional systems typically have a much smaller land footprint and are highly efficient in water use. The implication is that non-traditional food production systems will provide society with a greater quantity, and improved quality, of high-value food products per unit of land and per unit of water. This is a broad, brief review of actual production facilities, as well as projections for the future. Included are greenhouses, retrofitted warehouses, below-surface facilities, high-rise facilities, and production near cities. This piece is intended to provide insight into the broad range of non-traditional food production facilities emerging and envisioned at this time. The mention of any business is not to be interpreted as endorsement or as a suggestion that it is viable.

    Benefit-Cost Analysis of FEMA Hazard Mitigation Grants

    Mitigation ameliorates the impact of natural hazards on communities by reducing loss of life and injury, property and environmental damage, and social and economic disruption. The potential to reduce these losses brings many benefits, but every mitigation activity has a cost that must be considered in our world of limited resources. In principle, benefit-cost analysis (BCA) can be used to assess a mitigation activity’s expected net benefits (discounted future benefits less discounted costs), but in practice this often proves difficult. This paper reports on a study that refined BCA methodologies and applied them to a national statistical sample of FEMA mitigation activities over a ten-year period for earthquake, flood, and wind hazards. The results indicate that the overall benefit-cost ratio for FEMA mitigation grants is about 4 to 1, though the ratio varies according to hazard and mitigation type.
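    As a rough illustration of the net-benefit calculation described above (discounted future benefits versus discounted costs), the sketch below computes a benefit-cost ratio for a single hypothetical mitigation project. The discount rate, time horizon, and dollar figures are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a mitigation benefit-cost ratio: discounted expected
# future loss reductions divided by the upfront mitigation cost. The 3%
# discount rate, 50-year horizon, and dollar amounts are assumptions only.
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

discount_rate = 0.03
upfront_cost = 1_000_000                 # mitigation project cost, year 0
annual_benefit = [80_000] * 50           # expected avoided losses per year

bcr = present_value(annual_benefit, discount_rate) / upfront_cost
print(f"benefit-cost ratio: {bcr:.2f}")  # > 1 means positive net benefits
```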

    Reconciling gene expression data with regulatory network models

    The reconstruction of genome-scale metabolic models from genome annotations has become a routine practice in Systems Biology research. The potential of metabolic models for predictive biology is widely accepted by the scientific community, but these same models still lack the capability to account for the effect of gene regulation on metabolic activity. Our focus organism, Bacillus subtilis, is most commonly found in soil, where it is subject to a wide variety of external environmental conditions. This reinforces the importance of the regulatory mechanisms that allow the bacterium to survive and adapt to such conditions. We introduce a manually curated regulatory network for Bacillus subtilis, tapping into the notable resources for B. subtilis regulation. We propose the concept of an atomic regulon: a set of genes that share the same ON and OFF gene expression profile across multiple samples of experimental data. Atomic regulon inference uses prior knowledge from curated SEED subsystems, in addition to expression data, to infer regulatory interactions. We show how atomic regulons for B. subtilis are able to capture many sets of genes corresponding to regulated operons in our manually curated network. Additionally, we demonstrate how atomic regulons can be used to help expand/validate knowledge of the regulatory network and gain insights into novel biology.
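    The atomic-regulon definition above lends itself to a simple illustration: discretize expression into ON/OFF calls per sample and group genes that share an identical profile. The threshold rule, gene names, and toy expression matrix below are assumptions made purely for illustration, not the inference procedure used with the SEED subsystems.

```python
# Minimal sketch of the atomic-regulon idea: call each gene ON/OFF per sample
# and group genes with identical profiles. Threshold and data are toy values.
from collections import defaultdict
import numpy as np

genes = ["geneA", "geneB", "geneC", "geneD"]
expression = np.array([
    [8.2, 2.1, 7.9, 8.5],   # geneA
    [8.0, 2.3, 8.1, 8.4],   # geneB  (same ON/OFF pattern as geneA)
    [1.9, 7.8, 2.2, 2.0],   # geneC
    [8.1, 8.0, 7.7, 8.3],   # geneD
])

on_off = expression > 5.0   # crude ON/OFF call per sample (assumed threshold)

atomic_regulons = defaultdict(list)
for gene, profile in zip(genes, on_off):
    atomic_regulons[tuple(profile)].append(gene)

for profile, members in atomic_regulons.items():
    print(members, "->", ["ON" if s else "OFF" for s in profile])
```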

    Reconciling gene expression data with regulatory network models – a stimulon-based approach for integrated metabolic and regulatory modeling of Bacillus subtilis

    The reconstruction of genome-scale metabolic models from genome annotations has become a routine practice in Systems Biology research. The potential of metabolic models for predictive biology is widely accepted by the scientific community, but these same models still lack the capability to account for the effect of gene regulation on metabolic activity. Our focus organism, Bacillus subtilis, is most commonly found in soil, where it is subject to a wide variety of external environmental conditions. This reinforces the importance of the regulatory mechanisms that allow the bacterium to survive and adapt to such conditions. Existing integrated metabolic regulatory models are currently available for only a small number of well-known organisms (e.g., E. coli and B. subtilis). The E. coli integrated model was proposed by Covert et al. in 2004 and has slowly improved over the years. Goelzer et al. introduced the B. subtilis integrated model in 2008, covering only the central metabolic pathways. Different strategies were used in the two modeling efforts. The E. coli model is defined by a set of Boolean rules (turning genes ON and OFF) accounting mostly for transcription factors, gene interactions, involved metabolites, and some external conditions such as heat shock. The B. subtilis model introduces a set of more complex rules and also incorporates sigma factor activity into the modeling abstraction. Here we propose a genome-scale model for the regulatory network of B. subtilis, using a new stimulon-based approach. A stimulon is defined as a set of genes (which may belong to the same operon(s) and regulon(s)) that respond to the same set of stimuli. The proposed stimulon-based approach allows for the inclusion of more types of regulation in the model. This methodology also abstracts away much of the complexity of regulatory mechanisms by directly connecting the activity of genes to the presence or absence of associated stimuli, a necessity in the many cases where details of regulatory mechanisms are poorly understood. Our model integrates regulatory network data from the Goelzer et al. model, in addition to other available literature data. We then reconciled our model against a large set of high-quality gene expression data (tiled microarrays for 104 different conditions). The stimulons in our model were split or extended to improve consistency with our expression data, and the stimuli in our model were adjusted to improve consistency with the conditions of our expression experiments. The reconciliation with gene expression data revealed a significant number of exact or nearly exact matches between the manually curated regulons/stimulons and pure correlation-based regulons. Our reconciliation analysis of the 2011 SubtiWiki regulon release suggested many gene candidates for regulon extension that were subsequently included in the 2013 SubtiWiki update. Our enhanced model also includes improved coverage of a wide range of stress conditions. We then integrated our regulatory model with the latest metabolic reconstruction for B. subtilis, the iBsu1103V2 model (Tanaka et al. 2012). We applied this integrated metabolic regulatory model to the simulation of all growth phenotype data currently available for B. subtilis, demonstrating how the addition of regulatory constraints improved the consistency of model predictions with experimentally observed phenotype data.
This analysis of growth phenotype data unveiled phenotypes that could only be characterized with the addition of regulatory network constraints. All tools applied in the reconstruction, simulation, and curation of our new regulatory model are now publicly available as part of the KBase framework. These tools permit the direct simulation of gene expression data using the regulon model alone, as well as the simulation of phenotypes and growth conditions using an integrated metabolic and regulatory model. We will highlight these new tools in the context of our reconstruction and analysis of the B. subtilis regulatory model.
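    To make the integration step concrete, the sketch below follows the general rFBA-style idea described above: Boolean stimulus-to-gene rules decide which genes are ON, and reactions associated only with OFF genes are constrained to zero flux before the metabolic simulation. The stimuli, rules, and reaction names are hypothetical and are not taken from the iBsu1103V2 model or the KBase tools.

```python
# Hypothetical sketch of regulatory constraints applied to a metabolic model:
# evaluate stimulus -> gene rules, then zero out flux bounds for reactions
# whose genes are OFF. Names and rules are illustrative assumptions only.
stimuli = {"glucose": True, "heat_shock": False}

# Stimulon-style rules: a gene is ON when its stimulus condition holds.
gene_rules = {
    "gapA": lambda s: s["glucose"],          # glycolysis gene (example)
    "groEL": lambda s: s["heat_shock"],      # heat-shock chaperone (example)
}

gene_to_reactions = {"gapA": ["GAPD"], "groEL": []}

# Default flux bounds (lower, upper) for each reaction in the toy model.
bounds = {"GAPD": (-1000.0, 1000.0), "PFK": (0.0, 1000.0)}

gene_state = {g: rule(stimuli) for g, rule in gene_rules.items()}
for gene, on in gene_state.items():
    if not on:
        for rxn in gene_to_reactions[gene]:
            bounds[rxn] = (0.0, 0.0)         # knock out flux for OFF genes

print(gene_state)
print(bounds)                                # pass these bounds to the FBA solver
```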

    Gene set analyses for interpreting microarray experiments on prokaryotic organisms

    Background: Despite the widespread usage of DNA microarrays, questions remain about how best to interpret the wealth of gene-by-gene transcriptional levels that they measure. Recently, methods have been proposed which use biologically defined sets of genes in interpretation, instead of examining results gene-by-gene. Despite a serious limitation, a method based on Fisher's exact test remains one of the few plausible options for gene set analysis when an experiment has few replicates, as is typically the case for prokaryotes. Results: We extend five methods of gene set analysis from use on experiments with multiple replicates to use on experiments with few replicates. We then use simulated and real data to compare these methods with each other and with the Fisher's exact test (FET) method. The simulations show that a method named MAXMEAN-NR maintains the nominal rate of false positive findings (type I error rate) while offering good statistical power and robustness to a variety of gene set distributions for set sizes of at least 10. Other methods (ABSSUM-NR or SUM-NR) are shown to be powerful for set sizes less than 10. Analysis of three sets of experimental data shows similar results. Furthermore, the MAXMEAN-NR method is shown to be able to detect biologically relevant sets as significant when other methods (including FET) cannot. We also find that the popular GSEA-NR method performs poorly when compared to MAXMEAN-NR. Conclusion: MAXMEAN-NR is a method of gene set analysis for experiments with few replicates, as is common for prokaryotes. Results of simulation and real data analysis suggest that the MAXMEAN-NR method offers increased robustness and biological relevance of findings as compared to FET and other methods, while maintaining the nominal type I error rate.
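    As background for the set-level statistics compared here, the sketch below computes a maxmean-style statistic (the larger in magnitude of the mean positive part and the mean negative part of the per-gene scores in a set) with a simple gene-sampling null. It does not reproduce the exact MAXMEAN-NR adaptation for few-replicate designs; the scores and set size are illustrative assumptions.

```python
# Sketch of a maxmean-style gene-set statistic with a gene-sampling null.
# This illustrates the core statistic only, not the MAXMEAN-NR few-replicate
# adaptation; the per-gene scores and set size are toy assumptions.
import numpy as np

def maxmean(scores: np.ndarray) -> float:
    """Larger (in magnitude) of the mean positive part and mean negative part."""
    pos = np.mean(np.clip(scores, 0, None))
    neg = np.mean(np.clip(-scores, 0, None))
    return pos if pos > neg else -neg

rng = np.random.default_rng(0)
all_scores = rng.normal(size=4000)             # per-gene scores (e.g. log ratios)
gene_set_idx = rng.choice(4000, size=25, replace=False)

observed = maxmean(all_scores[gene_set_idx])

# Null distribution: random gene sets of the same size drawn from all genes.
null = np.array([maxmean(rng.choice(all_scores, 25, replace=False))
                 for _ in range(2000)])
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"maxmean = {observed:.3f}, permutation p ≈ {p_value:.3f}")
```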

    Effects of traumatic brain injury on cognitive functioning and cerebral metabolites in HIV-infected individuals.

    We explored the possible augmenting effect of traumatic brain injury (TBI) history on HIV (human immunodeficiency virus)-associated neurocognitive complications. HIV-infected participants with a self-reported history of definite TBI were compared to HIV-infected patients without a TBI history. Groups were equated for relevant demographic and HIV-associated characteristics. The TBI group evidenced significantly greater deficits in executive functioning and working memory. N-acetylaspartate, a putative marker of neuronal integrity, was significantly lower in the frontal gray matter and basal ganglia of the TBI group. Together, these results suggest an additional brain impact of TBI over and above that of HIV alone. One clinical implication is that HIV patients with a TBI history may need to be monitored more closely for increased risk of signs or symptoms of HIV-associated neurocognitive disorder.