
    Prediction Accuracy of SNP Epistasis Models Generated by Multifactor Dimensionality Reduction and Stepwise Penalized Logistic Regression

    Conventional statistical modeling techniques used to detect high-order interactions between SNPs suffer from high dimensionality, because of the number of interactions that must be evaluated using sparse data. Statisticians have developed novel methods, including Multifactor Dimensionality Reduction (MDR), Generalized Multifactor Dimensionality Reduction (GMDR), and stepwise Penalized Logistic Regression (stepPLR), to analyze SNP epistasis associated with the development of, or outcomes for, genetic disease. Because of inconsistencies in published results regarding the performance of these three methods, this thesis used data from the very large GenIMS study to compare the prediction accuracies of SNP epistasis models for 90-day mortality. Comparisons were made using prediction accuracy, sensitivity, specificity, model consistency, chi-square tests, sign tests, and biological plausibility. Testing accuracies were generally higher for GMDR than for MDR, and stepPLR yielded substandard performance because its models predicted that all subjects were alive at 90 days. Stepwise PLR, however, determined that the IL-1A SNPs IL1A_M889, rs1894399, rs1878319, and rs2856837 were each significant predictors of 90-day mortality when adjusting for the other SNPs in the model. In addition, the model included a borderline-significant second-order interaction between rs28556838 and rs3783520 associated with 90-day mortality in a cohort of patients hospitalized with community-acquired pneumonia (CAP). The public health importance of this thesis is that the relative risk for CAP may be higher for a set of SNPs across different genes. The ability to predict which patients will experience a poor outcome may lead to more effective prevention strategies or treatments at earlier stages. Furthermore, identification of significant SNP interactions can also expand scientific knowledge about the biological mechanisms affecting disease outcomes. Altogether, the GMDR method yielded higher prediction accuracies than MDR, and MDR performed better than stepPLR when establishing SNP epistasis models associated with 90-day mortality in the GenIMS cohort.
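    The comparison metrics named above are standard classification measures. The following minimal Python sketch is illustrative only (not the thesis code; the labels and function are hypothetical): it computes prediction accuracy, sensitivity, and specificity from predicted versus observed 90-day mortality labels, and it shows why a model that predicts every subject survives, as stepPLR effectively did here, can still post a deceptively high accuracy when deaths are rare.

        def classification_metrics(y_true, y_pred):
            # Counts for a 2x2 confusion matrix (1 = died by day 90, 0 = alive).
            tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
            tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
            fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
            fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
            accuracy = (tp + tn) / len(y_true)
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            specificity = tn / (tn + fp) if (tn + fp) else float("nan")
            return accuracy, sensitivity, specificity

        # Hypothetical labels: a classifier that predicts "alive" for everyone.
        observed = [1, 0, 0, 0, 1, 0]
        predicted = [0, 0, 0, 0, 0, 0]
        print(classification_metrics(observed, predicted))
        # accuracy ~0.67, sensitivity 0.0, specificity 1.0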

    Struggling While Managing Chronic Illness

    Although research documenting the struggling response to chronic illness would assist nurses in understanding their patients and potentially in the assessment and support of struggling patients, such research is still in its infancy. The purpose of this research study was to address the scarcity of literature describing and defining the concept of families struggling while managing chronic illness. Using Strauss and Corbin's paradigm model and grounded theory methodology, the researcher analyzed interviews with nine rural families managing chronic illness. The analysis revealed that families managing chronic illness struggled with everyday living, to obtain a diagnosis, with spiritual beliefs, and with cognitive and existential thoughts, encompassing struggles of mind, body, and spirit. Struggling occurred within and between individuals and groups. A thought process, more specifically an awareness, interpretation, deciphering of meaning, or perception, was a strong component of the struggling experience. The core phenomenon identified was struggling, which was preceded by the causal conditions of perceiving uncertainty and/or vulnerability and ascribing negative meaning to illness management. Struggling occurred within the context of managing chronic illness. Intervening conditions for struggling were ineffective adapting and adapting. Action/interaction strategies for struggling were denying, emphasizing loss, fostering independence, strengthening relationships, and turning to faith. Consequences of the action/interaction strategies were stagnating and reintegrating. In light of this study's findings, struggling while managing chronic illness is defined as the perception of a difficult process (e.g., a battle, conflict, strenuous effort, or task) while managing chronic illness. The perception of great difficulty is often preceded by a perception of vulnerability or uncertainty and/or ascribing negative meaning to chronic illness management. The difficult process can occur within the body, mind, or spirit of a person or group of persons. The understanding of struggling as a perception makes it relatable to other literature exploring perceptions, representations, and ascribed meanings not only of illness experiences, but also of other experiences, such as pain and treatments. Nurses can help those managing chronic illness identify its associated perceptions and representations, which in some cases is struggling.

    Attenuation correction for TOF-PET with a limited number of stationary coincidence line-sources

    INTRODUCTION Accurate attenuation correction remains a major issue in combined PET/MRI. We have previously presented a method to derive the attenuation map by performing a transmission scan using an annulus-shaped source placed close to the edge of the FOV of the scanner. With this method, simultaneous transmission and emission data acquisition is possible because transmission data can be extracted using time-of-flight (TOF) information. As this method is strongly influenced by photon scatter and dead-time effects, its performance depends on the accuracy of the correction techniques for these effects. In this work we present a new approach in which the annulus source is replaced with a limited number of line sources positioned 35 cm from the center of the FOV. By including the location of the line sources in the algorithm, the extraction of true transmission data can be improved. The setup was validated with simulation studies and evaluated with a phantom study acquired on the LaBr3-based TOF-PET scanner installed at UPENN. MATERIALS AND METHODS First, we performed GATE simulations using the digital NCAT phantom. The phantom was segmented into bone, lung, and soft tissue and injected with 6.5 MBq/kg 18F-FDG. Simultaneous transmission/emission scans of 3 minutes were simulated using 6, 12, and 24 18F-FDG line sources with a total activity of 0.5 mCi. To obtain the attenuation map, the transmission data are first extracted using TOF information. To reduce misclassification of prompt emission data as transmission data, only events on LORs that pass within a radial distance of 1 cm from at least one line source are accepted. The attenuation map is then reconstructed using an iterative gradient-descent approach. As a proof of concept, the method was evaluated on the LaBr3-based TOF-PET scanner using an anthropomorphic torso phantom injected with 2 mCi of 18F-FDG. Twenty-four line sources of 20 μCi each were fixed to a wooden template at the back of the scanner, and simultaneous transmission/emission scans were acquired using all 24 line sources. RESULTS Simulation results demonstrate that the fraction of scattered emission events classified as transmission data was reduced from 4.32% with the annulus source to 2.29%, 1.25%, and 0.63% for the 24, 12, and 6 line sources, respectively. The fraction of misclassified true emission events was reduced from 1.10% to 0.42%, 0.24%, and 0.13%, respectively. Only in the case of 6 line sources did the attenuation maps show severe artifacts. Compared with classification based solely on TOF information, preliminary experimental results indicate an improvement in the accuracy of the attenuation coefficients of 10.44%, 0.12%, and 5.09% for soft tissue, lung, and bone tissue, respectively. CONCLUSION The proposed method can be used for attenuation correction in sequential or simultaneous TOF-PET/MRI systems. The PET transmission and emission data are acquired simultaneously, so no acquisition time for attenuation correction is lost in PET or MRI. Attenuation maps with higher accuracy can be obtained by including information about the location of the line sources. However, at least 12 line sources are needed to avoid severe artifacts.
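    The event-acceptance rule can be made concrete with a small geometric sketch. The Python example below is illustrative only (2-D transaxial geometry; the coordinates and function names are hypothetical, while the 1 cm tolerance follows the description above): a coincidence event is kept as transmission data when its line of response (LOR) passes within 1 cm of at least one known line-source position.

        import math

        def point_to_lor_distance(p, a, b):
            # Perpendicular distance (cm) from point p to the infinite line
            # through detector endpoints a and b, via the 2-D cross product.
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            return abs(dx * (ay - py) - dy * (ax - px)) / math.hypot(dx, dy)

        def accept_as_transmission(lor_endpoints, source_positions, tol_cm=1.0):
            a, b = lor_endpoints
            return any(point_to_lor_distance(s, a, b) <= tol_cm for s in source_positions)

        # Hypothetical example: one line source 35 cm from the center of the FOV.
        sources = [(35.0, 0.0)]
        print(accept_as_transmission(((45.0, 0.5), (-45.0, 0.5)), sources))    # True
        print(accept_as_transmission(((45.0, 10.0), (-45.0, -10.0)), sources)) # False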

    Characterization of the Hippocampal Acetylcholine System in a Rodent Model of Fetal Alcohol Syndrome

    Fetal alcohol spectrum disorders (FASD) are a major public health concern, as it is estimated that 2-5% of children are exposed to alcohol at some point during prenatal development. FASD have been shown to cause damage to multiple brain regions, but research shows that the hippocampus is especially sensitive to alcohol exposure. This damage to the hippocampus explains, in part, the deficits in learning and memory that are hallmark symptoms of FASD. The acetylcholine neurotransmitter system plays a major role in learning and memory, and the hippocampus is one of its main targets. This experiment used a rodent model of Fetal Alcohol Syndrome to examine neurochemical and behavioral changes resulting from developmental alcohol exposure, with a focus on the hippocampal acetylcholine system. Alcohol (3.0 g/kg) was administered via intragastric intubation to developing rat pups (PD 2-10). There were three treatment groups: ethanol-exposed, intubated control, and non-treated control. In Experiment 1, in vivo microdialysis was used to measure acetylcholine release in adolescents (PD 32 and 34). During microdialysis, the effects on acetylcholine release of a high K+/Ca2+ aCSF solution (PD 32) and of an acute galantamine injection (2.0 mg/kg; PD 34) were measured. Experiment 3 tested whether chronic administration of galantamine (2.0 mg/kg; PD 11-30), an acetylcholinesterase inhibitor, could attenuate alcohol-induced learning deficits in the context pre-exposure facilitation effect (CPFE; PD 30-32). Experiment 2 utilized brain tissue from Experiments 1 and 3 to measure the impact of developmental alcohol exposure and galantamine treatment on the expression of choline acetyltransferase (ChAT; medial septum), the vesicular acetylcholine transporter (vAChT; ventral CA1), and the α7 nicotinic acetylcholine receptor (α7 nAChR; ventral CA1). We found that alcohol-exposed animals did not differ in acetylcholine release at baseline or following administration of a high K+/Ca2+ aCSF solution. However, alcohol exposure during development significantly enhanced acetylcholine content following an acute injection of galantamine. Neither chronic galantamine nor alcohol exposure influenced performance in the CPFE task. Finally, the average number of ChAT+ cells was increased in alcohol-exposed animals that displayed the context-shock association (Pre), but not in any of the animals in the control task, which entailed no learning. Neither alcohol exposure nor learning significantly altered the density of vAChT or α7 nAChRs in the ventral CA1 region of the hippocampus. Taken together, these results indicate that the hippocampal acetylcholine system is significantly disrupted under pharmacological manipulation (e.g., galantamine) in alcohol-exposed animals. Furthermore, ChAT was up-regulated in alcohol-exposed animals that learned to associate the context and shock, which may account for their ability to perform this task. Developmental alcohol exposure may disrupt learning and memory in adolescence via a cholinergic mechanism.

    Biased efficacy estimates in phase-III dengue vaccine trials due to heterogeneous exposure and differential detectability of primary infections across trial arms.

    Vaccine efficacy (VE) estimates are crucial for assessing the suitability of dengue vaccine candidates for public health implementation, but efficacy trials are subject to a known bias that pushes VE estimates toward the null when heterogeneous exposure is not accounted for in the analysis of trial data. In light of many well-characterized sources of heterogeneity in dengue virus (DENV) transmission, our goal was to estimate the potential magnitude of this bias in VE estimates for a hypothetical dengue vaccine. To ensure that we realistically modeled heterogeneous exposure, we simulated city-wide DENV transmission and vaccine trial protocols using an agent-based model calibrated with entomological and epidemiological data from long-term field studies in Iquitos, Peru. By simulating a vaccine with a true VE of 0.8 in 1,000 replicate trials, each designed to attain 90% power, we found that conventional methods underestimated VE by as much as 21% due to heterogeneous exposure. Accounting for the number of exposures in the vaccine and placebo arms eliminated this bias completely, and the more realistic option of including a frailty term to model exposure as a random effect reduced the bias partially. We also discovered a distinct bias in VE estimates away from the null due to lower detectability of primary DENV infections among seronegative individuals in the vaccinated group. This difference in detectability resulted from our assumption that primary infections in vaccinees who are seronegative at baseline resemble secondary infections, which experience a shorter window of detectable viremia due to a quicker immune response. This produced an artefactual finding that VE estimates for the seronegative group were approximately 1% greater than those for the seropositive group. Simulation models of vaccine trials that account for these factors can be used to anticipate the extent of bias in field trials and to aid in their interpretation.
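    The direction of the exposure-heterogeneity bias can be reproduced with a toy calculation. The sketch below is illustrative only and is not the agent-based trial simulation from the study; the exposure distribution, per-exposure infection probability, and 0.8 per-exposure efficacy are assumptions chosen for the example. It compares the naive attack-rate estimator VE = 1 - AR_vaccine / AR_placebo against the true per-exposure protection when some participants are exposed far more often than others.

        import random

        random.seed(1)
        TRUE_VE = 0.8      # assumed per-exposure protection for this toy example
        P_INFECT = 0.3     # assumed per-exposure infection probability if unprotected
        N = 50_000         # participants per trial arm

        def attack_rate(vaccinated, exposures):
            infected = 0
            for n_exp in exposures:
                p_per_exposure = P_INFECT * ((1 - TRUE_VE) if vaccinated else 1.0)
                p_any = 1 - (1 - p_per_exposure) ** n_exp   # prob. of >=1 infection
                infected += random.random() < p_any
            return infected / len(exposures)

        # Heterogeneous exposure: most people rarely exposed, a few exposed many times.
        exposures = [random.choice([0, 0, 1, 1, 2, 10]) for _ in range(N)]
        ve_hat = 1 - attack_rate(True, exposures) / attack_rate(False, exposures)
        print(round(ve_hat, 2))  # noticeably below the true per-exposure VE of 0.8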

    Changing Trends in the Undergraduate Fraternity/Sorority Experience: An Evaluative and Analytical Literature Review

    Fraternal organizations in American institutions of higher education have a significant influence on student life and campus culture. Historically, research has shown that fraternities and sororities provide environments that support negative and often illegal activities that can be detrimental to individuals and to communities at large. However, recent research has identified new trends suggesting this may be changing. This article identifies these trends and their implications.

    The value of monitoring wildlife roadkill

    The number of wildlife-vehicle collisions has an obvious value in estimating the direct effects of roads on wildlife, i.e. mortality due to vehicle collisions. Given the nature of the data (species identification and location), there is, however, much wider ecological knowledge that can be gained by monitoring wildlife roadkill. Here, we review the added value and opportunities provided by these data through a series of case studies in which such data have been instrumental in advancing knowledge of species distributions, population dynamics, and animal behaviour, as well as informing us about the health of species and of the environment. We propose that consistently, systematically, and extensively monitoring roadkill facilitates five critical areas of ecological study: (1) monitoring of roadkill numbers, (2) monitoring of population trends, (3) mapping of native and invasive species distributions, (4) animal behaviour, and (5) monitoring of contaminants and disease. The collection of such data also offers a valuable opportunity for members of the public to be directly involved in scientific data collection and research (citizen science). By continuing to monitor wildlife roadkill, we can expand our knowledge across a wide range of ecological research areas, as well as facilitate investigations that aim to reduce both the direct and indirect effects of roads on wildlife populations.

    Hypertension: Development of a prediction model to adjust self-reported hypertension prevalence at the community level

    Abstract Background Accurate estimates of hypertension prevalence are critical for assessment of population health and for planning and implementing prevention and health care programs. While self-reported data are often more economically feasible and readily available than clinically measured high blood pressure (HBP), such reports may underestimate clinical prevalence to varying degrees. Understanding the accuracy of self-reported data and developing prediction models that correct for underreporting of hypertension in self-reported data can provide critical tools for developing more accurate population-level estimates and for planning population-based interventions to reduce the risk of, or more effectively treat, hypertension. This study examines the accuracy of self-reported survey data in describing the prevalence of clinically measured hypertension in two racially and ethnically diverse urban samples, and evaluates a mechanism to correct self-reported data so that it more accurately reflects clinical hypertension prevalence. Methods We analyze data from the Detroit Healthy Environments Partnership (HEP) Survey conducted in 2002 and the National Health and Nutrition Examination Survey (NHANES) 2001–2002 restricted to urban areas and participants 25 years and older. We re-calibrate measures of agreement within the HEP sample drawing upon parameter estimates derived from the NHANES urban sample, and assess the quality of the proposed adjustment within the HEP sample. Results Both self-reported and clinically assessed prevalence of hypertension were higher in the HEP sample (29.7% and 40.1%, respectively) than in the NHANES urban sample (25.7% and 33.8%, respectively). In both urban samples, self-reported and clinically assessed prevalence was higher than that reported in the full NHANES sample in the same year (22.9% and 30.4%, respectively). Sensitivity, specificity, and accuracy between clinical and self-reported hypertension prevalence were ‘moderate to good’ within the HEP sample and ‘good to excellent’ within the NHANES sample. Agreement between clinical and self-reported hypertension prevalence was ‘moderate to good’ within the HEP sample (kappa = 0.65; 95% CI = 0.63-0.67) and ‘good to excellent’ within the NHANES sample (kappa = 0.75; 95% CI = 0.73-0.80). Application of a ‘correction’ rule based on prediction models for clinical hypertension using the national sample (NHANES) allowed us to re-calibrate sensitivity and specificity estimates for the HEP sample. The adjusted estimates of hypertension in the HEP sample based on two different correction models, 38.1% and 40.5%, were much closer to the observed hypertension prevalence of 40.1%. Conclusions Application of a simple prediction model derived from national NHANES data to self-reported data from the HEP (Detroit-based) sample resulted in estimates that more closely approximated clinically measured hypertension prevalence in this urban community. Similar correction models may be useful in obtaining more accurate estimates of hypertension prevalence in other studies that rely on self-reported hypertension.
    http://deepblue.lib.umich.edu/bitstream/2027.42/112834/1/12913_2011_Article_2187.pd
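    One common way to adjust self-reported prevalence for misclassification, shown below purely as an illustrative sketch, is a Rogan-Gladen-style correction that uses the sensitivity and specificity of self-report relative to clinically measured hypertension in a reference sample. The study's correction models are regression-based and more elaborate; the sensitivity and specificity values below are hypothetical, not taken from the paper.

        def corrected_prevalence(observed, sensitivity, specificity):
            # Rogan-Gladen estimator: invert the misclassification of an
            # imperfect measure (here, self-report) to recover true prevalence.
            return (observed + specificity - 1) / (sensitivity + specificity - 1)

        # Hypothetical inputs: 29.7% self-reported prevalence, with the sensitivity
        # and specificity of self-report estimated from an external clinical sample.
        print(round(corrected_prevalence(0.297, 0.70, 0.95), 3))  # 0.38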

    Estimating the impact of city-wide Aedes aegypti population control: An observational study in Iquitos, Peru.

    During the last 50 years, the geographic range of the mosquito Aedes aegypti has increased dramatically, in parallel with a sharp increase in the disease burden from the viruses it transmits, including Zika, chikungunya, and dengue. There is a growing consensus that vector control is essential to prevent Aedes-borne diseases, even as effective vaccines become available. What remains unclear is how effective vector control is across broad operational scales, because the data and the analytical tools necessary to isolate the effect of vector-oriented interventions have not been available. We developed a statistical framework to model Ae. aegypti abundance over space and time and applied it to explore the impact of citywide vector control conducted by the Ministry of Health (MoH) in Iquitos, Peru, over a 12-year period. Citywide interventions involved multiple rounds of intradomicile insecticide space spraying over large portions of urban Iquitos (up to 40% of all residences) in response to dengue outbreaks. Our model captured significant levels of spatial, temporal, and spatio-temporal variation in Ae. aegypti abundance within and between years and across the city. We estimated the shape of the relationship between the coverage of neighborhood-level vector control and reductions in female Ae. aegypti abundance, i.e., the dose-response curve. The dose-response curve, with its associated uncertainties, can be used to gauge the spraying effort required to achieve a desired effect and is a critical tool currently absent from vector control programs. We found that, with complete neighborhood coverage, MoH intradomicile space spraying would decrease Ae. aegypti abundance on average by 67% in the treated neighborhood. Our framework can be directly translated to other interventions in other locations with geolocated mosquito abundance data. Results from our analysis can be used to inform future vector-control applications in Ae. aegypti-endemic areas globally.
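    A toy version of such a dose-response curve can illustrate how coverage maps to expected reductions in abundance. The sketch below is only illustrative: the saturating-exponential form and its calibration to the reported 67% average reduction at full coverage are assumptions made here, not the study's fitted spatio-temporal model.

        import math

        FULL_COVERAGE_REDUCTION = 0.67   # reported average reduction at 100% coverage

        # Calibrate a toy rate constant k so that 1 - exp(-k * 1.0) = 0.67.
        k = -math.log(1 - FULL_COVERAGE_REDUCTION)

        def expected_reduction(coverage):
            # Expected proportional reduction in female Ae. aegypti abundance
            # for a neighborhood sprayed at the given coverage (0-1).
            return 1 - math.exp(-k * coverage)

        for c in (0.2, 0.4, 0.6, 0.8, 1.0):
            print(f"coverage {c:.0%}: expected reduction {expected_reduction(c):.0%}")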