
    Regression analysis with categorized regression calibrated exposure: some interesting findings

    BACKGROUND: Regression calibration as a method for handling measurement error is becoming increasingly well known and used in epidemiologic research. However, the standard version of the method is not appropriate for exposures analyzed on a categorical (e.g. quintile) scale, an approach commonly used in epidemiologic studies. A tempting solution is then to use the predicted continuous exposure obtained through regression calibration and treat it as an approximation to the true exposure, that is, to include the categorized calibrated exposure in the main regression analysis. METHODS: We use semi-analytical calculations and simulations to evaluate the performance of this approach compared with the naive approach of not correcting for measurement error, in situations where the analysis is performed on the quintile scale and where the original scale is incorporated into the categorical variables, respectively. We also present analyses of real data, containing measures of folate intake and depression, from the Norwegian Women and Cancer study (NOWAC). RESULTS: In cases where the extra information is available through replicated measurements rather than validation data, regression calibration does not preserve important qualities of the true exposure distribution, so estimates of variance and percentiles can be severely biased. We show that the outlined approach retains much, and in some cases all, of the misclassification found in the observed exposure. For that reason, regression analysis with the corrected variable included on a categorical scale is still biased; in some cases the corrected estimates are analytically equal to those obtained by the naive approach. Regression calibration is, however, vastly superior to the naive method when the medians of each category are used in the analysis. CONCLUSION: Regression calibration in its most well-known form is not appropriate for measurement error correction when the exposure is analyzed on a percentile scale. Relating the analysis back to the original scale of the exposure solves the problem. This conclusion applies to all regression models.
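    As a complement to the abstract, the following is a minimal simulation sketch (not the authors' code) of the setting it describes: a classical measurement error model with two replicate measurements, regression calibration, and quintile-based outcome analyses. All numerical values and the linear outcome model are illustrative assumptions. The sketch reproduces the two findings above: categorizing the calibrated exposure gives exactly the naive quintile result, whereas using the category medians on the original scale largely removes the attenuation.

```python
# Illustrative sketch: classical error W = X + U with k replicates,
# regression calibration E[X | W-bar], and quintile-based trend analyses.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50_000, 2
x = rng.normal(0.0, 1.0, n)                              # true exposure
w = x[:, None] + rng.normal(0.0, np.sqrt(0.5), (n, k))   # replicate measurements
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, n)              # outcome, true slope 0.5

w_bar = w.mean(axis=1)
s2_u = w.var(axis=1, ddof=1).mean()                      # within-person error variance
lam = (w_bar.var(ddof=1) - s2_u / k) / w_bar.var(ddof=1) # attenuation factor
x_cal = w_bar.mean() + lam * (w_bar - w_bar.mean())      # calibrated exposure

def quintile_trend(z, scores="median"):
    """OLS slope of y on a quintile summary of z (category index or median)."""
    q = np.searchsorted(np.quantile(z, [0.2, 0.4, 0.6, 0.8]), z)   # 0..4
    if scores == "median":
        s = np.array([np.median(z[q == j]) for j in range(5)])[q]
    else:
        s = q.astype(float) + 1.0
    return np.polyfit(s, y, 1)[0]

# Quintile indices: calibration is a monotone rescaling of w_bar, so the
# categories, and hence the estimates, are identical to the naive analysis.
print(quintile_trend(w_bar, "index"), quintile_trend(x_cal, "index"))
# Category medians on the original scale: the naive analysis is attenuated,
# while the calibrated analysis tracks the one based on the true exposure.
print(quintile_trend(x, "median"), quintile_trend(w_bar, "median"),
      quintile_trend(x_cal, "median"))
```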

    GENETIC AND REPRODUCTIVE TECHNOLOGIES IN THE LIGHT OF RELIGIOUS DIALOGUE

    Since the gene splicing debates of the 1980s, the public has been exposed to an ongoing sequence of genetic and reproductive technologies. Many of these issue areas have outcomes that lose touch with people's inner values or engender opposing religious viewpoints that defy final resolution. This essay relocates the discussion of what counts as an acceptable application from the individual to the societal level, examining technologies that stand to affect large numbers of people and thus call for policy resolution, rather than individual fiat, in their application. A major source of guidance is the “Genetic Frontiers” series of professional dialogues and conferences held by the National Conference for Community and Justice from 2002 to 2004. Genetic testing, human gene therapy, genetic engineering of plants and animals, and stem cell technology are examined. While differences in perspective on the beginning of life persist, a stepwise approach to the examination of genetic testing reveals areas of general agreement. Stewardship of life, human co-creativity with the divine, and social justice help define the bounds of application of genetic engineering and therapy; compassionate care plays a major role in establishing stem cell policy. Active, sustained dialogue is a useful resource for enabling the sharing of religious values and the crystallization of policies. Peer reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/73549/1/j.1467-9744.2006.00813.x.pd

    Glycogen Synthase Kinase (GSK) 3β phosphorylates and protects nuclear myosin 1c from proteasome-mediated degradation to activate rDNA transcription in early G1 cells

    Nuclear myosin 1c (NM1) mediates RNA polymerase I (pol I) transcription activation and cell cycle progression by facilitating PCAF-mediated H3K9 acetylation, but the molecular mechanism by which NM1 itself is regulated remains unclear. Here, we report that at early G1 glycogen synthase kinase (GSK) 3β phosphorylates and stabilizes NM1, allowing NM1 to associate with chromatin. Genomic analysis by ChIP-Seq showed that this mechanism occurs on the rDNA, as active GSK3β selectively occupies the gene. ChIP assays and transmission electron microscopy in GSK3β-/- mouse embryonic fibroblasts indicated that at G1 rRNA synthesis is suppressed because of decreased H3K9 acetylation, leading to a chromatin state incompatible with transcription. We found that GSK3β directly phosphorylates endogenous NM1 on a single serine residue (Ser-1020) located within the NM1 C-terminus. In G1 this phosphorylation event stabilizes NM1 and prevents NM1 polyubiquitination by the E3 ligase UBR5 and subsequent proteasome-mediated degradation. We conclude that GSK3β-mediated phosphorylation of NM1 is required for pol I transcription activation.

    Gabapentin for complex regional pain syndrome in Machado-Joseph disease: a case report

    Introduction: Chronic pain is a common problem for patients with Machado-Joseph disease. Most of the chronic pain in Machado-Joseph disease has been reported to be of musculoskeletal origin, but chronic pain of a different kind now appears to occur in these patients. Case presentation: A 29-year-old man (Han Chinese, Hoklo) with Machado-Joseph disease experienced severe chronic pain in both feet, cutaneous thermal change, thermal hypersensitivity, focal edema, and sweating, and had a history of bone fracture. These symptoms were compatible with a diagnosis of complex regional pain syndrome. After common analgesics failed to relieve his pain, gabapentin was added and titrated to 2000 mg/day (500 mg every six hours) in less than two weeks. This relieved 40% of his pain and led to significant clinical improvement. Conclusions: The pathophysiology of complex regional pain syndrome includes peripheral and central sensitization, the latter of which might be associated with the neurodegeneration in Machado-Joseph disease. In this report, we suggest that gabapentin could inhibit central sensitization and serve as an adjunct for complex regional pain syndrome in patients with Machado-Joseph disease.

    Dose optimisation in paediatric radiography - using regression models to investigate the relative impact of acquisition factors on image quality and radiation dose

    Objective: To investigate the optimum pelvis X-ray acquisition factors for a 10-year-old child and to evaluate the impact of each acquisition factor on image quality (IQ) and radiation dose. Method: Images were acquired using a pelvis phantom and a range of acquisition parameters, e.g. tube potential, additional filtration and source-to-image distance (SID). Automatic exposure control (AEC) was used with two orientations (head towards/away from the two outer chambers) and three different chamber selections. Visual IQ was evaluated using relative and absolute VGA methods. Radiation doses were measured by placing a dosimeter on the anterior surface of the phantom. Regression analysis was used to determine the optimum parameters. Results: The optimised technique, giving diagnostic IQ at 178.8 µGy, used 89 kVp, 130 cm SID and 1 mm Al + 0.1 mm Cu additional filtration, with the head towards the two outer AEC chambers. Regression analysis showed that SID had the lowest impact on IQ (β = 0.002, 95% CI −0.001 to 0.005) and dose (β = −0.96, 95% CI −0.40 to −1.53). The impact of filtration on dose (β = −76.24, 95% CI −86.76 to −85.72) was higher than that of tube potential (β = −13.44, 95% CI −14.34 to −12.53). The following impact ratios were higher for IQ than for radiation dose: filtration/kVp, 11.28 times; filtration/SID, 7.01 times; and kVp/SID, 0.62 times. Conclusion: The optimised parameters were identified as 89 kVp, 130 cm SID and 1 mm Al + 0.1 mm Cu additional filtration. Regression analysis demonstrated that filtration and tube potential had the greatest effect on radiation dose and IQ, respectively.
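    As an illustration of how such a comparison can be set up, the sketch below (not the authors' analysis code) fits two ordinary least-squares models, one for image quality and one for dose, against the acquisition factors, and forms ratios of coefficient magnitudes in the spirit of the impact ratios quoted above. The CSV file and column names are assumptions.

```python
# Hypothetical example: relative impact of acquisition factors on IQ and dose.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pelvis_phantom_exposures.csv")   # assumed data file with
# columns: kvp (kV), sid_cm (cm), filtration_mm_cu (mm Cu),
#          iq_score (visual grading score), dose_ugy (µGy)

iq_model = smf.ols("iq_score ~ kvp + sid_cm + filtration_mm_cu", data=df).fit()
dose_model = smf.ols("dose_ugy ~ kvp + sid_cm + filtration_mm_cu", data=df).fit()
print(iq_model.params, dose_model.params, sep="\n")

# Ratios of coefficient magnitudes for pairs of factors, per outcome.
for a, b in [("filtration_mm_cu", "kvp"), ("filtration_mm_cu", "sid_cm"),
             ("kvp", "sid_cm")]:
    r_iq = abs(iq_model.params[a] / iq_model.params[b])
    r_dose = abs(dose_model.params[a] / dose_model.params[b])
    print(f"{a}/{b}: IQ ratio {r_iq:.2f}, dose ratio {r_dose:.2f}")
```

    Because the factors are measured in different units, standardizing them (e.g. to z-scores) before fitting would make the coefficients, and hence the ratios, directly comparable; the paper's exact definition of an impact ratio may differ.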

    Effect of high parity on occurrence of anemia in pregnancy: a cohort study

    Background: Studies exploring the controversial association between parity and anaemia-in-pregnancy (AIP) have often been hampered by not distinguishing incident cases caused by pregnancy from prevalent cases complicated by pregnancy. The authors' aim in conducting this study was to overcome this methodological concern. Methods: A retrospective cohort study was conducted in Oman on 1939 pregnancies among 479 parous female participants with available pregnancy records in a community trial. We collected information from the participants, the community trial, and the health records of each pregnancy. Throughout the follow-up period, we enumerated 684 AIP cases, of which 289 (42.2%) were incident cases. High parity (HP, ≥ 5 pregnancies) accounted for 48.7% of total pregnancies. Two sets of regression analyses were conducted: the first restricted to incident cases only, and the second including all cases. The relation with parity, both as a dichotomy and as multiple categories, was examined for each set; multi-level logistic regression (MLLR) was employed to produce adjusted models. Results: In the fully adjusted MLLR models restricted to incident cases, women with HP pregnancies had a higher risk of AIP compared with those who had had fewer pregnancies (risk ratio, RR = 2.92; 95% CI 2.02, 4.59); the AIP risk increased in a dose-response fashion over multiple categories of parity. In the fully adjusted MLLR models that included all cases, the association disappeared (RR = 1.11; 95% CI 0.91, 1.18) and the dose-response pattern flattened. Conclusions: This study shows the importance of specifying which cases of AIP are incident and provides supportive evidence for a causal relation between parity and the occurrence of incident AIP.
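    The central design point, restricting the outcome to incident cases, can be sketched as below. This is not the study's code: the paper used multi-level logistic regression adjusted for covariates and reported risk ratios, whereas this simplified sketch fits plain logistic models and reports odds ratios; the file and column names are assumptions.

```python
# Simplified, hypothetical illustration of incident-only vs. all-case analyses.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

preg = pd.read_csv("pregnancy_records.csv")  # assumed file with columns:
# woman_id, anaemic (0/1), anaemic_before_pregnancy (0/1), parity, covariates

preg["high_parity"] = (preg["parity"] >= 5).astype(int)

# Analysis 1: incident cases only -- exclude pregnancies in which anaemia was
# already present beforehand, so it cannot be an outcome of that pregnancy.
incident = preg[preg["anaemic_before_pregnancy"] == 0]
m_incident = smf.logit("anaemic ~ high_parity", data=incident).fit(disp=0)

# Analysis 2: prevalent and incident cases mixed together.
m_all = smf.logit("anaemic ~ high_parity", data=preg).fit(disp=0)

print("OR, incident cases only:", np.exp(m_incident.params["high_parity"]))
print("OR, all cases:          ", np.exp(m_all.params["high_parity"]))
```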

    Standardizing effect size from linear regression models with log-transformed variables for meta-analysis

    Background: Meta-analysis is very useful for summarizing the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to meet the principal assumptions of a linear regression model. If this is the case for some, but not all, studies, the effects need to be homogenized before they can be combined. Methods: We derived a set of formulae to transform absolute changes into relative ones, and vice versa, so that all results can be included in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables, either normally or asymmetrically distributed. Results: In all scenarios, and for different change criteria, the effect size estimated by the derived set of formulae was equivalent to the true effect size. To avoid biased estimates of the effect, the procedure should be used with caution when independent variables have asymmetric distributions that differ significantly from the normal distribution. We illustrate the procedure with an application to a meta-analysis of the potential effects on neurodevelopment in children exposed to arsenic and manganese. Conclusions: The proposed procedure has been shown to be valid and capable of expressing the effect size of a linear regression model for different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations were performed on the dependent and/or independent variables.
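    For concreteness, the sketch below writes out the generic back-transformations that make such effects comparable: the absolute or relative change in the outcome implied by an absolute or fractional change in the exposure, for each combination of log-transformed dependent and independent variables. These are the standard interpretations under natural logarithms, not necessarily the paper's exact formulae, which additionally cover standardization of the effect sizes.

```python
# Generic effect conversions for linear models with log-transformed variables.
import numpy as np

def effect_linear_linear(b, dx):
    """y = a + b*x: absolute change in y for an absolute change dx in x."""
    return b * dx

def effect_log_linear(b, dx):
    """ln(y) = a + b*x: relative change in y for an absolute change dx in x."""
    return np.exp(b * dx) - 1.0

def effect_linear_log(b, rx):
    """y = a + b*ln(x): absolute change in y when x changes by a fraction rx."""
    return b * np.log(1.0 + rx)

def effect_log_log(b, rx):
    """ln(y) = a + b*ln(x): relative change in y when x changes by a fraction rx."""
    return (1.0 + rx) ** b - 1.0

# Example: a slope of 0.05 from a log-log model corresponds to roughly a 0.48%
# increase in y per 10% increase in x, since 1.10**0.05 - 1 is about 0.0048.
print(effect_log_log(0.05, 0.10))
```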

    Measurement and simulation of the neutron response of the Nordball liquid scintillator array

    The response of the liquid scintillator array Nordball to neutrons in the energy range 1.5 < T_n < 10 MeV has been measured by time of flight using a 252Cf fission source. Fission fragments were detected by means of a thin-film plastic scintillator. The measured differential and integral neutron detection efficiencies agree well with the predictions of a Monte Carlo simulation of the detector, which models the geometry accurately and incorporates the measured, non-linear proton light output as a function of energy. The ability of the model to provide systematic corrections to photoneutron cross sections measured by Nordball at low energy is tested in a measurement of the two-body deuteron photodisintegration cross section in the range E_gamma = 14-18 MeV. After correction, the present 2H(gamma,n)p measurements agree well with a published evaluation of the large body of 2H(gamma,p)n data. (Comment: 20 pages, 10 figures; submitted to Nucl. Instr. Meth.)
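    For readers unfamiliar with the technique, the kinematics behind such a time-of-flight measurement are simple: the neutron kinetic energy follows from the flight path and the flight time measured relative to the fission trigger. The short sketch below is illustrative only and is not part of the paper; the flight path and timing values are arbitrary.

```python
# Relativistic neutron kinetic energy from a time-of-flight measurement.
import numpy as np

M_N_MEV = 939.565          # neutron rest mass [MeV/c^2]
C_M_PER_NS = 0.299792458   # speed of light [m/ns]

def neutron_energy(flight_path_m, tof_ns):
    """Kinetic energy (MeV) from flight path (m) and time of flight (ns)."""
    beta = flight_path_m / (tof_ns * C_M_PER_NS)    # v/c
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return M_N_MEV * (gamma - 1.0)

# Example: a 1.0 m flight path and a 23 ns flight time give roughly 10 MeV.
print(neutron_energy(1.0, 23.0))
```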

    Self-consistent Green's function method for nuclei and nuclear matter

    Recent results obtained by applying the method of self-consistent Green's functions to nuclei and nuclear matter are reviewed. Particular attention is given to the description of experimental data obtained from the (e,e'p) and (e,e'2N) reactions that determine one- and two-nucleon removal probabilities in nuclei, since the corresponding amplitudes are directly related to the imaginary parts of the single-particle and two-particle propagators. For this reason, and because these amplitudes can now be calculated with all the relevant physical processes included, it is useful to explore the efficacy of the method of self-consistent Green's functions in describing these experimental data. Results for both finite nuclei and nuclear matter are discussed, with particular emphasis on clarifying the role of short-range correlations in determining various experimental quantities. The important role of long-range correlations in determining the structure of low-energy correlations is also documented. For a complete understanding of nuclear phenomena it is therefore essential to include both types of physical correlations. We demonstrate that recent experimental results for these reactions, combined with the reported theoretical calculations, yield a very clear understanding of the properties of all protons in the nucleus. We propose that this knowledge of the properties of constituent fermions in a correlated many-body system is a unique feature of nuclear physics. (Comment: 110 pages; accepted for publication in Prog. Part. Nucl. Phys.)
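    For orientation, the relation alluded to above can be made explicit. In one common sign convention for the time-ordered single-particle propagator G(k, E), the removal (hole) and addition (particle) spectral functions are S_h(k, E) = (1/π) Im G(k, E) for E < ε_F and S_p(k, E) = -(1/π) Im G(k, E) for E > ε_F, where ε_F is the Fermi energy; S_h is the quantity probed by the one-nucleon removal (e,e'p) reactions discussed in the review.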