
    Theoretical Properties Of Multinomial Logistic Regression

    A large sample relative efficiency of estimation for multinomial logistic regression, compared to multiple group discriminant analysis, has been derived and evaluated for parameter values relevant to epidemiological research. The large sample distributions of the two procedures are based on the assumptions of multivariate normality and a common covariance structure among groups. Matrix calculus methods were found to be valuable in obtaining concise expressions for the large sample variances.

    Relative efficiency does not decrease as the number of response categories increases, although the increases tend to be small. The number of explanatory variables and the magnitude of the odds ratios associated with them are the main factors determining relative efficiency; the correlation among the explanatory variables and the distribution of the response frequencies are secondary factors. Values of odds ratios typical in practice can give relative efficiencies greater than two-thirds for a small number of variables.

    An approximation to the large sample distribution of logistic regression has been extended and used to develop methods for sample size estimation in the multinomial case. Matrix calculus was used to develop a matrix Taylor expansion of the Fisher information matrix, and sample size was approximated by the first term of this expansion. The approximation was evaluated for two particular distributions of a single explanatory variable by comparing it to the precise sample size based on the full expansion. It was found to be inaccurate for more than two response groups except in limited circumstances; however, a correction factor was derived that, when applied, gave sample size estimates of reasonable accuracy for several response groups. We also found that the sample size required to assess the risk of response in one particular category is greater than that for all responses combined when the alternative hypotheses are the same.
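As an illustration of the general idea of sample-size estimation from the Fisher information matrix, here is a minimal Python sketch for the binary-logistic case with a single standard-normal covariate. This is a simplified stand-in for the matrix Taylor expansion described above, not the paper's method; the Monte Carlo approximation and all function names are assumptions.

```python
import numpy as np

def fisher_info_logistic(beta0, beta1, n_mc=200_000, seed=0):
    """Monte Carlo estimate of the per-observation Fisher information
    for logistic regression with a single N(0, 1) covariate."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_mc)
    X = np.column_stack([np.ones(n_mc), x])          # design: intercept + covariate
    p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))   # P(Y = 1 | x)
    w = p * (1.0 - p)                                # logistic variance weight
    return (X * w[:, None]).T @ X / n_mc             # E[w(x) x x^T]

def sample_size_for_se(beta0, beta1, target_se):
    """Smallest n whose asymptotic SE(beta1_hat) is at most target_se."""
    info = fisher_info_logistic(beta0, beta1)
    var1 = np.linalg.inv(info)[1, 1]                 # per-observation variance factor
    return int(np.ceil(var1 / target_se**2))
```

With both coefficients at zero, the weight is exactly 1/4 and the required n for a standard error of 0.1 on the slope is roughly 400; larger odds ratios inflate the variance factor and hence the sample size.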

    Multilevel modeling for the analysis of longitudinal blood pressure data in the Framingham Heart Study pedigrees

    BACKGROUND: The data arising from a longitudinal familial study have a complex correlation structure that cannot be modeled using classical methods for the analysis of familial data at a single time point. METHODS: To fit the longitudinal systolic blood pressure (SBP) pedigree data arising from the Framingham Heart Study, we proposed to use multilevel modeling. This approach distinguishes multiple levels of information, with repeated measurements (Level 1) made within individuals (Level 2), and individuals clustered within pedigrees (Level 3). Residuals from the subject-specific and pedigree-specific regression models were summed, both for the mean SBP and for the slope of SBP change over time, to define two new outcomes that were then used in a genome-wide linkage analysis. RESULTS: Evidence for linkage for the two outcomes (mean SBP and slope) was found in several chromosomal regions, with maximum LOD scores of 3.6 on chromosome 8 and 3.5 on chromosome 17 for the mean SBP, and 2.5 on chromosome 1 for the SBP slope. However, the linkage on chromosome 8 was only detected when the sample was restricted to subjects between ages 25 and 75 with at least four exams (Cohort 1) or three exams (Cohort 2). DISCUSSION: Multilevel modeling is a powerful approach for detecting genes involved in complex traits when longitudinal data are available. It allows the complex hierarchical data structure to be taken into account and therefore provides a better partitioning of random within-individual variation from other sources of variability (genetic or nongenetic).
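The construction of subject-level outcomes centered on pedigree averages can be sketched as follows. This is a deliberately simplified stand-in for the full three-level model: here each subject's observed mean SBP and OLS slope are merely centered on their pedigree averages, and all names are hypothetical.

```python
import numpy as np

def subject_mean_and_slope(ages, sbp):
    """Per-subject summary: observed mean SBP and OLS slope of SBP on age."""
    ages, sbp = np.asarray(ages, float), np.asarray(sbp, float)
    slope = np.polyfit(ages, sbp, 1)[0]   # least-squares slope, mm Hg per year
    return sbp.mean(), slope

def pedigree_adjusted_outcomes(records):
    """records: dict subject -> (pedigree, ages, sbp).
    Returns subject-level (mean, slope) outcomes centered on their
    pedigree averages -- a crude stand-in for summing subject- and
    pedigree-level residuals in the multilevel model."""
    raw = {s: (ped, *subject_mean_and_slope(a, y))
           for s, (ped, a, y) in records.items()}
    peds = {}
    for ped, m, sl in raw.values():
        peds.setdefault(ped, []).append((m, sl))
    ped_avg = {p: (np.mean([m for m, _ in v]), np.mean([sl for _, sl in v]))
               for p, v in peds.items()}
    return {s: (m - ped_avg[ped][0], sl - ped_avg[ped][1])
            for s, (ped, m, sl) in raw.items()}
```

The two centered quantities play the role of the mean-SBP and SBP-slope outcomes that are then carried into linkage analysis.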

    Application of bivariate mixed counting process models to genetic analysis of rheumatoid arthritis severity

    We sought to i) identify putative genetic determinants of the severity of rheumatoid arthritis in the NARAC (North American Rheumatoid Arthritis Consortium) data, ii) assess whether known candidate genes for disease status are also associated with disease severity in those affected, and iii) determine whether heterogeneity among the severity phenotypes can be explained by genetic and/or host factors. These questions were addressed by developing bivariate mixed counting process models for the numbers of tender and swollen joints, used to evaluate genetic association of candidate polymorphisms, such as DRB1, and of selected single-nucleotide polymorphisms in known candidate genes/regions for rheumatoid arthritis, including PTPN22, as well as those in the regions identified by a genome-wide linkage scan of disease severity using the dense Illumina single-nucleotide polymorphism panel. The counting process framework provides a flexible approach to accounting for the duration of rheumatoid arthritis, an attractive feature when modeling the severity of a disease. Moreover, we found a gain in efficiency when using a bivariate rather than a univariate counting process model.

    Comparison of Haseman-Elston regression analyses using single, summary, and longitudinal measures of systolic blood pressure

    To compare different strategies for linkage analyses of longitudinal quantitative trait measures, we applied the "revisited" Haseman-Elston (RHE) regression model (the cross product of centered sib-pair trait values is regressed on expected identical-by-descent allele sharing) to cross-sectional, summary, and repeated measurements of systolic blood pressure (SBP) values in replicate 34, randomly selected from the Genetic Analysis Workshop 13 simulated data. RHE linkage scans were performed without knowledge of the generating model using the following phenotypes derived from untreated SBP measurements: the first measurement, the last measurement, the mean, the change between the first and last measurements divided by the elapsed time, and the estimated linear regression slope coefficient. Estimates of allele sharing in sibling pairs were obtained from the complete genotype data of Cohorts 1 and 2, but linkage analyses were restricted to the five visits of the Cohort 2 siblings. Evidence for linkage was suggestive (p < 0.001) at markers neighboring SBP genes Gb35, Gs10, and Gs12, while weaker signals (p < 0.01) were obtained at markers mapping close to Gb34 and Gs11. Linkage to baseline genes Gb34 and Gb35 was best detected using the first SBP measurement, whereas linkage to slope genes Gs10-12 was best detected using the last or mean SBP value. At the markers on chromosomes 13 and 21 displaying the strongest linkage signals, marginal RHE-type models including repeated SBP measures were fit to test for overall and time-dependent genetic effects. These analyses assumed independent sib pairs and employed generalized estimating equations (GEE) with a first-order autoregressive working correlation structure to adjust for the serial correlation present among repeated observations from the same sibling pair.
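The RHE regression described in the first sentence can be sketched in a few lines. This is a generic illustration on simulated sib pairs, not the GAW13 analysis itself; function names are hypothetical.

```python
import numpy as np

def rhe_regression(trait_pairs, ibd_sharing):
    """'Revisited' Haseman-Elston regression: the cross product of
    mean-centered sib-pair trait values is regressed on the expected
    proportion of alleles shared identical-by-descent at a marker.
    Returns (slope, intercept); a positive slope suggests linkage."""
    t = np.asarray(trait_pairs, float)     # shape (n_pairs, 2)
    pi = np.asarray(ibd_sharing, float)    # expected IBD sharing in [0, 1]
    centered = t - t.mean()                # center traits at the sample mean
    cp = centered[:, 0] * centered[:, 1]   # sib-pair cross products
    slope, intercept = np.polyfit(pi, cp, 1)
    return slope, intercept
```

On data where the sib-pair trait covariance grows with IBD sharing, the fitted slope recovers the linkage signal; with no relationship between sharing and trait similarity it hovers around zero.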

    A Note on the Efficiencies of Sampling Strategies in Two-Stage Bayesian Regional Fine Mapping of a Quantitative Trait

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference, we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from the GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies.
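As a toy illustration of the genetic-model-selection step, posterior model probabilities can be approximated from BIC values under equal prior model probabilities. The exp(-BIC/2) marginal-likelihood approximation is a standard shortcut and an assumption here, not necessarily the authors' computation.

```python
import numpy as np

def model_posteriors(bic):
    """Approximate posterior probabilities for competing genetic models
    (e.g., additive / dominant / recessive) from their BIC values, using
    exp(-BIC/2) as a stand-in for the marginal likelihood and equal
    prior model probabilities."""
    b = np.asarray(bic, float)
    w = np.exp(-0.5 * (b - b.min()))   # shift by the minimum for numerical stability
    return w / w.sum()
```

A BIC difference of 2 between two models translates into a posterior ratio of about e, so the best-fitting genetic model quickly dominates as the evidence accumulates.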

    ‘You certainly don’t get promoted for just teaching’: experiences of education-focused academics in research-intensive universities

    Changes in the drivers of academic roles within higher education institutions globally have resulted in increased proportions of academics in education-focused (EF) posts. International and UK research suggests that EF academics can experience dissatisfaction with career progression and the perceived value of their work, including those in research-intensive universities. Previous UK research was conducted prior to the introduction of the TEF, which has altered the landscape. It was therefore timely to examine the current experience of EF academics in research-intensive universities through a theoretical lens, to understand barriers to and facilitators of career progression. This interview-based study used two theoretical frameworks, Feldman and Ng’s Framework for Career Mobility, Embeddedness, and Success and Kanter’s theory of Power within organisations, to explore the experiences of 43 EF academics across 12 research-intensive UK universities. Four contract types were identified, some of which allowed promotion. Three broad themes were derived from the data: (1) lack of agreement on the definition of education-focused academic roles; (2) the level of value and appreciation of educational expertise and its impact on education-focused academics; (3) career development opportunities for education-focused academics. Recommendations to further enhance the experience and career progression of EF academics in research-intensive universities include: ensuring transparency in recruitment into EF posts as to whether career development is possible within that post; continuing the sector-wide discussion on a definition of EF roles that recognises the complexity and diversity of the activity; and continued work to value and recognise educational expertise appropriately.

    Using an age-at-onset phenotype with interval censoring to compare methods of segregation and linkage analysis in a candidate region for elevated systolic blood pressure

    BACKGROUND: Genetic studies of complex disorders such as hypertension often utilize families selected for this outcome, usually with information obtained at a single time point. Since age-at-onset for diagnosed hypertension can vary substantially between individuals, a phenotype based on long-term follow-up in unselected families can yield valuable insights into this disorder for the general population. METHODS: Genetic analyses were conducted using 2884 individuals from the largest 330 families of the Framingham Heart Study. A longitudinal phenotype was constructed using the age at the examination when systolic blood pressure (SBP) first exceeded 139 mm Hg. An interval for age-at-onset was created, since the exact time of onset was unknown. Time-fixed (sex, study cohort) and time-varying (body mass index, daily cigarette and alcohol consumption) explanatory variables were included. RESULTS: Segregation analysis for a major gene effect demonstrated that the major gene effect parameter was sensitive to the choice of age-at-onset. Linkage analyses for age-at-onset were conducted using 1537 individuals in 52 families. Evidence for putative genes identified on chromosome 17 in a previous linkage study using a quantitative SBP phenotype for these data was not confirmed. CONCLUSIONS: Interval censoring for age-at-onset should not be ignored. Further research is needed to explain the inconsistent segregation results between the different age-at-onset models (regressive threshold and proportional hazards), as well as the inconsistent linkage results between the longitudinal phenotypes (age-at-onset and quantitative).
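Constructing the age-at-onset interval from exam records can be sketched as follows. This is a minimal illustration, assuming onset is bracketed by the last exam below the threshold and the first exam above it; the function name is hypothetical.

```python
def onset_interval(exam_ages, sbp_values, threshold=139.0):
    """Interval-censor the age at which SBP first exceeds the threshold.
    Returns (left, right): onset occurred in (left, right].
    right is None if onset was never observed (right-censoring);
    left is None if SBP already exceeded the threshold at the
    first exam (left-censoring)."""
    prev_age = None
    for age, sbp in zip(exam_ages, sbp_values):
        if sbp > threshold:
            return (prev_age, age)   # onset between the previous exam and this one
        prev_age = age
    return (prev_age, None)          # still below threshold at the last exam
```

Subjects already hypertensive at their first exam are left-censored, and those never exceeding the threshold are right-censored; a likelihood for interval-censored data can then use these bounds directly.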

    Genome-wide association analyses of North American Rheumatoid Arthritis Consortium and Framingham Heart Study data utilizing genome-wide linkage results

    The power of genome-wide association studies can be improved by incorporating information from previous study findings, for example, the results of genome-wide linkage analyses. Weighted false-discovery rate (FDR) control can incorporate genome-wide linkage scan results into the analysis of genome-wide association data by assigning single-nucleotide polymorphism (SNP) specific weights. Stratified FDR control can also be applied by stratifying the SNPs into high and low linkage strata. We applied these two FDR control methods to the data of the North American Rheumatoid Arthritis Consortium (NARAC) study and the Framingham Heart Study (FHS), combining both association and linkage analysis results. For the NARAC study, we used linkage results from a previous genome scan of the rheumatoid arthritis (RA) phenotype. For the FHS, we obtained genome-wide linkage scores from the same 550k SNP data used for the association analyses of three lipid phenotypes (HDL, LDL, TG). We confirmed some genes previously reported for association with RA and lipid phenotypes. Stratified and weighted FDR methods appear to give improved ranks to some of the replicated SNPs in the RA data, suggesting that linkage scan results can provide useful information to improve genome-wide association studies.
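The weighted FDR idea can be illustrated with a generic weighted Benjamini-Hochberg sketch, in which each p-value is divided by a SNP-specific weight (rescaled to average one) before ordinary BH is applied. This illustrates the general technique, not the exact procedure used in the study.

```python
import numpy as np

def weighted_bh(pvalues, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg FDR control: p-values are divided by
    positive SNP-specific weights that average to one (so the overall
    alpha budget is preserved), then the usual BH step-up rule is applied.
    Returns a boolean rejection mask in the original SNP order."""
    p = np.asarray(pvalues, float)
    w = np.asarray(weights, float)
    w = w * len(w) / w.sum()                 # rescale weights to average 1
    q = p / w                                # weighted p-values
    order = np.argsort(q)
    m = len(q)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = q[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

Doubling a SNP's weight halves its effective p-value, so SNPs in high-linkage regions clear the BH threshold more easily while the total FDR budget stays fixed, which is exactly how linkage information can raise the rank of true association signals.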

    Older adults have difficulty in decoding sarcasm

    This research was funded by the Leverhulme Trust, United Kingdom (F/00152/W). We acknowledge the assistance of Francis Quinn in collecting the data. Peer reviewed. Postprint.