
    Long-term patterns of body mass and stature evolution within the hominin lineage.

    Body size is a central determinant of a species' biology and adaptive strategy, but the number of reliable estimates of hominin body mass and stature has been insufficient to determine long-term patterns and subtle interactions in these size components within our lineage. Here, we analyse 254 body mass and 204 stature estimates from a total of 311 hominin specimens dating from 4.4 Ma to the Holocene using multi-level chronological and taxonomic analytical categories. The results demonstrate complex temporal patterns of body size variation, with phases of relative stasis interrupted by periods of rapid increase. The observed trajectories could result from punctuated increases at speciation events, but also from differential proliferation of large-bodied taxa or the extinction of small-bodied populations. Combined taxonomic and temporal analyses show that, in relation to australopithecines, early Homo is characterized by significantly larger average body mass and stature but retains considerable diversity, including small body sizes. Within later Homo, stature and body mass evolution follow different trajectories: average modern stature is maintained from ca 1.6 Ma, while consistently higher body masses are not established until the Middle Pleistocene at ca 0.5-0.4 Ma, likely caused by directional selection related to colonizing higher latitudes. Selection against small-bodied individuals (less than 40 kg; less than 140 cm) after 1.4 Ma is associated with a decrease in relative size variability in later Homo species compared with earlier Homo and australopithecines. The isolated small-bodied individuals of Homo naledi (ca 0.3 Ma) and Homo floresiensis (ca 100-60 ka) constitute important exceptions to these general patterns, adding further layers of complexity to the evolution of body size within the genus Homo. At the end of the Late Pleistocene and in the Holocene, body size in Homo sapiens declines on average, but also extends to lower limits not seen in comparable frequency since early Homo.
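    A minimal sketch of the kind of multi-level size summary described above, assuming the specimen-level estimates are held in a table; the column names, taxa and chronological bin edges below are illustrative placeholders, not values from the paper.

```python
# Hypothetical specimen-level table of hominin size estimates (values are made up).
import pandas as pd

estimates = pd.DataFrame({
    "taxon":      ["Australopithecus", "early Homo", "later Homo", "later Homo"],
    "age_ma":     [3.2, 1.8, 0.4, 0.05],
    "mass_kg":    [35.0, 55.0, 68.0, 62.0],
    "stature_cm": [125.0, 160.0, 172.0, 168.0],
})

# Bin specimens into broad chronological phases, then summarise body size per phase and taxon.
bins = [0.0, 0.5, 1.6, 4.5]                      # Ma, coarse illustrative boundaries
labels = ["<0.5 Ma", "0.5-1.6 Ma", ">1.6 Ma"]
estimates["phase"] = pd.cut(estimates["age_ma"], bins=bins, labels=labels)

summary = (estimates
           .groupby(["phase", "taxon"], observed=True)[["mass_kg", "stature_cm"]]
           .agg(["mean", "std", "count"]))
print(summary)
```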

    Clinical course, costs and predictive factors for response to treatment in carpal tunnel syndrome: The PALMS study protocol

    Background Carpal tunnel syndrome (CTS) is the most common neuropathy of the upper limb and a significant contributor to hand functional impairment and disability. Effective treatment options include conservative and surgical interventions; however, it is not possible at present to predict the outcome of treatment. The primary aim of this study is to identify which baseline clinical factors predict a good outcome from conservative treatment (by injection) or surgery in patients diagnosed with carpal tunnel syndrome. Secondary aims are to describe the clinical course and progression of CTS, and to describe and predict the UK cost of CTS to the individual, National Health Service (NHS) and society over a two-year period. Methods/Design In this prospective observational cohort study, patients presenting with clinical signs and symptoms typical of CTS, and in whom the diagnosis is confirmed by nerve conduction studies, are invited to participate. Data on putative predictive factors are collected at baseline and follow-up through patient questionnaires and include standardised measures of symptom severity, hand function, psychological and physical health, comorbidity and quality of life. Resource use and costs over the two-year period, such as prescribed medications and NHS and private healthcare contacts, are also collected through patient self-report at 6, 12, 18 and 24 months. The primary outcome used to classify treatment success or failure will be a 5-point global assessment of change. Secondary outcomes include changes in clinical symptoms, functioning, psychological health, quality of life and resource use. A multivariable model of factors which predict outcome and cost will be developed. Discussion This prospective cohort study will provide important data on the clinical course and UK costs of CTS over a two-year period and begin to identify predictive factors for treatment success from conservative and surgical interventions
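    The planned multivariable model can be sketched, in a hedged way, as a logistic regression of dichotomised treatment success on baseline factors; the variable names and simulated data below are illustrative assumptions, not the actual PALMS predictors or modelling plan.

```python
# Illustrative logistic regression of treatment success on baseline factors (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "success": rng.integers(0, 2, n),          # 1 = good outcome on the 5-point global change scale (dichotomised)
    "symptom_severity": rng.normal(3, 1, n),   # baseline symptom severity score
    "psych_distress": rng.normal(0, 1, n),     # baseline psychological health measure
    "comorbidity": rng.integers(0, 4, n),      # comorbidity count
    "age": rng.normal(55, 10, n),
})

model = smf.logit("success ~ symptom_severity + psych_distress + comorbidity + age",
                  data=df).fit(disp=False)
print(model.summary())
```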

    Interacting Supernovae: Types IIn and Ibn

    Supernovae (SNe) that show evidence of strong shock interaction between their ejecta and pre-existing, slower circumstellar material (CSM) constitute an interesting, diverse, and still poorly understood category of explosive transients. They are extremely interesting chiefly because they tell us that in a subset of stellar deaths, the progenitor star may become wildly unstable in the years, decades, or centuries before explosion. This is something that has not been included in standard stellar evolution models, but it may significantly change the end product and yield of that evolution, and it complicates our attempts to map SNe to their progenitors. They are also interesting because CSM interaction is an efficient engine for making bright transients, allowing super-luminous transients to arise from normal SN explosion energies, and allowing transients of normal SN luminosities to arise from sub-energetic explosions or low radioactivity yield. CSM interaction shrouds the fast ejecta in bright shock emission, obscuring our normal view of the underlying explosion, and the radiation hydrodynamics of the interaction is challenging to model. The CSM interaction may also be highly non-spherical, perhaps linked to binary interaction in the progenitor system. In some cases, these complications make it difficult to tell definitively whether an event is a core-collapse or thermonuclear explosion, or to discern between a non-terminal eruption, a failed SN, or a weak SN. Efforts to uncover the physical parameters of individual events and connections to possible progenitor stars make this a rapidly evolving topic that continues to challenge paradigms of stellar evolution. (Comment: final draft of a chapter in the "SN Handbook"; accepted; 25 pages, 3 figures.)

    Markedly Divergent Tree Assemblage Responses to Tropical Forest Loss and Fragmentation across a Strong Seasonality Gradient

    We examine the effects of forest fragmentation on the structure and composition of tree assemblages within three seasonal and aseasonal forest types of southern Brazil, including evergreen, Araucaria, and deciduous forests. We sampled three southernmost Atlantic Forest landscapes, including the largest continuous forest protected areas within each forest type. Tree assemblages in each forest type were sampled within 10 plots of 0.1 ha in both continuous forests and 10 adjacent forest fragments. All trees within each plot were assigned to trait categories describing their regeneration strategy, vertical stratification, seed-dispersal mode, seed size, and wood density. We detected differences among both forest types and landscape contexts in overall tree species richness, and in the density and species richness of functional groups defined by regeneration strategy, seed-dispersal mode and wood density. Overall, evergreen forest fragments exhibited the largest deviations from continuous forest plots in assemblage structure. Evergreen, Araucaria and deciduous forests diverge in the functional composition of their tree floras, particularly in relation to regeneration strategy and stress tolerance. By supporting a more diversified light-demanding and stress-tolerant flora with reduced richness and abundance of shade-tolerant, old-growth species, both deciduous and Araucaria forest tree assemblages are intrinsically more resilient to contemporary human disturbances, including fragmentation-induced edge effects, in terms of species erosion and functional shifts. We suggest that these intrinsic differences between forest types in the direction and magnitude of responses to changes in landscape structure should guide a wide range of conservation strategies in restoring fragmented tropical forest landscapes worldwide

    Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials

    Background The efficacy of antidepressant medication has been shown empirically to be overestimated due to publication bias, but this has only been inferred statistically with regard to psychological treatment for depression. We assessed directly the extent of study publication bias in trials examining the efficacy of psychological treatment for depression. Methods and Findings We identified US National Institutes of Health grants awarded to fund randomized clinical trials comparing psychological treatment to control conditions or other treatments in patients diagnosed with major depressive disorder for the period 1972–2008, and we determined whether those grants led to publications. For studies that were not published, data were requested from investigators and included in the meta-analyses. Thirteen (23.6%) of the 55 funded grants that began trials did not result in publications, and two others never started. Among comparisons to control conditions, adding unpublished studies (Hedges' g = 0.20; 95% CI -0.11 to 0.51; k = 6) to published studies (g = 0.52; 95% CI 0.37 to 0.68; k = 20) reduced the psychotherapy effect size point estimate (g = 0.39; 95% CI 0.08 to 0.70) by 25%. Moreover, these findings may overestimate the "true" effect of psychological treatment for depression, as outcome reporting bias could not be examined quantitatively. Conclusion The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy. Both are efficacious, but not to the extent that the published literature would suggest. Funding agencies and journals should archive both original protocols and raw data from treatment trials to allow the detection and correction of outcome reporting bias. Clinicians, guideline developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression
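    The 25% figure follows directly from the two point estimates quoted above; a quick check of that arithmetic (the combined value itself comes from the paper's random-effects meta-analysis, not from this calculation):

```python
# Relative reduction in the psychotherapy effect size after adding unpublished trials.
g_published = 0.52   # Hedges' g, published trials only (k = 20)
g_combined  = 0.39   # Hedges' g, published plus unpublished trials (k = 26)

reduction = (g_published - g_combined) / g_published
print(f"Relative reduction: {reduction:.0%}")   # -> 25%
```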

    ADAM33 polymorphisms are associated with COPD and lung function in long-term tobacco smokers

    Background Variation in ADAM33 has been shown to be important in the development of asthma and altered lung function. This relationship, however, has not been investigated in the population most susceptible to COPD: long-term tobacco smokers. We evaluated the association between polymorphisms in the ADAM33 gene and COPD and lung function in long-term tobacco smokers. Methods Caucasian subjects, at least 50 years old, who smoked ≥ 20 pack-years (n = 880) were genotyped for 25 single nucleotide polymorphisms (SNPs) in ADAM33. COPD was defined as an FEV1/FVC ratio < 70% and percent-predicted (pp)FEV1 < 75% (n = 287). The control group had an FEV1/FVC ratio ≥ 70% and ppFEV1 ≥ 80% (n = 311) despite ≥ 20 pack-years of smoking. Logistic and linear regressions were used for the analysis. Age, sex, and smoking status were considered as potential confounders. Results Five SNPs in ADAM33 were associated with COPD (Q-1, intronic: p < 0.003; S1, Ile → Val: p < 0.003; S2, Gly → Gly: p < 0.04; V-1, intronic: p < 0.002; V4, in the 3' untranslated region: p < 0.007). Q-1, S1 and V-1 were also associated with ppFEV1, FEV1/FVC ratio and ppFEF25–75 (p values 0.001–0.02). S2 was associated with FEV1/FVC ratio (p < 0.05). The association between S1 and residual volume showed a trend toward significance (p < 0.07). Linkage disequilibrium and haplotype analyses suggested that S1 had the strongest degree of association with COPD and pulmonary function abnormalities. Conclusion Five SNPs in ADAM33 were associated with COPD and lung function in long-term smokers. Functional studies will be needed to evaluate the biologic significance of these polymorphisms in the pathogenesis of COPD.
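    A hedged sketch of the per-SNP association test described above: logistic regression of COPD status on genotype, adjusting for age, sex and smoking status. The additive genotype coding, column names and simulated data are assumptions for illustration, not details taken from the study.

```python
# Illustrative per-SNP logistic regression with covariate adjustment (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 598   # 287 cases + 311 controls, as in the abstract
df = pd.DataFrame({
    "copd":           rng.integers(0, 2, n),   # 1 = COPD case
    "genotype":       rng.integers(0, 3, n),   # minor-allele count at one ADAM33 SNP (e.g. S1)
    "age":            rng.normal(62, 7, n),
    "male":           rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
})

fit = smf.logit("copd ~ genotype + age + male + current_smoker", data=df).fit(disp=False)
print(np.exp(fit.params))   # odds ratio for each term
```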

    Systematic review and meta-analysis of the diagnostic accuracy of ultrasonography for deep vein thrombosis

    Background Ultrasound (US) has largely replaced contrast venography as the definitive diagnostic test for deep vein thrombosis (DVT). We aimed to derive a definitive estimate of the diagnostic accuracy of US for clinically suspected DVT and identify study-level factors that might predict accuracy. Methods We undertook a systematic review, meta-analysis and meta-regression of diagnostic cohort studies that compared US to contrast venography in patients with suspected DVT. We searched Medline, EMBASE, CINAHL, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Controlled Trials Register, Database of Reviews of Effectiveness, the ACP Journal Club, and citation lists (1966 to April 2004). Random effects meta-analysis was used to derive pooled estimates of sensitivity and specificity. Random effects meta-regression was used to identify study-level covariates that predicted diagnostic performance. Results We identified 100 cohorts comparing US to venography in patients with suspected DVT. Overall sensitivity (95% confidence interval) was 94.2% (93.2 to 95.0) for proximal DVT and 63.5% (59.8 to 67.0) for distal DVT, and specificity was 93.8% (93.1 to 94.4). Duplex US had pooled sensitivity of 96.5% (95.1 to 97.6) for proximal DVT and 71.2% (64.6 to 77.2) for distal DVT, and specificity of 94.0% (92.8 to 95.1). Triplex US had pooled sensitivity of 96.4% (94.4 to 97.1) for proximal DVT and 75.2% (67.7 to 81.6) for distal DVT, and specificity of 94.3% (92.5 to 95.8). Compression US alone had pooled sensitivity of 93.8% (92.0 to 95.3) for proximal DVT and 56.8% (49.0 to 66.4) for distal DVT, and specificity of 97.8% (97.0 to 98.4). Sensitivity was higher in more recently published studies and in cohorts with a higher prevalence of DVT and more proximal DVT, and was lower in cohorts that reported interpretation by a radiologist. Specificity was higher in cohorts that excluded patients with previous DVT. No studies were identified that compared repeat US to venography in all patients. Repeat US appears to have a positive yield of 1.3%, with 89% of these positives confirmed by venography. Conclusion Combined colour Doppler US techniques have optimal sensitivity, while compression US has optimal specificity for DVT. However, all estimates are subject to substantial unexplained heterogeneity. The role of repeat scanning is very uncertain and based upon limited data
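    A minimal sketch of random-effects pooling of study-level sensitivities on the logit scale (a DerSimonian–Laird approach, one common way such estimates are pooled); the per-study counts below are invented for illustration, whereas the review pooled 100 cohorts.

```python
# DerSimonian-Laird random-effects pooling of sensitivities (illustrative counts).
import numpy as np

tp    = np.array([45, 90, 28, 60])    # true positives per study
n_pos = np.array([48, 95, 31, 65])    # venography-confirmed DVT cases per study

sens  = tp / n_pos
logit = np.log(sens / (1 - sens))
var   = 1 / tp + 1 / (n_pos - tp)     # variance of the logit-transformed proportion

w = 1 / var                           # fixed-effect weights
q = np.sum(w * (logit - np.sum(w * logit) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(tp) - 1)) / c)   # between-study variance

w_star = 1 / (var + tau2)             # random-effects weights
pooled = np.sum(w_star * logit) / np.sum(w_star)
print(f"Pooled sensitivity: {1 / (1 + np.exp(-pooled)):.3f}")
```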

    Hypoxia-inducible factor-1α gene polymorphisms and cancer risk: a meta-analysis

    Background The results from published studies on the association between hypoxia-inducible factor-1α (HIF-1α) polymorphisms and cancer risk are conflicting. In this meta-analysis, we aimed to investigate the association between the HIF-1α 1772 C/T and 1790 G/A polymorphisms and cancer. Methods The meta-analysis for the 1772 C/T polymorphism included 4131 cancer cases and 5387 controls, and for the 1790 G/A polymorphism included 2058 cancer cases and 3026 controls. Allelic and genotypic comparisons between cases and controls were evaluated. Subgroup analyses by cancer type, ethnicity, and gender were also performed. We included prostate cancer in the male subgroup, and female-specific cancers in the female subgroup. Results For the 1772 C/T polymorphism, the analysis showed that the T allele and the TT genotype were significantly associated with higher cancer risk: odds ratio (OR) = 1.29 [95% confidence interval (CI) 1.01–1.65], P = 0.04, Pheterogeneity < 0.00001, and OR = 2.18 [95% CI 1.32–3.62], P = 0.003, Pheterogeneity = 0.02, respectively. The effect of the TT genotype on cancer risk was particularly evident in Caucasians and in female subjects: OR = 2.40 [95% CI 1.26–4.59], P = 0.008, Pheterogeneity = 0.02, and OR = 3.60 [95% CI 1.17–11.11], P = 0.03, Pheterogeneity = 0.02, respectively. For the 1790 G/A polymorphism, the pooled ORs for the allelic frequency comparison and the dominant model comparison suggested a significant association with decreased breast cancer risk: OR = 0.28 [95% CI 0.08–0.90], P = 0.03, Pheterogeneity = 0.45, and OR = 0.29 [95% CI 0.09–0.97], P = 0.04, Pheterogeneity = 0.41, respectively. The frequency of the HIF-1α 1790 A allele was very low, and only two studies were included in the breast cancer subgroup. Conclusions Our meta-analysis suggests that the HIF-1α 1772 C/T polymorphism is significantly associated with higher cancer risk, and that the 1790 G/A polymorphism is significantly associated with decreased breast cancer risk. The effect of the 1772 C/T polymorphism is particularly evident in Caucasians and in female subjects. Only female-specific cancers were included in the female subgroup, which indicates that the 1772 C/T polymorphism is significantly associated with an increased risk of female-specific cancers. The association between the 1790 G/A polymorphism and lower breast cancer risk could be due to chance.
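    The quantity being pooled above is the study-level odds ratio; a worked example of computing an OR with a 95% confidence interval from a 2x2 table of allele counts (the counts here are hypothetical, not taken from any included study):

```python
# Odds ratio and 95% CI from a hypothetical 2x2 table of allele counts.
import math

a, b = 120, 880    # cases:    T alleles, C alleles
c, d = 90, 910     # controls: T alleles, C alleles

odds_ratio = (a * d) / (b * c)
se_log_or  = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```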

    Association of dialysis facility-level hemoglobin measurement and erythropoiesis-stimulating agent dose adjustment frequencies with dialysis facility-level hemoglobin variation: a retrospective analysis

    Background A key goal of anemia management in dialysis patients is to maintain patients' hemoglobin (Hb) levels consistently within a target range. Our aim in this study was to assess the association of facility-level practice patterns representing Hb measurement and erythropoiesis-stimulating agent (ESA) dose adjustment frequencies with facility-level Hb variation. Methods This was a retrospective observational database analysis of patients in dialysis facilities affiliated with large dialysis organizations as of July 01, 2006, covering a follow-up period from July 01, 2006 to June 30, 2009. A total of 2,763 facilities representing 436,442 unique patients were included. The predictors evaluated were facility-level Hb measurement and ESA dose adjustment frequencies, and the outcome measured was facility-level Hb variation. Results First to 99th percentile ranges for facility-level Hb measurement and ESA dose adjustment frequencies were approximately once per month to once per week, and approximately once per 3 months to once per 3 weeks, respectively. Facility-level Hb measurement and ESA dose adjustment frequencies were inversely associated with Hb variation. Modeling results suggested that more frequent Hb measurement (once per week rather than once per month) was associated with approximately 7% to 9% and 6% to 8% gains in the proportion of patients with Hb levels within ±1 and ±2 g/dL of the mean, respectively. Similarly, more frequent ESA dose adjustment (once per 2 weeks rather than once per 3 months) was associated with approximately 6% to 9% and 5% to 7% gains in the proportion of patients in these respective Hb ranges. Conclusions Frequent Hb measurements and timely ESA dose adjustments in dialysis patients are associated with lower facility-level Hb variation and an increase in the proportion of patients within ±1 and ±2 g/dL of the facility-level Hb mean.
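    The facility-level outcome described above can be sketched as the share of patients whose Hb falls within ±1 and ±2 g/dL of the facility mean; the simulated values below stand in for the patient-level Hb data that a real analysis would use.

```python
# Proportion of a facility's patients within +/-1 and +/-2 g/dL of the facility Hb mean (simulated).
import numpy as np

rng = np.random.default_rng(2)
facility_hb = rng.normal(loc=11.5, scale=1.2, size=150)   # g/dL, one mean Hb value per patient

center = facility_hb.mean()
within_1 = np.mean(np.abs(facility_hb - center) <= 1.0)
within_2 = np.mean(np.abs(facility_hb - center) <= 2.0)
print(f"Within +/-1 g/dL: {within_1:.1%}; within +/-2 g/dL: {within_2:.1%}")
```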