
    Systematic comparative validation of self-report measures of sedentary time against an objective measure of postural sitting (activPAL)

    Background: Sedentary behaviour is a public health concern that requires surveillance and epidemiological research. For such large-scale studies, self-report tools are a pragmatic measurement solution. A large number of self-report tools are currently in use, but few have been validated against an objective measure of sedentary time, and there is no comparative information between tools to guide choice or to enable comparison between studies. The aim of this study was to provide a systematic comparison, generalisable to all tools, of the validity of self-report measures of sedentary time against a gold-standard objective monitor of sedentary time.

    Methods: Cross-sectional data from three cohorts (N = 700) were used in this validation study. Eighteen self-report measures of sedentary time, based on the TAxonomy of Self-report SB Tools (TASST) framework, were compared against an objective measure of postural sitting (activPAL) to provide information, generalisable to all existing tools, on agreement and precision using Bland-Altman statistics, on criterion validity using Pearson correlation, and on data loss.

    Results: All self-report measures showed poor accuracy compared with the objective measure of sedentary time, with very wide limits of agreement and poor precision (random error > 2.5 h). Most tools under-reported total sedentary time and demonstrated low correlations with the objective data. The type of assessment used by the tool, whether direct, proxy, or a composite measure, influenced the measurement characteristics. Proxy measures (TV time) and single-item direct measures using a visual analogue scale to assess the proportion of the day spent sitting showed the best combination of precision and data loss. The recall period (e.g. previous week) had little influence on measurement characteristics.

    Conclusion: Self-report measures of sedentary time result in large bias, poor precision and low correlation with an objective measure of sedentary time. Choice of tool depends on the research context, design and question, and can be guided by this systematic comparative validation; for population surveillance, a visual analogue scale with a 7-day recall period is recommended. Comparison between studies, and improved population estimates of average sedentary time, are possible with the comparative correction factors provided.
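
    As a point of reference for the methods named above, the following is a minimal sketch (hypothetical data and variable names, not taken from the study) of how Bland-Altman bias and 95% limits of agreement, and the Pearson correlation used for criterion validity, are typically computed for paired self-report and activPAL estimates:

        import numpy as np
        from scipy import stats

        # Hypothetical paired estimates of daily sedentary time (hours/day)
        self_report = np.array([7.5, 9.0, 6.0, 10.5, 8.0])   # questionnaire
        activpal    = np.array([9.2, 9.8, 8.1, 10.9, 9.5])   # objective monitor

        diff = self_report - activpal
        bias = diff.mean()                        # mean over/under-reporting
        half_loa = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits of agreement

        r, p = stats.pearsonr(self_report, activpal)  # criterion validity

        print(f"bias = {bias:.2f} h, 95% LoA = {bias - half_loa:.2f} to {bias + half_loa:.2f} h")
        print(f"Pearson r = {r:.2f} (p = {p:.3f})")

    The half-width of the limits of agreement (1.96 × SD of the differences) is one common way to express the random error quoted in the Results.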

    Loss of chromosome 10 is an independent prognostic factor in high-grade gliomas

    Loss of heterozygosity (LOH) for chromosome 10 is the most frequent genetic abnormality observed in high-grade gliomas. We have used fluorescent microsatellite markers to examine a series of 83 patients, 34 with anaplastic astrocytoma (grade 3) and 49 with glioblastoma multiforme (grade 4), for LOH of chromosome 10. Genotype analysis revealed LOH for all informative chromosome 10 markers in 12 (35%) grade 3 and 29 (59%) grade 4 tumours, respectively, while partial LOH was found in a further eight (24%) grade 3 and ten (20%) grade 4 tumours. Partial LOH was confined to the long arm (10q) in six and the short arm (10p) in three cases, while alleles from both arms were lost in four cases. Five tumours (one grade 3 and four grade 4) showed heterogeneity with respect to loss at different loci. Any chromosome 10 loss was correlated with poorer performance status at presentation (χ², P = 0.005) and with increasing age at diagnosis (Mann–Whitney U-test, P = 0.034), but not with tumour grade (χ², P = 0.051). A Cox multivariate model for survival duration identified age (proportional hazards (PH), P = 0.004), grade (PH, P = 0.012) and any loss of chromosome 10 (PH, P = 0.009) as the only independent prognostic variables. Specifically, LOH for chromosome 10 identified a subgroup of patients with grade 3 tumours who had a significantly shorter survival time. We conclude that LOH for chromosome 10 is an independent, adverse prognostic variable in high-grade glioma. © 1999 Cancer Research Campaign.
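
    A minimal sketch of the kind of Cox proportional hazards model described above (survival as a function of age, grade and any chromosome 10 loss); the data frame, column names and the lifelines package are illustrative assumptions, not the authors' actual data or software:

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical patient-level data
        df = pd.DataFrame({
            "survival_months": [14, 9, 22, 30, 7, 18, 11, 26],
            "died":            [1, 1, 1, 0, 1, 1, 0, 1],     # event indicator
            "age":             [62, 55, 41, 38, 70, 47, 66, 52],
            "grade4":          [1, 1, 0, 0, 1, 0, 1, 0],     # 1 = glioblastoma, 0 = anaplastic astrocytoma
            "chr10_loss":      [1, 0, 0, 1, 1, 0, 1, 0],     # any LOH for chromosome 10
        })

        # Small ridge penalty keeps the fit stable on this tiny illustrative sample
        cph = CoxPHFitter(penalizer=0.1)
        cph.fit(df, duration_col="survival_months", event_col="died")
        cph.print_summary()   # hazard ratios and p-values for each covariate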

    Proliferation and aneusomy predict survival of young patients with astrocytoma grade II

    The clinical course of astrocytoma grade II (AII) is highly variable and not reflected by histological characteristics. Higher age is one of the best prognostic factors and identifies rapidly progressive AII. For patients over 35 years of age, aggressive treatment is normally advocated. For patients under 35 years there is no clear guidance on treatment choice, and the necessity of histopathological diagnosis is therefore often questioned. We studied the additional prognostic value of the proliferation index and the detection of genetic aberrations in patients with AII. The tumour samples were obtained by stereotactic biopsy or tumour resection, and patients were divided into two age groups: 18–34 years (n=19) and ≥35 years (n=28). Factors tested included the proliferation (Ki-67) index and numerical aberrations of chromosomes 1, 7 and 10, as detected by in situ hybridisation (ISH). The results show that age is a prognostic indicator when studied in the total patient group, with patients above 35 years showing a relatively poor prognosis. An increased proliferation index in the presence of aneusomy appears to identify a subgroup of patients with poor prognosis more accurately than the proliferation index alone. We conclude that histologically classified cases of AII comprise a heterogeneous group of tumours with different biological and genetic constitutions, which exhibit a highly variable clinical course. Immunostaining for Ki-67, in combination with the detection of aneusomy by ISH, allows the identification of a subgroup of patients with rapidly progressive AII. This is an extra argument not to defer stereotactic biopsy in young patients with radiological suspicion of AII.
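
    The subgroup comparison implied above can be sketched as follows; the data, the "high-risk" definition (raised Ki-67 index together with aneusomy on ISH) and the use of a Kaplan-Meier estimate with a log-rank test via the lifelines package are assumptions for illustration only, not the analysis reported in the paper:

        import pandas as pd
        from lifelines import KaplanMeierFitter
        from lifelines.statistics import logrank_test

        # Hypothetical follow-up data; high_risk = raised Ki-67 index AND aneusomy present
        df = pd.DataFrame({
            "years":      [2.1, 5.4, 8.0, 1.3, 6.7, 3.0, 9.2, 2.8],
            "progressed": [1,   0,   0,   1,   0,   1,   0,   1],
            "high_risk":  [1,   0,   0,   1,   0,   1,   0,   1],
        })

        high = df[df.high_risk == 1]
        low  = df[df.high_risk == 0]

        km = KaplanMeierFitter()
        km.fit(high["years"], event_observed=high["progressed"], label="Ki-67 high + aneusomy")
        print(km.median_survival_time_)   # median survival in the high-risk subgroup

        result = logrank_test(high["years"], low["years"],
                              event_observed_A=high["progressed"],
                              event_observed_B=low["progressed"])
        print(result.p_value)             # difference in survival between subgroups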

    Validity of Resting Energy Expenditure Predictive Equations before and after an Energy-Restricted Diet Intervention in Obese Women

    Background: We investigated the validity of REE predictive equations before and after a 12-week energy-restricted diet intervention in Spanish obese women (30 kg/m² < BMI < 40 kg/m²).

    Methods: We measured REE (indirect calorimetry), body weight, height, fat mass (FM) and fat-free mass (FFM; dual X-ray absorptiometry) in 86 obese Caucasian premenopausal women aged 36.7±7.2 y, before and after (n = 78 women) the intervention. We investigated the accuracy of ten REE predictive equations using weight, height, age, FFM and FM.

    Results: At baseline, the most accurate equation was that of Mifflin et al. (Am J Clin Nutr 1990; 51: 241–247) using weight (bias: −0.2%, P = 0.982; 74% accurate predictions). This level of accuracy was not reached after the diet intervention (24% accurate predictions). After the intervention, the lowest bias was found with the Owen et al. (Am J Clin Nutr 1986; 44: 1–19) equation using weight (bias: −1.7%, P = 0.044; 81% accurate predictions), yet it provided only 53% accurate predictions at baseline.

    Conclusions: There is wide variation in the accuracy of REE predictive equations before and after weight loss in non-morbidly obese women. These results are especially relevant in the context of the challenging weight-regain phenomenon in the overweight/obese population.

    The present study was supported by the University of the Basque Country (UPV 05/80), the Social Foundation of Caja Vital-Kutxa, the Department of Health of the Government of the Basque Country (2008/111062), and the Spanish Ministry of Science and Innovation (RYC-2010-05957).
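
    For context, the two best-performing equations mentioned above can be sketched in their commonly cited weight-based forms for women; the coefficients are quoted from the wider literature rather than from this study, and the ±10% accuracy threshold is the usual convention in REE validation work, so treat the whole block as illustrative:

        # Commonly cited forms for women; verify coefficients against the original papers.
        def mifflin_1990_female(weight_kg, height_cm, age_y):
            # Mifflin et al., Am J Clin Nutr 1990; 51: 241-247
            return 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_y - 161.0

        def owen_1986_female(weight_kg):
            # Owen et al., Am J Clin Nutr 1986; 44: 1-19 (weight-only equation)
            return 795.0 + 7.18 * weight_kg

        measured_ree = 1650.0                          # kcal/day, hypothetical indirect calorimetry value
        predicted = mifflin_1990_female(92.0, 163.0, 37.0)

        bias_pct = 100.0 * (predicted - measured_ree) / measured_ree
        accurate = abs(bias_pct) <= 10.0               # "accurate prediction": within +/-10% of measured REE
        print(f"predicted = {predicted:.0f} kcal/day, bias = {bias_pct:+.1f}%, accurate = {accurate}")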

    Criteria for the use of omics-based predictors in clinical trials: Explanation and elaboration

    High-throughput 'omics' technologies that generate molecular profiles for biospecimens have been extensively used in preclinical studies to reveal molecular subtypes and elucidate the biological mechanisms of disease, and in retrospective studies on clinical specimens to develop mathematical models to predict clinical endpoints. Nevertheless, the translation of these technologies into clinical tests that are useful for guiding management decisions for patients has been relatively slow. It can be difficult to determine when the body of evidence for an omics-based test is sufficiently comprehensive and reliable to support claims that it is ready for clinical use, or even that it is ready for definitive evaluation in a clinical trial in which it may be used to direct patient therapy. Reasons for this difficulty include the exploratory and retrospective nature of many of these studies, the complexity of these assays and their application to clinical specimens, and the many potential pitfalls inherent in the development of mathematical predictor models from the very high-dimensional data generated by these omics technologies. Here we present a checklist of criteria to consider when evaluating the body of evidence supporting the clinical use of a predictor to guide patient therapy. Included are issues pertaining to specimen and assay requirements, the soundness of the process for developing predictor models, expectations regarding clinical study design and conduct, and attention to regulatory, ethical, and legal issues. The proposed checklist should serve as a useful guide to investigators preparing proposals for studies involving the use of omics-based tests. The US National Cancer Institute plans to refer to these guidelines for review of proposals for studies involving omics tests, and it is hoped that other sponsors will adopt the checklist as well. © 2013 McShane et al.; licensee BioMed Central Ltd