Differing Methodologies Are Required to Estimate Prevalence of Dementia: Single Study Types Are No Longer Reliable
Abstract: Population-based surveys have been used to estimate the community prevalence of dementia, but they achieve low response fractions owing, among other things, to difficulties in obtaining informed consent from people with diminished capacity. Cohort studies of younger people are subject to recruitment bias and non-random drop-out. Dementia registries can delineate sub-types of dementia but have limited population coverage and are costly to maintain. Administrative datasets are inexpensive but may be subject to selection bias and uncertain sensitivity. We propose that the astute combination of methodologies, including assessment of the coverage and validity of administrative datasets, is the most cost-effective way to estimate and monitor community prevalence.
Methodological strengths and weaknesses of cohorts and administrative data for developing population estimates of dementia
Background: There are three main methods of obtaining population data on the incidence and/or prevalence of dementia: cross-sectional surveys (which may be repeated over time); cohort studies that follow people initially without dementia and count newly diagnosed cases over time; and administrative health records (including linkage of records from multiple sources). The major challenges for all these methods are: how well the study sample represents the target population, the accuracy of diagnoses, and the costs of maintaining the data collection over time. Method: In a project to improve Australia’s dementia statistics, we conducted a series of studies to compare population estimates of dementia obtained using different methods. Firstly, we used existing general health studies of community-based cohorts, supplemented by linkage to administrative records of hospital and emergency department admissions, assessments for aged care support, medication prescriptions, and death certificates to estimate the cumulative incidence of dementia. Secondly, we created cohorts based on administrative records for entire populations. Thirdly, we assessed the validity of the identification of people with dementia in the record linkage cohorts in various ways, including linkage with studies that had obtained clinical diagnosis through the standardised assessment of participants. Result: We will present empirical results illustrating the strengths and limitations of these different approaches. In summary, community-based cohort studies lack representativeness of national or regional populations due to recruitment biases and differential loss to follow-up. Cohort studies are also costly to maintain over the long time needed for participants to develop dementia. In contrast, the use of administrative records is relatively inexpensive, but is subject to policy changes that impact on the continuity of data coverage and quality. 
Population coverage may also be problematic for administrative data if important sources of care for people with dementia are not included; for example, in Australia linkable primary care data are not available. The validation studies showed that accuracy was highly dependent on the data sources used, and that identification of dementia type was unreliable. Conclusion: Prevalence and trend data for dementia obtained from multiple sources, combined with detailed contextual knowledge and careful analysis, are needed to provide accurate population estimates.
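The validation step described above, comparing dementia identification in linked administrative records against clinical diagnoses from standardised assessment, reduces to a two-by-two cross-tabulation. A minimal sketch, with purely illustrative counts (not figures from the study):

```python
# Hypothetical sketch: validating dementia case identification in linked
# administrative records against a clinically assessed sub-cohort.
# The 2x2 counts below are illustrative, not from the study.

def ascertainment_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity and positive predictive value
    of administrative case identification versus clinical diagnosis."""
    sensitivity = tp / (tp + fn)   # clinical cases also flagged in records
    specificity = tn / (tn + fp)   # non-cases correctly left unflagged
    ppv = tp / (tp + fp)           # flagged records that are true cases
    return sensitivity, specificity, ppv

# Illustrative counts: rows = flagged in records, columns = clinical diagnosis
sens, spec, ppv = ascertainment_metrics(tp=180, fp=40, fn=70, tn=1710)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```

Low sensitivity with high specificity, a common pattern for administrative ascertainment, would understate prevalence unless corrected for using exactly this kind of validation data.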
Subendocardial contractile impairment in chronic ischemic myocardium: assessment by strain analysis of 3T tagged CMR
<p>Abstract</p> <p>Background</p> <p>The purpose of this study was to quantify myocardial strain in the subendocardial and epicardial layers of the left ventricle (LV) using tagged cardiovascular magnetic resonance (CMR) and to investigate the transmural extent of contractile impairment in chronically ischemic myocardium.</p> <p>Methods</p> <p>3T tagged CMR was performed at rest in 12 patients with severe coronary artery disease who had been scheduled for coronary artery bypass grafting. Circumferential strain (C-strain) at end-systole in the subendocardial and epicardial layers was measured from short-axis tagged images of the LV using available software (Intag; Osirix). Myocardial segments were classified as stenotic or non-stenotic by invasive coronary angiography, and as ischemic or non-ischemic by stress myocardial perfusion scintigraphy. Differences in C-strain between groups were analyzed using the Mann-Whitney U-test, and the diagnostic capability of C-strain was assessed by receiver operating characteristic analysis.</p> <p>Results</p> <p>The absolute subendocardial C-strain was significantly lower for stenotic (-7.5 ± 12.6%) than for non-stenotic segments (-18.8 ± 10.2%, p < 0.0001). There was no difference in epicardial C-strain between the two groups. A cutoff threshold for subendocardial C-strain differentiated stenotic from non-stenotic segments with a sensitivity of 77%, a specificity of 70%, and an area under the curve (AUC) of 0.76. The absolute subendocardial C-strain was also significantly lower for ischemic (-6.7 ± 13.1%) than for non-ischemic segments (-21.6 ± 7.0%, p < 0.0001), as was the absolute epicardial C-strain (-5.1 ± 7.8% vs. -9.6 ± 9.1%, p < 0.05). 
A cutoff threshold for subendocardial C-strain differentiated ischemic from non-ischemic segments with a sensitivity of 86%, a specificity of 84%, and an AUC of 0.86.</p> <p>Conclusions</p> <p>Strain analysis of tagged CMR can non-invasively demonstrate predominantly subendocardial contractile impairment in chronically ischemic myocardium at rest.</p>
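As a rough illustration of the statistics used above (not the study's own code), the group comparison and ROC analysis can be sketched on synthetic C-strain values drawn from the reported group means and standard deviations; the group sizes and random seed are arbitrary assumptions. The sketch uses the identity AUC = U/(n1·n2) relating the Mann-Whitney U statistic to the area under the ROC curve:

```python
# Sketch of the abstract's group comparison and ROC analysis on synthetic
# C-strain values (units: %, negative = contraction). The reported means
# and SDs are used, but group sizes and the seed are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
stenotic = rng.normal(-7.5, 12.6, 60)        # assumed number of segments
non_stenotic = rng.normal(-18.8, 10.2, 120)  # assumed number of segments

# Group comparison, as in the abstract (Mann-Whitney U-test)
u, p = mannwhitneyu(stenotic, non_stenotic)

# ROC AUC for low |C-strain| as a marker of stenotic segments:
# AUC = P(|non-stenotic strain| > |stenotic strain|) = U / (n1 * n2)
u_abs, _ = mannwhitneyu(np.abs(non_stenotic), np.abs(stenotic))
auc = u_abs / (len(stenotic) * len(non_stenotic))
print(f"p = {p:.3g}, AUC = {auc:.2f}")
```

The U-to-AUC identity is why a Mann-Whitney comparison and an ROC analysis on the same measurements are two views of the same separation between groups.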
Label-free, multi-scale imaging of ex-vivo mouse brain using spatial light interference microscopy
Brain connectivity spans broad spatial scales, from nanometers to centimeters. To understand the brain across these scales, neural networks have been visualized in detail over wide fields of view using light microscopy. However, this typically requires staining or the addition of fluorescent tags, and the resulting image contrast is insufficient for delineating cytoarchitecture. To overcome this barrier, we use spatial light interference microscopy to investigate brain structure with high resolution and sub-nanometer pathlength sensitivity, without the use of exogenous contrast agents. Combining wide-field imaging with a mosaic algorithm developed in-house, we show the detailed architecture of cells and myelin within coronal olfactory bulb and cortical sections, and within sagittal sections of the hippocampus and cerebellum. Our technique is well suited to identifying laminar characteristics of fiber tract orientation within white matter, e.g. the corpus callosum. To further improve the macro-scale contrast of anatomical structures, and to better differentiate axons and dendrites from cell bodies, we also mapped the tissue in terms of its scattering properties. Based on our results, we anticipate that spatial light interference microscopy can provide multiscale, multicontrast perspectives on gross and microscopic brain anatomy.
STatistically Assigned Response Criteria in Solid Tumors (STARCIST)
BACKGROUND: Several reproducibility studies have established good test-retest reliability of FDG-PET in various oncology settings. However, these studies are based on relatively short inter-scan periods of 1-3 days, whereas response assessments based on FDG-PET in early-phase drug trials are typically made over an interval of 2-3 weeks during the first treatment cycle. With a focus on these longer, on-treatment scan intervals, we develop a data-driven approach to calculating baseline-specific cutoff values that identify patient-level changes in glucose uptake unlikely to be explained by random variability. Our method takes into account the statistical nature of natural fluctuations in SUV as well as potential bias effects. METHODS: To assess variability in SUV over scan intervals relevant to clinical trials, we analyzed baseline and follow-up FDG-PET scans, with a median scan interval of 21 days, from 53 advanced-stage cancer patients enrolled in a Phase 1 trial. The 53 patients received a sub-pharmacologic drug dose, and the trial data are treated as a 'test-retest' data set. A simulation-based tool is presented that takes as input baseline lesion SUVmax values, the variance of spurious changes in SUVmax between scans, and the desired Type I error rate, and outputs lesion- and patient-based cutoff values. Bias corrections are included to account for variations in tracer uptake time. RESULTS: In the training data, changes in SUVmax follow an approximately zero-mean Gaussian distribution with constant variance across levels of the baseline measurements. Because of the constant variance, the coefficient of variation is a decreasing function of the magnitude of baseline SUVmax. This finding is consistent with published results, although our data show greater variability. Application of our method to NSCLC patients treated with erlotinib produces results distinct from those based on the EORTC criteria. 
Based on the data presented here as well as previous repeatability studies, the proposed method has greater statistical power to detect a significant percent decrease in SUVmax than published criteria relying on symmetric thresholds. CONCLUSIONS: Defining patient-specific, baseline-dependent cutoff values based on the (null) distribution of naturally occurring fluctuations in glucose uptake enables identification of statistically significant changes in SUVmax. For lower baseline values, the resulting cutoffs are notably asymmetric, with relatively large changes (e.g. >50%) required for statistical significance. With prospectively defined endpoints, the method enables the use of single-arm trials to detect pharmacodynamic drug effects based on FDG-PET. The clinical importance of changes in SUVmax is likely to remain dependent on both tumor biology and the type of treatment.
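The simulation-based idea described above — zero-mean Gaussian scan-to-scan noise with constant variance, translated into baseline-specific percent-change cutoffs — can be sketched as follows. This is a hedged illustration, not the trial's tool: the noise standard deviation, the non-negativity floor on follow-up SUVmax, and the baseline values are all illustrative assumptions, and the published method additionally includes bias corrections for tracer uptake time that are omitted here.

```python
# Hedged sketch of a simulation-based, baseline-specific cutoff for
# %-change in SUVmax under the null hypothesis of no treatment effect.
# sigma and the illustrative baselines are assumptions, not fitted values.
import numpy as np

def percent_change_cutoffs(baseline_suv, sigma, alpha=0.05,
                           n_sim=100_000, seed=1):
    """Monte Carlo quantiles of spurious %-change in SUVmax.

    Assumes scan-to-scan changes are zero-mean Gaussian with constant
    variance (as in the training data) and that a follow-up SUVmax
    cannot fall below zero.
    """
    rng = np.random.default_rng(seed)
    followup = baseline_suv + rng.normal(0.0, sigma, n_sim)
    followup = np.clip(followup, 0.0, None)      # SUV is non-negative
    pct = 100.0 * (followup - baseline_suv) / baseline_suv
    lo, hi = np.quantile(pct, [alpha / 2, 1 - alpha / 2])
    return lo, hi   # observed %-changes outside (lo, hi) are "significant"

# Lower baselines demand larger %-changes for significance, and the
# non-negativity floor makes the cutoffs asymmetric at low baselines:
for b in (2.0, 5.0, 10.0):
    lo, hi = percent_change_cutoffs(b, sigma=1.2)
    print(f"baseline={b:4.1f}: cutoffs ({lo:+.1f}%, {hi:+.1f}%)")
```

Because the absolute noise variance is constant, dividing by a small baseline inflates the percent-change cutoffs, which reproduces the abstract's observation that low-baseline lesions need large (and asymmetric) percent changes to reach significance.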