421 research outputs found

    Randomised trials at the level of the individual

    In global health research, the most common study design is a short-term, small-scale clinical trial with a fixed two-arm design that generally does not allow for major changes once the trial is under way. Building on the introductory paper of this Series, this paper discusses data-driven approaches to clinical trial research across several adaptive trial designs, as well as the master protocol framework that can help to harmonise clinical trial research efforts in global health research. We provide a general framework for more efficient trial research, and we discuss the importance of considering different study designs in the planning stage with statistical simulations. We conclude this second Series paper by discussing the methodological and operational complexity of adaptive trial designs and master protocols, and the current funding challenges that could limit uptake of these approaches in global health research.
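
    As a concrete illustration of planning-stage simulation, the sketch below compares a fixed two-arm design with a simple group-sequential variant that allows one interim look with early stopping for efficacy. The event rates, sample sizes, and stopping boundaries are illustrative assumptions, not values from the paper.

    # Planning-stage simulation sketch: fixed two-arm design vs a group-sequential
    # design with one interim look. All parameters below are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def two_prop_z(x_a, n_a, x_b, n_b):
        """One-sided z statistic for a difference in proportions (arm B vs arm A)."""
        p_pool = (x_a + x_b) / (n_a + n_b)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return 0.0 if se == 0 else (x_b / n_b - x_a / n_a) / se

    def simulate(p_control=0.20, p_treat=0.30, n_per_arm=300, n_sims=5000,
                 interim_frac=0.5, z_interim=2.80, z_final=1.98):
        """Return power of the fixed design, power of the group-sequential design,
        and the expected total sample size of the group-sequential design."""
        z_fixed = stats.norm.ppf(0.975)
        wins_fixed = wins_gs = 0
        total_n_gs = 0
        for _ in range(n_sims):
            a = rng.binomial(1, p_control, n_per_arm)
            b = rng.binomial(1, p_treat, n_per_arm)
            # Fixed design: single analysis at the full sample size.
            if two_prop_z(a.sum(), n_per_arm, b.sum(), n_per_arm) > z_fixed:
                wins_fixed += 1
            # Group-sequential design: interim look on half the data with a strict
            # efficacy boundary, then a slightly inflated boundary at the final look.
            n_int = int(n_per_arm * interim_frac)
            if two_prop_z(a[:n_int].sum(), n_int, b[:n_int].sum(), n_int) > z_interim:
                wins_gs += 1
                total_n_gs += 2 * n_int        # stopped early for efficacy
            else:
                if two_prop_z(a.sum(), n_per_arm, b.sum(), n_per_arm) > z_final:
                    wins_gs += 1
                total_n_gs += 2 * n_per_arm    # continued to the final analysis
        return wins_fixed / n_sims, wins_gs / n_sims, total_n_gs / n_sims

    power_fixed, power_gs, expected_n_gs = simulate()
    print(f"fixed design power:          {power_fixed:.2f}")
    print(f"group-sequential power:      {power_gs:.2f}")
    print(f"group-sequential expected N: {expected_n_gs:.0f} (fixed design uses 600)")

    Running such simulations for several candidate designs before a trial starts is one way to quantify the trade-off between power and expected sample size that the paper argues should be examined at the planning stage.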

    Applications of the cumulative rate to kidney cancer statistics in Australia

    Cancer incidence and mortality statistics in two populations are usually compared by using either the age-standardised rate or the cumulative risk by a certain age. We argue that the cumulative rate is a superior measure because it obviates the need for a standard population and, unlike cumulative risk, is not open to misinterpretation. We then illustrate the application of the cumulative rate by analysing incidence and mortality data for kidney cancer in Australia. Kidney cancer, which is also known as malignant neoplasm of kidney, is one of the less common cancers in Australia. In 2012, approximately 2.5% of all new cases of cancer were kidney cancer, and approximately 2.1% of all cancer-related deaths in Australia were due to kidney cancer. There is variation in incidence and mortality by sex, age, and geographical location in Australia. We examine how the cumulative rate performs in measuring the variation of this disease across such sub-populations. This is part of our effort to promote the use of the cumulative rate as an alternative to age-standardised rates or cumulative risk. In addition, we hope that this statistical investigation will contribute to the aetiology of the disease from an Australian perspective.
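
    For reference, the sketch below applies the standard definitions that underlie this argument: the cumulative rate to a given age is the sum of age-specific rates multiplied by the width of each age band, and the cumulative risk follows from it as 1 - exp(-cumulative rate). The age-specific rates used here are made-up illustrative numbers, not the kidney-cancer data analysed in the paper.

    # Cumulative rate and cumulative risk to age 75 from age-specific rates.
    # The rates below are illustrative placeholders, not data from the paper.
    import math

    # Age-specific incidence rates per 100,000 person-years for 5-year bands 0-4 ... 70-74.
    rates_per_100k = [0.5, 0.3, 0.3, 0.4, 0.6, 1.0, 2.0, 4.0,
                      7.0, 12.0, 20.0, 30.0, 42.0, 55.0, 65.0]
    band_width = 5  # years covered by each age band

    # Cumulative rate to age 75, expressed as a percentage:
    # sum over bands of (age-specific rate x band width).
    cum_rate_pct = sum(r * band_width for r in rates_per_100k) / 100_000 * 100

    # Cumulative risk to age 75 (probability of diagnosis by age 75, ignoring
    # competing causes of death), derived from the cumulative rate.
    cum_risk_pct = (1 - math.exp(-cum_rate_pct / 100)) * 100

    print(f"cumulative rate to age 75: {cum_rate_pct:.2f}%")
    print(f"cumulative risk to age 75: {cum_risk_pct:.2f}%")

    Because the cumulative rate is simply a weighted sum of a population's own age-specific rates, it can be compared across populations without choosing a standard population, which is the property the paper emphasises.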

    Urgently seeking efficiency and sustainability of clinical trials in global health

    This paper describes the scale of global health research and the context in which we frame the subsequent papers in the Series. In this Series paper, we provide a historical perspective on clinical trial research by revisiting the 1948 streptomycin trial for pulmonary tuberculosis, which was the first documented randomised clinical trial in the English language, and we discuss its close connection with global health. We describe the current state of clinical trial research globally by providing an overview of clinical trials that have been registered in the WHO International Clinical Trial Registry since 2010. As an overview of the global health trials landscape, we discuss challenges with the trial planning and designs that are often used in clinical trial research undertaken in low-income and middle-income countries. Finally, we discuss the importance of collaborative work in global health research towards generating sustainable and culturally appropriate research environments.

    Learning lessons from field surveys in humanitarian contexts: a case study of field surveys conducted in North Kivu, DRC 2006-2008

    Survey estimates of mortality and malnutrition are commonly used to guide humanitarian decision-making. Currently, different methods of conducting field surveys are the subject of debate among epidemiologists. Beyond the technical arguments, decision makers may find it difficult to conceptualise what the estimates actually mean. For instance, what makes this particular situation an emergency? And how should the operational response be adapted accordingly? This brings into question not only the quality of the survey methodology, but also the difficulties epidemiologists face in interpreting results and selecting the most important information to guide operations. As a case study, we reviewed mortality and nutritional surveys conducted in North Kivu, Democratic Republic of Congo (DRC), published from January 2006 to January 2009. We performed a PubMed/Medline search for published articles and scanned publicly available humanitarian databases and clearinghouses for grey literature. To evaluate the surveys, we developed minimum reporting criteria based on available guidelines and selected peer-reviewed articles. We identified 38 reports through our search strategy; three surveys met our inclusion criteria. The surveys varied in methodological quality. Reporting against minimum criteria was generally good, but ethical procedures, raw data, and survey limitations were not presented in any of the surveys. All surveys also failed to consider contextual factors important for data interpretation. From this review, we conclude that mechanisms to ensure sound survey design and conduct must be implemented by operational organisations to improve data quality and reporting. Training in data interpretation would also be useful. Novel survey methods should be trialled and prospective data gathering (surveillance) employed wherever feasible.

    The combination of autofluorescence endoscopy and molecular biomarkers is a novel diagnostic tool for dysplasia in Barrett's oesophagus.

    OBJECTIVE: Endoscopic surveillance for Barrett's oesophagus (BO) is limited by sampling error and the subjectivity of diagnosing dysplasia. We aimed to compare a biomarker panel on minimal biopsies directed by autofluorescence imaging (AFI) with the standard surveillance protocol to derive an objective tool for dysplasia assessment. DESIGN: We performed a cross-sectional prospective study in three tertiary referral centres. Patients with BO underwent high-resolution endoscopy followed by AFI-targeted biopsies. 157 patients completed the biopsy protocol. Aneuploidy/tetraploidy; 9p and 17p loss of heterozygosity; RUNX3, HPP1 and p16 methylation; p53 and cyclin A immunohistochemistry were assessed. Bootstrap resampling was used to select the best diagnostic biomarker panel for high-grade dysplasia (HGD) and early cancer (EC). This panel was validated in an independent cohort of 46 patients. RESULTS: Aneuploidy, p53 immunohistochemistry and cyclin A had the strongest association with dysplasia in the per-biopsy analysis and, as a panel, had an area under the receiver operating characteristic curve of 0.97 (95% CI 0.95 to 0.99) for diagnosing HGD/EC. The diagnostic accuracy for HGD/EC of the three-biomarker panel from AFI+ areas was superior to AFI- areas (p<0.001). Compared with the standard protocol, this panel had equal sensitivity for HGD/EC, with a 4.5-fold reduction in the number of biopsies. In an independent cohort of patients, the panel had a sensitivity and specificity for HGD/EC of 100% and 85%, respectively. CONCLUSIONS: A three-biomarker panel on a small number of AFI-targeted biopsies provides an accurate and objective diagnosis of dysplasia in BO. The clinical implications have to be studied further.
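
    As a rough illustration of the kind of per-biopsy evaluation described above, the sketch below scores a three-marker panel with a receiver operating characteristic AUC and a bootstrap confidence interval. The synthetic data, the logistic-regression combiner, and the in-sample evaluation are assumptions made for illustration; they are not the study's actual panel-selection or validation procedure.

    # Evaluate a three-marker panel with an ROC AUC and a bootstrap 95% CI.
    # The synthetic per-biopsy data and the logistic-regression combiner are
    # illustrative assumptions, not the study's analysis.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic per-biopsy data: three markers and a binary HGD/EC label.
    n = 400
    y = rng.binomial(1, 0.2, n)
    X = np.column_stack([
        rng.normal(loc=1.0 * y, scale=1.0),   # stand-in "aneuploidy" score
        rng.binomial(1, 0.15 + 0.60 * y),     # stand-in "p53 IHC" positivity
        rng.binomial(1, 0.10 + 0.50 * y),     # stand-in "cyclin A IHC" positivity
    ])

    # Combine the three markers into a single panel score.
    panel = LogisticRegression().fit(X, y)
    scores = panel.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, scores)

    # Bootstrap resampling of biopsies for a 95% CI around the AUC.
    boot_aucs = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:        # an AUC needs both classes present
            continue
        boot_aucs.append(roc_auc_score(y[idx], scores[idx]))
    ci_lo, ci_hi = np.percentile(boot_aucs, [2.5, 97.5])

    print(f"panel AUC: {auc:.2f} (bootstrap 95% CI {ci_lo:.2f} to {ci_hi:.2f})")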

    Key issues in recruitment to randomised controlled trials with very different interventions: a qualitative investigation of recruitment to the SPARE trial (CRUK/07/011)

    BACKGROUND: Recruitment to randomised controlled trials (RCTs) with very different treatment arms is often difficult. The ProtecT (Prostate testing for cancer and Treatment) study successfully used qualitative research methods to improve recruitment and these methods were replicated in five other RCTs facing recruitment difficulties. A similar qualitative recruitment investigation was undertaken in the SPARE (Selective bladder Preservation Against Radical Excision) feasibility study to explore reasons for low recruitment and attempt to improve recruitment rates by implementing changes suggested by qualitative findings. METHODS: In Phase I of the investigation, reasons for low levels of recruitment were explored through content analysis of RCT documents, thematic analysis of interviews with trial staff and recruiters, and conversation analysis of audio-recordings of recruitment appointments. Findings were presented to the trial management group and a plan of action was agreed. In Phase II, changes to design and conduct were implemented, with training and feedback provided for recruitment staff. RESULTS: Five key challenges to trial recruitment were identified in Phase I: (a) Investigators and recruiters had considerable difficulty articulating the trial design in simple terms; (b) The recruitment pathway was complicated, involving staff across different specialties/centres, and communication often broke down; (c) Recruiters inadvertently used 'loaded' terminology such as 'gold standard' in study information, leading to unbalanced presentation; (d) Fewer eligible patients were identified than had been anticipated; (e) Strong treatment preferences were expressed by potential participants and trial staff in some centres. In Phase II, study information (patient information sheet and flowchart) was simplified, the recruitment pathway was focused around lead recruiters, and training sessions and 'tips' were provided for recruiters. Issues of patient eligibility were insurmountable, however, and the independent Trial Steering Committee advised closure of the SPARE trial in February 2010. CONCLUSIONS: The qualitative investigation identified the key aspects of trial design and conduct that were hindering recruitment, and a plan of action that was acceptable to trial investigators and recruiters was implemented. Qualitative investigations can thus be used to elucidate challenges to recruitment in trials with very different treatment arms, but require sufficient time to be undertaken successfully. TRIAL REGISTRATION: CRUK/07/011; ISRCTN61126465 (http://www.controlled-trials.com/ISRCTN61126465).

    Does the Effectiveness of Control Measures Depend on the Influenza Pandemic Profile?

    BACKGROUND: Although strategies to contain influenza pandemics are well studied, the characterization and the implications of different geographical and temporal diffusion patterns of the pandemic have been given less attention. METHODOLOGY/MAIN FINDINGS: Using a well-documented metapopulation model incorporating air travel between 52 major world cities, we identified potential influenza pandemic diffusion profiles and examined how the impact of interventions might be affected by this heterogeneity. Clustering methods applied to a set of pandemic simulations, characterized by seven parameters related to the conditions of emergence that were varied following Latin hypercube sampling, were used to identify six pandemic profiles exhibiting different characteristics, notably in terms of global burden (from 415 to >160 million cases) and duration (from 26 to 360 days). A multivariate sensitivity analysis showed that the transmission rate and the proportion of susceptibles have a strong impact on pandemic diffusion. Correlations between interventions and pandemic outcomes were analyzed for two specific profiles: a fast, massive pandemic and a slow-building, long-lasting one. In both cases, the date of introduction for five control measures (masks, isolation, prophylactic or therapeutic use of antivirals, vaccination) correlated strongly with pandemic outcomes. Conversely, the coverage and efficacy of these interventions only moderately correlated with pandemic outcomes in the case of a massive pandemic. Pre-pandemic vaccination influenced pandemic outcomes in both profiles, while travel restriction was the only measure without any measurable effect in either. CONCLUSIONS: Our study highlights: (i) the great heterogeneity in possible profiles of a future influenza pandemic; (ii) the value of being well prepared in every country, since a pandemic may have heavy consequences wherever and whenever it starts; (iii) the need to quickly implement control measures and even to anticipate pandemic emergence through pre-pandemic vaccination; and (iv) the value of combining all available control measures except perhaps travel restrictions.
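
    The abstract describes a workflow of sampling the conditions of emergence with a Latin hypercube design, running the simulator for each parameter set, and clustering the simulated pandemics into profiles; a minimal sketch of that workflow follows. The two-parameter SIR-like toy model, the parameter ranges, and the use of k-means are assumptions for illustration only; the paper itself used a 52-city metapopulation model with seven varied parameters.

    # Latin hypercube sampling of emergence parameters, a toy epidemic simulator,
    # and clustering of the simulated outcomes into "pandemic profiles".
    # Everything here is an illustrative stand-in for the paper's metapopulation model.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.cluster import KMeans

    def toy_pandemic(beta, init_susceptible, gamma=0.25, days=360):
        """Deterministic SIR-like toy model; returns (attack rate, duration in days)."""
        s, i, r = init_susceptible, 1e-4, 0.0
        duration = days
        for day in range(days):
            new_inf = beta * s * i
            new_rec = gamma * i
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            if i < 1e-5:                       # epidemic has died out
                duration = day
                break
        return r, duration

    # Latin hypercube sample over two emergence parameters (the paper varied seven).
    sampler = qmc.LatinHypercube(d=2, seed=0)
    l_bounds = [0.2, 0.3]   # lower bounds for (transmission rate, initial susceptible fraction)
    u_bounds = [0.9, 0.9]   # upper bounds for (transmission rate, initial susceptible fraction)
    params = qmc.scale(sampler.random(n=500), l_bounds, u_bounds)

    outcomes = np.array([toy_pandemic(beta, s0) for beta, s0 in params])

    # Cluster the (attack rate, duration) outcomes into a handful of profiles.
    z = (outcomes - outcomes.mean(axis=0)) / outcomes.std(axis=0)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)
    for k in range(4):
        attack, duration = outcomes[labels == k].mean(axis=0)
        print(f"profile {k}: mean attack rate {attack:.2f}, mean duration {duration:.0f} days")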

    Scientific Application Requirements for Leadership Computing at the Exascale

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, "the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software." We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease is foreseen for fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity.
These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a relatively small increase in performance per core combined with a dramatic increase in the number of cores. Leadership system software must face and overcome issues that will undoubtedly be exacerbated at the exascale. The operating system (OS) must be as unobtrusive as possible and possess more stability, reliability, and fault tolerance during application execution. As applications at the exascale will be more likely to experience loss of resources during an execution, the OS must mitigate such a loss with a range of responses. New fault-tolerance paradigms must be developed and integrated into applications. Just as application input and output must not be an afterthought in hardware design, job management, too, must not be an afterthought in system software design. Efficient scheduling of those resources will be a major obstacle faced by leadership computing centers at the exascale.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ∌24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg^2 with ÎŽ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320-1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg^2 region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ∌ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
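
    As a quick consistency check on the depths quoted above, the sketch below applies the standard assumption that background-limited point-source depth improves by 2.5 log10(sqrt(N)) magnitudes when N visits are coadded. The r-band visit count used here is an assumed share of the ∌800 total visits across six bands, chosen for illustration; it is not an LSST specification.

    # Relate the quoted single-visit depth (r ~ 24.5) to the coadded depth (r ~ 27.5)
    # under the standard sqrt(N) stacking assumption for background-limited imaging.
    # The r-band visit count is an illustrative assumption.
    import math

    m_single_visit_r = 24.5   # 5-sigma single-visit point-source depth in r (AB), from the abstract
    n_visits_r = 200          # assumed r-band share of the ~800 visits summed over six bands

    depth_gain = 2.5 * math.log10(math.sqrt(n_visits_r))   # = 1.25 * log10(N) magnitudes
    m_coadd_r = m_single_visit_r + depth_gain

    print(f"estimated coadded r-band depth: {m_coadd_r:.1f} (abstract quotes r ~ 27.5)")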

    Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images

    Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives, with insights into the tumor-immune microenvironment.
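
    The clustering step mentioned above can be illustrated as follows: given per-patch TIL predictions (which in the paper come from a trained convolutional network), affinity propagation over the TIL-positive patch coordinates exposes local spatial structure. The synthetic patch grid and probabilities below are assumptions for illustration, not TCGA data or the paper's trained model.

    # Affinity propagation over TIL-positive patch coordinates on a synthetic slide grid.
    # The per-patch probabilities stand in for CNN output; nothing here is TCGA data.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(0)

    # Synthetic 50x50 grid of patches with a few lymphocyte-rich hot spots.
    grid = 50
    coords = np.array([(x, y) for x in range(grid) for y in range(grid)], dtype=float)
    hot_spots = np.array([[10.0, 12.0], [35.0, 8.0], [28.0, 40.0]])
    dist = np.min(np.linalg.norm(coords[:, None, :] - hot_spots[None, :, :], axis=2), axis=1)
    til_prob = np.clip(0.9 - 0.08 * dist, 0.02, 0.9)      # stand-in for per-patch CNN scores
    til_positive = coords[rng.random(len(coords)) < til_prob]

    # Cluster the TIL-positive patches by their spatial coordinates.
    ap = AffinityPropagation(damping=0.9, random_state=0).fit(til_positive)
    n_clusters = len(ap.cluster_centers_indices_)
    print(f"TIL-positive patches: {len(til_positive)}, spatial clusters found: {n_clusters}")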