
    The Curious Case of HU Aquarii - Dynamically Testing Proposed Planetary Systems

    In early 2011, the discovery of two planets moving on surprisingly extreme orbits around the eclipsing polar cataclysmic variable system HU Aquarii was announced, based on variations in the timing of mutual eclipses between the two central stars. We perform a detailed dynamical analysis of the stability of the exoplanet system as proposed in that work, revealing that it is simply dynamically unfeasible. We then apply the latest rigorous methods used by the Anglo-Australian Planet Search for analysing radial velocity data to re-examine the data used to make the initial claim. Using those data, we arrive at a significantly different orbital solution for the proposed planets, which we then show through dynamical analysis to be equally unfeasible. Finally, we discuss the need for caution in linking eclipse-timing data for cataclysmic variables to the presence of planets, and suggest a more likely explanation for the observed signal. Comment: 14 pages, 5 figures, 2 tables.

    Digital methodology to implement the ECOUTER engagement process

    ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research) – French for 'to listen' – is a new stakeholder engagement method incorporating existing evidence to help participants draw upon their own knowledge of cognate issues and interact on a topic of shared concern. The results of an ECOUTER can form the basis of recommendations for research, governance, practice and/or policy. This paper describes the development of a digital methodology for the ECOUTER engagement process based on currently available mind-mapping freeware. The implementation of an ECOUTER process tailored to applications within health studies is outlined for both online and face-to-face scenarios. Limitations of the present digital methodology are discussed, highlighting the requirement for purpose-built software for ECOUTER research purposes.

    Generation of a cleaned dataset listing Avon Longitudinal Study of Parents and Children peer-reviewed publications to 2015 [version 1; referees: 2 approved]

    Birth cohort studies generate huge amounts of data and, as a consequence, are the source of many peer-reviewed publications. We have taken the list of publications from the Avon Longitudinal Study of Parents and Children UK birth cohort and filtered, de-duplicated and cleaned it to generate a bibliographic research data set. This dataset could be used for accurate reporting and monitoring of the impact of the study, as well as for bibliometric research.

    Privacy protected text analysis in DataSHIELD

    Objectives: DataSHIELD (www.datashield.ac.uk) was born of the requirement in the biomedical and social sciences to co-analyse individual patient data (microdata) from different sources without disclosing identity or sensitive information. Under DataSHIELD, raw data never leave the data provider, and no microdata or disclosive information can be seen by the researcher: the analysis is taken to the data, not the data to the analysis. Text data can be highly disclosive in the biomedical domain (patient records, GP letters, etc.). Similar, but distinct, issues arise in other domains: text may be copyrighted, or have a large IP value, making sharing impractical. Approach: By treating text analogously to individual patient data, we assessed whether DataSHIELD could be adapted and implemented for text analysis, circumventing the key obstacles that currently prevent such analysis. Results: Using open digitised text data held by the British Library, a DataSHIELD proof-of-concept infrastructure and prototype DataSHIELD functions for free-text analysis were developed. Conclusions: While it is possible to analyse free text within a DataSHIELD infrastructure, the challenge is creating generalised and resilient anti-disclosure methods for free-text analysis. There is a range of biomedical and health-science applications for DataSHIELD methods of privacy-protected analysis of free text, including analysis of electronic health records and of qualitative data, e.g. from social media.

    DataSHIELD – new directions and dimensions

    In disciplines such as biomedicine and the social sciences, the sharing and combining of sensitive individual-level data is often prohibited by ethical-legal or governance constraints, or by other barriers such as the protection of intellectual property or the sheer size of the data sets. DataSHIELD (Data Aggregation Through Anonymous Summary-statistics from Harmonised Individual-levEL Databases) is a distributed approach that allows the analysis of sensitive individual-level data from one study, and the co-analysis of such data from several studies simultaneously, without physically pooling them or disclosing any data. Following an initial proof of principle, a stable DataSHIELD platform has now been implemented in a number of epidemiological consortia. This paper reports three new applications of DataSHIELD: post-publication analysis of sensitive data, text data analysis, and privacy-protected data visualisation. Expansion of DataSHIELD's analytic functionality and its application to additional data types demonstrate the broad applicability of the software beyond the biomedical sciences.

    DataSHIELD: Taking the analysis to the data, not the data to the analysis

    © The Author 2014; all rights reserved. Background: Research in modern biomedicine and social science requires sample sizes so large that they can often only be achieved through a pooled co-analysis of data from several studies. But the pooling of information from individuals in a central database that may be queried by researchers raises important ethico-legal questions and can be controversial. In the UK this has been highlighted by recent debate and controversy relating to the UK's proposed 'care.data' initiative, and these issues reflect important societal and professional concerns about privacy, confidentiality and intellectual property. DataSHIELD provides a novel technological solution that can circumvent some of the most basic challenges in facilitating the access of researchers and other healthcare professionals to individual-level data. Methods: Commands are sent from a central analysis computer (AC) to several data computers (DCs) storing the data to be co-analysed. The data sets are analysed simultaneously but in parallel. The separate parallelized analyses are linked by non-disclosive summary statistics and commands transmitted back and forth between the DCs and the AC. This paper describes the technical implementation of DataSHIELD using a modified R statistical environment linked to an Opal database deployed behind the computer firewall of each DC. Analysis is controlled through a standard R environment at the AC. Results: Based on this Opal/R implementation, DataSHIELD is currently used by the Healthy Obese Project and the Environmental Core Project (BioSHaRE-EU) for the federated analysis of 10 data sets across eight European countries, and this illustrates the opportunities and challenges presented by the DataSHIELD approach.
Conclusions: DataSHIELD facilitates important research in settings where: (i) a co-analysis of individual-level data from several studies is scientifically necessary but governance restrictions prohibit the release or sharing of some of the required data, and/or render data access unacceptably slow; (ii) a research group (e.g. in a developing nation) is particularly vulnerable to loss of intellectual property: the researchers want to fully share the information held in their data with national and international collaborators, but do not wish to hand over the physical data themselves; and (iii) a data set is to be included in an individual-level co-analysis but the physical size of the data precludes direct transfer to a new site for analysis.
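The AC/DC protocol described above can be sketched in miniature. The following is an illustrative toy, not the DataSHIELD API (which is R-based): the function names, study data and disclosure threshold are all hypothetical. The key property it demonstrates is that each data computer releases only non-disclosive summary statistics, which the analysis computer then combines:

```python
MIN_CELL_COUNT = 5  # hypothetical disclosure-control rule: refuse tiny groups

def dc_summarise(values):
    """Runs at a data computer (DC): release only (sum, n), never raw values."""
    n = len(values)
    if n < MIN_CELL_COUNT:
        raise ValueError("disclosure risk: group too small to summarise")
    return (sum(values), n)

def ac_pooled_mean(summaries):
    """Runs at the analysis computer (AC): combine per-study summaries
    into a pooled mean without ever seeing individual-level data."""
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

# Each study's microdata stay behind that study's firewall; only the two
# numbers returned by dc_summarise travel to the AC.
study_a = [1.2, 3.4, 2.2, 5.0, 4.4, 2.8]  # made-up data held by study A's DC
study_b = [2.0, 2.5, 3.5, 4.0, 3.0]       # made-up data held by study B's DC
pooled = ac_pooled_mean([dc_summarise(study_a), dc_summarise(study_b)])
```

A real deployment layers many such aggregate exchanges, iterated for more complex analyses such as federated regression, over authenticated connections to the Opal server behind each DC's firewall.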

    The ECOUTER methodology for stakeholder engagement in translational research.

    BACKGROUND: Because no single person or group holds knowledge about all aspects of research, mechanisms are needed to support knowledge exchange and engagement. Expertise in the research setting necessarily includes scientific and methodological expertise, but also expertise gained through the experience of participating in research and/or being a recipient of research outcomes (as a patient or member of the public). Engagement is, by its nature, reciprocal and relational: the process of engaging research participants, patients, citizens and others (the many 'publics' of engagement) brings them closer to the research but also brings the research closer to them. When translating research into practice, engaging the public and other stakeholders is explicitly intended to make the outcomes of translation relevant to its constituency of users. METHODS: In practice, engagement faces numerous challenges and is often time-consuming, expensive and 'thorny' work. We explore the epistemic and ontological considerations and implications of four common critiques of engagement methodologies that contest: representativeness, communication and articulation, impacts and outcome, and democracy. The ECOUTER (Employing COnceptUal schema for policy and Translation Engagement in Research) methodology addresses problems of representation and epistemic foundationalism by asking, "How could it be otherwise?" ECOUTER affords the possibility of engagement where spatial and temporal constraints are present, relying on saturation as a method of 'keeping open' the possible considerations that might emerge and including reflexive use of qualitative analytic methods. RESULTS: This paper describes the ECOUTER process, focusing on one worked example and detailing lessons learned from four other pilots. ECOUTER uses mind-mapping techniques to 'open up' engagement, iteratively and organically.
ECOUTER aims to balance the breadth, accessibility and user-determination of the scope of engagement. An ECOUTER exercise comprises four stages: (1) engagement and knowledge exchange; (2) analysis of mind-map contributions; (3) development of a conceptual schema (i.e. a map of concepts and their relationships); and (4) feedback, refinement and development of recommendations. CONCLUSION: ECOUTER refuses fixed truths but also refuses a fixed nature. Its promise lies in its flexibility, adaptability and openness. ECOUTER will be formed and re-formed by the needs and creativity of those who use it.

    Quantitative 18F-AV1451 Brain Tau PET Imaging in Cognitively Normal Older Adults, Mild Cognitive Impairment, and Alzheimer's Disease Patients

    Recent developments in tau positron emission tomography (PET) allow assessment of regional neurofibrillary tangle (NFT) deposition in the human brain. Among the tau PET molecular probes, 18F-AV1451 is characterized by high selectivity for pathologic tau aggregates over amyloid plaques, limited non-specific binding in white and gray matter, and confined off-target binding. The objectives of the study are (1) to quantitatively characterize regional brain tau deposition measured by 18F-AV1451 PET in cognitively normal older adults (CN), mild cognitive impairment (MCI), and AD participants; (2) to evaluate the correlations between cerebrospinal fluid (CSF) biomarkers or Mini-Mental State Examination (MMSE) scores and 18F-AV1451 PET standardized uptake value ratio (SUVR); and (3) to evaluate the partial volume effects on 18F-AV1451 brain uptake. Methods: The study included a total of 115 participants (CN = 49, MCI = 58, and AD = 8) from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Preprocessed 18F-AV1451 PET images, structural MRIs, and demographic and clinical assessments were downloaded from the ADNI database. A reblurred Van Cittert iteration method was used for voxelwise partial volume correction (PVC) on PET images. Structural MRIs were used for PET spatial normalization and region of interest (ROI) definition in standard space. Parametric images of 18F-AV1451 SUVR relative to the cerebellum were calculated. ROI SUVR measurements from PVC and non-PVC SUVR images were compared. Correlations between ROI 18F-AV1451 SUVR and measurements of MMSE, CSF total tau (t-tau), and phosphorylated tau (p-tau) were also assessed. Results: Prominent specific binding of 18F-AV1451 was found in the amygdala, entorhinal cortex, parahippocampus, fusiform, posterior cingulate, temporal, parietal, and frontal brain regions. Most regional SUVRs showed significantly higher uptake of 18F-AV1451 in AD than in MCI and CN participants.
SUVRs of small regions such as the amygdala, entorhinal cortex and parahippocampus were statistically improved by PVC in all groups (p < 0.01). Although 18F-AV1451 SUVRs tended to be higher in the MCI group than in the CN group, no significant difference in 18F-AV1451 deposition was found between CN and MCI brains with or without PVC (p > 0.05). Declining MMSE scores were observed with increasing 18F-AV1451 binding in the amygdala, entorhinal cortex, parahippocampus, and fusiform. CSF p-tau was positively correlated with 18F-AV1451 deposition. PVC improved the results of 18F-AV1451 tau deposition and correlation studies in small brain regions. Conclusion: Typical deposition on 18F-AV1451 tau PET imaging in the AD brain was found in the amygdala, entorhinal cortex, fusiform and parahippocampus, and these regions were strongly associated with cognitive impairment and CSF biomarkers. Although more deposition was observed in the MCI group, 18F-AV1451 PET imaging could not differentiate MCI patients from the CN population. Greater tau deposition was related to lower MMSE scores and higher levels of CSF p-tau, especially in ROIs of the amygdala, entorhinal cortex and parahippocampus. PVC improved the results of tau deposition and correlation studies in small brain regions and is suggested for routine use in 18F-AV1451 tau PET quantification.
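The reblurred Van Cittert method used above for partial volume correction belongs to a family of simple iterative deconvolution schemes. As a rough, hedged illustration (not the ADNI pipeline), the sketch below applies the classic Van Cittert update, x ← x + α(y − PSF∗x), to a 1-D signal; real PVC operates on 3-D PET volumes with the scanner's point-spread function (PSF), and the reblurred variant additionally convolves the correction term with the PSF for stability. The kernel and signal here are made up for illustration.

```python
def convolve(signal, kernel):
    """Same-length 1-D convolution with zero padding (models PSF blurring).
    Kernel is assumed symmetric, so correlation and convolution coincide."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def van_cittert(observed, kernel, alpha=1.0, iterations=50):
    """Classic Van Cittert deconvolution: x <- x + alpha * (y - PSF * x)."""
    estimate = list(observed)
    for _ in range(iterations):
        reblurred = convolve(estimate, kernel)
        estimate = [x + alpha * (y - r)
                    for x, y, r in zip(estimate, observed, reblurred)]
    return estimate

# Toy example: a unit 'spike' blurred by a hypothetical 3-point PSF.
psf = [0.25, 0.5, 0.25]
true_signal = [0.0, 0.0, 1.0, 0.0, 0.0]
blurred = convolve(true_signal, psf)   # blurring halves the peak (0.5)
recovered = van_cittert(blurred, psf)  # iteration restores the peak toward 1
```

This mirrors why PVC matters for small structures like the amygdala: blurring by the PSF spreads a focal signal into its neighbours and depresses the measured peak, and the iterative correction recovers much of it.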

    Effect of alirocumab on mortality after acute coronary syndromes. An analysis of the ODYSSEY OUTCOMES randomized clinical trial

    Background: Previous trials of PCSK9 (proprotein convertase subtilisin-kexin type 9) inhibitors demonstrated reductions in major adverse cardiovascular events, but not death. We assessed the effects of alirocumab on death after an index acute coronary syndrome (ACS). Methods: ODYSSEY OUTCOMES (Evaluation of Cardiovascular Outcomes After an Acute Coronary Syndrome During Treatment With Alirocumab) was a double-blind, randomized comparison of alirocumab or placebo in 18 924 patients who had an ACS 1 to 12 months previously and elevated atherogenic lipoproteins despite intensive statin therapy. The alirocumab dose was blindly titrated to a target achieved low-density lipoprotein cholesterol (LDL-C) of 25 to 50 mg/dL. We examined the effects of treatment on all-cause death and its components, cardiovascular and noncardiovascular death, with log-rank testing. Joint semiparametric models tested associations between nonfatal cardiovascular events and cardiovascular or noncardiovascular death. Results: Median follow-up was 2.8 years. Death occurred in 334 (3.5%) and 392 (4.1%) patients in the alirocumab and placebo groups, respectively (hazard ratio [HR], 0.85; 95% CI, 0.73 to 0.98; nominal P=0.03). This resulted from nonsignificantly fewer cardiovascular (240 [2.5%] vs 271 [2.9%]; HR, 0.88; 95% CI, 0.74 to 1.05; P=0.15) and noncardiovascular (94 [1.0%] vs 121 [1.3%]; HR, 0.77; 95% CI, 0.59 to 1.01; P=0.06) deaths with alirocumab. In a prespecified analysis of 8242 patients eligible for ≥3 years of follow-up, alirocumab reduced death (HR, 0.78; 95% CI, 0.65 to 0.94; P=0.01). Patients with nonfatal cardiovascular events were at increased risk of cardiovascular and noncardiovascular death (P<0.0001 for the associations). Alirocumab reduced total nonfatal cardiovascular events (P<0.001) and thereby may have attenuated the number of cardiovascular and noncardiovascular deaths.
A post hoc analysis found that, compared with patients with lower LDL-C, patients with baseline LDL-C ≥100 mg/dL (2.59 mmol/L) had a greater absolute risk of death and a larger mortality benefit from alirocumab (HR, 0.71; 95% CI, 0.56 to 0.90; Pinteraction=0.007). In the alirocumab group, all-cause death declined with achieved LDL-C at 4 months of treatment, down to a level of approximately 30 mg/dL (adjusted P=0.017 for linear trend). Conclusions: Alirocumab added to intensive statin therapy has the potential to reduce death after acute coronary syndrome, particularly if treatment is maintained for ≥3 years, if baseline LDL-C is ≥100 mg/dL, or if achieved LDL-C is low. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT01663402.