
    Dynamics of the formation of a hydrogel by a pathogenic amyloid peptide: islet amyloid polypeptide

    Many chronic degenerative diseases result from aggregation of misfolded polypeptides to form amyloids. Many amyloidogenic polypeptides are surfactants and their assembly can be catalysed by hydrophobic-hydrophilic interfaces (an air-water interface in vitro or membranes in vivo). We recently demonstrated the specificity of surface-induced amyloidogenesis, but the mechanisms of amyloidogenesis, and more specifically of adsorption at hydrophobic-hydrophilic interfaces, remain poorly understood. Thus, it is critical to determine how amyloidogenic polypeptides behave at interfaces. Here we used surface tensiometry, rheology and electron microscopy to demonstrate the complex dynamics of gelation by full-length human islet amyloid polypeptide (involved in type II diabetes) both in the bulk solution and at hydrophobic-hydrophilic interfaces (air-water interface and phospholipids). We show that the hydrogel consists of a 3D supramolecular network of fibrils. We also assessed the role of solvation and dissected the evolution over time of the assembly processes. Amyloid gelation could have important pathological consequences for membrane integrity and cellular functions.

    Interplay between chromosomal architecture and termination of DNA replication in bacteria

    Copyright © 2023 Goodall, Warecka, Hawkins and Rudolph. Faithful transmission of the genome from one generation to the next is key to life in all cellular organisms. In the majority of bacteria, the genome comprises a single circular chromosome that is normally replicated from a single origin, though additional genetic information may be encoded within much smaller extrachromosomal elements called plasmids. By contrast, the genome of a eukaryote is distributed across multiple linear chromosomes, each of which is replicated from multiple origins. The genomes of archaeal species are circular, but are predominantly replicated from multiple origins. In all three cases, replication is bidirectional and terminates when converging replication fork complexes merge and ‘fuse’ as replication of the chromosomal DNA is completed. While the mechanics of replication initiation are quite well understood, exactly what happens during termination is far from clear, although studies in bacterial and eukaryotic models over recent years have started to provide some insight. Bacterial models with a circular chromosome and a single bidirectional origin offer the distinct advantage that there is normally just one fusion event between two replication fork complexes as synthesis terminates. Moreover, whereas termination of replication appears to happen in many bacteria wherever forks happen to meet, termination in some bacterial species, including the well-studied bacteria Escherichia coli and Bacillus subtilis, is more restrictive and confined to a ‘replication fork trap’ region, making termination even more tractable. This region is defined by multiple genomic terminator (ter) sites, which, if bound by specific terminator proteins, form unidirectional fork barriers. In this review we discuss a range of experimental results highlighting how the fork fusion process can trigger significant pathologies that interfere with the successful conclusion of DNA replication, how these pathologies might be resolved in bacteria without a fork trap system and how the acquisition of a fork trap might have provided an alternative and cleaner solution, thus explaining why in bacterial species that have acquired a fork trap system, this system is remarkably well maintained. Finally, we consider how eukaryotic cells can cope with a much-increased number of termination events. The work was supported by Research Grant BB/N014995/1 from the Biotechnology and Biological Sciences Research Council to CR and MH, and Research Grant BB/W000393/1 from the Biotechnology and Biological Sciences Research Council to CR.

    Measuring patient-perceived quality of care in US hospitals using Twitter

    BACKGROUND: Patients routinely use Twitter to share feedback about their experience receiving healthcare. Identifying and analysing the content of posts sent to hospitals may provide a novel real-time measure of quality, supplementing traditional, survey-based approaches. OBJECTIVE: To assess the use of Twitter as a supplemental data stream for measuring patient-perceived quality of care in US hospitals and compare patient sentiments about hospitals with established quality measures. DESIGN: 404 065 tweets directed to 2349 US hospitals over a 1-year period were classified as having to do with patient experience using a machine learning approach. Sentiment was calculated for these tweets using natural language processing. 11 602 tweets were manually categorised into patient experience topics. Finally, hospitals with ≥50 patient experience tweets were surveyed to understand how they use Twitter to interact with patients. KEY RESULTS: Roughly half of the hospitals in the US have a presence on Twitter. Of the tweets directed toward these hospitals, 34 725 (9.4%) were related to patient experience and covered diverse topics. Analyses limited to hospitals with ≥50 patient experience tweets revealed that they were more active on Twitter, more likely to be below the national median for proportion of Medicare patients (p<0.001) and above the national median for nurse/patient ratio (p=0.006), and more likely to be non-profit hospitals (p<0.001). After adjusting for hospital characteristics, we found that Twitter sentiment was not associated with Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) ratings (but having a Twitter account was), although there was a weak association with 30-day hospital readmission rates (p=0.003). CONCLUSIONS: Tweets describing patient experiences in hospitals cover a wide range of patient care aspects and can be identified using automated approaches. These tweets represent a potentially untapped indicator of quality and may be valuable to patients, researchers, policy makers and hospital administrators.
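
    The abstract describes the analysis pipeline only at a high level (a supervised classifier to flag patient-experience tweets, followed by sentiment scoring). The sketch below illustrates one plausible shape of such a two-stage pipeline; the training examples, features, model choice and toy sentiment lexicon are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative two-stage sketch (not the study's actual code):
# 1) classify tweets as patient-experience related, 2) score sentiment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = patient experience, 0 = other).
train_texts = [
    "waited six hours in the ER, staff were rude",
    "thank you to the nurses on ward 7, wonderful care",
    "congrats to @hospital on the new research grant",
    "hiring event for radiology technicians next week",
]
train_labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# Toy lexicon-based sentiment; a real study would use a proper NLP sentiment model.
POSITIVE = {"thank", "wonderful", "great", "kind"}
NEGATIVE = {"rude", "waited", "dirty", "ignored"}

def sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for tweet in ["the nurses were so kind", "waited forever and staff ignored us"]:
    if clf.predict([tweet])[0] == 1:          # flagged as patient experience
        print(tweet, "->", sentiment(tweet))  # signed sentiment score
```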

    Introducing structured caregiver training in stroke care: findings from the TRACS process evaluation study

    Objective: To evaluate the process of implementation of the modified London Stroke Carers Training Course (LSCTC) in the Training Caregivers After Stroke (TRACS) cluster randomised trial and contribute to the interpretation of the TRACS trial results. The LSCTC was a structured competency-based training programme designed to help develop the knowledge and skills (eg, patient handling or transfer skills) essential for the day-to-day management of disabled survivors of stroke. The LSCTC comprised 14 components: 6 were mandatory (and delivered to all) and 8 were non-mandatory, to be delivered based on individual assessment of caregiver need. Design: Process evaluation using non-participant observation, documentary analysis and semistructured interviews. Participants: Patients with stroke (n=38), caregivers (n=38), stroke unit staff (n=53). Settings: 10 of the 36 stroke units participating in the TRACS trial in four English regions (Yorkshire, North West, South East and South West Peninsula). Results: Preparatory cascade training on delivery of the LSCTC did not reach all staff and did not lead to multidisciplinary team (MDT)-wide understanding of, engagement with or commitment to the LSCTC. Although senior therapists in most of the intervention units observed developed ownership of the LSCTC, MDT working led to separation rather than integration of the delivery of LSCTC elements. Organisational features of stroke units and professionals’ patient-focused practices limited the involvement of caregivers. Caregivers were often invited to observe therapy or care being provided by professionals but had few opportunities to make sense of, or to develop, the knowledge and stroke-specific skills provided by the LSCTC. Where provided, caregiver training came very late in the inpatient stay. Assessment and development of caregiver competence was not commonly observed. Conclusions: Contextual factors, including service improvement pressures and staff perceptions of the necessity for and work required in caregiver training, negatively affected implementation of the caregiver training intervention. Structured caregiver training programmes such as the LSCTC are unlikely to be practical in settings with short inpatient stays. Stroke units where early supported discharge is in place potentially offer a more effective vehicle for introducing competency-based caregiver training.

    Cost-effectiveness of alternative changes to a national blood collection service.

    OBJECTIVES: To evaluate the cost-effectiveness of changing opening times, introducing a donor health report and reducing the minimum inter-donation interval for donors attending static centres. BACKGROUND: Evidence is required about the effect of changes to the blood collection service on costs and the frequency of donation. METHODS/MATERIALS: This study estimated the effect of changes to the blood collection service in England on the annual number of whole-blood donations by current donors. We used donors' responses to a stated preference survey, donor registry data on donation frequency and deferral rates from the INTERVAL trial. Costs measured were those anticipated to differ between strategies. We reported the cost per additional unit of blood collected for each strategy versus current practice. Strategies with a cost per additional unit of whole blood less than £30 (an estimate of the current cost of collection) were judged likely to be cost-effective. RESULTS: In static donor centres, extending opening times to evenings and weekends provided an additional unit of whole blood at a cost of £23 and £29, respectively. Introducing a health report cost £130 per additional unit of blood collected. Although the strategy of reducing the minimum inter-donation interval had the lowest cost per additional unit of blood collected (£10), this increased the rate of deferrals due to low haemoglobin (Hb). CONCLUSION: The introduction of a donor health report is unlikely to provide a sufficient increase in donation frequency to justify the additional costs. A more cost-effective change is to extend opening hours for blood collection at static centres.
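
    The decision rule applied here is a simple incremental comparison: divide the extra annual cost of a strategy by the extra units of whole blood it yields, and compare that with the roughly £30 current cost per unit. A minimal sketch of the arithmetic, using made-up inputs rather than the trial's figures, is shown below.

```python
# Cost per additional unit of whole blood for a strategy versus current practice.
# The inputs are hypothetical; only the formula and the £30 threshold follow the abstract.
COST_THRESHOLD_GBP = 30.0  # approximate current cost of collecting one unit

def cost_per_additional_unit(extra_cost_gbp: float, extra_units: float) -> float:
    """Incremental annual cost divided by incremental units collected per year."""
    return extra_cost_gbp / extra_units

# Example: an assumed +£50,000 per year for extended opening yielding +2,000 extra units.
ratio = cost_per_additional_unit(50_000, 2_000)
verdict = "likely cost-effective" if ratio < COST_THRESHOLD_GBP else "unlikely to be cost-effective"
print(f"£{ratio:.0f} per additional unit -> {verdict}")
```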

    Cerebral blood flow predicts differential neurotransmitter activity

    Application of metabolic magnetic resonance imaging measures such as cerebral blood flow in translational medicine is limited by the unknown link of observed alterations to specific neurophysiological processes. In particular, the sensitivity of cerebral blood flow to activity changes in specific neurotransmitter systems remains unclear. We address this question by probing cerebral blood flow in healthy volunteers using seven established drugs with known dopaminergic, serotonergic, glutamatergic and GABAergic mechanisms of action. We use a novel framework aimed at disentangling the observed effects into contributions from underlying neurotransmitter systems. For all evaluated compounds we find a reliable spatial link between the respective cerebral blood flow changes and the underlying neurotransmitter receptor densities corresponding to their primary mechanisms of action. The strength of these associations with receptor density is mediated by the respective drug affinities. These findings suggest that cerebral blood flow is a sensitive brain-wide in vivo assay of metabolic demands across a variety of neurotransmitter systems in humans.
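
    The "spatial link" referred to above is essentially a region-wise association between a drug-induced cerebral blood flow change map and receptor density maps for candidate neurotransmitter systems. The sketch below illustrates that idea with random stand-in data and a plain Pearson correlation; the parcellation, maps and statistics are assumptions, not the authors' framework.

```python
# Illustrative sketch: correlate a regional CBF change map with receptor density maps
# for several neurotransmitter systems (stand-in random data, not real atlas maps).
import numpy as np

rng = np.random.default_rng(0)
n_regions = 100  # e.g. number of parcellated brain regions

receptor_density = {                      # hypothetical atlas-derived density maps
    "D2 (dopamine)": rng.normal(size=n_regions),
    "5-HT2A (serotonin)": rng.normal(size=n_regions),
    "GABA-A": rng.normal(size=n_regions),
}

# Hypothetical drug-induced CBF change, constructed here to track the D2 map.
cbf_change = 0.6 * receptor_density["D2 (dopamine)"] + rng.normal(scale=0.8, size=n_regions)

for system, density in receptor_density.items():
    r = np.corrcoef(cbf_change, density)[0, 1]  # spatial Pearson correlation
    print(f"{system}: r = {r:+.2f}")
```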

    Risk of Cerebrovascular Events in 178 962 Five-Year Survivors of Cancer Diagnosed at 15 to 39 Years of Age: The TYACSS (Teenage and Young Adult Cancer Survivor Study)

    Background: Survivors of teenage and young adult (TYA) cancer are at risk of cerebrovascular events, but the magnitude of this risk, and the extent to which it varies by cancer type, decade of diagnosis, age at diagnosis and attained age, remain uncertain. This is the largest cohort study to date to evaluate the risks of hospitalisation for a cerebrovascular event among long-term survivors of TYA cancer. Methods: The population-based Teenage and Young Adult Cancer Survivor Study (N=178,962) was linked to Hospital Episode Statistics data for England to investigate the risks of hospitalisation for a cerebrovascular event among 5-year survivors of cancer diagnosed when aged 15-39 years. Observed numbers of first hospitalisations for cerebrovascular events were compared with those expected from the general population using standardised hospitalisation ratios (SHR) and absolute excess risks (AER) per 10,000 person-years. Cumulative incidence was calculated with death considered a competing risk. Results: Overall, 2,782 cancer survivors were hospitalised for a cerebrovascular event, 40% more than expected (SHR=1.4, 95% confidence interval [CI]=1.3-1.4). Survivors of central nervous system (CNS) tumours (SHR=4.6, CI=4.3-5.0), head & neck tumours (SHR=2.6, CI=2.2-3.1) and leukaemia (SHR=2.5, CI=1.9-3.1) were at greatest risk. Males had a significantly higher AER than females (AER=7 versus 3), especially among head & neck tumour survivors (AER=30 versus 11). By age 60, 9%, 6% and 5% of CNS tumour, head & neck tumour, and leukaemia survivors, respectively, had been hospitalised for a cerebrovascular event. Beyond age 60, every year 0.4% of CNS tumour survivors were hospitalised for a cerebral infarction (versus 0.1% expected), whereas at any age, every year 0.2% of head & neck tumour survivors were hospitalised for a cerebral infarction (versus 0.06% expected). Conclusions: Survivors of a CNS tumour, head & neck tumour, and leukaemia are particularly at risk of hospitalisation for a cerebrovascular event. The excess risk of cerebral infarction among CNS tumour survivors increases with attained age. For head & neck tumour survivors this excess risk remains high across all ages. These groups of survivors, and in particular males, should be considered for surveillance of cerebrovascular risk factors and potential pharmacological interventions for cerebral infarction prevention.
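
    The two summary statistics reported above have simple definitions: the SHR is the ratio of observed to expected hospitalisations, and the AER is the excess count per 10,000 person-years of follow-up. The sketch below shows those formulas; the person-years figure is a placeholder, as the cohort's actual follow-up time is not given in the abstract.

```python
# Standardised hospitalisation ratio (SHR) and absolute excess risk (AER),
# as defined in the abstract. The person-years value is an assumed placeholder.
def shr(observed: float, expected: float) -> float:
    """Observed hospitalisations divided by the number expected from the general population."""
    return observed / expected

def aer_per_10000(observed: float, expected: float, person_years: float) -> float:
    """Excess hospitalisations per 10,000 person-years of follow-up."""
    return (observed - expected) / person_years * 10_000

observed = 2_782                # cerebrovascular hospitalisations reported in the cohort
expected = observed / 1.4       # implied by the cohort-wide SHR of 1.4
person_years = 2_500_000        # hypothetical follow-up time, for illustration only

print(f"SHR = {shr(observed, expected):.1f}")
print(f"AER = {aer_per_10000(observed, expected, person_years):.1f} per 10,000 person-years")
```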

    Changes to population-based emergence of climate change from CMIP5 to CMIP6

    The Coupled Model Intercomparison Project Phase 6 (CMIP6) model ensemble projects climate change emerging soonest and most strongly at low latitudes, regardless of the emissions pathway taken. In terms of signal-to-noise (S/N) ratios of average annual temperatures, these models project earlier and stronger emergence under the Shared Socio-economic Pathways than the previous generation did under corresponding Representative Concentration Pathways. Spatial patterns of emergence also change between generations of models; under a high emissions scenario, mid-century S/N is lower than previous studies indicated in Central Africa, South Asia, and parts of South America, West Africa, East Asia, and Western Europe, but higher in most other populated areas. We show that these global and regional changes are caused by a combination of higher effective climate sensitivity in the CMIP6 ensemble and changes to emissions pathways, component-wise effective radiative forcing, and region-scale climate responses between model generations. We also present the first population-weighted calculation of climate change emergence for the CMIP6 ensemble, quantifying the number of people exposed to increasing degrees of abnormal temperatures now and into the future. Our results confirm the expected inequity of climate change-related impacts in the decades between now and the 2050 target for net-zero emissions held by many countries. These findings underscore the importance of concurrent investments in both mitigation and adaptation.
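
    Emergence in the S/N framework means the forced warming signal at a location has grown larger than some multiple of the background year-to-year variability. A minimal sketch of that calculation on a synthetic annual-mean temperature series (not CMIP6 output) is given below; the smoothing window, baseline period and S/N > 2 threshold are illustrative choices.

```python
# Minimal signal-to-noise emergence sketch on a synthetic annual-mean temperature series.
# Real analyses use CMIP6 ensemble output; all values here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2101)

noise_sd = 0.25                                               # interannual variability (K)
forced = np.where(years > 1980, 0.025 * (years - 1980), 0.0)  # idealised warming signal (K)
temps = forced + rng.normal(scale=noise_sd, size=years.size)

baseline = temps[(years >= 1850) & (years <= 1900)]
noise = baseline.std(ddof=1)                                  # noise: pre-industrial variability

# Signal: smoothed anomaly relative to the pre-industrial mean.
window = 21
signal = np.convolve(temps - baseline.mean(), np.ones(window) / window, mode="same")
sn_ratio = signal / noise

emerged = years[sn_ratio > 2.0]                               # "emerged" once S/N exceeds 2
print("Year of emergence:", int(emerged.min()) if emerged.size else "not before 2100")
```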

    Prospective comparison of novel dark blood late gadolinium enhancement with conventional bright blood imaging for the detection of scar

    BACKGROUND: Conventional bright blood late gadolinium enhancement (bright blood LGE) imaging is a routine cardiovascular magnetic resonance (CMR) technique offering excellent contrast between areas of LGE and normal myocardium. However, contrast between LGE and blood is frequently poor. Dark blood LGE (DB LGE) employs an inversion recovery T2 preparation to suppress the blood pool, thereby increasing the contrast between the endocardium and blood. The objective of this study is to compare the diagnostic utility of a novel DB phase sensitive inversion recovery (PSIR) LGE CMR sequence to standard bright blood PSIR LGE. METHODS: One hundred seventy-two patients referred for clinical CMR were scanned. A full left ventricular short-axis stack was performed using both techniques, with the order in which they were performed alternated in a 1:1 ratio. Two experienced observers analyzed all bright blood LGE and DB LGE stacks, which were randomized and anonymized. A scoring system was devised to quantify the presence and extent of gadolinium enhancement and the confidence with which the diagnosis could be made. RESULTS: A total of 2752 LV segments were analyzed. There was very good inter-observer correlation for quantifying LGE. DB LGE analysis found 41.5% more segments exhibiting hyperenhancement than bright blood LGE (248/2752 segments (9.0%) positive for LGE with bright blood; 351/2752 segments (12.8%) positive for LGE with DB; p < 0.05). DB LGE also allowed observers to be more confident when diagnosing LGE (bright blood LGE high confidence in 154/248 regions (62.1%); DB LGE in 275/324 regions (84.9%); p < 0.05). Eighteen patients with no bright blood LGE were found to have DB LGE, 15 of whom had no known history of myocardial infarction. CONCLUSIONS: DB LGE significantly increases LGE detection compared to standard bright blood LGE. It also increases observer confidence, particularly for subendocardial LGE, which may have important clinical implications.
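
    The headline "41.5% more segments" follows directly from the segment counts reported above; the short sketch below simply re-derives the per-technique detection rates and the relative increase (the quoted p-values would require the underlying paired segment-level data, which the abstract does not provide).

```python
# Re-derive the detection rates and relative increase from the reported segment counts.
total_segments = 2752
bright_positive = 248   # segments positive for LGE with bright blood imaging
dark_positive = 351     # segments positive for LGE with dark blood imaging

print(f"Bright blood LGE: {bright_positive / total_segments:.1%}")      # ~9.0%
print(f"Dark blood LGE:   {dark_positive / total_segments:.1%}")        # ~12.8%
print(f"Relative increase: {dark_positive / bright_positive - 1:.1%}")  # ~41.5%
```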