135 research outputs found
p63 is an alternative p53 repressor in melanoma that confers chemoresistance and a poor prognosis.
The role of apoptosis in melanoma pathogenesis and chemoresistance is poorly characterized. Mutations in TP53 occur infrequently, yet the TP53 apoptotic pathway is often abrogated. This may result from alterations in TP53 family members, including the TP53 homologue TP63. Here we demonstrate that TP63 has an antiapoptotic role in melanoma and is responsible for mediating chemoresistance. Although p63 was not expressed in primary melanocytes, up-regulation of p63 mRNA and protein was observed in melanoma cell lines and clinical samples, providing the first evidence of significant p63 expression in this lineage. Upon genotoxic stress, endogenous p63 isoforms were stabilized in both nuclear and mitochondrial subcellular compartments. Our data provide evidence of a physiological interaction between p63 and p53 whereby translocation of p63 to the mitochondria occurred through a codependent process with p53, whereas accumulation of p53 in the nucleus was prevented by p63. Using RNA interference technology, both isoforms of p63 (TA and ΔNp63) were demonstrated to confer chemoresistance, revealing a novel oncogenic role for p63 in melanoma cells. Furthermore, expression of p63 in both primary and metastatic melanoma clinical samples significantly correlated with melanoma-specific deaths in these patients. Ultimately, these observations provide a possible explanation for abrogation of the p53-mediated apoptotic pathway in melanoma, suggesting novel approaches aimed at sensitizing melanoma to therapeutic agents.
Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis
Background: Global and regional prevalence estimates for blindness and vision impairment are important for the development of public health policies. We aimed to provide global estimates, trends, and projections of global blindness and vision impairment.
Methods: We did a systematic review and meta-analysis of population-based datasets relevant to global vision impairment and blindness that were published between 1980 and 2015. We fitted hierarchical models to estimate the prevalence (by age, country, and sex), in 2015, of mild visual impairment (presenting visual acuity worse than 6/12 to 6/18 inclusive), moderate to severe visual impairment (presenting visual acuity worse than 6/18 to 3/60 inclusive), blindness (presenting visual acuity worse than 3/60), and functional presbyopia (defined as presenting near vision worse than N6 or N8 at 40 cm when best-corrected distance visual acuity was better than 6/12).
Findings: Globally, of the 7·33 billion people alive in 2015, an estimated 36·0 million (80% uncertainty interval [UI] 12·9–65·4) were blind (crude prevalence 0·48%; 80% UI 0·17–0·87; 56% female), 216·6 million (80% UI 98·5–359·1) people had moderate to severe visual impairment (2·95%, 80% UI 1·34–4·89; 55% female), and 188·5 million (80% UI 64·5–350·2) had mild visual impairment (2·57%, 80% UI 0·88–4·77; 54% female). Functional presbyopia affected an estimated 1094·7 million (80% UI 581·1–1686·5) people aged 35 years and older, with 666·7 million (80% UI 364·9–997·6) being aged 50 years or older. The estimated number of blind people increased by 17·6%, from 30·6 million (80% UI 9·9–57·3) in 1990 to 36·0 million (80% UI 12·9–65·4) in 2015. This change was attributable to three factors, namely an increase because of population growth (38·4%), population ageing after accounting for population growth (34·6%), and reduction in age-specific prevalence (–36·7%). The number of people with moderate and severe visual impairment also increased, from 159·9 million (80% UI 68·3–270·0) in 1990 to 216·6 million (80% UI 98·5–359·1) in 2015.
Interpretation: There is an ongoing reduction in the age-standardised prevalence of blindness and visual impairment, yet the growth and ageing of the world’s population are causing a substantial increase in the number of people affected. These observations, plus a very large contribution from uncorrected presbyopia, highlight the need to scale up vision impairment alleviation efforts at all levels.
Activation and Deactivation of a Robust Immobilized Cp*Ir-Transfer Hydrogenation Catalyst: A Multielement in Situ X-ray Absorption Spectroscopy Study
A highly robust immobilized [Cp*IrCl2]2 precatalyst on Wang resin for transfer hydrogenation, which can be recycled up to 30 times, was studied using a novel combination of X-ray absorption spectroscopy (XAS) at the Ir L3-edge, Cl K-edge, and K K-edge. These measurements culminate in in situ XAS experiments that link structural changes of the Ir complex with its catalytic activity and its deactivation. Mercury poisoning and “hot filtration” experiments ruled out leached Ir as the active catalyst. Spectroscopic evidence indicates the exchange of one chloride ligand with an alkoxide to generate the active precatalyst. The exchange of the second chloride ligand, however, leads to a potassium alkoxide–iridate species as the deactivated form of this immobilized catalyst. These findings could be widely applicable to the many homogeneous transfer hydrogenation catalysts with a Cp*IrCl substructure.
Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis
Background: Contemporary data on causes of vision impairment and blindness form an important basis for recommendations in public health policies. Refreshment of the Global Vision Database with recently published data sources permitted modeling of cause of vision loss data from 1990 to 2015, further disaggregation by cause, and forecasts to 2020.
Methods: Published and unpublished population-based data on the causes of vision impairment and blindness from 1980 to 2015 were systematically analysed. A series of regression models were fitted to estimate the proportion of moderate and severe vision impairment (MSVI; defined as presenting visual acuity <6/18 but ≥3/60 in the better eye) and blindness (presenting visual acuity <3/60 in the better eye) by cause, age, region, and year.
Findings: Among the projected global population with MSVI (216.6 million; 80% uncertainty intervals [UI] 98.5-359.1), the leading causes in 2015 were uncorrected refractive error (116.3 million; UI 49.4-202.1), cataract (52.6 million; UI 18.2-109.6), age-related macular degeneration (AMD; 8.4 million; UI 0.9-29.5), glaucoma (4.0 million; UI 0.6-13.3), and diabetic retinopathy (2.6 million; UI 0.2-9.9). In 2015, the leading global causes of blindness were cataract (12.6 million; UI 3.4-28.7), followed by uncorrected refractive error (7.4 million; UI 2.4-14.8) and glaucoma (2.9 million; UI 0.4-9.9); by 2020, the numbers affected are anticipated to rise to 13.4 million, 8.0 million, and 3.2 million, respectively. Cataract and uncorrected refractive error combined contributed to 55% of blindness and 77% of MSVI in adults aged 50 years and older in 2015. World regions varied markedly in the causes of blindness, with a relatively low prevalence of cataract and a relatively high prevalence of AMD as causes of vision loss in the high-income subregions. Blindness due to cataract and diabetic retinopathy was more common among women, while blindness due to glaucoma and corneal opacity was more common among men, with no gender difference related to AMD.
Conclusions: The numbers of people affected by the common causes of vision loss have increased substantially as the population grows and ages. Preventable vision loss due to cataract and refractive error (reversible with surgery and spectacle correction, respectively) continues to cause the majority of blindness and MSVI in adults aged 50 years and older. A massive scale-up of eye care provision to cope with the increasing numbers is needed to address avoidable vision loss.
Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017 : a systematic analysis for the Global Burden of Disease Study 2017
Background: The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk outcome pairs, and new data on risk exposure levels and risk outcome associations.
Methods: We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017.
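The counterfactual comparison at the heart of this method can be illustrated with the standard population attributable fraction (PAF) formula. The sketch below is a minimal illustration of that formula only; the exposure categories, relative risks, and TMREL distribution are hypothetical placeholders, not GBD estimates.

```python
# Sketch of a population attributable fraction (PAF) under a theoretical-
# minimum-risk counterfactual. All numbers below are illustrative.

def paf(prevalence, relative_risk, tmrel_prevalence):
    """PAF = (sum(P_i * RR_i) - sum(P'_i * RR_i)) / sum(P_i * RR_i),
    where P is the observed exposure distribution and P' is the
    counterfactual (TMREL) distribution over the same categories."""
    observed = sum(p * rr for p, rr in zip(prevalence, relative_risk))
    counterfactual = sum(p * rr for p, rr in zip(tmrel_prevalence, relative_risk))
    return (observed - counterfactual) / observed

# Three hypothetical exposure categories (none / moderate / high):
prevalence = [0.5, 0.3, 0.2]       # observed population distribution
relative_risk = [1.0, 1.5, 2.5]    # risk relative to the unexposed group
tmrel = [1.0, 0.0, 0.0]            # counterfactual: everyone at minimum risk

fraction = paf(prevalence, relative_risk, tmrel)
deaths = 1_000_000                 # hypothetical cause-specific death count
print(f"PAF = {fraction:.3f}; attributable deaths = {fraction * deaths:,.0f}")
```

Attributable deaths are then the cause-specific deaths multiplied by the PAF; the full GBD machinery additionally pools data, adjusts for bias, and stratifies by age, sex, year, and location.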
Findings: In 2017, 34.1 million (95% uncertainty interval [UI] 33.3-35.0) deaths and 1.21 billion (1.14-1.28) DALYs were attributable to GBD risk factors. Globally, 61.0% (59.6-62.4) of deaths and 48.3% (46.3-50.2) of DALYs were attributed to the GBD 2017 risk factors. When ranked by risk-attributable DALYs, high systolic blood pressure (SBP) was the leading risk factor, accounting for 10.4 million (9.39-11.5) deaths and 218 million (198-237) DALYs, followed by smoking (7.10 million [6.83-7.37] deaths and 182 million [173-193] DALYs), high fasting plasma glucose (6.53 million [5.23-8.23] deaths and 171 million [144-201] DALYs), high body-mass index (BMI; 4.72 million [2.99-6.70] deaths and 148 million [98.6-202] DALYs), and short gestation for birthweight (1.43 million [1.36-1.51] deaths and 139 million [131-147] DALYs). In total, risk-attributable DALYs declined by 4.9% (3.3-6.5) between 2007 and 2017. In the absence of demographic changes (ie, population growth and ageing), changes in risk exposure and risk-deleted DALYs would have led to a 23.5% decline in DALYs during that period. Conversely, in the absence of changes in risk exposure and risk-deleted DALYs, demographic changes would have led to an 18.6% increase in DALYs during that period. The ratios of observed risk exposure levels to exposure levels expected based on SDI (O/E ratios) increased globally for unsafe drinking water and household air pollution between 1990 and 2017. This result suggests that development is occurring more rapidly than are changes in the underlying risk structure in a population. Conversely, nearly universal declines in O/E ratios for smoking and alcohol use indicate that, for a given SDI, exposure to these risks is declining. In 2017, the leading Level 4 risk factor for age-standardised DALY rates was high SBP in four super-regions: central Europe, eastern Europe, and central Asia; north Africa and Middle East; south Asia; and southeast Asia, east Asia, and Oceania.
The leading risk factor in the high-income super-region was smoking, in Latin America and Caribbean was high BMI, and in sub-Saharan Africa was unsafe sex. O/E ratios for unsafe sex in sub-Saharan Africa were notably high, and those for alcohol use in north Africa and the Middle East were notably low.
Interpretation: By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. Although levels of development underlie some of this heterogeneity, O/E ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning.
Which clinical research questions are the most important? Development and preliminary validation of the Australia & New Zealand Musculoskeletal (ANZMUSC) Clinical Trials Network Research Question Importance Tool (ANZMUSC-RQIT)
Background and aims: High-quality clinical research that addresses important questions requires significant resources. In resource-constrained environments, projects will therefore need to be prioritised. The Australia and New Zealand Musculoskeletal (ANZMUSC) Clinical Trials Network aimed to develop a stakeholder-based, transparent, easily implementable tool that provides a score for the 'importance' of a research question, which could be used to rank research projects in order of importance. Methods: Using a mixed-methods, multi-stage approach that included a Delphi survey, a consensus workshop, inter-rater reliability testing, validity testing, and calibration using a discrete-choice methodology, the Research Question Importance Tool (ANZMUSC-RQIT) was developed. The tool incorporated broad stakeholder opinion, including consumers, at each stage and is designed for scoring by committee consensus. Results: The ANZMUSC-RQIT tool consists of 5 dimensions (compared with 6 dimensions for an earlier version of RQIT): (1) extent of stakeholder consensus, (2) social burden of health condition, (3) patient burden of health condition, (4) anticipated effectiveness of proposed intervention, and (5) extent to which health equity is addressed by the research. Each dimension is assessed by defining ordered levels of a relevant attribute and by assigning a score to each level. The scores for the dimensions are then summed to obtain an overall ANZMUSC-RQIT score, which represents the importance of the research question. The result is a score on an interval scale with an arbitrary unit, ranging from 0 (minimal importance) to 1000. The ANZMUSC-RQIT dimensions can be reliably ordered by committee consensus (ICC 0.73-0.93) and the overall score is positively associated with citation count (standardised regression coefficient 0.33). Conclusion: We propose that the ANZMUSC-RQIT is a useful tool for prioritising the importance of a research question.
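The additive scoring scheme described above (ordered levels per dimension, points per level, summed to a 0-1000 total) can be sketched as follows. The five dimension names match the abstract, but the per-level point values and the example ratings are hypothetical placeholders; the calibrated weights from the discrete-choice experiment are not given here.

```python
# Sketch of ANZMUSC-RQIT-style additive scoring: each dimension is rated at an
# ordered level, each level maps to points, and the points are summed to give
# an overall importance score on a 0-1000 scale. The point values below are
# ILLUSTRATIVE only, chosen so the top levels sum to 1000.

DIMENSIONS = {
    "stakeholder_consensus":     [0, 60, 120, 180],   # levels 0..3
    "social_burden":             [0, 70, 140, 210],
    "patient_burden":            [0, 70, 140, 210],
    "anticipated_effectiveness": [0, 80, 160, 240],
    "health_equity":             [0, 53, 106, 160],
}

def rqit_score(ratings):
    """Sum the points for the level chosen (by committee consensus) on each dimension."""
    return sum(DIMENSIONS[dim][level] for dim, level in ratings.items())

# Hypothetical committee ratings: top level on every dimension.
ratings = {dim: 3 for dim in DIMENSIONS}
print(rqit_score(ratings))  # maximum possible score on this illustrative scale
```

Because the dimension scores are simply summed, two research questions can be ranked directly by their totals, which is what makes the tool usable for prioritising a project portfolio.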
Greater recovery after critical illness (GRACE): a call to action to create a new roadmap for critical illness research
For decades, most critical care patients have survived hospitalisation, supporting increased attention to long-term critical illness recovery. The term ‘Post-Intensive Care Syndrome’ was coined in 2012 to raise awareness of long-term impairment in physical, cognitive and/or mental health after critical illness. However, the incidence of these impairments has persisted over the past decade, reaching as high as 60%, and remains a major public health problem. Aiming to set a research agenda to address evidence gaps in critical illness recovery over the next 10 years, we invited key international opinion leaders from diverse clinical and methodological backgrounds to a roundtable meeting in June 2024 to assess the progress of post-critical illness recovery research and outline a future research agenda to address the unmet needs of critical illness survivors over the next decade. An early outcome from the meeting was a thematic analysis of the critical care recovery literature, which highlighted the need for effective expectation management, ongoing patient support and education throughout recovery, integration between inpatient and community care, caregiver support, and opportunities to reconnect with the intensive care unit. Participants identified conceptual challenges concerning current terminology and scope, population heterogeneity and phenotyping, and outcome definitions. Methodological challenges were identified around study design, with a call to shift to contemporary trial designs incorporating qualitative methods. Translation into clinical practice will require interdisciplinary engagement. The roundtable concluded that a roadmap should be developed to guide clinical and research efforts over the coming decade, with the aim of developing a precision recovery approach.
Typical investigational medicinal products follow relatively uniform regulations in 10 European Clinical Research Infrastructures Network (ECRIN) countries
Background: In order to facilitate multinational clinical research, regulatory requirements need to become international and harmonised. The EU introduced Directive 2001/20/EC in 2004, regulating investigational medicinal products in Europe. Methods: We conducted a survey in order to identify the national regulatory requirements for major categories of clinical research in ten European Clinical Research Infrastructures Network (ECRIN) countries (Austria, Denmark, France, Germany, Hungary, Ireland, Italy, Spain, Sweden, and the United Kingdom), covering approximately 70% of the EU population. Here we describe the results for regulatory requirements for typical investigational medicinal products in the ten countries. Results: Our results show that the ten countries have fairly harmonised definitions of typical investigational medicinal products. Clinical trials assessing typical investigational medicinal products require authorisation from a national competent authority in each of the countries surveyed. The opinion of the competent authorities is communicated to the trial sponsor within the same timelines, i.e., no more than 60 days, in all ten countries. The authority to which the application has to be sent, however, is not fully harmonised across the countries. Conclusion: Directive 2001/20/EC defined the term 'investigational medicinal product', and all regulatory requirements described therein are applicable to investigational medicinal products. Our survey showed, however, that those requirements had been adopted in ten European countries not for investigational medicinal products overall, but rather for a narrower category which we term 'typical' investigational medicinal products. The result is partial EU harmonisation of requirements and a relatively navigable landscape for the sponsor regarding typical investigational medicinal products.
Effect of Hydrocortisone on Mortality and Organ Support in Patients With Severe COVID-19: The REMAP-CAP COVID-19 Corticosteroid Domain Randomized Clinical Trial.
Importance: Evidence regarding corticosteroid use for severe coronavirus disease 2019 (COVID-19) is limited. Objective: To determine whether hydrocortisone improves outcome for patients with severe COVID-19. Design, Setting, and Participants: An ongoing adaptive platform trial testing multiple interventions within multiple therapeutic domains, for example, antiviral agents, corticosteroids, or immunoglobulin. Between March 9 and June 17, 2020, 614 adult patients with suspected or confirmed COVID-19 were enrolled and randomized within at least 1 domain following admission to an intensive care unit (ICU) for respiratory or cardiovascular organ support at 121 sites in 8 countries. Of these, 403 were randomized to open-label interventions within the corticosteroid domain. The domain was halted after results from another trial were released. Follow-up ended August 12, 2020. Interventions: The corticosteroid domain randomized participants to a fixed 7-day course of intravenous hydrocortisone (50 mg or 100 mg every 6 hours) (n = 143), a shock-dependent course (50 mg every 6 hours when shock was clinically evident) (n = 152), or no hydrocortisone (n = 108). Main Outcomes and Measures: The primary end point was organ support-free days (days alive and free of ICU-based respiratory or cardiovascular support) within 21 days, where patients who died were assigned -1 day. The primary analysis was a bayesian cumulative logistic model that included all patients enrolled with severe COVID-19, adjusting for age, sex, site, region, time, assignment to interventions within other domains, and domain and intervention eligibility. Superiority was defined as the posterior probability of an odds ratio greater than 1 (threshold for trial conclusion of superiority >99%). 
Results: After excluding 19 participants who withdrew consent, there were 384 patients (mean age, 60 years; 29% female) randomized to the fixed-dose (n = 137), shock-dependent (n = 146), and no (n = 101) hydrocortisone groups; 379 (99%) completed the study and were included in the analysis. The mean age for the 3 groups ranged between 59.5 and 60.4 years; most patients were male (range, 70.6%-71.5%); mean body mass index ranged between 29.7 and 30.9; and patients receiving mechanical ventilation ranged between 50.0% and 63.5%. For the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively, the median organ support-free days were 0 (IQR, -1 to 15), 0 (IQR, -1 to 13), and 0 (IQR, -1 to 11) days (composed of 30%, 26%, and 33% mortality rates and 11.5, 9.5, and 6 median organ support-free days among survivors). The median adjusted odds ratio and bayesian probability of superiority were 1.43 (95% credible interval, 0.91-2.27) and 93% for fixed-dose hydrocortisone, respectively, and were 1.22 (95% credible interval, 0.76-1.94) and 80% for shock-dependent hydrocortisone compared with no hydrocortisone. Serious adverse events were reported in 4 (3%), 5 (3%), and 1 (1%) patients in the fixed-dose, shock-dependent, and no hydrocortisone groups, respectively. Conclusions and Relevance: Among patients with severe COVID-19, treatment with a 7-day fixed-dose course of hydrocortisone or shock-dependent dosing of hydrocortisone, compared with no hydrocortisone, resulted in 93% and 80% probabilities of superiority with regard to the odds of improvement in organ support-free days within 21 days. However, the trial was stopped early and no treatment strategy met prespecified criteria for statistical superiority, precluding definitive conclusions. Trial Registration: ClinicalTrials.gov Identifier: NCT02735707.
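The "probability of superiority" reported above is read directly off the posterior: it is the share of posterior odds-ratio samples that exceed 1. The sketch below illustrates only that readout; the posterior draws are simulated from an assumed approximately normal log-OR distribution, not from REMAP-CAP data.

```python
# Sketch of a Bayesian probability-of-superiority readout: given posterior
# samples of an odds ratio, P(superiority) is the fraction of samples > 1.
# The log-OR posterior here is SIMULATED for illustration only.
import math
import random

random.seed(0)
# Assume a roughly normal posterior for log(OR), centred near log(1.43).
samples = [math.exp(random.gauss(0.36, 0.24)) for _ in range(100_000)]

p_superiority = sum(or_sample > 1.0 for or_sample in samples) / len(samples)
median_or = sorted(samples)[len(samples) // 2]
print(f"median OR ~ {median_or:.2f}, P(OR > 1) ~ {p_superiority:.2f}")
```

With these illustrative parameters the readout lands near the trial's reported 1.43 median OR and 93% probability, which shows why a high probability of superiority can coexist with a credible interval that crosses 1: the tail below OR = 1 is small but not negligible, hence the trial's non-definitive conclusion.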
The impact of immediate breast reconstruction on the time to delivery of adjuvant therapy: the iBRA-2 study
Background: Immediate breast reconstruction (IBR) is routinely offered to improve quality of life for women requiring mastectomy, but there are concerns that more complex surgery may delay adjuvant oncological treatments and compromise long-term outcomes. High-quality evidence is lacking. The iBRA-2 study aimed to investigate the impact of IBR on time to adjuvant therapy. Methods: Consecutive women undergoing mastectomy ± IBR for breast cancer between July and December 2016 were included. Patient demographics and operative, oncological, and complication data were collected. The time from last definitive cancer surgery to first adjuvant treatment was compared for patients undergoing mastectomy ± IBR, and risk factors associated with delays were explored. Results: A total of 2540 patients were recruited from 76 centres; 1008 (39.7%) underwent IBR (implant-only [n = 675, 26.6%], pedicled flaps [n = 105, 4.1%], and free flaps [n = 228, 8.9%]). Complications requiring re-admission or re-operation were significantly more common in patients undergoing IBR than in those receiving mastectomy alone. Adjuvant chemotherapy or radiotherapy was required by 1235 (48.6%) patients. No clinically significant differences were seen in time to adjuvant therapy between patient groups, but major complications, irrespective of the surgery received, were significantly associated with treatment delays. Conclusion: IBR does not result in clinically significant delays to adjuvant therapy, but post-operative complications are associated with treatment delays. Strategies to minimise complications, including careful patient selection, are required to improve outcomes for patients.