180 research outputs found
Six-minute walk distance after coronary artery bypass grafting compared with medical therapy in ischaemic cardiomyopathy
Background: In patients with ischaemic left ventricular dysfunction, coronary artery bypass surgery (CABG) may decrease mortality, but it is not known whether CABG improves functional capacity.
Objective: To determine whether CABG compared with medical therapy alone (MED) increases 6 min walk distance in patients with ischaemic left ventricular dysfunction and coronary artery disease amenable to revascularisation.
Methods: The Surgical Treatment for Ischemic Heart Failure (STICH) trial randomised 1212 patients with ischaemic left ventricular dysfunction to CABG or MED. A 6 min walk test was performed at baseline and at one or more follow-up assessments at 4, 12, 24 and/or 36 months in 409 patients randomised to CABG and 466 to MED. Change in 6 min walk distance between baseline and follow-up was compared by treatment allocation.
Results: 6 min walk distance at baseline for CABG was mean 340±117 m and for MED 339±118 m. Change in walk distance from baseline was similar for CABG and MED groups at 4 months (mean +38 vs +28 m), 12 months (+47 vs +36 m), 24 months (+31 vs +34 m) and 36 months (−7 vs +7 m), P>0.10 for all. Change in walk distance between CABG and MED groups over all assessments was also similar after adjusting for covariates and imputation for missing values (+8 m, 95% CI −7 to 23 m, P=0.29). Results were consistent for subgroups defined by angina, New York Heart Association class ≥3, left ventricular ejection fraction, baseline walk distance and geographic region.
Conclusion: In patients with ischaemic left ventricular dysfunction, CABG compared with MED alone is known to reduce mortality but is unlikely to produce a clinically significant improvement in functional capacity.
Coronary-artery bypass surgery in patients with ischemic cardiomyopathy
BACKGROUND
The survival benefit of a strategy of coronary-artery bypass grafting (CABG) added to
guideline-directed medical therapy, as compared with medical therapy alone, in patients
with coronary artery disease, heart failure, and severe left ventricular systolic
dysfunction remains unclear.
METHODS
From July 2002 to May 2007, a total of 1212 patients with an ejection fraction of 35%
or less and coronary artery disease amenable to CABG were randomly assigned to
undergo CABG plus medical therapy (CABG group, 610 patients) or medical therapy
alone (medical-therapy group, 602 patients). The primary outcome was death from any
cause. Major secondary outcomes included death from cardiovascular causes and death
from any cause or hospitalization for cardiovascular causes. The median duration of
follow-up, including the current extended-follow-up study, was 9.8 years.
RESULTS
A primary outcome event occurred in 359 patients (58.9%) in the CABG group and in
398 patients (66.1%) in the medical-therapy group (hazard ratio with CABG vs. medical
therapy, 0.84; 95% confidence interval [CI], 0.73 to 0.97; P=0.02 by log-rank test). A
total of 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the
medical-therapy group died from cardiovascular causes (hazard ratio, 0.79; 95% CI,
0.66 to 0.93; P=0.006 by log-rank test). Death from any cause or hospitalization for
cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and in 524
patients (87.0%) in the medical-therapy group (hazard ratio, 0.72; 95% CI, 0.64 to 0.82;
P<0.001 by log-rank test).
CONCLUSIONS
In a cohort of patients with ischemic cardiomyopathy, the rates of death from any
cause, death from cardiovascular causes, and death from any cause or hospitalization
for cardiovascular causes were significantly lower over 10 years among patients who
underwent CABG in addition to receiving medical therapy than among those who received
medical therapy alone. (Funded by the National Institutes of Health; STICH [and
STICHES] ClinicalTrials.gov number, NCT00023595.)
High Prevalence of Chronic Pituitary and Target-Organ Hormone Abnormalities after Blast-Related Mild Traumatic Brain Injury
Studies of traumatic brain injury from all causes have found evidence of chronic hypopituitarism, defined by deficient production of one or more pituitary hormones at least 1 year after injury, in 25–50% of cases. Most studies found the occurrence of posttraumatic hypopituitarism (PTHP) to be unrelated to injury severity. Growth hormone deficiency (GHD) and hypogonadism were reported most frequently. Hypopituitarism, and in particular adult GHD, is associated with symptoms that resemble those of PTSD, including fatigue, anxiety, depression, irritability, insomnia, sexual dysfunction, cognitive deficiencies, and decreased quality of life. However, the prevalence of PTHP after blast-related mild TBI (mTBI), an extremely common injury in modern military operations, has not been characterized. We measured concentrations of 12 pituitary and target-organ hormones in two groups of male US Veterans of combat in Iraq or Afghanistan. One group consisted of participants with blast-related mTBI whose last blast exposure was at least 1 year prior to the study. The other consisted of Veterans with similar military deployment histories but without blast exposure. Eleven of 26 (42%) participants with blast concussions were found to have abnormal hormone levels in one or more pituitary axes, a prevalence similar to that found in other forms of TBI. Five members of the mTBI group had markedly low age-adjusted insulin-like growth factor-I (IGF-I) levels indicative of probable GHD, and three had testosterone and gonadotropin concentrations consistent with hypogonadism. If symptoms characteristic of both PTHP and PTSD can be linked to pituitary dysfunction, they may be amenable to treatment with hormone replacement. Routine screening for chronic hypopituitarism after blast concussion shows promise for appropriately directing diagnostic and therapeutic decisions that otherwise may remain unconsidered and for markedly facilitating recovery and rehabilitation.
Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990–2017 : a systematic analysis for the Global Burden of Disease Study 2017
Background: The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk outcome pairs, and new data on risk exposure levels and risk outcome associations.
Methods: We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017.
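The counterfactual attribution step described above rests on the standard population attributable fraction (PAF), which relates exposure prevalence and relative risk to the share of burden that would be avoided if exposure were at the TMREL. A minimal sketch of that calculation, using illustrative numbers rather than GBD estimates and hypothetical function names:

```python
def attributable_fraction(prevalence, relative_risk):
    """Population attributable fraction (PAF) for one categorical risk.

    prevalence: prevalence of each exposure level above the theoretical
        minimum risk exposure level (TMREL); the rest of the population
        is assumed to sit at the TMREL.
    relative_risk: relative risk of the outcome at each exposure level,
        relative to the TMREL.
    """
    # Excess risk contributed by each exposure level, weighted by prevalence.
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalence, relative_risk))
    return excess / (excess + 1.0)


def attributable_deaths(total_deaths, prevalence, relative_risk):
    """Deaths that would be averted if exposure moved to the TMREL."""
    return total_deaths * attributable_fraction(prevalence, relative_risk)


# Illustrative only: 30% of people at moderate exposure (RR 1.5),
# 10% at high exposure (RR 2.5); excess = 0.3*0.5 + 0.1*1.5 = 0.30,
# so PAF = 0.30 / 1.30.
paf = attributable_fraction([0.3, 0.1], [1.5, 2.5])
averted = attributable_deaths(10_000, [0.3, 0.1], [1.5, 2.5])
```

The full GBD CRA additionally handles continuous exposures, mediation between risks, and joint effects of clustered risks, which this single-risk sketch omits.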
Findings: In 2017, 34.1 million (95% uncertainty interval [UI] 33.3-35.0) deaths and 1.21 billion (1.14-1.28) DALYs were attributable to GBD risk factors. Globally, 61.0% (59.6-62.4) of deaths and 48.3% (46.3-50.2) of DALYs were attributed to the GBD 2017 risk factors. When ranked by risk-attributable DALYs, high systolic blood pressure (SBP) was the leading risk factor, accounting for 10.4 million (9.39-11.5) deaths and 218 million (198-237) DALYs, followed by smoking (7.10 million [6.83-7.37] deaths and 182 million [173-193] DALYs), high fasting plasma glucose (6.53 million [5.23-8.23] deaths and 171 million [144-201] DALYs), high body-mass index (BMI; 4.72 million [2.99-6.70] deaths and 148 million [98.6-202] DALYs), and short gestation for birthweight (1.43 million [1.36-1.51] deaths and 139 million [131-147] DALYs). In total, risk-attributable DALYs declined by 4.9% (3.3-6.5) between 2007 and 2017. In the absence of demographic changes (ie, population growth and ageing), changes in risk exposure and risk-deleted DALYs would have led to a 23.5% decline in DALYs during that period. Conversely, in the absence of changes in risk exposure and risk-deleted DALYs, demographic changes would have led to an 18.6% increase in DALYs during that period. The ratios of observed risk exposure levels to exposure levels expected based on SDI (O/E ratios) increased globally for unsafe drinking water and household air pollution between 1990 and 2017. This result suggests that development is occurring more rapidly than are changes in the underlying risk structure in a population. Conversely, nearly universal declines in O/E ratios for smoking and alcohol use indicate that, for a given SDI, exposure to these risks is declining. In 2017, the leading Level 4 risk factor for age-standardised DALY rates was high SBP in four super-regions: central Europe, eastern Europe, and central Asia; north Africa and Middle East; south Asia; and southeast Asia, east Asia, and Oceania.
The leading risk factor in the high-income super-region was smoking, in Latin America and Caribbean was high BMI, and in sub-Saharan Africa was unsafe sex. O/E ratios for unsafe sex in sub-Saharan Africa were notably high, and those for alcohol use in north Africa and the Middle East were notably low.
Interpretation: By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and an opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. Although levels of development underlie some of this heterogeneity, O/E ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning.
Does prior coronary angioplasty affect outcomes of surgical coronary revascularization? Insights from the STICH trial
Background:
The STICH trial showed superiority of coronary artery bypass plus medical treatment (CABG) over medical treatment alone (MED) in patients with left ventricular ejection fraction (LVEF) ≤35%. In previous publications, percutaneous coronary intervention (PCI) prior to CABG was associated with worse prognosis.
Objectives:
The main purpose of this study was to analyse whether prior PCI influenced outcomes in STICH.
Methods and results:
Patients in the STICH trial (n = 1212), followed for a median time of 9.8 years, were included in the present analyses. In the total population, 156 had a prior PCI (74 and 82, respectively, in the MED and CABG groups). In those with vs. without prior PCI, the adjusted hazard ratios (aHRs) were 0.92 (95% CI = 0.74–1.15) for all-cause mortality, 0.85 (95% CI = 0.64–1.11) for CV mortality, and 1.43 (95% CI = 1.15–1.77) for CV hospitalization. In the group randomized to CABG without prior PCI, the aHRs were 0.82 (95% CI = 0.70–0.95) for all-cause mortality, 0.75 (95% CI = 0.62–0.90) for CV mortality and 0.67 (95% CI = 0.56–0.80) for CV hospitalization. In the group randomized to CABG with prior PCI, the aHRs were 0.76 (95% CI = 0.50–1.15) for all-cause mortality, 0.81 (95% CI = 0.49–1.36) for CV mortality and 0.61 (95% CI = 0.41–0.90) for CV hospitalization. There was no evidence of interaction between randomized treatment and prior PCI for any endpoint (all adjusted p > 0.05).
Conclusion:
In the STICH trial, prior PCI did not affect the outcomes of patients whether they were treated medically or surgically, and the superiority of CABG over MED remained unchanged regardless of prior PCI.
Clinical trial registration:
Clinicaltrials.gov; Identifier: NCT00023595
Medical therapy and outcomes in REVIVED-BCIS2 and STICHES: an individual patient data analysis.
Background and aims: In the Surgical Treatment for Ischaemic Heart Failure Trial Extension Study (STICHES), coronary artery bypass grafting (CABG) improved outcomes of patients with ischaemic left ventricular dysfunction receiving medical therapy, whereas in the Revascularisation for Ischaemic Ventricular Dysfunction trial (REVIVED-BCIS2), percutaneous coronary intervention (PCI) did not. The aim of this study was to explore differences in outcomes of participants treated with medical therapy alone in STICHES vs. REVIVED-BCIS2 and to assess the incremental benefit of CABG or PCI. Methods: Pooled analysis of adjusted individual participant data from two multicentre randomized trials. All patients had left ventricular ejection fraction ≤35% and coronary artery disease and received medical therapy. Participants were randomized 1:1 to CABG (STICHES) or PCI (REVIVED-BCIS2). The primary outcome was the composite of all-cause death and hospitalization for heart failure over all available follow-up. Results: A total of 1912 participants (88% male, 76% white ethnicity) were included with 98.3% completeness of follow-up for the primary outcome. The median follow-up was 118 months in STICHES and 41 months in REVIVED-BCIS2. Those receiving medical therapy alone in REVIVED-BCIS2 had fewer primary outcome events than those receiving medical therapy alone in STICHES (adjusted hazard ratio 0.60, 95% confidence interval 0.48-0.74, P < .001). Patients receiving PCI in REVIVED-BCIS2 were less likely to experience a primary outcome event than those receiving CABG in STICHES. Adjusted outcomes of patients treated with CABG in STICHES were worse than those receiving medical therapy alone in REVIVED-BCIS2. Conclusions: Patients with ischaemic cardiomyopathy receiving medical therapy in REVIVED-BCIS2 had better outcomes than those in STICHES, with or without CABG surgery. Further trials comparing CABG, PCI, and medical therapy in this population are warranted.
Deep learning-based polygenic risk analysis for Alzheimer’s disease prediction
Background: The polygenic nature of Alzheimer’s disease (AD) suggests that multiple variants jointly contribute to disease susceptibility. As an individual’s genetic variants are constant throughout life, evaluating the combined effects of multiple disease-associated genetic risks enables reliable AD risk prediction. Because of the complexity of genomic data, current statistical analyses cannot comprehensively capture the polygenic risk of AD, resulting in unsatisfactory disease risk prediction. However, deep learning methods, which capture nonlinearity within high-dimensional genomic data, may enable more accurate disease risk prediction and improve our understanding of AD etiology. Accordingly, we developed deep learning neural network models for modeling AD polygenic risk. Methods: We constructed neural network models to model AD polygenic risk and compared them with the widely used weighted polygenic risk score and lasso models. We conducted robust linear regression analysis to investigate the relationship between the AD polygenic risk derived from deep learning methods and AD endophenotypes (i.e., plasma biomarkers and individual cognitive performance). We stratified individuals by applying unsupervised clustering to the outputs from the hidden layers of the neural network model. Results: The deep learning models outperform other statistical models for modeling AD risk. Moreover, the polygenic risk derived from the deep learning models enables the identification of disease-associated biological pathways and the stratification of individuals according to distinct pathological mechanisms. Conclusion: Our results suggest that deep learning methods are effective for modeling the genetic risks of AD and other diseases, classifying disease risks, and uncovering disease mechanisms.
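The weighted polygenic risk score that serves as the study's baseline is a linear, additive combination of per-variant allele dosages and GWAS effect sizes. A minimal sketch with toy data (the dosages and weights are illustrative, not the study's AD variants):

```python
import numpy as np


def weighted_prs(dosages, betas):
    """Classical weighted polygenic risk score.

    dosages: allele dosages per variant (0, 1, or 2 copies of the
        risk allele), for one person (1-D) or many (2-D, people x SNPs).
    betas: per-variant GWAS effect sizes (e.g. log odds ratios).
    """
    return dosages @ betas


# Illustrative toy genotypes: 5 people x 4 SNPs, with made-up weights.
rng = np.random.default_rng(0)
dosages = rng.integers(0, 3, size=(5, 4)).astype(float)
betas = np.array([0.12, -0.05, 0.30, 0.08])
scores = weighted_prs(dosages, betas)  # one risk score per person
```

The deep learning models in the study replace this fixed linear combination with learned, potentially nonlinear functions of the same genotype inputs, which is what allows them to capture interactions that a weighted score cannot.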
Identifying healthy individuals with Alzheimer’s disease neuroimaging phenotypes in the UK Biobank
Spotting dementia early is challenging but important for identifying people for trials of treatment and prevention. We used brain scans of people with Alzheimer’s disease, the commonest type of dementia, and applied an artificial intelligence method to spot people with Alzheimer’s disease. We used this to find apparently healthy people in the UK Biobank study who might have early Alzheimer’s disease. The people we found had subtle changes in their memory and thinking to suggest they may have early disease, and we also found they had high blood pressure and had smoked for longer. We have demonstrated an approach that could be used to select people at high risk of future dementia for clinical trials.
Guidelines for the diagnosis and treatment of acute coronary syndromes without persistent ST-segment elevation
Guidelines for the treatment of acute coronary syndromes (ACS) in patients presenting without persistent ST-segment elevation
