
    Parenteral versus oral iron therapy for adults and children with chronic kidney disease

    Background: The anaemia seen in chronic kidney disease (CKD) may be exacerbated by iron deficiency. Iron can be provided through different routes, each with its own advantages and drawbacks. It remains unclear whether the potential harms and additional costs of intravenous (IV) iron compared with oral iron are justified. This is an update of a review first published in 2012.

    Objectives: To determine the benefits and harms of IV iron supplementation compared with oral iron for anaemia in adults and children with CKD, including participants on dialysis, with kidney transplants, and with CKD not requiring dialysis.

    Search methods: We searched the Cochrane Kidney and Transplant Register of Studies up to 7 December 2018 through contact with the Information Specialist using search terms relevant to this review. Studies in the Register are identified through searches of CENTRAL, MEDLINE, and EMBASE, conference proceedings, the International Clinical Trials Register (ICTRP) Search Portal, and ClinicalTrials.gov.

    Selection criteria: We included randomised controlled trials (RCTs) and quasi-RCTs in which IV and oral routes of iron administration were compared in adults and children with CKD.

    Data collection and analysis: Two authors independently assessed study eligibility, risk of bias, and extracted data. Results were reported as risk ratios (RR) with 95% confidence intervals (CI) for dichotomous outcomes. For continuous outcomes the mean difference (MD) was used, or the standardised mean difference (SMD) if different scales had been used. Statistical analyses were performed using the random-effects model. Subgroup analysis and univariate meta-regression were performed to investigate between-study differences. The certainty of the evidence was assessed using GRADE.

    Main results: We included 39 studies (3852 participants), 11 of which were added in this update. A low risk of bias was attributed to 20 (51%) studies for sequence generation, 14 (36%) studies for allocation concealment, 22 (56%) studies for attrition bias, and 20 (51%) for selective outcome reporting. All studies were at high risk of performance bias. However, all studies were considered at low risk of detection bias because the primary outcome in all studies was laboratory-based and unlikely to be influenced by lack of blinding. There is insufficient evidence to suggest that IV iron compared with oral iron makes any difference to death (all causes) (11 studies, 1952 participants: RR 1.12, 95% CI 0.64 to 1.94) (absolute effect: 33 participants per 1000 with IV iron versus 31 per 1000 with oral iron), the number of participants needing to start dialysis (4 studies, 743 participants: RR 0.81, 95% CI 0.41 to 1.61), or the number needing blood transfusions (5 studies, 774 participants: RR 0.86, 95% CI 0.55 to 1.34) (absolute effect: 87 per 1000 with IV iron versus 101 per 1000 with oral iron). These analyses were assessed as having low certainty evidence. It is uncertain whether IV iron compared with oral iron reduces cardiovascular death because the certainty of this evidence was very low (3 studies, 206 participants: RR 1.71, 95% CI 0.41 to 7.18). Quality of life was reported in five studies, with four reporting no difference between treatment groups and one reporting improvement in participants treated with IV iron. IV iron compared with oral iron may increase the number of participants who experience allergic reactions or hypotension (15 studies, 2607 participants: RR 3.56, 95% CI 1.88 to 6.74) (absolute harm: 24 per 1000 with IV iron versus 7 per 1000) but may reduce the number of participants with all gastrointestinal adverse effects (14 studies, 1986 participants: RR 0.47, 95% CI 0.33 to 0.66) (absolute benefit: 150 per 1000 with IV iron versus 319 per 1000). These analyses were assessed as having low certainty evidence. IV iron compared with oral iron may increase the number of participants who achieve target haemoglobin (13 studies, 2206 participants: RR 1.71, 95% CI 1.43 to 2.04) (absolute benefit: 542 participants per 1000 with IV iron versus 317 per 1000 with oral iron), increase haemoglobin (31 studies, 3373 participants: MD 0.72 g/dL, 95% CI 0.39 to 1.05), ferritin (33 studies, 3389 participants: MD 224.84 µg/L, 95% CI 165.85 to 283.83), and transferrin saturation (27 studies, 3089 participants: MD 7.69%, 95% CI 5.10 to 10.28), and may reduce the required dose of erythropoietin-stimulating agents (ESAs) (11 studies, 522 participants: SMD -0.72, 95% CI -1.12 to -0.31), while making little or no difference to glomerular filtration rate (8 studies, 1052 participants: MD 0.83 mL/min, 95% CI -0.79 to 2.44). All analyses were assessed as having low certainty evidence. There were moderate to high degrees of heterogeneity in these analyses, but definite reasons for this could not be determined in meta-regression.

    Authors' conclusions: The included studies provide low certainty evidence that IV iron compared with oral iron increases haemoglobin, ferritin and transferrin saturation in CKD participants, increases the number of participants who achieve target haemoglobin, and reduces ESA requirements. However, there is insufficient evidence to determine whether IV iron compared with oral iron influences death (all causes), cardiovascular death and quality of life, though most studies reported only short periods of follow-up. Adverse effects were reported in only 50% of included studies. We therefore suggest that further studies focusing on patient-centred outcomes with longer follow-up periods are needed to determine whether the use of IV iron is justified on the basis of reductions in ESA dose and cost, improvements in patient quality of life, and few serious adverse effects.
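
    The absolute effects quoted above follow the standard GRADE-style conversion, in which the risk per 1000 in the IV iron group is obtained by multiplying the assumed comparator (oral iron) risk by the pooled risk ratio. The short Python sketch below reproduces a few of the figures from the abstract; it is an illustrative calculation, not part of the review's analysis, and small differences can arise because the published risk ratios are rounded.

```python
# Illustrative GRADE-style conversion: intervention risk per 1000
# = pooled risk ratio (RR) x assumed comparator risk per 1000.

def absolute_effect_per_1000(rr: float, comparator_per_1000: float) -> int:
    """Expected risk per 1000 participants in the IV iron group."""
    return round(rr * comparator_per_1000)

# (RR, oral iron risk per 1000) taken from the abstract above.
outcomes = {
    "blood transfusions":          (0.86, 101),  # abstract: 87 vs 101 per 1000
    "gastrointestinal effects":    (0.47, 319),  # abstract: 150 vs 319 per 1000
    "achieved target haemoglobin": (1.71, 317),  # abstract: 542 vs 317 per 1000
}

for name, (rr, oral) in outcomes.items():
    iv = absolute_effect_per_1000(rr, oral)
    print(f"{name}: {iv} per 1000 with IV iron vs {oral} per 1000 with oral iron")
```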

    Diagnosis and treatment of hyponatremia: a systematic review of clinical practice guidelines and consensus statements

    Background: Hyponatremia is a common electrolyte disorder. Multiple organizations have published guidance documents to assist clinicians in managing hyponatremia. We aimed to explore the scope, content, and consistency of these documents. Methods: We searched MEDLINE, EMBASE, and websites of guideline organizations and professional societies to September 2014, without language restriction, for Clinical Practice Guidelines (defined as any document providing guidance informed by systematic literature review) and Consensus Statements (any other guidance document) developed specifically to guide differential diagnosis or treatment of hyponatremia. Four reviewers appraised guideline quality using the 23-item AGREE II instrument, which rates reporting of the guidance development process across six domains: scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence. Total scores were calculated as standardized averages by domain. Results: We found ten guidance documents: five clinical practice guidelines and five consensus statements. Overall, quality was mixed: two clinical practice guidelines attained an average score of >50% for all of the domains, three rated the evidence in a systematic way, and two graded the strength of the recommendations. All five consensus statements received AGREE scores below 60% for each of the specific domains. The guidance documents varied widely in scope. All dealt with therapy, and seven included recommendations on diagnosis, using serum osmolality to confirm hypotonic hyponatremia, and volume status, urinary sodium concentration, and urinary osmolality for further classification of the hyponatremia. They differed, however, in classification thresholds, what additional tests to consider, and when to initiate the diagnostic work-up. Eight guidance documents advocated hypertonic NaCl in severely symptomatic, acute onset (<48 h) hyponatremia. In chronic (>48 h) or asymptomatic cases, the recommended treatments were NaCl 0.9%, fluid restriction, and cause-specific therapy for hypovolemic, euvolemic, and hypervolemic hyponatremia, respectively. Eight guidance documents recommended limits for the rate of increase of the serum sodium concentration, but these varied between 8 and 12 mmol/L per 24 h. Inconsistencies also existed in the recommended dose of NaCl, its initial infusion speed, and which second-line interventions to consider. Conclusions: Current guidance documents on the assessment and treatment of hyponatremia vary in methodological rigor, and their recommendations are not always consistent.
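
    For context on the AGREE II scoring mentioned above, the sketch below shows how the instrument's standardized (scaled) domain scores are typically calculated: each item is rated 1 to 7 by every appraiser, and the domain total is expressed as a percentage of the maximum possible score. The four appraisers match the review's setup, but the item ratings are hypothetical.

```python
# Illustrative AGREE II scaled domain score: items rated 1-7 by each appraiser,
# expressed as a percentage of the range between the minimum and maximum
# possible scores for that domain. The ratings below are hypothetical.

def agree_domain_score(ratings: list[list[int]]) -> float:
    """ratings: one list of item scores (1-7) per appraiser, for a single domain."""
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(appraiser) for appraiser in ratings)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100 * (obtained - minimum) / (maximum - minimum)

# Four appraisers rating the three items of the "scope and purpose" domain.
ratings = [
    [6, 5, 7],  # appraiser 1 (hypothetical scores)
    [5, 5, 6],  # appraiser 2
    [7, 6, 6],  # appraiser 3
    [6, 6, 5],  # appraiser 4
]
print(f"Scaled domain score: {agree_domain_score(ratings):.0f}%")  # ~81%
```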

    Effect of renin-angiotensin-aldosterone system blockade in adults with diabetes mellitus and advanced chronic kidney disease not on dialysis: a systematic review and meta-analysis

    The presumed superiority of renin-angiotensin-aldosterone system (RAAS)-blocking agents over other antihypertensive agents in patients with diabetes to delay development of end-stage kidney disease (ESKD) has recently been challenged. In addition, there is ongoing uncertainty whether RAAS-blocking agents reduce mortality and/or delay ESKD in patients with diabetes and chronic kidney disease (CKD) stages 3-5. In this subgroup, there might be an expedited need for renal replacement therapy (RRT) when RAAS-blocking agents are used. We conducted a meta-analysis of randomized controlled trials (RCTs) of at least 6 months' duration in adult patients with diabetes who also have non-dialysis CKD stages 3-5. RCTs comparing single RAAS-blocking agents to placebo or alternative antihypertensive agents were included. Outcomes of interest were all-cause mortality, cardiovascular morbidity, progression of renal function, ESKD and adverse events. A total of nine trials (n = 9797 participants with CKD stages 3-5) fit our inclusion criteria. There was no difference between the RAAS group and the control group regarding all-cause mortality {relative risk [RR] = 0.97 [95% confidence interval (CI) 0.85-1.10]}, cardiovascular mortality [RR = 1.03 (95% CI 0.75-1.41)] and adverse events [RR = 1.05 (95% CI 0.89-1.25)]. There was a trend towards a favourable effect for non-fatal cardiovascular events [RR = 0.90 (95% CI 0.81-1.00)] and a lower risk of the composite endpoint of need for RRT/doubling of serum creatinine [RR = 0.81 (95% CI 0.70-0.92)] in the RAAS-blocking agents group versus the control group. We found evidence that in patients with diabetes mellitus and CKD stages 3-5, treatment with RAAS-blocking agents did not result in a clear survival advantage. The effect on renal outcomes depended on the selected outcome measure. However, we did not find evidence that the use of RAAS-blocking agents expedited the need for RRT in patients with CKD stages 3-5.
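
    The abstract does not state which pooling model produced the relative risks above; a common choice for this kind of meta-analysis is the DerSimonian-Laird random-effects inverse-variance method, sketched below on hypothetical 2x2 trial data (not the trials included in this review).

```python
import math

# Hypothetical per-trial counts: (events_treatment, n_treatment, events_control, n_control).
# Illustrative only; these are not data from the included trials.
trials = [
    (30, 500, 32, 505),
    (45, 800, 44, 790),
    (12, 300, 15, 310),
]

# Per-trial log relative risks and their large-sample variances.
log_rr = [math.log((a / n1) / (c / n2)) for a, n1, c, n2 in trials]
var = [1 / a - 1 / n1 + 1 / c - 1 / n2 for a, n1, c, n2 in trials]

# Fixed-effect inverse-variance weights, used to compute Cochran's Q.
w = [1 / v for v in var]
fe = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, log_rr))

# DerSimonian-Laird estimate of the between-trial variance tau^2.
df = len(trials) - 1
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

# Random-effects pooled log RR and 95% confidence interval.
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"Pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f})")
```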

    Considerations on glycaemic control in older and/or frail individuals with diabetes and advanced kidney disease

    The increasing prevalence of chronic kidney disease (CKD) and diabetes over the last decade has resulted in increasing numbers of frail older patients with a combination of these conditions. Current treatment guidelines may not necessarily be relevant for such patients, who are mostly excluded from the trials upon which these recommendations are based. There is a paucity of data upon which to base the management of older patients with CKD. Nearly all current guidelines recommend less-tight glycaemic control for the older population, citing the lack of proven medium-term benefits and concerns about the high short-term risk of hypoglycaemia. However, reports from large landmark trials have shown potential benefits for both microvascular and macrovascular complications, though the relevance of these findings to this specific population is uncertain. The trials have also highlighted potential alternative explanations for the hazards of intensive glycaemic control. These include depression, low endogenous insulin reserve, low body mass index and side effects of the medication. Over the last few years, newer classes of hypoglycaemic drugs with a lower risk of hypoglycaemia have emerged. This article aims to present a balanced view of the advantages and disadvantages of intensive glycaemic control in this group of patients, which we hope will help the clinician and patient to come to an individualized management approach.

    Management of obesity in kidney transplant candidates and recipients: A clinical practice guideline by the DESCARTES Working Group of ERA

    The clinical practice guideline Management of Obesity in Kidney Transplant Candidates and Recipients was developed to guide decision-making in caring for people with end-stage kidney disease (ESKD) living with obesity. The document considers the challenges in defining obesity, weighs interventions for treating obesity in kidney transplant candidates as well as recipients and reflects on the impact of obesity on the likelihood of wait-listing as well as its effect on transplant outcomes. It was designed to inform management decisions related to this topic and provide the backdrop for shared decision-making. This guideline was developed by the European Renal Association's Developing Education Science and Care for Renal Transplantation in European States working group. The group was supplemented with selected methodologists to supervise the project and provide methodological expertise in guideline development throughout the process. The guideline targets any healthcare professional treating or caring for people with ESKD being considered for kidney transplantation or having received a donor kidney. This includes nephrologists, transplant physicians, transplant surgeons, general practitioners, dialysis and transplant nurses. Development of this guideline followed an explicit process of evidence review. Treatment approaches and guideline recommendations are based on systematic reviews of relevant studies, and appraisal of the quality of the evidence and the strength of recommendations followed the Grading of Recommendations Assessment, Development and Evaluation approach. Limitations of the evidence are discussed and areas of future research are presented.

    Results on the use of bacterial biopreparations (biological fertilizers) in agricultural crops in Research and Development Stations for Agriculture, Romania

    The development and use of organic fertilization strategies for agricultural crops promotes the decomposition of insoluble elements in the soil, increasing the amount of soluble mineral elements available to plants and thereby significantly improving plant growth and agricultural production. Soil pollution with chemical elements has shifted soil pH from basic/neutral towards acidic. Soil acidification causes a decrease in agricultural production and in plant resistance to certain soil pests, and above all it leads to pollution of the soil and groundwater with chemical elements contained in fertilizers. Research carried out at the research and development stations for agriculture in Romania showed that plots fertilized with biological fertilizer achieved a much higher production than plots fertilized chemically.

    Economic advantages of using bacterial biopreparations in agricultural crops

    The ecological, genetic and biological approach proposed by agricultural specialists to protect plants and crops helps reduce the impact of pests through the selection and improvement of genetic resources during planting and development, and through the introduction of biological means of pest control into agricultural ecosystems. The strategies proposed by specialists in the agricultural field aim not at the total extermination of pests from agricultural crops but at keeping pest populations at the damage threshold. The most important advantages of these biological processes are the evolutionary stability of the crop systems, the ecological stabilization of pest and crop populations, and the assurance of a superior quality of the resulting agricultural products. The present paper presents the main advantages of using bacterial biopreparations in agricultural ecosystems (based on research conducted in agricultural research stations in Romania): reducing pollution of the soil, the environment and the crops, using alternative fertilization and cultivation technologies, and obtaining additional, ecological production. In particular, it presents the economic advantages of using bacterial biopreparations in agricultural research and development stations, the cost reductions they bring in agriculture, and the effects these bacterial biopreparations have on the agricultural ecosystem, the environment, humans and animals.

    A spatially explicit database of wind disturbances in European forests over the period 2000-2018

    Strong winds may uproot and break trees and represent a major natural disturbance for European forests. Wind disturbances have intensified globally over recent decades and are expected to rise further in view of the effects of climate change. Despite the importance of such natural disturbances, there are currently no spatially explicit databases of wind-related impact at a pan-European scale. Here, we present a new database of wind disturbances in European forests (FORWIND). FORWIND comprises more than 80 000 spatially delineated areas in Europe that were disturbed by wind in the period 2000-2018 and describes them in a harmonized and consistent geographical vector format. The database includes all major windstorms that occurred over the observational period (e.g. Gudrun, Kyrill, Klaus, Xynthia and Vaia) and represents approximately 30% of the reported damaging wind events in Europe. Correlation analyses between the areas in FORWIND and land cover changes retrieved from the Landsat-based Global Forest Change dataset and the MODIS Global Disturbance Index corroborate the robustness of FORWIND. Spearman rank coefficients range between 0.27 and 0.48 (p value < 0.05). When recorded forest areas are rescaled based on their damage degree, the correlation increases to 0.54. Wind-damaged growing stock volumes reported in national inventories (FORESTORM dataset) are generally higher than analogous metrics provided by FORWIND in combination with satellite-based biomass and country-scale statistics of growing stock volume. The potential of FORWIND is explored for a range of challenging topics and scientific fields, including scaling relations of wind damage, forest vulnerability modelling, remote sensing monitoring of forest disturbance, representation of uprooting and breakage of trees in large-scale land surface models, and hydrogeological risks following wind damage. Overall, FORWIND represents an essential, open-access spatial data source that can be used to improve the understanding, detection and prediction of wind disturbances and the consequent impacts on forest ecosystems and the land-atmosphere system. Data sharing is encouraged in order to continuously update and improve FORWIND.
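
    The correlation analysis described above can be illustrated with a short sketch: a Spearman rank correlation between the wind-damaged area recorded in FORWIND and forest loss from an independent satellite product over the same spatial units. The arrays below are hypothetical placeholders, not data from FORWIND, Global Forest Change or the MODIS Global Disturbance Index.

```python
# Illustrative rank-correlation check between FORWIND-recorded damage and an
# independent satellite-derived forest-loss estimate. Values are hypothetical.
from scipy.stats import spearmanr

forwind_damaged_area_ha = [120.0, 35.5, 800.2, 4.0, 210.7, 55.3]    # per spatial unit
satellite_forest_loss_ha = [95.4, 20.1, 640.8, 9.2, 180.3, 40.0]    # same spatial units

rho, p_value = spearmanr(forwind_damaged_area_ha, satellite_forest_loss_ha)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```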