
    Effect of dialysis modality and other prescription factors on peritoneal protein losses in peritoneal dialysis

    [Abstract] Background: There is a deficit of information regarding the factors that influence peritoneal protein excretion (PPE) during PD therapy. In particular, the effects of the modality of PD and other conditions of the dialysis prescription remain unclear. Method: This prospective, observational study analysed the effects of prescription characteristics on 24-hour PPE (study variable) in a cohort of patients starting PD. Our statistical analysis included a multi-level mixed model and standardised estimations of peritoneal protein transport during serial four-hour peritoneal equilibrium tests in order to control for disparities in the characteristics of patients managed on different regimens. Results: We evaluated 284 patients, 197 on CAPD and 87 on automated PD (APD), at the start of PD treatment. The two groups differed in terms of clinical characteristics and peritoneal function. Univariate, serial estimates of 24-hour PPE were marginally higher in CAPD patients, and remained essentially stable over time in both groups. Multivariate analyses identified CAPD (B = 888.5 mg, 95% CI: 327.5/1448.6), total dialysate volume infused per day (B = 275.9 mg/L; 153.9/397.9) and ultrafiltration (B = 0.41 mg/mL; 0.02/0.80) as independent predictors of 24-hour PPE. The model also revealed a minor trend towards a lower 24-hour PPE as time on PD increases. Conclusions: The individual characteristics of peritoneal protein transport are the major determinants of 24-hour PPE. The use of CAPD as the dialysis modality is associated with higher PPE rates than the APD technique, although this difference is counterbalanced by a direct correlation between PPE and the volume of dialysate infused per day. Ultrafiltration and time on dialysis also act as minor independent predictors of PPE during PD therapy.
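
A minimal sketch of the multi-level mixed-model idea described above, under simulated data: the variable names (ppe_24h_mg, modality, volume_l_day, uf_ml_day, months_on_pd, patient_id) and effect sizes are invented for illustration and are not the study's. It fits a random intercept per patient for repeated 24-hour PPE measurements with statsmodels.

```python
# Illustrative only: simulated repeated PPE measurements, random intercept per patient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 50, 4
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    "modality": np.repeat(rng.choice(["CAPD", "APD"], n_patients), n_visits),
    "volume_l_day": rng.normal(10.0, 2.0, n_patients * n_visits),
    "uf_ml_day": rng.normal(800.0, 300.0, n_patients * n_visits),
    "months_on_pd": np.tile(np.arange(n_visits) * 6.0, n_patients),
})
# Simulated outcome with effects roughly in the direction reported above
df["ppe_24h_mg"] = (
    5000.0
    + 900.0 * (df["modality"] == "CAPD")
    + 280.0 * df["volume_l_day"]
    + 0.4 * df["uf_ml_day"]
    + rng.normal(0.0, 500.0, len(df))
)

# Multi-level (mixed) model: fixed prescription effects, random intercept per patient
model = smf.mixedlm(
    "ppe_24h_mg ~ C(modality) + volume_l_day + uf_ml_day + months_on_pd",
    data=df,
    groups=df["patient_id"],
)
print(model.fit().summary())
```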

    Clinical-pathological characteristics and prognosis of a cohort of oesophageal cancer patients: a competing risks survival analysis

    [Abstract] Background: To determine the clinical course, follow-up strategies, and survival of oesophageal cancer patients using a competing risks survival analysis. Methods: We conducted a retrospective and prospective follow-up study. The study included 180 patients with a pathological diagnosis of oesophageal cancer in A Coruña, Spain, between 2003 and 2008. The Kaplan-Meier methodology and competing risks survival analysis were used to calculate the specific survival rate. The study was approved by the Ethics Review Board (code 2011/372, CEIC Galicia). Results: The specific survival rate at the first, third, and fifth years was 40.2%, 18.1%, and 12.4%, respectively. Using the Kaplan-Meier methodology, the survival rate was slightly higher after the third year of follow-up. In the multivariate analysis, poor prognosis factors were female sex (hazard ratio [HR] 1.94; 95% confidence interval [CI], 1.24-3.03), Charlson's comorbidity index (HR 1.17; 95% CI, 1.02-1.33), and stage IV tumours (HR 1.70; 95% CI, 1.11-2.59). The probability of dying decreased with surgical and oncological treatment (chemotherapy and/or radiotherapy) (HR 0.23; 95% CI, 0.12-0.45). The number of hospital consultations per year during the follow-up period, from diagnosis to the appearance of a new event (local recurrences, new metastases, and new neoplasms), did not affect the probability of survival (HR 1.03; 95% CI, 0.92-1.15). Conclusions: The Kaplan-Meier methodology overestimates the survival rate in comparison to competing risks analysis. The variables associated with a poor prognosis are female sex, Charlson's comorbidity score and extensive tumour invasion. The type of follow-up strategy employed after diagnosis does not affect the prognosis of the disease.
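
To illustrate the Kaplan-Meier versus competing-risks contrast in the conclusions, the sketch below (simulated data, not the study cohort) compares a naive 1 − KM estimate for cancer-specific death, which censors deaths from other causes, with the Aalen-Johansen cumulative incidence from lifelines; the naive estimate is always at least as large as the cumulative incidence of the event of interest.

```python
# Illustrative only: simulated competing-risks data (1 = cancer death, 2 = other death, 0 = censored)
import numpy as np
from lifelines import KaplanMeierFitter, AalenJohansenFitter

rng = np.random.default_rng(1)
n = 500
t_cancer = rng.exponential(24.0, n)   # months to cancer death
t_other = rng.exponential(60.0, n)    # months to death from other causes
t_censor = rng.uniform(0.0, 60.0, n)  # administrative censoring

time = np.minimum.reduce([t_cancer, t_other, t_censor])
event = np.select([time == t_cancer, time == t_other], [1, 2], default=0)

# Naive approach: treat competing deaths as censored and take 1 - KM
kmf = KaplanMeierFitter().fit(time, event_observed=(event == 1))
# Competing-risks approach: Aalen-Johansen cumulative incidence of cancer death
ajf = AalenJohansenFitter().fit(time, event, event_of_interest=1)

print("1 - KM at 36 months:", round(float(1.0 - kmf.predict(36.0)), 3))
cif = ajf.cumulative_density_
print("Aalen-Johansen CIF at 36 months:", round(float(cif[cif.index <= 36.0].iloc[-1, 0]), 3))
```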

    Identification of targets for prevention of peritoneal catheter tunnel and exit-site infections in low incidence settings

    [Abstract] ♦ Background: Peritoneal catheter tunnel and exit-site infection (TESI) complicates the clinical course of peritoneal dialysis (PD) patients. Adherence to recommendations for catheter insertion, exit-site care, and management of Staphylococcus aureus (SAu) carriage reduces, but does not abrogate, the risk of these infections. ♦ Objective: To reappraise the risk profile for TESI in an experienced center with a long-term focus on management of SAu carriage and a low incidence of these infections. ♦ Method: Following a retrospective, observational design, we investigated 665 patients incident on PD. The main study variable was survival to the first episode of TESI. We considered selected demographic, clinical, and technical variables, applying multivariate strategies of analysis. ♦ Main results: The overall incidence of TESI was 1 episode/68.5 patient-months. Staphylococcus aureus carriage disclosed at inception of PD (but not if observed sporadically during follow-up) (hazard ratio [HR] 1.53, p = 0.009), PD started shortly after catheter insertion (HR 0.98 per day, p = 0.011), PD after kidney transplant failure (HR 2.18, p = 0.017), lower hemoglobin levels (HR 0.88 per g/dL, p = 0.013) and fast peritoneal transport rates (HR 2.92, p = 0.03) portended an increased risk of TESI. Delaying the start of PD ≥ 30 days after catheter insertion markedly reduced the risk of TESI. Carriage of methicillin-resistant SAu from the start of PD was associated with a high incidence of TESI by these bacteria. In contrast, resistance to mupirocin did not predict such a risk, probably due to the use of an alternative regimen in affected patients. ♦ Conclusions: Adherence to current recommendations results in a low incidence of TESI in PD patients. Interventions on specific risk subsets have the potential to bring incidence close to negligible levels. Despite systematic screening and management, SAu carriage is still a predictor of TESI. Antibiotic susceptibility patterns may help to refine stratification of the risk of TESI by these bacteria. Early insertion of the peritoneal catheter should be considered whenever possible, to reduce the risk of later TESI.
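
The multivariate, time-to-first-TESI analysis can be sketched with a standard Cox proportional-hazards fit. Everything below is simulated for illustration; the covariate names and effect sizes are assumptions, not the study's data.

```python
# Illustrative only: simulated time to first exit-site/tunnel infection (TESI)
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 665
df = pd.DataFrame({
    "sau_carrier_at_start": rng.integers(0, 2, n),
    "post_transplant_pd": rng.integers(0, 2, n),
    "hemoglobin_g_dl": rng.normal(11.5, 1.5, n),
    "days_catheter_to_pd": rng.integers(5, 60, n),
})
# Simulated hazards roughly in the direction reported above
lin = (0.4 * df["sau_carrier_at_start"] + 0.8 * df["post_transplant_pd"]
       - 0.12 * (df["hemoglobin_g_dl"] - 11.5) - 0.01 * df["days_catheter_to_pd"])
df["months_to_tesi"] = rng.exponential(np.exp(-lin) * 68.5)
df["tesi_observed"] = (df["months_to_tesi"] < 60.0).astype(int)
df.loc[df["tesi_observed"] == 0, "months_to_tesi"] = 60.0  # administrative censoring

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_tesi", event_col="tesi_observed")
cph.print_summary()  # hazard ratios with confidence intervals
```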

    Variability between Clarke's angle and the Chippaux-Smirak index for the diagnosis of flat foot

    [Abstract] Background: The measurements used in diagnosing biomechanical pathologies vary greatly. The aim of this study was to determine the concordance between Clarke's angle and the Chippaux-Smirak index, and to determine the validity of Clarke's angle using the Chippaux-Smirak index as a reference. Methods: Observational study in a random population sample (n = 1,002) in A Coruña (Spain). After informed patient consent and ethical review approval, a study was conducted of anthropometric variables, Charlson comorbidity score, and podiatric examination (Clarke's angle and Chippaux-Smirak index). Descriptive analysis and multivariate logistic regression were performed. Results: The prevalence of flat feet, using a podoscope, was 19.0% for the left foot and 18.9% for the right foot, increasing with age. The prevalence of flat feet according to the Chippaux-Smirak index or Clarke's angle increased considerably, reaching 62.0% and 29.7%, respectively. The concordance (kappa I) between the indices according to age groups varied between 0.25-0.33 (left foot) and 0.21-0.30 (right foot). The intraclass correlation coefficient (ICC) between the Chippaux-Smirak index and Clarke's angle was -0.445 (left foot) and -0.424 (right foot). After adjusting for age, body mass index (BMI), comorbidity score and gender, the only variable with an independent effect to predict discordance was the BMI (OR = 0.969; 95% CI: 0.940-0.998). Conclusion: There is little concordance between the indices studied for the purpose of diagnosing foot arch pathologies. In turn, Clarke's angle has a limited sensitivity in diagnosing flat feet, using the Chippaux-Smirak index as a reference. This discordance decreases with higher BMI values.
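
A small sketch of how the concordance between two dichotomous flat-foot classifications (e.g. Clarke's angle versus the Chippaux-Smirak index) can be quantified with Cohen's kappa; the data are simulated, not the study sample.

```python
# Illustrative only: simulated flat-foot classifications from two indices
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(3)
n = 1002
chippaux_flat = rng.random(n) < 0.62  # flat foot by Chippaux-Smirak (simulated prevalence)
clarke_flat = np.where(rng.random(n) < 0.70, chippaux_flat, ~chippaux_flat)  # imperfect agreement

print("Cohen's kappa:", round(cohen_kappa_score(chippaux_flat, clarke_flat), 2))
print(confusion_matrix(chippaux_flat, clarke_flat))
```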

    Inhibition of gastric acid secretion by H2 receptor antagonists associates a definite risk of enteric peritonitis and infectious mortality in patients treated with peritoneal dialysis

    [Abstract] Background. Evidence linking treatment with inhibitors of gastric acid secretion (IGAS) and an increased risk of serious infections is inconclusive, both in the population at large and in the particular case of patients with chronic kidney disease. We have undertaken an investigation to disclose associations between treatment with IGAS and infectious outcomes in patients undergoing chronic peritoneal dialysis (PD). Method. Observational, historical cohort, single-center design. Six hundred and ninety-one patients incident on PD were scrutinized for associations between treatment with IGAS (H2 antagonists [H2A] or proton pump inhibitors [PPI]) (main study variable) and the risks of enteric peritoneal infection (main outcome), overall peritoneal infection, and general and infectious mortality (secondary outcomes). We applied a three-step multivariate approach, based on classic Cox models (baseline variables), time-dependent analyses and, when appropriate, competing risk analyses. Main results. The clinical characteristics of patients treated with H2A, PPI or neither of these were significantly different. Multivariate analyses disclosed a consistently increased risk of enteric peritonitis in patients treated with IGAS (RR 1.65, 95% CI 1.08–2.55, p = 0.018, Cox). Stratified analysis indicated that patients treated with H2A, rather than those on PPI, bore the burden of this risk. Similar findings applied to the risk of infectious mortality. In contrast, we were not able to detect any association between the study variable and the overall risks of peritonitis or mortality. Conclusions. Treatment with IGAS is associated with increased incidences of enteric peritonitis and infectious mortality among patients on chronic PD. The association is clear in the case of H2A but less consistent in the case of PPI. Our results support preferring PPI over H2A for gastric acid inhibition in PD patients.
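
A hedged sketch of the time-dependent step of such an analysis, assuming a long-format dataset in which exposure to IGAS can switch on during follow-up. The data generation and variable names are invented; lifelines' CoxTimeVaryingFitter handles the (start, stop] interval format.

```python
# Illustrative only: simulated long-format data with a time-dependent IGAS exposure
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(2)
rows = []
for pid in range(300):
    end = rng.uniform(6.0, 48.0)          # months of follow-up
    switch = rng.uniform(3.0, 24.0)       # month at which IGAS starts (if ever)
    starts_igas = rng.random() < 0.5
    event = int(rng.random() < 0.25)      # enteric peritonitis observed at end of follow-up
    if starts_igas and switch < end:
        rows.append((pid, 0.0, switch, 0, 0))      # unexposed interval
        rows.append((pid, switch, end, 1, event))  # exposed interval
    else:
        rows.append((pid, 0.0, end, 0, event))
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "on_igas", "enteric_peritonitis"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="enteric_peritonitis",
        start_col="start", stop_col="stop")
ctv.print_summary()
```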

    Performance analysis of SSE and AVX instructions in multi-core CPUs and GPU computing on FDTD scheme for solid and fluid vibration problems

    In this work a unified treatment of solid and fluid vibration problems is developed by means of the Finite-Difference Time-Domain (FDTD) method. The proposed scheme takes advantage of a scaling factor in the velocity fields that improves the performance of the method and the vibration analysis in heterogeneous media. Moreover, the scheme has been extended in order to simulate both propagation in porous media and lossy solid materials. In order to accurately reproduce the interaction of fluids and solids in FDTD, both the time step and the spatial step must be reduced compared with the setup used in acoustic FDTD problems. This implies the use of larger grids and hence more time and memory resources. To reduce simulation times, the FDTD code has been adapted to exploit the resources available in modern parallel architectures. For CPUs, the use of the Advanced Vector Extensions (AVX) in multi-core processors has been considered. In addition, the computation has been distributed across the available cores by means of OpenMP directives. Graphics Processing Units (GPUs) have also been considered, and the degree of improvement achieved with this parallel architecture has been compared with the highly tuned CPU scheme in terms of relative speed-up. The parallel versions implemented were up to 3 times (AVX and OpenMP) and 40 times (CUDA) faster than the best sequential CPU version, which also uses OpenMP with auto-vectorization techniques but does not explicitly include vector instructions. Results obtained with both parallel approaches demonstrate that massively parallel programming techniques are mandatory in solid-vibration problems with FDTD. The work is partially supported by the “Ministerio de Economía y Competitividad” of Spain under project FIS2011-29803-C02-01, by the Spanish Ministry of Education (TIN2012-34557), by the “Generalitat Valenciana” of Spain under projects PROMETEO/2011/021 and ISIC/2012/013, and by the “Universidad de Alicante” of Spain under project GRE12-14.
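
To make the kind of kernel being parallelized concrete, here is a minimal, purely acoustic 2D FDTD leapfrog update written with NumPy slicing. It is only a sketch: it omits the paper's velocity scaling factor, solid/porous/lossy media, and the AVX, OpenMP and CUDA implementations, but it shows the staggered-grid stencil structure that such kernels vectorize and distribute.

```python
# Illustrative only: 2D acoustic FDTD (pressure p, staggered velocities vx, vy)
import numpy as np

nx = ny = 200
dx = 1e-3                           # grid step [m]
c, rho = 1500.0, 1000.0             # sound speed [m/s], density [kg/m^3]
dt = 0.9 * dx / (c * np.sqrt(2.0))  # CFL-stable time step

p = np.zeros((nx, ny))
vx = np.zeros((nx - 1, ny))
vy = np.zeros((nx, ny - 1))
p[nx // 2, ny // 2] = 1.0           # initial pressure impulse

for _ in range(500):
    # Velocity update from the pressure gradient (staggered grid)
    vx -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    vy -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    # Pressure update from the velocity divergence (rigid outer boundaries)
    p[1:-1, :] -= rho * c**2 * dt / dx * (vx[1:, :] - vx[:-1, :])
    p[:, 1:-1] -= rho * c**2 * dt / dx * (vy[:, 1:] - vy[:, :-1])

print("peak |p| after 500 steps:", float(np.abs(p).max()))
```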

    Effect of diagnostic delay on survival in patients with colorectal cancer: a retrospective cohort study

    [Abstract] Background. Disparate and contradictory results make it necessary to investigate in more depth the relationship between diagnostic delay and survival in colorectal cancer (CRC) patients. The aim of this study is to analyse the relationship between the interval from first symptom to diagnosis (SDI) and survival in CRC. Methods. Retrospective study of n = 942 CRC patients. SDI was calculated as the time from the first symptoms of CRC to the diagnosis of cancer. Cox regression was used to estimate five-year mortality hazard ratios as a function of SDI, adjusting for age and gender. SDI was modelled according to SDI quartiles and as a continuous variable using penalized splines. Results. Median SDI was 3.4 months. SDI was not associated with stage at diagnosis (Stage I = 3.6 months, Stage II-III = 3.4, Stage IV = 3.2; p = 0.728). Shorter SDIs corresponded to patients with abdominal pain (2.8 months), and longer SDIs to patients with mucorrhagia (5.2 months) and rectal tenesmus (4.4 months). Adjusting for age and gender, in rectal cancers, patients within the first SDI quartile had lower survival (p = 0.003), while in colon cancer no significant differences were found (p = 0.282). These results do not change after adjusting for TNM stage. The spline regression analysis revealed that, for rectal cancer, 5-year mortality progressively increases for SDIs lower than the median (3.7 months) and decreases as the delay increases until approximately 8 months. In colon cancer, no significant relationship was found between SDI and survival. Conclusions. Short diagnostic intervals are significantly associated with higher mortality in rectal but not in colon cancers, even though a borderline significant effect is also observed in colon cancer. Longer diagnostic intervals seemed not to be associated with poorer survival. Factors other than diagnostic delay should be taken into account to explain this “waiting-time paradox”. Instituto de Salud Carlos III; PI14/00781. Xunta de Galicia; PGIDIT06BTF91601P
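
The spline-based modelling of SDI can be sketched as below. This is a stand-in, not the study's analysis: it uses unpenalized B-splines (patsy's bs) inside statsmodels' Cox model (PHReg) rather than penalized splines, and the data, including the U-shaped association, are simulated.

```python
# Illustrative only: Cox model with a spline in the symptom-to-diagnosis interval (SDI)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 942
sdi = rng.gamma(2.0, 2.0, n)                 # months from first symptoms to diagnosis
age = rng.normal(70.0, 10.0, n)
sex = rng.integers(0, 2, n)
lin = 0.02 * (sdi - 3.5) ** 2 + 0.03 * (age - 70.0)  # simulated U-shaped risk in SDI
time = rng.exponential(np.exp(-lin) * 40.0)
death = (time < 60.0).astype(int)
time = np.minimum(time, 60.0)                # censor at 5 years
df = pd.DataFrame({"time": time, "death": death, "sdi": sdi, "age": age, "sex": sex})

model = smf.phreg("time ~ bs(sdi, df=4) + age + C(sex)",
                  data=df, status=df["death"].values)
print(model.fit().summary())
```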

    Diagnostic and treatment delay, quality of life and satisfaction with care in colorectal cancer patients: a study protocol

    [Abstract] Background. Due to recent improvements in colorectal cancer survival, patient-reported outcomes, including health-related quality of life and satisfaction with care, have become well-established endpoints to determine the impact of the disease on the lives of patients. The aim of this study is to determine prospectively, in a cohort of colorectal cancer incident cases: a) health-related quality of life, b) satisfaction with hospital-based care, and c) functional status. A secondary objective is to determine whether diagnostic/therapeutic delay influences quality of life or patients’ satisfaction levels. Methods/design. Single-centre prospective follow-up study of colorectal cancer patients diagnosed during the period 2011–2012 (n = 375). This project was approved by the corresponding ethics review board, and informed consent is obtained from each patient. After diagnosis, patients are interviewed by a trained nurse, obtaining information on sociodemographic characteristics, family history of cancer, first symptoms, symptom perception and reaction to early symptoms. Quality of life is assessed with the EORTC QLQ-C30 and QLQ-CR29 questionnaires, and patients’ satisfaction with care is determined using the EORTC IN-PATSAT32. Functional status is measured with the Karnofsky Performance Status Scale. Clinical records are also reviewed to collect information on comorbidity, tumour characteristics, treatment, hospital consultations and exploratory procedures. The symptoms-to-diagnosis interval is defined as the time from the date of first symptoms until the cytohistological confirmation of cancer. Treatment delay is defined as the time between diagnosis and surgical treatment. All the patients will be followed up for a maximum of 2 years. For survivors, assessments will be repeated at one and two years after the diagnosis. Multiple linear/logistic regression models will be used to identify variables associated with the patients’ functional status, quality of life and satisfaction with care scores. Changes in quality of life over time will be analysed with linear mixed-effects regression models. Discussion. The results will provide a deeper understanding of the impact of colorectal cancer from a more patient-centred approach, allowing us to identify groups of patients in need of additional attention, as well as areas for improvement. Special attention will be given to the relationship between diagnostic/therapeutic delay and patients’ quality of life and satisfaction with the care received. Instituto de Salud Carlos III; PI10/02285. Galicia. Consellería de Economía e Industria; 10CSA916052P

    A randomized clinical trial to determine the effect of angiotensin inhibitors reduction on creatinine clearance and haemoglobin in heart failure patients with chronic kidney disease and anaemia

    Trial registration: EudraCT 2008-008480-10. [Abstract] Background. Chronic kidney disease is a common comorbidity in elderly patients with heart failure. Evidence supports the use of angiotensin inhibitors for patients with heart failure. However, there is little evidence with which to assess the risks and benefits of this treatment in elderly patients with renal dysfunction. Objective. To determine the efficacy and safety of angiotensin inhibitor dose reduction in patients with heart failure, chronic kidney disease and anaemia. Study design. Open randomized controlled clinical trial. Setting. Complexo Hospitalario Universitario A Coruña (Spain). Patients. Patients ≥ 50 years old with heart failure, haemoglobin (Hb) < 12 g/dl and creatinine clearance < 60 ml/min/1.73 m2, admitted to hospital and on treatment with angiotensin inhibitors. Informed consent and Ethical Review Board approval were obtained. Intervention. A 50% reduction of the angiotensin inhibitor dose of the basal treatment on admission in the intervention group (n = 30); the control group (n = 16) continued on the standard basal dose. Main outcome measure. The primary outcome was the difference in Hb (g/dl), creatinine clearance (ml/min/1.73 m2) and protein C (mg/dl) between admission and 1–3 months after discharge. The secondary outcome was survival at 6–12 months after discharge. Results. Patients in the intervention group experienced an improvement in Hb (10.62–11.47 g/dl) and creatinine clearance (32.5 to 42.9 ml/min/1.73 m2), and a decrease in creatinine levels (1.98–1.68 mg/dl) and protein C (3.23 to 1.37 mg/dl). There were no significant differences in these variables in the control group. Survival at 6 and 12 months in the intervention and control group was 86.7% vs. 75% and 69.3% vs. 50%, respectively. Conclusion. The reduction of the dose of angiotensin inhibitors in the intervention group resulted in an improvement in anaemia and kidney function, a decrease in protein C, and an increased survival rate.

    Concordance among methods of nutritional assessment in patients included on the waiting list for liver transplantation

    [Abstract] Background: The aim of the present study was to determine the extent of malnutrition in patients waiting for a liver transplant. The agreement among the methods of nutritional assessment and their diagnostic validity were evaluated. Methods: Patients on the waiting list for liver transplantation (n = 110) were studied. The variables were: body mass index, analytical parameters, liver disease etiology, and complications. Liver dysfunction was evaluated using the Child-Pugh Scale. Nutritional state was studied using the Controlling Nutritional Status (CONUT), the Spanish Society of Parenteral and Enteral Nutrition (SENPE) criteria, the Nutritional Risk Index (NRI), the Prognostic Nutritional Index (PNI-O), and the Subjective Global Assessment (SGA). Agreement was determined using the kappa index. Areas under receiver operating characteristic curves (AUCs), the Youden index (J), and likelihood ratios were computed. Results: Malnutrition varied depending on the method of evaluation. The highest value was detected using the CONUT (90.9%) and the lowest using the SGA (50.9%). The pairwise agreement among the methods ranged from K = 0.041 to K = 0.826, with an overall agreement of each method with the remaining methods between K = 0.093 and K = 0.364. PNI-O was the method with the highest overall agreement. Taking this level of agreement into account, we chose the PNI-O as a benchmark method of comparison. The highest positive likelihood ratio for the diagnosis of malnutrition was obtained with the Nutritional Risk Index (13.56). Conclusions: Malnutrition prevalence is high and prevalence estimates vary according to the method used, with low concordance among methods. PNI-O and NRI are the most consistent methods to identify malnutrition in these patients. Instituto de Salud Carlos III; PI11/012
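
As a reminder of the quantities reported above, the short sketch below computes sensitivity, specificity, Youden's J and the positive likelihood ratio from a 2x2 table of one nutritional index against the PNI-O benchmark; the counts are invented, not the study's data.

```python
# Illustrative only: made-up 2x2 counts of an index vs. the PNI-O benchmark
tp, fn, fp, tn = 48, 12, 4, 46

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
youden_j = sensitivity + specificity - 1       # J = Se + Sp - 1
lr_positive = sensitivity / (1 - specificity)  # LR+ = Se / (1 - Sp)

print(f"Se={sensitivity:.2f}  Sp={specificity:.2f}  J={youden_j:.2f}  LR+={lr_positive:.2f}")
```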