7 research outputs found

    Assessment of the phytotoxicity of organic amendments (Avaliação da fitotoxicidade de correctivos orgânicos)

    Get PDF
    Master's dissertation in Environmental Engineering - Instituto Superior de Agronomia - UL. Portuguese soils currently rank among the poorest in Europe in terms of quality because of their lack of organic matter. One alternative is the application of organic amendments, for example products obtained by composting residues, which have a high content of organic matter and nutrients. However, these products should only be applied once it is certain that their composition contains no substance with a phytotoxic effect that could inhibit seed germination and/or plant growth; otherwise, the result will be the opposite of what is intended. The phytotoxicity of an amendment is therefore an important parameter that should always be assessed. The aim of this work is to compare different methods for assessing phytotoxicity, comprising three germination assays (Zucconi et al., 1981; Tiquia, 1999; and EN-16086-2) and two growth assays (CCME and EN-16086-1), applied to four organic amendments derived from different starting materials, using two indicator species: garden cress (Lepidium sativum L.) and Chinese cabbage (Brassica rapa chinensis L.). In general, the results were higher for the assays with Chinese cabbage, indicating that this indicator species is less sensitive to phytotoxic substances than cress. The method of the European standard EN-16086-2 posed several obstacles: it specifies that composts must have a pH between 5.5 and 6.5 and an electrical conductivity (EC) below 800 µS cm⁻¹, otherwise they must be corrected by adding peat. Because of the samples' high EC, obtaining a mixture with an EC within the range indicated in the standard required a large amount of peat, which diluted the phytotoxic substances present in the composts; the results obtained with this method were therefore not representative.
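    For context, Zucconi-type germination assays such as those compared above are typically summarised as a germination index (GI). Below is a minimal sketch assuming the common GI formulation (relative germination × relative root elongation); the function, the ~80% phytotoxicity convention and all numbers are illustrative, not taken from the thesis:

```python
# Germination index (GI) as commonly used in Zucconi-type assays:
# GI (%) = (seeds germinated in extract / in control)
#        * (mean root length in extract / in control) * 100
# Values below ~80% are often read as phytotoxic (assumed convention).

def germination_index(germ_sample, germ_control, root_sample, root_control):
    """Return GI in percent from germination counts and mean root lengths."""
    relative_germination = germ_sample / germ_control
    relative_elongation = root_sample / root_control
    return relative_germination * relative_elongation * 100.0

# Illustrative numbers only (hypothetical, not from the study):
gi = germination_index(germ_sample=8, germ_control=10,
                       root_sample=22.0, root_control=30.0)
print(f"GI = {gi:.1f}%")  # 58.7% -> would suggest phytotoxicity
```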

    Predictive factors for cesarean delivery: a retrospective study

    Get PDF
    Background: Cesarean section rates have risen markedly worldwide. Considering the potential harm caused by this mode of delivery, and the general concern with reducing its incidence, it would be useful to individualize the risk of non-planned cesareans and, where possible, reduce that risk; anesthesiologists should take part in this risk evaluation. In recent studies, many factors have been related to a higher risk of cesarean, and controversy still surrounds the impact of labor analgesia on cesarean risk. The aim of this study was to search for predictive factors for non-planned cesarean delivery. Methods: Retrospective analysis of all labors that occurred in our Obstetric Department during 2014. Maternal factors, previous obstetric history, birth weight and factors related to labor analgesia and labor progression were studied. Our primary outcome was cesarean delivery. Results: We identified two independent predictive factors for cesarean delivery: birth weight (p=0.007, OR=1.001, 95% CI [1.0003; 1.002]) and labor length since the beginning of analgesia (p<0.0001, OR=1.00005, 95% CI [1.00003; 1.00007]). Searching for correlations among the registered variables, maternal body mass index was positively associated with newborn birth weight (p<0.0001, R=0.157). Conclusion: Our study showed that birth weight and labor length since the beginning of epidural analgesia are independent predictive factors for non-planned cesarean delivery. Furthermore, birth weight was associated with maternal body mass index, giving health professionals a modifiable factor on which to intervene to improve outcome. As labor progression to cesarean is of major obstetric and anesthetic concern, multidisciplinary initiatives are warranted to clearly identify the important variables concurring to operative delivery.
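    The odds ratios and confidence intervals reported above are the standard output of a logistic regression, where OR = exp(coefficient) and the CI is obtained by exponentiating the coefficient's interval. A sketch of how such figures are derived, using synthetic stand-in data rather than the study's records (variable names and effect sizes are invented):

```python
# Sketch: odds ratios with 95% CIs from a logistic regression,
# as reported in the abstract. Data are synthetic, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
birth_weight = rng.normal(3300, 450, n)   # grams (hypothetical)
labor_length = rng.normal(300, 120, n)    # minutes (hypothetical)
logit = -4 + 0.001 * birth_weight + 0.003 * labor_length
cesarean = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([birth_weight, labor_length]))
fit = sm.Logit(cesarean, X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # OR = exp(beta)
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(odds_ratios, ci, sep="\n")
```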

    Validation of the Portuguese Version of the Postoperative Quality Recovery Scale (PostopQRS)

    Get PDF
    Introduction: The Postoperative Quality Recovery Scale is a brief instrument of six domains designed to assess quality of recovery from early to long term after surgery. This study aims to validate the Portuguese version of the Postoperative Quality Recovery Scale. Material and Methods: In this observational study, 101 adult patients undergoing elective surgery completed the Postoperative Quality Recovery Scale at 15 and 40 minutes, and at one and three days after surgery. Three constructs were assessed for validity: increased recovery over time; effect of gender; and the association of recovery with muscle strength. Reliability, responsiveness, feasibility and acceptability were also assessed. Results: Construct validity was shown by increased recovery over time; worse recovery for female patients in the emotive, nociceptive, activities of daily living and overall recovery domains; and improved muscle strength in recovered patients. Internal consistency for activities of daily living was acceptable at all time points (Cronbach's α of 0.772 or higher), indicating scale reliability. The scale was able to detect differences in postoperative quality of recovery between the neuromuscular blockade reversal agents neostigmine and sugammadex, indicating scale responsiveness. The time to conduct the Portuguese version at baseline was 95–581 seconds (median 319 seconds) and was reduced with subsequent assessments. The proportion of patients completing all scale items was 87%, 75%, 65% and 94% for the four time periods evaluated, indicating scale feasibility and acceptability. Discussion: This study shows that the Portuguese version of the Postoperative Quality Recovery Scale demonstrates construct validity, reliability, responsiveness, feasibility and acceptability. Conclusions: This study allowed validation of the Portuguese version of the Postoperative Quality Recovery Scale.
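    The internal-consistency figure quoted (Cronbach's α of 0.772 or higher) follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with invented item scores, not PostopQRS data:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Item scores below are invented for illustration, not PostopQRS data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

scores = np.array([[3, 4, 3], [2, 2, 3], [4, 5, 4], [3, 3, 2], [5, 4, 5]])
print(f"alpha = {cronbach_alpha(scores):.3f}")  # ~0.871 for these made-up scores
```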

    CR-POSSUM and Surgical Apgar Score as predictive factors for patients’ allocation after colorectal surgery

    No full text
    Background and objectives: Surgical patients frequently require admission to high-dependency units or intensive care units. Resources are scarce and there are no universally accepted admission criteria, so patients' allocation must be optimized. The purpose of this study was to investigate the relationship between the postoperative destination of patients submitted to colorectal surgery and the scores ColoRectal Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (CR-POSSUM) and Surgical Apgar Score (SAS) and, secondarily, to find cut-offs to aid this allocation. Methods: A cross-sectional, prospective, observational study including all adult patients undergoing colorectal surgery over a 2-year period. Data were collected from the electronic clinical record and anesthesia records. Results: A total of 358 patients were included. The median SAS score was 8 and the median CR-POSSUM mortality probability was 4.5%. Immediate admission to high-dependency units/intensive care units occurred in 51 patients and late admission in 18. Scores for ward and high-dependency unit/intensive care unit patients were statistically different (SAS: 8 vs. 7, p < 0.001; CR-POSSUM: 4.4% vs. 15.9%, p < 0.001). Both scores were found to be predictors of immediate postoperative destination (p < 0.001). Concerning immediate high-dependency unit/intensive care unit admission, CR-POSSUM showed a strong association (AUC 0.78, p = 0.034) with a ≥9.16 cut-off point (sensitivity: 62.5%; specificity: 75.2%), outperforming SAS (AUC 0.67, p = 0.048) with a ≤7 cut-off point (sensitivity: 67.3%; specificity: 56.1%). Conclusions: Both CR-POSSUM and SAS were associated with the clinical decision to admit a patient to the high-dependency unit/intensive care unit immediately after surgery. CR-POSSUM alone showed the better discriminative capacity. Keywords: CR-POSSUM, Apgar, Postoperative triage, Intensive care
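    Cut-off points such as the CR-POSSUM ≥ 9.16 threshold are usually read off a ROC curve, for example by maximising Youden's J (sensitivity + specificity − 1). A sketch of that procedure; the admission rate and score distribution below are synthetic, not the study's cohort:

```python
# Sketch: deriving an admission cut-off from a ROC curve via Youden's J,
# one common way thresholds such as CR-POSSUM >= 9.16 are obtained.
# Scores and outcomes are synthetic, not the study's cohort.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
admitted = rng.random(358) < 0.15              # hypothetical HDU/ICU admissions
score = rng.normal(5, 3, 358) + 6 * admitted   # hypothetical CR-POSSUM-like score

fpr, tpr, thresholds = roc_curve(admitted, score)
j = tpr - fpr                                  # Youden's J at each threshold
best = j.argmax()
print(f"AUC = {roc_auc_score(admitted, score):.2f}")
print(f"cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```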

    Risk assessment for major adverse cardiovascular events after noncardiac surgery using self-reported functional capacity: international prospective cohort study

    No full text
    Background: Guidelines endorse self-reported functional capacity for preoperative cardiovascular assessment, although evidence for its predictive value is inconsistent. We hypothesised that self-reported effort tolerance improves prediction of major adverse cardiovascular events (MACEs) after noncardiac surgery. Methods: This is an international prospective cohort study (June 2017 to April 2020) in patients undergoing elective noncardiac surgery at elevated cardiovascular risk. Exposures were (i) questionnaire-estimated effort tolerance in metabolic equivalents (METs), (ii) number of floors climbed without resting, (iii) self-perceived cardiopulmonary fitness compared with peers, and (iv) level of regularly performed physical activity. The primary endpoint was in-hospital MACE consisting of cardiovascular mortality, non-fatal cardiac arrest, acute myocardial infarction, stroke, and congestive heart failure requiring transfer to a higher unit of care or resulting in a prolongation of stay on ICU/intermediate care (≥24 h). Mixed-effects logistic regression models were calculated. Results: In this study, 274 (1.8%) of 15 406 patients experienced MACE. Loss to follow-up was 2%. All self-reported functional capacity measures were independently associated with MACE but did not improve discrimination (area under the receiver operating characteristic curve [ROC AUC]) over an internal clinical risk model (ROC AUC: baseline 0.74 [0.71–0.77]; baseline + 4 METs 0.74 [0.71–0.77]; baseline + floors climbed 0.75 [0.71–0.78]; baseline + fitness vs peers 0.74 [0.71–0.77]; baseline + physical activity 0.75 [0.72–0.78]). Conclusions: Assessment of self-reported functional capacity, expressed in METs or using the other measures assessed here, did not improve prognostic accuracy compared with clinical risk factors. Caution is needed in the use of self-reported functional capacity to guide clinical decisions resulting from risk assessment in patients undergoing noncardiac surgery. Clinical trial registration: NCT03016936
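    The discrimination comparison above amounts to asking whether adding a predictor (e.g. self-reported METs) raises the ROC AUC of a baseline clinical risk model. A simplified sketch of that comparison; the abstract used mixed-effects models, so plain logistic regression and synthetic data are substituted here for brevity:

```python
# Sketch: does adding a predictor (e.g. self-reported METs) raise ROC AUC
# over a baseline clinical model? Synthetic data, not the study cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
baseline_risk = rng.normal(size=(n, 3))   # hypothetical clinical covariates
mets = rng.normal(size=n)                 # hypothetical self-reported METs
y = rng.random(n) < 1 / (1 + np.exp(-(baseline_risk @ [1.0, 0.5, 0.3] - 3)))

Xb = baseline_risk
Xf = np.column_stack([baseline_risk, mets])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(Xb, Xf, y, random_state=0)

for name, Xtr, Xte in [("baseline", Xb_tr, Xb_te), ("baseline+METs", Xf_tr, Xf_te)]:
    p = LogisticRegression().fit(Xtr, y_tr).predict_proba(Xte)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, p):.3f}")
```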

    Intraoperative transfusion practices and perioperative outcome in the European elderly: A secondary analysis of the observational ETPOS study

    No full text
    Demographic trends suggest a dramatic growth in the number of elderly patients undergoing surgery in Europe. Most red blood cell transfusions (RBCT) are administered to older people, but little is known about perioperative transfusion practices in this population. In this secondary analysis of the prospective observational multicentre European Transfusion Practice and Outcome Study (ETPOS), we specifically evaluated intraoperative transfusion practices and the related outcomes of 3149 patients aged 65 years and older. Enrolled patients underwent elective surgery in 123 European hospitals, received at least one RBCT intraoperatively and were followed up for a maximum of 30 days. The mean haemoglobin value was 108 (21) g/l at the beginning of surgery, 84 (15) g/l before transfusion and 101 (16) g/l at the end of surgery. A median of 2 [1–2] units of RBCT were administered. In most cases, more than one transfusion trigger was present, with physiological triggers being preeminent. We found a descriptive association between each intraoperatively administered RBCT and mortality and discharge, respectively, within the first 10 postoperative days but not thereafter. In our unadjusted model, the hazard ratio (HR) for mortality was 1.11 (95% CI: 1.08–1.15) and the HR for discharge was 0.78 (95% CI: 0.74–0.83). After adjustment for several variables, such as age, preoperative haemoglobin and blood loss, the HR for mortality was 1.10 (95% CI: 1.05–1.15) and the HR for discharge was 0.82 (95% CI: 0.78–0.87). Preoperative anaemia in elderly European surgical patients is undertreated. Various triggers seem to support the decision for RBCT. Closer monitoring of elderly patients receiving intraoperative RBCT for the first 10 postoperative days might be justifiable. Further research on the causal relationship between RBCT and outcomes and on optimal transfusion strategies in the elderly population is warranted. A thorough analysis of different time periods within the first 30 postoperative days is recommended.
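    Hazard ratios such as the HR of 1.10 for mortality per transfused unit come from time-to-event models. The abstract does not state the exact model, so the sketch below assumes a Cox proportional-hazards fit via the lifelines package; the data frame is invented and only loosely modelled on the covariates named above:

```python
# Sketch: hazard ratios (HR) per RBC unit transfused via a Cox model,
# the kind of estimate reported above. Rows are invented, not ETPOS data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "rbc_units": rng.integers(1, 6, n),   # intraoperative RBCT units
    "age": rng.integers(65, 95, n),
    "preop_hb": rng.normal(108, 21, n),   # g/l, as in the abstract
})
hazard = 0.01 * np.exp(0.1 * df["rbc_units"])
time = rng.exponential(1 / hazard)
df["time"] = np.minimum(time, 30.0)       # administrative censoring at 30 days
df["death"] = (time < 30.0).astype(int)   # 1 = died within follow-up

cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.hazard_ratios_)                 # HR = exp(coefficient)
```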

    Intraoperative transfusion practices in Europe

    No full text
    Background: Transfusion of allogeneic blood influences outcome after surgery. Despite the widespread availability of transfusion guidelines, transfusion practices may vary among physicians, departments, hospitals and countries. Our aim was to determine the amount of packed red blood cells (pRBC) and blood products transfused intraoperatively, and to describe the factors determining transfusion throughout Europe. Methods: We conducted a prospective observational cohort study enrolling 5803 patients in 126 European centres who received at least one pRBC unit intraoperatively during a continuous three-month period in 2013. Results: The overall intraoperative transfusion rate was 1.8%; 59% of transfusions were at least partially initiated as a result of a physiological transfusion trigger, mostly hypotension (55.4%) and/or tachycardia (30.7%). A haemoglobin (Hb)-based transfusion trigger alone initiated only 8.5% of transfusions. The Hb concentration [mean (sd)] just before transfusion was 8.1 (1.7) g dl⁻¹ and increased to 9.8 (1.8) g dl⁻¹ after transfusion. The mean number of intraoperatively transfused pRBC units was 2.5 (2.7) (median 2). Conclusions: Although European Society of Anaesthesiology transfusion guidelines are moderately implemented in Europe with respect to the Hb threshold for transfusion (7–9 g dl⁻¹), there is still an urgent need for further educational efforts focusing on the number of pRBC units to be transfused at this threshold.
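    As a rough illustration of how an Hb-threshold-plus-physiological-trigger policy like the one discussed above could be encoded, here is a hypothetical helper; the 8.0 g/dl default and the function itself are assumptions for illustration, not part of any guideline text:

```python
# Sketch: combining an Hb threshold (within the 7-9 g/dl band cited above)
# with physiological triggers. Threshold and logic are illustrative only.
def transfusion_indicated(hb_g_dl: float, physiologic_trigger: bool,
                          threshold: float = 8.0) -> bool:
    """Hb below threshold, or a physiological trigger (e.g. hypotension,
    tachycardia) despite higher Hb, may indicate transfusion."""
    return hb_g_dl < threshold or physiologic_trigger

print(transfusion_indicated(8.1, physiologic_trigger=True))   # True
print(transfusion_indicated(9.8, physiologic_trigger=False))  # False
```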