129 research outputs found

    The effect of acetic acid on the performance of scoria in removing malachite dye from aqueous environments: determination of the model, isotherms, and reaction kinetics

    Background and objective: Due to its structural characteristics, malachite dye is very poorly degradable and causes problems in aquatic environments. Given the effect of acetic acid on the physical and chemical properties of scoria, the aim of this study was to evaluate the efficiency of different forms of acetic acid-modified scoria in removing malachite dye from aqueous environments. Methods: This laboratory study was carried out at different pH values, adsorbent doses, and contact times, with a fixed dye concentration. The residual dye concentration in solution was then measured by absorbance with a spectrophotometer at a wavelength of 665 nm. To characterize the adsorption behaviour, the data obtained were fitted to the Langmuir and Freundlich adsorption isotherms and to pseudo-first-order and pseudo-second-order reaction kinetics. Design of Experiments (DOE) software was used for data analysis. Results: The removal efficiency of the adsorbent increased with increasing acid normality, pH, adsorbent dose, and contact time; the highest removal efficiency (100%) was obtained with the adsorbent modified with 12 N acetic acid at pH 11, an adsorbent dose of 4.1 g/L, and a contact time of 75 minutes. Dye adsorption also followed both the Langmuir and Freundlich isotherms and pseudo-second-order kinetics well. Conclusion: Based on the results, dye adsorption occurs in both multilayer and monolayer form, and modifying scoria with acetic acid increases its efficiency compared with natural scoria.
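    The isotherm and kinetic fitting described above can be illustrated with a short sketch. This is a hypothetical example, not the study's code: the equilibrium data (Ce, qe), kinetic data (t, qt), and starting guesses are placeholders, and the models are the standard Langmuir, Freundlich, and pseudo-second-order forms.

```python
# Illustrative sketch only: fits the standard Langmuir, Freundlich, and
# pseudo-second-order models to hypothetical adsorption data. The arrays
# below are placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([8.5, 15.2, 22.1, 28.4, 32.0])

def langmuir(Ce, qmax, KL):
    """Monolayer adsorption: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Multilayer/heterogeneous adsorption: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir:   qmax={qmax:.2f} mg/g, KL={KL:.3f} L/mg")
print(f"Freundlich: KF={KF:.2f}, n={n:.2f}")

# Hypothetical kinetic data: contact time t (min) and uptake qt (mg/g)
t = np.array([5, 15, 30, 45, 60, 75], dtype=float)
qt = np.array([10.0, 18.0, 24.0, 27.0, 28.5, 29.0])

def pseudo_second_order(t, k2, qe_k):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe_k**2 * t / (1.0 + k2 * qe_k * t)

(k2, qe_k), _ = curve_fit(pseudo_second_order, t, qt, p0=[0.01, 30.0])
print(f"Pseudo-second-order: k2={k2:.4f} g/(mg·min), qe={qe_k:.2f} mg/g")
```

    Comparing how well each fitted curve tracks the data (for example via R² or residuals) is what would support the conclusion that both monolayer (Langmuir) and multilayer (Freundlich) behaviour describe the adsorption.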

    Sensitivity and specificity of chest computed tomography scan based on RT-PCR in COVID-19 diagnosis

    Purpose: COVID-19 is a novel, highly contagious and progressive infection occurring worldwide. Diagnosis of the disease is based on real-time polymerase chain reaction (RT-PCR) and computed tomography (CT) scanning, even though both methods remain controversial. Materials and methods: We studied 54 patients with suspected COVID-19 and compared the two methods with each other. Results: The sensitivity and specificity of an abnormal chest CT scan, ground-glass opacity (GGO), consolidation opacity, and combined GGO and consolidation were assessed against RT-PCR. The RT-PCR assay was negative in 23 (42.6%) patients and positive in 31 (57.4%). In addition, 37 patients (68.5%) had an abnormal chest CT scan. The sensitivity and specificity of an abnormal CT scan were 78.6% and 42.3%, respectively, based on the RT-PCR method. Conclusions: Other techniques alongside CT scanning and RT-PCR are advocated to improve the accuracy of COVID-19 diagnosis.
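    For reference, sensitivity and specificity here come from a 2×2 table of CT findings against RT-PCR as the reference standard. The sketch below is illustrative only; the counts are hypothetical placeholders rather than the study's raw data.

```python
# Illustrative sketch: sensitivity and specificity of CT against RT-PCR
# as the reference standard. The counts below are hypothetical placeholders.
def sensitivity_specificity(tp, fp, fn, tn):
    """tp/fp/fn/tn are counts from the 2x2 table of CT result vs RT-PCR."""
    sensitivity = tp / (tp + fn)   # abnormal CT among RT-PCR-positive cases
    specificity = tn / (tn + fp)   # normal CT among RT-PCR-negative cases
    return sensitivity, specificity

# Hypothetical 2x2 table (CT abnormal/normal vs RT-PCR positive/negative)
sens, spec = sensitivity_specificity(tp=40, fp=20, fn=10, tn=30)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```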

    The comparison of plasma fibronectin in term and preterm delivery: A cross-sectional, descriptive-analytical study

    Background: Preterm delivery is one of the main causes of infant death. Therefore, prediction of preterm delivery may prevent a large number of prenatal complications. Objectives: The present study aimed to determine whether preterm delivery can be predicted by assessing maternal plasma fibronectin concentration. Materials and Methods: Serum samples were collected from 105 pregnant women participating in this study. Plasma fibronectin was measured at 24-28 wk of gestation and again at 32-36 wk of gestation; only 65 of the 105 women returned for the second sampling. Plasma fibronectin was analyzed using the ELISA method, and its concentration in term and preterm deliveries was compared. The delivery dates of all the women were also recorded. Results: Of the 105 pregnant women, 28 (26.7%) delivered preterm. Plasma fibronectin concentrations in women with preterm delivery were higher than in those who delivered at term (p = 0.001). Plasma fibronectin concentrations were also significantly higher in the second serum samples (p = 0.01). Plasma fibronectin concentrations were higher in obese women and in those with preeclampsia (p = 0.12) and gestational diabetes (p = 0.81), although these differences were not statistically significant. Conclusion: The plasma fibronectin concentration test could be used as an optional screening test for preterm delivery at 28 to 34 wk of gestation in pregnant women who prefer to avoid vaginal sampling. Key words: Premature birth, Fibronectin, Maternal serum screening tests.
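    The abstract does not state which statistical test produced the reported p-values; as an illustration, a between-group comparison of fibronectin concentrations could be run as below, with hypothetical values, assuming Welch's t-test plus a nonparametric alternative.

```python
# Illustrative sketch: comparing fibronectin concentrations between term and
# preterm groups. The study does not specify the test used; Welch's t-test and
# a Mann-Whitney U test are assumed here, and the values below are
# hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

term = np.array([210.0, 195.0, 230.0, 205.0, 220.0])      # hypothetical ng/mL
preterm = np.array([260.0, 280.0, 250.0, 300.0, 270.0])   # hypothetical ng/mL

t_stat, p_t = stats.ttest_ind(term, preterm, equal_var=False)  # Welch's t-test
u_stat, p_u = stats.mannwhitneyu(term, preterm, alternative="two-sided")
print(f"Welch t-test: p = {p_t:.3f}; Mann-Whitney U: p = {p_u:.3f}")
```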

    Duration of delayed diagnosis in HIV/AIDS patients in Iran: a CD4 depletion model analysis

    Objective: Delayed diagnosis of HIV can lead to an inappropriate response to antiretroviral therapy (ART), rapid progression of the disease, and death. It also has harmful effects on public health due to increased transmission. This study aimed to estimate the duration of delayed diagnosis (DDD) in HIV patients in Iran. Methods: This hybrid cross-sectional cohort study was conducted on the national HIV surveillance system database (HSSD). Linear mixed effect models with random intercept, random slope, and both were used to estimate the parameters required for the CD4 depletion model and to determine the best-fitted model for DDD, stratified by route of transmission, gender, and age group. Results: The DDD was estimated in 11,373 patients, including 4,762 (41.87%) injection drug users (IDUs), 512 (4.5%) men who have sex with men (MSM), 3,762 (33.08%) patients with heterosexual contact, and 2,337 (20.55%) patients infected through other routes of HIV transmission. The total mean DDD was 8.41 ± 5.97 years. The mean DDD was 7.24 ± 0.08 and 9.43 ± 6.83 years in male and female IDUs, respectively. In the heterosexual contact group, the DDD was 8.60 ± 6.43 years in male patients and 9.49 ± 7.17 years in female patients. It was estimated at 9.37 ± 7.30 years in the MSM group. Patients infected through other transmission routes had a DDD of 7.90 ± 6.74 years for males and 7.87 ± 5.87 years for females. Conclusion: A simple CD4 depletion model analysis is presented, which incorporates a pre-estimation step to determine the best-fitted linear mixed model for calculating the parameters required for the CD4 depletion model. Given such a noticeably long HIV diagnostic delay, especially in older adults, MSM, and heterosexual contact groups, regular periodic screening is required to reduce the DDD.
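    A minimal sketch of the pre-estimation step, fitting linear mixed effect models with a random intercept and with a random intercept plus random slope, is shown below; it uses statsmodels' MixedLM on simulated CD4 trajectories and is an assumption-laden illustration, not the study's code or data.

```python
# Illustrative sketch: linear mixed effect models for longitudinal CD4 counts
# (random intercept only vs random intercept + random slope), as a pre-step
# for a CD4 depletion model. The data frame below is a simulated placeholder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 50, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "years_since_diagnosis": np.tile(np.arange(n_visits), n_patients),
})
# Simulated square-root CD4 trajectory with patient-level variation
df["sqrt_cd4"] = (25 - 1.5 * df["years_since_diagnosis"]
                  + rng.normal(0, 2, len(df))
                  + np.repeat(rng.normal(0, 3, n_patients), n_visits))

# Random intercept only
m_int = smf.mixedlm("sqrt_cd4 ~ years_since_diagnosis", df,
                    groups=df["patient"]).fit()
# Random intercept and random slope on time
m_both = smf.mixedlm("sqrt_cd4 ~ years_since_diagnosis", df,
                     groups=df["patient"],
                     re_formula="~years_since_diagnosis").fit()
print(m_int.summary())
print(m_both.summary())
```

    Model comparison (for example by AIC or likelihood ratio) between such fits is one way the best-fitted specification could be chosen before feeding its parameters to the depletion model.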

    Assessment of Moisture Status and Crop Production in Different Climate of Iran

    Drought varies in time of occurrence, duration, intensity, and extent of the affected area from year to year. The objective of this study was therefore to gather and analyze standardized information on the role of early warning systems, based on the FAO56 and UNESCO models, for cereals (wheat, barley, corn, and rice), legumes (bean, chickpea, lentil, and alfalfa), and industrial crops (soybean, sunflower, canola, sugar beet, potato, and cotton) across the environmental zones of Iran. To gather information on perceived risks and foreseen impacts of climatic factors on crop production, we compiled qualitative and quantitative data from agrometeorological and agricultural organizations for 44 stations in Iran (1961-2010). Annual average rainfall and ETo are 76.56 and 3001.03 mm/yr, respectively, at stations with a very dry climate; 195.41 and 2249.44 mm/yr at stations with a dry climate; 343.9 and 1351.62 mm/yr at stations with a semi-dry climate; 583.8 and 1153.4 mm/yr at stations with a semi-humid climate; and 1272.16 and 949.91 mm/yr at stations with a humid climate. The maximum and minimum annual average rainfall occurred at the Rasht (1337.5 mm/yr) and Zabol (57.7 mm/yr) stations, and the maximum and minimum annual average ETo occurred at the Chabahar (3909.15 mm/yr) and Anzali harbor (890.6 mm/yr) stations, respectively. Overall, 13.63 percent of the stations have suitable conditions for crop production, while 86.37 percent are in critical and unsustainable conditions.
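    The abstract does not state the thresholds used to group stations into the five climate classes. As an illustration, the sketch below computes an aridity index AI = P/ETo and assigns classes with assumed cutoffs; only the class-average values quoted in the abstract are reused.

```python
# Illustrative sketch: grouping stations by an aridity index AI = P / ETo
# (annual rainfall over reference evapotranspiration). The class boundaries
# below are illustrative assumptions, not values stated in the abstract.
def climate_class(rainfall_mm, eto_mm):
    ai = rainfall_mm / eto_mm
    if ai < 0.05:
        return "very dry"
    elif ai < 0.20:
        return "dry"
    elif ai < 0.50:
        return "semi-dry"
    elif ai < 0.75:
        return "semi-humid"
    return "humid"

# Example: the class-average values reported in the abstract
for label, p, eto in [("very dry class average", 76.56, 3001.03),
                      ("dry class average", 195.41, 2249.44),
                      ("semi-dry class average", 343.9, 1351.62),
                      ("humid class average", 1272.16, 949.91)]:
    print(label, "->", climate_class(p, eto))
```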

    Two-Step Estimation of the Impact of Contextual Variables on Technical Efficiency of Hospitals: The Case Study of Public Hospitals in Iran

    Background: Measuring the efficiency and productivity of hospitals is a key tool for cost containment and management, which is important for any healthcare system seeking to operate efficiently. Objective: The purpose of this study is to examine the effects of contextual factors on hospital efficiency in Iranian public hospitals. Methods: This was a quantitative, descriptive-analytical study conducted in two steps. First, we measured the efficiency scores of teaching and non-teaching hospitals using the Data Envelopment Analysis (DEA) method. Second, the relationship between the efficiency scores and contextual factors was analyzed. We used medians (first and third quartiles) to describe the concentration and distribution of each variable in teaching and non-teaching hospitals, and the Wilcoxon test was used to compare them. The Spearman test was used to evaluate the correlation between hospital efficiency and contextual variables (province area, province population, population density, and the number of beds per hospital). Results: On average, the efficiency score was 0.67 in non-teaching hospitals across the 31 provinces and 0.54 in teaching hospitals. Results showed no significant relationship between the efficiency score and the number of hospitals in the provinces (p = 0.1 and 0.15, respectively). The relationship between the number of hospitals and the population of the province was significant and positive. There was also a positive relationship between the number of beds and the area of the province in both teaching and non-teaching hospitals. Conclusion: Multiple factors influence hospital efficiency; to address hospital inefficiency, multi-intervention packages focusing on both the hospital and its context should be developed. Attention to contextual factors and organizational architecture is necessary to improve efficiency.
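    A minimal sketch of the first step is given below: an input-oriented, constant-returns-to-scale (CCR) DEA model solved as one linear program per hospital. The inputs, outputs, and figures are hypothetical placeholders, since the abstract does not specify the DEA model or variables used.

```python
# Illustrative sketch: input-oriented, constant-returns-to-scale (CCR) DEA,
# solved as one linear program per hospital. Inputs/outputs and numbers are
# hypothetical placeholders; the abstract does not state the DEA specification.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = hospitals, columns = inputs / outputs
X = np.array([[20.0, 300.0],   # e.g. beds, staff
              [35.0, 450.0],
              [15.0, 200.0],
              [40.0, 600.0]])
Y = np.array([[5000.0],        # e.g. inpatient days
              [6000.0],
              [3500.0],
              [6500.0]])

def ccr_efficiency(X, Y, k):
    """Efficiency of unit k: min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 ... lambda_n]
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta in (0, 1]; 1 means technically efficient

scores = [ccr_efficiency(X, Y, k) for k in range(len(X))]
print(np.round(scores, 3))
```

    The second step could then correlate the resulting scores with contextual variables, e.g. scipy.stats.spearmanr(scores, province_population).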

    Association between inter-arm blood pressure difference and cardiovascular disease: result from baseline Fasa Adults Cohort Study

    An inter-arm blood pressure difference has been reported to be associated with cardiovascular mortality and morbidity. Our study aimed to investigate the association between inter-arm systolic and diastolic blood pressure differences and cardiovascular disease (CVD). A total of 10,126 participants aged 35–70 years were enrolled in the prospective Fasa Persian Adult Cohort. In this cross-sectional study, the cutoff values for the inter-arm blood pressure difference were <5, ≥5, ≥10, and ≥15 mmHg. Descriptive statistics and logistic regression were used to analyze the data. The prevalence of an inter-arm systolic and diastolic blood pressure difference (inter-arm SBPD and inter-arm DBPD) of ≥15 mmHg was 8.08% and 2.61%, respectively. Logistic regression analysis showed that inter-arm SBPD ≥ 15 (OR<5/≥15 = 1.412; 95% CI = 1.099–1.814) and inter-arm DBPD ≥ 10 (OR<5/≥10 = 1.518; 95% CI = 1.238–1.862) affected the risk of CVD. The results showed that the differences in blood pressure between the arms had a strong positive relationship with CVD. Therefore, the inter-arm blood pressure difference could be considered a marker for the prevention and diagnosis of CVD by physicians.
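    As an illustration of how such odds ratios and confidence intervals can be obtained, the sketch below fits a logistic regression of CVD status on an inter-arm SBPD category using simulated data; the variable names, categories, and adjustment for age are assumptions, not the authors' model specification.

```python
# Illustrative sketch: logistic regression of CVD on inter-arm blood pressure
# difference categories, reporting odds ratios with 95% CIs. The simulated
# data and variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    # inter-arm SBP difference category: "<5" (reference) or ">=15"
    "sbpd_cat": rng.choice(["<5", ">=15"], size=n, p=[0.9, 0.1]),
    "age": rng.integers(35, 70, size=n),
})
# Simulated outcome with a higher CVD risk in the ">=15" group
logit_p = -3.0 + 0.35 * (df["sbpd_cat"] == ">=15") + 0.03 * (df["age"] - 50)
df["cvd"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("cvd ~ C(sbpd_cat, Treatment('<5')) + age", data=df).fit()
odds_ratios = np.exp(model.params)        # OR = exp(coefficient)
conf_int = np.exp(model.conf_int())       # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```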

    Global, regional, and national burden of chronic kidney disease, 1990–2017 : a systematic analysis for the Global Burden of Disease Study 2017

    Background Health system planning requires careful assessment of chronic kidney disease (CKD) epidemiology, but data for morbidity and mortality of this disease are scarce or non-existent in many countries. We estimated the global, regional, and national burden of CKD, as well as the burden of cardiovascular disease and gout attributable to impaired kidney function, for the Global Burden of Diseases, Injuries, and Risk Factors Study 2017. We use the term CKD to refer to the morbidity and mortality that can be directly attributed to all stages of CKD, and we use the term impaired kidney function to refer to the additional risk of CKD from cardiovascular disease and gout. Methods The main data sources we used were published literature, vital registration systems, end-stage kidney disease registries, and household surveys. Estimates of CKD burden were produced using a Cause of Death Ensemble model and a Bayesian meta-regression analytical tool, and included incidence, prevalence, years lived with disability, mortality, years of life lost, and disability-adjusted life-years (DALYs). A comparative risk assessment approach was used to estimate the proportion of cardiovascular diseases and gout burden attributable to impaired kidney function. Findings Globally, in 2017, 1·2 million (95% uncertainty interval [UI] 1·2 to 1·3) people died from CKD. The global all-age mortality rate from CKD increased 41·5% (95% UI 35·2 to 46·5) between 1990 and 2017, although there was no significant change in the age-standardised mortality rate (2·8%, −1·5 to 6·3). In 2017, 697·5 million (95% UI 649·2 to 752·0) cases of all-stage CKD were recorded, for a global prevalence of 9·1% (8·5 to 9·8). The global all-age prevalence of CKD increased 29·3% (95% UI 26·4 to 32·6) since 1990, whereas the age-standardised prevalence remained stable (1·2%, −1·1 to 3·5). CKD resulted in 35·8 million (95% UI 33·7 to 38·0) DALYs in 2017, with diabetic nephropathy accounting for almost a third of DALYs. Most of the burden of CKD was concentrated in the three lowest quintiles of Socio-demographic Index (SDI). In several regions, particularly Oceania, sub-Saharan Africa, and Latin America, the burden of CKD was much higher than expected for the level of development, whereas the disease burden in western, eastern, and central sub-Saharan Africa, east Asia, south Asia, central and eastern Europe, Australasia, and western Europe was lower than expected. 1·4 million (95% UI 1·2 to 1·6) cardiovascular disease-related deaths and 25·3 million (22·2 to 28·9) cardiovascular disease DALYs were attributable to impaired kidney function. Interpretation Kidney disease has a major effect on global health, both as a direct cause of global morbidity and mortality and as an important risk factor for cardiovascular disease. CKD is largely preventable and treatable and deserves greater attention in global health policy decision making, particularly in locations with low and middle SDI

    Morbidity and mortality from road injuries: results from the Global Burden of Disease Study 2017

    Background: The global burden of road injuries is known to follow complex geographical, temporal and demographic patterns. While health loss from road injuries is a major topic of global importance, there has been no recent comprehensive assessment that includes estimates for every age group, sex and country over recent years. Methods: We used results from the Global Burden of Disease (GBD) 2017 study to report incidence, prevalence, years lived with disability, deaths, years of life lost and disability-adjusted life years for all locations in the GBD 2017 hierarchy from 1990 to 2017 for road injuries. Second, we measured mortality-to-incidence ratios by location. Third, we assessed the distribution of the natures of injury (eg, traumatic brain injury) that result from each road injury. Results: Globally, 1 243 068 (95% uncertainty interval 1 191 889 to 1 276 940) people died from road injuries in 2017 out of 54 192 330 (47 381 583 to 61 645 891) new cases of road injuries. Age-standardised incidence rates of road injuries increased between 1990 and 2017, while mortality rates decreased. Regionally, age-standardised mortality rates decreased in all but two regions, South Asia and Southern Latin America, where rates did not change significantly. Nine of 21 GBD regions experienced significant increases in age-standardised incidence rates, while 10 experienced significant decreases and two experienced no significant change. Conclusions: While road injury mortality has improved in recent decades, there are worsening rates of incidence and significant geographical heterogeneity. These findings indicate that more research is needed to better understand how road injuries can be prevented.
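    The mortality-to-incidence ratio used in the methods is the simple quotient of deaths over incident cases; the sketch below applies it to the global 2017 figures quoted in the abstract (the helper function name is illustrative).

```python
# Illustrative sketch: mortality-to-incidence ratio (MIR) for road injuries,
# using the global 2017 point estimates reported in the abstract.
def mortality_to_incidence_ratio(deaths, incident_cases):
    return deaths / incident_cases

global_mir_2017 = mortality_to_incidence_ratio(1_243_068, 54_192_330)
print(f"Global road-injury MIR, 2017: {global_mir_2017:.4f}")  # ~0.023
```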