Cholestasis impaired spatial and non-spatial novelty detection in mice
Bile duct ligation (BDL) is known to induce cholestasis-related liver function impairment as well as consequent cognitive dysfunction (i.e. impaired learning and memory formation). This study investigated the effects of cholestasis (14, 21 and 28 days post bile duct ligation) on spatial and non-spatial novelty detection, using a non-associative task. Male mice weighing 30-35 g were used. Cholestasis was induced by ligating the main bile duct with two ligatures and transecting the duct at the midpoint between them. An open field paradigm was employed to assess retention of spatial and non-spatial memory. Our data showed that cholestasis (28 days after bile duct ligation) decreased and increased the time spent exploring displaced and non-displaced objects, respectively, indicating a spatial memory deficit. Moreover, the same intervention decreased the time spent exploring substituted objects while leaving non-substituted objects unaffected, suggesting a non-spatial memory deficit. At 14 and 21 days post bile duct ligation, neither spatial nor non-spatial memory was altered. Our results suggest that cholestasis (28, but not 14 or 21, days post bile duct ligation) impaired spatial and non-spatial memory in mice.
A Scheduling Algorithm to Maximize Storm Throughput in Heterogeneous Cluster
In the most popular distributed stream processing frameworks (DSPFs), programs are modeled as directed acyclic graphs. This model allows a DSPF to exploit the parallelism of distributed clusters. However, choosing the proper number of vertices for each operator and finding an appropriate mapping between these vertices and processing resources have a decisive effect on overall throughput and resource utilization, while the simplicity of current DSPFs' schedulers leads these frameworks to perform poorly on large-scale clusters. In this paper, we present the design and implementation of a heterogeneity-aware scheduling algorithm that finds the proper number of vertices for an application graph and maps them to the most suitable cluster nodes. We scale the application graph up over a given cluster gradually, increasing the topology input rate and spawning new instances of bottlenecked vertices. Our experimental results on the Storm Micro-Benchmark show that 1) the prediction model estimates CPU utilization with 92% accuracy; 2) compared to the default Storm scheduler, our scheduler provides a 7% to 44% throughput enhancement; and 3) the proposed method finds a solution within 4% (worst case) of the optimal scheduler, which obtains the best scheduling scenario by exhaustive search over the problem design space.
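The gradual scale-up loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the operator names, per-instance capacities, and the 0.8 utilisation threshold are invented for the example.

```python
# Hedged sketch of gradual scale-up: raise the input rate step by step and
# replicate whichever vertex becomes a bottleneck. Topology values and the
# 0.8 threshold are illustrative assumptions, not figures from the paper.

def scale_up(topology, max_rate, step=100, threshold=0.8):
    """Return an instance count per operator such that no operator's
    utilisation exceeds `threshold` at the target input rate."""
    instances = {op: 1 for op in topology}
    rate = 0
    while rate < max_rate:
        rate += step  # gradually increase the topology input rate
        for op, capacity in topology.items():
            # utilisation = load per instance / capacity of one instance
            while rate / instances[op] / capacity > threshold:
                instances[op] += 1  # spawn a new instance of the bottleneck
    return instances

# Example: a three-operator topology (tuples/sec one instance can handle)
topology = {"spout": 1000, "split": 400, "count": 250}
print(scale_up(topology, max_rate=2000))  # → {'spout': 3, 'split': 7, 'count': 10}
```

A real scheduler would measure utilisation (e.g. via the prediction model mentioned in the abstract) rather than derive it from a fixed capacity, and would also place the chosen instances on heterogeneous nodes.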
Assessment of neuropsychiatric indicators in children and adolescents with primary brain tumors and other brain space occupying lesions before and after surgery
Objective
Cognitive abilities may be impaired due to brain lesions in children and adolescents. This study aimed to investigate neuropsychiatric indicators in children and adolescents with primary brain tumor and other brain space occupying lesions (SOL) before and after surgical procedure.
Methods & materials
The current study is a pre-post study conducted on 81 patients with brain space-occupying lesions aged less than 18 years. Patients with metastatic brain tumors were excluded. The study was performed between 20 December 2016 and 20 December 2017 on patients hospitalized in the neurosurgery ward of Imam Reza University Hospital, Tabriz, Iran. Before and after the surgical procedure, the Digit Span forward and backward task (to assess working memory), the Stroop task and Trail Making Test A & B (to assess attention) and the Rey-Osterrieth Complex Figure Test (to assess visuospatial memory) were administered. Test scores were then compared with normative values as well as with the post-surgery scores.
Results
The most prevalent type of space-occupying brain lesion was medulloblastoma, and the most common region of involvement was the posterior fossa. Scores on all tests improved significantly after surgery compared with before surgery (P<0.05). In the Digit Span forward and backward task, pre-surgery scores did not differ significantly from the standard values (P>0.05). Regarding the various stages of the Rey-Osterrieth Complex Figure Test, scores on the immediate recall stage were significantly low (P<0.05). Of Trail Making Test A & B and the Stroop task, only pre-surgery Trail Making Test A & B scores were significantly increased (P<0.05). Trail Making Test A scores were significantly higher in patients with medulloblastoma and, anatomically, in left temporal tumors, indicating greater damage to the attention domain (P<0.05). In addition, in cerebellar tumors, scores on the immediate recall stage of the Rey-Osterrieth Complex Figure Test were significantly lower (P<0.05).
Conclusions
Visuospatial memory and attention were significantly impaired in pre-surgery assessments compared with the general population (P<0.05). Working memory, visuospatial memory and attention improved compared with pre-surgery levels. Deficits in the attention domain were greatest in medulloblastoma.
Intraperitoneal drain placement and outcomes after elective colorectal surgery: international matched, prospective, cohort study
Despite current guidelines, intraperitoneal drain placement after elective colorectal surgery remains widespread. Drains were not associated with earlier detection of intraperitoneal collections, but were associated with prolonged hospital stay and increased risk of surgical-site infections. Background Many surgeons routinely place intraperitoneal drains after elective colorectal surgery. However, enhanced recovery after surgery guidelines recommend against their routine use owing to a lack of clear clinical benefit. This study aimed to describe international variation in intraperitoneal drain placement and the safety of this practice. Methods COMPASS (COMPlicAted intra-abdominal collectionS after colorectal Surgery) was a prospective, international, cohort study that enrolled consecutive adults undergoing elective colorectal surgery (February to March 2020). The primary outcome was the rate of intraperitoneal drain placement. Secondary outcomes included: rate and time to diagnosis of postoperative intraperitoneal collections; rate of surgical-site infections (SSIs); time to discharge; and 30-day major postoperative complications (Clavien-Dindo grade at least III). After propensity score matching, multivariable logistic regression and Cox proportional hazards regression were used to estimate the independent association of the secondary outcomes with drain placement. Results Overall, 1805 patients from 22 countries were included (798 women, 44.2 per cent; median age 67.0 years). The drain insertion rate was 51.9 per cent (937 patients). After matching, drains were not associated with reduced rates (odds ratio (OR) 1.33, 95 per cent c.i. 0.79 to 2.23; P = 0.287) or earlier detection (hazard ratio (HR) 0.87, 0.33 to 2.31; P = 0.780) of collections. Although not associated with worse major postoperative complications (OR 1.09, 0.68 to 1.75; P = 0.709), drains were associated with delayed hospital discharge (HR 0.58, 0.52 to 0.66; P < 0.001) and an increased risk of SSIs (OR 2.47, 1.50 to 4.05; P < 0.001). Conclusion Intraperitoneal drain placement after elective colorectal surgery is not associated with earlier detection of postoperative collections, but prolongs hospital stay and increases SSI risk.
Global burden of 288 causes of death and life expectancy decomposition in 204 countries and territories and 811 subnational locations, 1990–2021: a systematic analysis for the Global Burden of Disease Study 2021
Background
Regular, detailed reporting on population health by underlying cause of death is fundamental for public health decision making. Cause-specific estimates of mortality and the subsequent effects on life expectancy worldwide are valuable metrics to gauge progress in reducing mortality rates. These estimates are particularly important following large-scale mortality spikes, such as the COVID-19 pandemic. When systematically analysed, mortality rates and life expectancy allow comparisons of the consequences of causes of death globally and over time, providing a nuanced understanding of the effect of these causes on global populations.
Methods
The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2021 cause-of-death analysis estimated mortality and years of life lost (YLLs) from 288 causes of death by age-sex-location-year in 204 countries and territories and 811 subnational locations for each year from 1990 until 2021. The analysis used 56 604 data sources, including data from vital registration and verbal autopsy as well as surveys, censuses, surveillance systems, and cancer registries, among others. As with previous GBD rounds, cause-specific death rates for most causes were estimated using the Cause of Death Ensemble model—a modelling tool developed for GBD to assess the out-of-sample predictive validity of different statistical models and covariate permutations and combine those results to produce cause-specific mortality estimates—with alternative strategies adapted to model causes with insufficient data, substantial changes in reporting over the study period, or unusual epidemiology. YLLs were computed as the product of the number of deaths for each cause-age-sex-location-year and the standard life expectancy at each age. As part of the modelling process, uncertainty intervals (UIs) were generated using the 2·5th and 97·5th percentiles from a 1000-draw distribution for each metric. We decomposed life expectancy by cause of death, location, and year to show cause-specific effects on life expectancy from 1990 to 2021. We also used the coefficient of variation and the fraction of population affected by 90% of deaths to highlight concentrations of mortality. Findings are reported in counts and age-standardised rates. 
Methodological improvements for cause-of-death estimates in GBD 2021 include the expansion of under-5-years age group to include four new age groups, enhanced methods to account for stochastic variation of sparse data, and the inclusion of COVID-19 and other pandemic-related mortality—which includes excess mortality associated with the pandemic, excluding COVID-19, lower respiratory infections, measles, malaria, and pertussis. For this analysis, 199 new country-years of vital registration cause-of-death data, 5 country-years of surveillance data, 21 country-years of verbal autopsy data, and 94 country-years of other data types were added to those used in previous GBD rounds.
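The two computations at the heart of the Methods, YLLs as deaths multiplied by standard life expectancy at each age, and uncertainty intervals as the 2.5th and 97.5th percentiles of a 1000-draw distribution, can be expressed compactly. The death counts, life expectancies, and draw distribution below are invented toy values, not GBD 2021 data.

```python
# Toy sketch of the YLL and uncertainty-interval computations described in
# the Methods. All numbers here are illustrative, not GBD estimates.
import numpy as np

# YLLs = deaths in each age group x standard life expectancy remaining
# at that age (the GBD reference life table; values below are made up).
deaths = np.array([120, 80, 300])            # deaths per age group
std_life_exp = np.array([85.0, 60.0, 20.0])  # remaining years at each age
ylls = deaths * std_life_exp
total_ylls = ylls.sum()
print(total_ylls)  # → 21000.0

# Uncertainty interval: 2.5th and 97.5th percentiles of a 1000-draw
# distribution for the metric (here a stand-in normal posterior).
rng = np.random.default_rng(0)
draws = rng.normal(loc=total_ylls, scale=500.0, size=1000)
lo, hi = np.percentile(draws, [2.5, 97.5])
```

In the actual analysis the draws come from the Cause of Death Ensemble model's posterior for each cause-age-sex-location-year cell, not from a single normal distribution.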
Findings
The leading causes of age-standardised deaths globally were the same in 2019 as they were in 1990; in descending order, these were ischaemic heart disease, stroke, chronic obstructive pulmonary disease, and lower respiratory infections. In 2021, however, COVID-19 replaced stroke as the second-leading age-standardised cause of death, with 94·0 deaths (95% UI 89·2–100·0) per 100 000 population. The COVID-19 pandemic shifted the rankings of the leading five causes, lowering stroke to the third-leading and chronic obstructive pulmonary disease to the fourth-leading position. In 2021, the highest age-standardised death rates from COVID-19 occurred in sub-Saharan Africa (271·0 deaths [250·1–290·7] per 100 000 population) and Latin America and the Caribbean (195·4 deaths [182·1–211·4] per 100 000 population). The lowest age-standardised death rates from COVID-19 were in the high-income super-region (48·1 deaths [47·4–48·8] per 100 000 population) and southeast Asia, east Asia, and Oceania (23·2 deaths [16·3–37·2] per 100 000 population). Globally, life expectancy steadily improved between 1990 and 2019 for 18 of the 22 investigated causes. Decomposition of global and regional life expectancy showed that reductions in deaths from enteric infections, lower respiratory infections, stroke, and neonatal deaths, among others, contributed to improved survival over the study period. However, a net reduction of 1·6 years occurred in global life expectancy between 2019 and 2021, primarily due to increased death rates from COVID-19 and other pandemic-related mortality. Life expectancy was highly variable between super-regions over the study period, with southeast Asia, east Asia, and Oceania gaining 8·3 years (6·7–9·9) overall, while having the smallest reduction in life expectancy due to COVID-19 (0·4 years). The largest reduction in life expectancy due to COVID-19 occurred in Latin America and the Caribbean (3·6 years).
Additionally, 53 of the 288 causes of death were highly concentrated in locations with less than 50% of the global population as of 2021, and these causes of death became progressively more concentrated since 1990, when only 44 causes showed this pattern. The concentration phenomenon is discussed heuristically with respect to enteric and lower respiratory infections, malaria, HIV/AIDS, neonatal disorders, tuberculosis, and measles.
Interpretation
Long-standing gains in life expectancy and reductions in many of the leading causes of death have been disrupted by the COVID-19 pandemic, the adverse effects of which were spread unevenly among populations. Despite the pandemic, there has been continued progress in combating several notable causes of death, leading to improved global life expectancy over the study period. Each of the seven GBD super-regions showed an overall improvement from 1990 to 2021, obscuring the negative effect in the years of the pandemic. Additionally, our findings regarding regional variation in causes of death driving increases in life expectancy hold clear policy utility. Analyses of shifting mortality trends reveal that several causes, once widespread globally, are now increasingly concentrated geographically. These changes in mortality concentration, alongside further investigation of changing risks, interventions, and relevant policy, present an important opportunity to deepen our understanding of mortality-reduction strategies. Examining patterns in mortality concentration might reveal areas where successful public health interventions have been implemented. Translating these successes to locations where certain causes of death remain entrenched can inform policies that work to improve life expectancy for people everywhere.
Vermicompost quality and earthworm reproduction in different organic waste substrates
Purpose The present study aimed to evaluate changes in the parameters affecting the quality of vermicompost produced by the earthworm Eisenia fetida on different organic waste substrates, using multivariate methods: analysis of variance, factor analysis and principal component analysis (PCA). Method A completely randomized design experiment was conducted with a 2 × 8 factorial arrangement of experimental and control treatments, in triplicate per treatment. We investigated the growth and reproduction of earthworms and the characteristics of vermicompost produced on different organic wastes and residues, represented by carrot pulp (C), potato peel (P), vegetables (V) and sawdust (S) blended with cattle manure (the main substrate) at two levels, 100 and 150 grams of each treatment per two kilograms of manure. Chemical parameters, including pH, electrical conductivity (EC), carbon/nitrogen ratio (C/N), phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), iron (Fe) and copper (Cu), were measured in the vermicompost produced. Results Cluster analysis and PCA grouped the substrate combinations into three categories with similar qualitative characteristics. The first two principal components in PCA revealed that the major parameters responsible for the qualitative changes in the produced vermicompost were iron, copper, calcium, magnesium, potassium, phosphorus and nitrogen. Conclusion The results suggest that the treatment CPVS and the treatment S (sawdust) provided the optimal conditions for the growth and reproduction of earthworms and the production of high-quality vermicompost.
Defining the at risk patients for contrast induced nephropathy after coronary angiography; 24-h urine creatinine versus Cockcroft-Gault equation or serum creatinine level
Background: Definitions of chronic kidney disease (CKD) in many catheterization laboratories have relied on serum creatinine (Scr) rather than glomerular filtration rate (GFR). Given that CKD is the primary predisposing factor for contrast-induced nephropathy (CIN), we compared the sensitivity of GFR calculated from 24-h urine creatinine, the Cockcroft-Gault (CG) equation and the Scr level for defining patients at risk of CIN who underwent coronary angiography (CAG). Materials and Methods: Two hundred fifty-four subjects who were candidates for CAG and had normal creatinine levels were enrolled. Before CAG, GFR was calculated from a 24-h urine collection, the CG equation and a single Scr sample according to a previously described protocol. The contrast volume used in each case was <100 ml. CIN was defined as a 0.5 mg/dL or 25% elevation in Scr. Results: CIN occurred in 10.6% of patients. Baseline GFR, contrast volume and diabetes were independent risk factors for CIN. GFR was less than 60 ml/min/1.73 m2 in 28% and 23.2% of patients according to 24-h urine creatinine and the CG equation, respectively. In predicting CIN, GFR estimated from 24-h urine creatinine had 85.2% sensitivity and 59.3% specificity, while CG-equation GFR had 78.9% sensitivity and 81.1% specificity. Conclusion: Although GFR estimated by the CG equation is less sensitive than GFR calculated from 24-h urine creatinine for predicting CIN, it is better than Scr alone; given its cost-effectiveness and convenience, we suggest at least using the CG equation to calculate GFR before contrast exposure, especially in diabetic patients and/or those older than 60 years.
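The Cockcroft-Gault estimate and the study's CIN criterion are both simple formulas; a sketch makes the comparison concrete. The equation below is the standard Cockcroft-Gault formula; the patient values in the example are hypothetical, and the study's exact protocol details are not reproduced here.

```python
# Cockcroft-Gault creatinine clearance and the study's CIN definition.
# The patient values used in the example are hypothetical.

def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault
    equation: (140 - age) x weight / (72 x Scr), x 0.85 for women."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def cin_occurred(scr_before, scr_after):
    """CIN as defined in the study: an absolute rise of 0.5 mg/dL or a
    25% relative rise in serum creatinine after contrast exposure."""
    return (scr_after - scr_before >= 0.5) or (scr_after >= 1.25 * scr_before)

# Hypothetical 65-year-old, 70 kg man with Scr 1.0 mg/dL
print(round(cockcroft_gault(65, 70, 1.0), 1))  # → 72.9 mL/min
print(cin_occurred(1.0, 1.5))                  # → True (0.5 mg/dL rise)
```

Note that Cockcroft-Gault estimates creatinine clearance in mL/min; the 60 ml/min/1.73 m2 threshold quoted in the abstract additionally assumes normalisation to body surface area.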
Prevalence of latent tuberculosis infection among tuberculosis laboratory workers in Iran
OBJECTIVES The risk of transmission of Mycobacterium tuberculosis from patients to health care workers (HCWs) is a neglected problem in many countries, including Iran. The aim of this study was to estimate the prevalence of latent tuberculosis (TB) infection (LTBI) among TB laboratory staff in Iran, and to elucidate the risk factors associated with LTBI. METHODS All TB laboratory staff (689 individuals) employed in the TB laboratories of 50 Iranian universities of medical sciences and a random sample of 317 low-risk HCWs were included in this cross-sectional study. Participants with tuberculin skin test indurations of 10 mm or more were considered to have an LTBI. RESULTS The prevalence of LTBI among TB laboratory staff and low-risk HCWs was 24.83% (95% confidence interval [CI], 21.31 to 27.74%) and 14.82% (95% CI, 11.31 to 19.20%), respectively. No active TB cases were found in either group. After adjusting for potential confounders, TB laboratory staff were more likely to have an LTBI than low-risk HCWs (prevalence odds ratio, 2.06; 95% CI, 1.35 to 3.17). CONCLUSIONS This study showed that LTBI is an occupational health problem among TB laboratory staff in Iran. It reinforces the need to design and implement simple, effective, and affordable TB infection control programs in TB laboratories in Iran.
Comparison of the tuberculin skin test and the QuantiFERON-TB Gold test in detecting latent tuberculosis in health care workers in Iran
OBJECTIVES: The tuberculin skin test (TST) and the QuantiFERON-TB Gold test (QFT) are used to identify latent tuberculosis infections (LTBIs). The aim of this study was to determine the agreement between these two tests among health care workers in Iran. METHODS: This cross-sectional study included 177 tuberculosis (TB) laboratory staff and 67 non-TB staff. TST indurations of 10 mm or more were considered positive. The Student's t-test and the chi-square test were used to compare the mean scores and proportions of variables between the TB laboratory staff and the non-TB laboratory staff. Kappa statistics were used to evaluate the agreement between the tests, and logistic regression was used to assess the risk factors associated with positive results for each test. RESULTS: The prevalence of LTBI according to the QFT and the TST was 17% (95% confidence interval [CI], 12% to 21%) and 16% (95% CI, 11% to 21%), respectively. The agreement between the QFT and the TST was 77.46%, with a kappa of 0.19 (95% CI, 0.04 to 0.34). CONCLUSIONS: Although the prevalence of LTBI based on the QFT and the TST was not significantly different, the kappa statistic for agreement between the two tests in detecting LTBI was low.
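The agreement and kappa figures reported here follow the standard Cohen's kappa computation on a 2x2 table of paired test results. The cell counts below are a hypothetical reconstruction consistent with the reported marginals (244 participants, roughly 41 QFT-positive and 39 TST-positive), not the study's raw data; they happen to reproduce a kappa near the reported 0.19.

```python
# Cohen's kappa for agreement between two binary tests (QFT vs TST).
# The 2x2 cell counts are a hypothetical reconstruction, not the study's
# actual table.

def cohens_kappa(both_pos, qft_only, tst_only, both_neg):
    """Cohen's kappa from the four cells of a 2x2 agreement table."""
    n = both_pos + qft_only + tst_only + both_neg
    p_observed = (both_pos + both_neg) / n     # raw agreement
    qft_pos = (both_pos + qft_only) / n        # marginal positive rates
    tst_pos = (both_pos + tst_only) / n
    # agreement expected by chance, from the marginals of each test
    p_chance = qft_pos * tst_pos + (1 - qft_pos) * (1 - tst_pos)
    return (p_observed - p_chance) / (1 - p_chance)

kappa = cohens_kappa(both_pos=13, qft_only=28, tst_only=26, both_neg=177)
print(round(kappa, 2))  # → 0.19
```

The example illustrates why raw agreement (here about 78%) can coexist with a low kappa: when most participants are negative on both tests, much of the agreement is expected by chance alone.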