88 research outputs found

    Analysis of Social Media Discussions on (#)Diet by Blue, Red, and Swing States in the U.S.

    The relationship between political affiliation and diet-related discussion on social media has not been studied at the population level. This study used a cost- and time-effective framework to leverage, aggregate, and analyze social media data, enhancing our understanding of diet-related discussions with respect to political orientation across U.S. states. This mixed-methods study used computational methods to collect tweets containing “diet” or “#diet” shared over one year, identified tweets posted by U.S. Twitter users, extracted the topics of those tweets, and compared Democratic, Republican, and swing states based on topic weights. A qualitative method was employed to code topics. We found 32 unique topics across more than 800,000 tweets, covering a wide range of themes such as diet types and chronic conditions. A comparative analysis of the topic weights revealed a significant difference between Democratic, Republican, and swing states: the largest difference was between swing and Democratic states, and the smallest between swing and Republican states. Our study provides initial insight into the association between potential political leanings and health (e.g., dietary behaviors). Diet discussions differ depending on the political orientation of the state in which Twitter users reside, and understanding this correlation can help develop targeted and effective health promotion, communication, and policymaking strategies
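    The abstract describes comparing state groups by the weight of extracted topics but does not name the comparison metric. A minimal sketch of that step, assuming normalized topic-count vectors and a Euclidean distance between them (both assumptions; the topic names and counts below are purely illustrative):

    ```python
    import math
    from collections import Counter

    def topic_weights(topic_counts):
        """Normalize raw topic counts into a topic-weight distribution."""
        total = sum(topic_counts.values())
        return {t: c / total for t, c in topic_counts.items()}

    def weight_distance(w1, w2):
        """Euclidean distance between two topic-weight vectors (an
        assumed stand-in; the paper does not name its metric)."""
        topics = set(w1) | set(w2)
        return math.sqrt(sum((w1.get(t, 0.0) - w2.get(t, 0.0)) ** 2 for t in topics))

    # Toy topic counts per state group -- illustrative numbers only
    dem = topic_weights(Counter({"keto": 40, "vegan": 35, "diabetes": 25}))
    rep = topic_weights(Counter({"keto": 50, "vegan": 15, "diabetes": 35}))
    swing = topic_weights(Counter({"keto": 48, "vegan": 18, "diabetes": 34}))

    print(weight_distance(swing, dem), weight_distance(swing, rep))
    ```

    With any such metric, the study's comparison reduces to ranking pairwise distances between the three group-level weight vectors.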

    The impact of a dedicated coronavirus disease 2019 primary angioplasty protocol on time components related to ST-segment elevation myocardial infarction management in a 24/7 primary percutaneous coronary intervention-capable hospital

    Background: Primary percutaneous coronary intervention (PPCI), the treatment of choice for ST-segment elevation myocardial infarction (STEMI), should be performed rapidly. Preventive strategies are necessary during the coronavirus disease 2019 (COVID-19) outbreak, an ongoing global concern; however, critical times in STEMI management may be affected by the implementation of infection control protocols. Aims: We aimed to investigate the impact of our dedicated COVID-19 PPCI protocol on time components related to STEMI care and on catheterization laboratory personnel safety. A further objective was a subendpoint analysis comparing patient outcomes at a median time of 70 days during the pandemic with those of patients treated in the preceding year. Methods: Patients with STEMI who underwent PPCI were included in this study. Chest computed tomography (CT) and real-time reverse transcriptase-polymerase chain reaction (rRT-PCR) tests were performed in patients suspected of having COVID-19. A total of 178 patients admitted between February 29 and April 30, 2020 were compared with 146 patients admitted between March 1 and April 30, 2019. Results: Severe acute respiratory syndrome coronavirus 2 infection was confirmed by rRT-PCR in 7 cases; in 6 of those 7 patients, CT was indicative of COVID-19. There were no differences between the study groups in critical time intervals for reperfusion in STEMI. The 70-day mortality rate before and during the pandemic was 2.73% and 4.49%, respectively (P = 0.4). Conclusions: The implementation of the dedicated COVID-19 PPCI protocol in patients with STEMI allowed us to achieve target times for reperfusion, short-term clinical outcomes, and staff safety similar to those of the prepandemic era. Copyright by the Author(s), 2020

    Could the 2017 ILAE and the four-dimensional epilepsy classifications be merged to a new "Integrated Epilepsy Classification"?

    Over the last few decades, the ILAE classifications for seizures and epilepsies (ILAE-EC) have been updated repeatedly to reflect the substantial progress made in the diagnosis and etiological understanding of epilepsies and seizures, and to correct some shortcomings of the terminology used by the original taxonomy from the 1980s. However, these proposals have not been universally accepted or used in routine clinical practice. During the same period, a separate classification known as the "Four-dimensional epilepsy classification" (4D-EC) was developed, which includes a seizure classification based exclusively on ictal symptomatology and has been tested and adapted over the years. The extensive arguments made in the past for and against these two classification systems have mainly focused on the shortcomings of each, presuming that they are incompatible. As a further, more detailed discussion of the differences seemed relatively unproductive, we here review and assess the concordance between the two approaches that has evolved over time, to consider whether a classification incorporating the best aspects of both is feasible. To facilitate further discussion in this direction, we outline a concrete proposal showing how such a compromise could be accomplished: the "Integrated Epilepsy Classification". It consists of five categories derived to different degrees from both classification systems: 1) a "Headline" summarizing localization and etiology for less specialized users, 2) "Seizure type(s)", 3) "Epilepsy type" (focal, generalized, or unknown, allowing the epilepsy syndrome to be added if available), 4) "Etiology", and 5) "Comorbidities & patient preferences"

    Effect of COVID-19 medications on corrected QT interval and induction of torsade de pointes: Results of a multicenter national survey

    Background: Some data indicate that drugs repurposed for coronavirus disease 2019 (COVID-19) can increase the risk of QTc prolongation and torsade de pointes (TdP), yet these arrhythmic side effects have not been adequately addressed in COVID-19 patients treated with such medications. Methods: This is a prospective study of 2403 patients hospitalised at 13 hospitals within the COVID-19 epicentres of Iran. These patients were treated with chloroquine, hydroxychloroquine, lopinavir/ritonavir, atazanavir/ritonavir, oseltamivir, favipiravir and remdesivir alone or in combination with azithromycin. The primary outcome of the study was the incidence of critical QTc prolongation, and the secondary outcomes were the incidences of TdP and death. Results: Of the 2403 patients, 2365 met inclusion criteria. The primary outcome of QTc ≥ 500 ms and ΔQTc ≥ 60 ms was observed in 11.2% and 17.6% of the patients, respectively. The secondary outcomes of TdP and death were reported in 0.38% and 9.8% of the patients, respectively. The risk of critical QT prolongation increased in the presence of female gender, history of heart failure, treatment with hydroxychloroquine, azithromycin combination therapy, simultaneous furosemide or beta-blocker therapy, and acute renal or hepatic dysfunction. The risk of TdP, however, was predicted by treatment with lopinavir-ritonavir, simultaneous amiodarone or furosemide administration, and hypokalaemia during treatment. Conclusion: This cohort showed significant QTc prolongation with all COVID-19 medications studied; however, the life-threatening arrhythmia of TdP occurred rarely. Among the repurposed drugs studied, hydroxychloroquine or lopinavir-ritonavir, alone or in combination with azithromycin, clearly increased the risk of critical QT prolongation and/or TdP. © 2021 John Wiley & Sons Ltd
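    The primary outcome combines two thresholds (absolute QTc of 500 ms or a 60 ms rise over baseline). A minimal sketch of that decision rule, with hypothetical measurements:

    ```python
    def critical_qtc(qtc_ms, baseline_qtc_ms):
        """Flag critical QTc prolongation per the study's primary
        outcome: absolute QTc >= 500 ms, or a rise over baseline
        (delta-QTc) >= 60 ms."""
        return qtc_ms >= 500 or (qtc_ms - baseline_qtc_ms) >= 60

    # Hypothetical measurements, in milliseconds
    print(critical_qtc(510, 470))  # True: absolute threshold crossed
    print(critical_qtc(480, 410))  # True: delta of 70 ms
    print(critical_qtc(470, 440))  # False: below both thresholds
    ```

    Because the two criteria can fire independently, the abstract reports their incidences separately (11.2% and 17.6%).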

    Dynamic temporary blood facility location-allocation during and post-disaster periods

    The key objective of this study is to develop a tool (a hybridization, or integration, of different techniques) for locating temporary blood banks during and after a disaster so that they can serve hospitals with minimum response time. Temporary blood centres must be located such that they can serve the demand of hospitals in the nearby region within a short duration; we therefore locate them by minimizing the maximum distance to the hospitals. We use a Tabu search heuristic to calculate the optimal number of temporary blood centres, taking cost components into account. In addition, we employ a Bayesian belief network to prioritize the factors for locating the temporary blood facilities. The workability of our model and methodology is illustrated using a case study of blood centres and hospitals surrounding Jamshedpur city. Our results show that at least 6 temporary blood facilities are required to satisfy the demand for blood during and post-disaster periods in Jamshedpur. The results also show that past disaster conditions, response time and convenience of access are the most important factors for locating the temporary blood facilities during and post-disaster periods
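    The "minimize the maximum distance" objective is the classic p-centre problem. A minimal sketch of the objective and a brute-force solver, under stated assumptions: the paper uses Tabu search (which scales to realistic instances; exhaustive search does not), and the distance matrix below is a toy example, not the Jamshedpur data:

    ```python
    from itertools import combinations

    def max_response_distance(centres, hospitals, dist):
        """Worst-case distance from any hospital to its nearest blood
        centre -- the minimax objective described in the abstract."""
        return max(min(dist[h][c] for c in centres) for h in hospitals)

    def best_centres(candidates, hospitals, dist, p):
        """Exhaustively pick p centre sites minimising the objective;
        Tabu search replaces this enumeration on large instances."""
        return min(combinations(candidates, p),
                   key=lambda cs: max_response_distance(cs, hospitals, dist))

    # Toy hospital-to-candidate-site distance matrix (km)
    dist = {
        "H1": {"C1": 2, "C2": 9, "C3": 7},
        "H2": {"C1": 8, "C2": 3, "C3": 6},
        "H3": {"C1": 9, "C2": 8, "C3": 2},
    }
    sites = best_centres(["C1", "C2", "C3"], ["H1", "H2", "H3"], dist, 2)
    print(sites)  # the pair of sites with the smallest worst-case distance
    ```

    Repeating this for increasing p until the worst-case distance meets the response-time target is one way to arrive at a minimum facility count like the "at least 6" reported for Jamshedpur.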

    Using Genetic Algorithms And An Indifference-Zone Ranking And Selection Procedure Under Common Random Numbers For Simulation Optimisation

    Genetic algorithms (GAs) are one of the many optimisation methodologies that have been used in conjunction with simulation modelling. The most critical step in a GA is the assignment of selective probabilities to the alternatives. Selective probabilities are assigned based on the alternatives' estimated performances, which are obtained using simulation. An accurate estimate is needed to reduce the number of cases in which the search is oriented in the wrong direction. Furthermore, it is important to obtain this estimate without many replications. This study proposes a simulation optimisation methodology that combines a GA with an indifference-zone (IZ) ranking and selection procedure under common random numbers (CRN). By using an IZ procedure, a statistical guarantee can be made about the direction in which the search should progress, as well as about the results of the search. Furthermore, using CRN significantly reduces the required number of replications. © 2012 Operational Research Society Ltd. All rights reserved
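    A minimal sketch of the two ingredients named in the abstract: estimating each alternative's performance under common random numbers (every alternative reuses the same seed list, so the noise cancels in pairwise comparisons), and converting those estimates into selective probabilities. The toy simulation model and fitness-proportional rule are assumptions for illustration, not the paper's implementation:

    ```python
    import random

    def simulate(alternative_mean, seed):
        """Toy stochastic simulation: true mean plus seeded noise.
        Reusing the same seed across alternatives implements CRN."""
        return alternative_mean + random.Random(seed).gauss(0, 1)

    def estimate(alternative_mean, seeds):
        """Average simulated performance over the shared replications."""
        return sum(simulate(alternative_mean, s) for s in seeds) / len(seeds)

    def selective_probabilities(estimates):
        """Fitness-proportional selective probabilities from the
        simulated estimates (maximisation assumed)."""
        total = sum(estimates.values())
        return {a: e / total for a, e in estimates.items()}

    seeds = list(range(10))                       # CRN: shared seed list
    true_means = {"A": 5.0, "B": 3.0, "C": 2.0}   # toy alternatives
    est = {a: estimate(m, seeds) for a, m in true_means.items()}
    probs = selective_probabilities(est)
    ```

    In this toy model the shared noise cancels exactly, so the estimated difference between any two alternatives equals the true difference with zero variance, which is the effect CRN exploits; the IZ procedure then adds the statistical guarantee on which alternative is best.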

    Red cell distribution width and severe left ventricular dysfunction in ischemic heart failure

    Objective: The red cell distribution width (RDW), a simple and widely available marker, has been linked with an increased risk of adverse outcomes in patients with heart failure (HF), and with the risk of death and cardiovascular events in those with previous myocardial infarction, but its relation to the severity of left ventricular (LV) dysfunction has not been fully investigated. The aim of this study was to assess the prognostic value of the RDW in post-myocardial infarction patients with typical signs and symptoms of HF and with reduced LV ejection fraction (EF). Methods: Patients (n = 350) came from an ongoing registry of consecutive patients admitted for ischemic heart disease at our center. All patients were followed up 1 year after the initial hospitalization by telephone interviews. The outcomes studied were mortality and hospitalization because of decompensated HF. Results: The RDW coefficient of variation (expressed as a percentage) was calculated from the standard deviation of the mean corpuscular volume and the mean corpuscular volume itself. Using logistic regression analysis, 3 variables consisting of age, RDW level, and hemoglobin were identified as independent predictors of severe LV dysfunction (LVEF <30%). Levels of RDW were associated with the presence of severe LV dysfunction, with an accuracy of 61.4% (95% confidence interval: 56.2-66.4) and 66.9% (95% confidence interval: 61.8-71.6), using cut-off values of higher than 13.5 and 13.8, respectively. Conclusion: Our results suggest that elevated RDW may be used as a prognostic tool among HF patients with documented myocardial infarction, because it is an inexpensive, rapidly calculated test already in routine use in practice. © 2016 Wolters Kluwer Health, Inc. All rights reserved
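    The RDW-CV calculation described in the Results is a one-line formula. A minimal sketch, with hypothetical per-cell volume readings (in practice the analyzer reports the SD and mean directly):

    ```python
    import statistics

    def rdw_cv(mcv_samples):
        """RDW coefficient of variation, in percent: the standard
        deviation of red-cell volumes divided by the mean corpuscular
        volume, times 100 (as described in the abstract)."""
        return statistics.stdev(mcv_samples) / statistics.mean(mcv_samples) * 100

    # Hypothetical red-cell volume readings (fL)
    print(round(rdw_cv([85, 88, 90, 92, 95]), 1))
    ```

    Values above the study's cut-offs (13.5 or 13.8) flagged patients at higher risk of severe LV dysfunction.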

    Comparison of Glasgow-Blatchford score and full Rockall score systems to predict clinical outcomes in patients with upper gastrointestinal bleeding

    Marjan Mokhtare, Vida Bozorgi, Shahram Agah, Mehdi Nikkhah, Amirhossein Faghihi, Amirhossein Boghratian, Neda Shalbaf, Abbas Khanlari, Hamidreza Seifmanesh; Colorectal Research Center, Rasoul Akram Hospital, Tehran, Iran. Background: Various risk scoring systems have recently been developed to predict clinical outcomes in patients with upper gastrointestinal bleeding (UGIB). The two commonly used scoring systems are the full Rockall score (RS) and the Glasgow-Blatchford score (GBS). These bleeding scores were assessed in terms of their prediction of clinical outcomes in patients with UGIB. Patients and methods: Two hundred patients (age >18 years) with obvious symptoms of UGIB in the emergency department of Rasoul Akram Hospital were enrolled. Full RS and GBS were calculated. We followed the patients for records of rebleeding and 1-month mortality. A receiver operating characteristic curve using areas under the curve (AUCs) was used to statistically identify the best cutoff point. Results: Eighteen patients were excluded from the study due to failure to follow up. The rebleeding and mortality rates were 9.34% (n=17) and 11.53% (n=21), respectively. Regarding 1-month mortality, full RS was better than GBS (AUC, 0.648 versus 0.582; P=0.021). GBS was more accurate in detecting transfusion need (AUC, 0.757 versus 0.528; P=0.001), rebleeding rate (AUC, 0.722 versus 0.520; P=0.002), intensive care unit admission rate (AUC, 0.648 versus 0.582; P=0.021), and endoscopic intervention rate (AUC, 0.771 versus 0.650; P<0.001). Conclusion: We found that the full RS system is better for 1-month mortality prediction, while the GBS system is better for prediction of the other outcomes. Keywords: full Rockall score, Glasgow-Blatchford score, gastrointestinal bleeding, mortality, prognosis
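    The AUC values used to compare the two scores can be computed without any curve plotting, via the Mann-Whitney formulation. A minimal sketch with hypothetical risk scores and outcome labels (not the study's data):

    ```python
    def roc_auc(scores, labels):
        """Area under the ROC curve via the Mann-Whitney statistic:
        the probability that a randomly chosen positive case scores
        above a randomly chosen negative case (ties count half)."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical bleeding scores with outcome labels (1 = event)
    print(roc_auc([3, 5, 4, 8, 2, 1], [0, 1, 0, 1, 1, 0]))
    ```

    Computing this for each score against each outcome (mortality, rebleeding, transfusion need, and so on) reproduces the kind of head-to-head AUC comparison reported in the Results.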