
    Shape analysis of the corpus callosum of autistic and normal subjects in neuroimaging.

    Early detection of human disease in today’s society can have an enormous impact on the severity of the disease that is manifested. Diseases such as Autism and Dyslexia, which have no current cure or proven mechanism as to how they develop, can often have an adverse physical and physiological impact on the lifestyle of a human being. Although these diseases are not fully curable, the severity of the handicaps that accompany them can be significantly reduced with proper therapy; thus, the earlier a disease is detected, the sooner therapy can be administered. The research in this thesis studies discriminatory shape measures of brain structures known to differ between autistic and normal individuals, with a focus on the corpus callosum. Considerable research has been done on brain scans (MRI, CT) of autistic versus control (normal) individuals to observe noticeable discrepancies through statistical analysis. The most common and powerful tool for analyzing brain structures, once a specific region has been segmented, is registration, which matches like structures and records the residual error; the Iterative Closest Point (ICP) algorithm is commonly used for this task. Many techniques, such as level sets and statistical methods, can be used for segmentation. The corpus callosum (CC) and the cortical surface of the brain are where most autism analysis is currently performed: the gyrification of the cortical surface has been observed to differ between the two groups, as have the size and shape of the CC. A complete analysis pipeline for autism MRI is extensive and involves many steps; this thesis is limited to examining shape measures of the CC that provide the ability to discriminate between normal and autistic individuals from T1-weighted MRI scans. We examine two approaches to shape analysis, based on the traditional Fourier Descriptors (FD) method and on shape registration (SR) using the Procrustes technique. MRI scans of 22 autistic and 16 normal individuals are used to test the approaches developed in this thesis. We show that both FD and SR can be used to extract features that discriminate between the two populations, with accuracy levels from over 80% up to 100% depending on the technique.
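
    The abstract does not include code, but the two feature families it compares can be sketched. The minimal Python sketch below assumes a corpus callosum outline is available as an N x 2 array of 2D boundary points; the function names are hypothetical, and it shows plain orthogonal Procrustes alignment and magnitude-only Fourier descriptors rather than the exact formulations used in the thesis.

        import numpy as np

        def procrustes_distance(ref, shape):
            # Align `shape` to `ref` (both N x 2 landmark arrays): remove translation
            # and scale, find the best rotation by SVD, and return the residual
            # distance used as a shape-dissimilarity feature.
            a = ref - ref.mean(axis=0)
            b = shape - shape.mean(axis=0)
            a /= np.linalg.norm(a)
            b /= np.linalg.norm(b)
            u, _, vt = np.linalg.svd(b.T @ a)
            aligned = b @ (u @ vt)              # optimal rotation of b onto a
            return np.linalg.norm(a - aligned)

        def fourier_descriptors(contour, n_coeffs=20):
            # Translation-, scale-, rotation- and start-point-invariant Fourier
            # descriptors of a closed contour given as an N x 2 array of points.
            z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
            coeffs = np.fft.fft(z)
            coeffs[0] = 0.0                          # drop DC term -> translation invariance
            mags = np.abs(coeffs)                    # drop phase -> rotation/start invariance
            mags = mags / mags[1]                    # normalise by 1st harmonic -> scale invariance
            return mags[2:n_coeffs + 2]              # low-order harmonics as shape features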

    Early Outcomes of Coronary Artery Bypass Grafting in Patients with Low Ejection Fraction

    Background: Patients with low ejection fraction (EF) are at a higher risk for postoperative complications and mortality. Our objective was to assess the effect of low EF (<40%) on early clinical outcomes after coronary artery bypass grafting (CABG) and to determine the predictors of mortality. Methods: From June 2017 to February 2019, 170 consecutive patients underwent CABG. There were 120 patients with low EF (<40%; 37.49 ± 2.89%); 94 were men (78.3%), and the mean age was 55.83 ± 8.04 years. Fifty patients had normal EF (>40%; 57.90 ± 2.27%); 41 were men (82.0%), the mean age was 54.30 ± 7.01 years, and they served as a control group. Results: Overall 30-day mortality was 10/120 patients (8.3%). Factors associated with higher mortality were female sex (70.0% vs. 17.3%, P<0.001); older age (61.40 ± 7.01 vs. 55.32 ± 7.97 years, P=0.025); diabetes mellitus (100% vs. 51.8%; P=0.003); longer cardiopulmonary bypass time (148.70 ± 40.12 vs. 108.49 ± 36.89 min; P=0.012); longer cross-clamp time (88.19 ± 31.94 vs. 64.77 ± 22.67 min; P=0.049); longer total operative time (6.82 ± 1.03 vs. 5.38 ± 0.95 hours; P=0.001); intra-aortic balloon pump (IABP) insertion (90.0% vs. 10.9%; P<0.001); intra-operative complications (60% vs. 1.8%, P<0.001); ventricular tachycardia and ventricular fibrillation (30% and 50% vs. 4.5% and 5.5%, respectively; P=0.002 for both); myocardial infarction (70% vs. 11.8%, P<0.001); and lower postoperative ejection fraction (21.46 ± 1.93 vs. 40.30 ± 8.19%, P<0.001). In patients with low EF, postoperative NYHA and CCS angina classes improved compared with preoperative levels (1.50 ± 0.61 vs. 3.31 ± 0.56; P<0.001 and 1.38 ± 0.52 vs. 3.11 ± 0.55; P<0.001, respectively). Conclusion: Patients with low EF have a higher risk of morbidity and mortality; however, clinical and echocardiographic parameters improve over time. Therefore, CABG remains a viable option in selected patients with low EF. Factors affecting 30-day mortality were related to the severity of the disease.

    Automatic Detection of 2D and 3D Lung Nodules in Chest Spiral CT Scans

    Automatic detection of lung nodules is an important problem in computer analysis of chest radiographs. In this paper, we propose a novel algorithm for isolating lung abnormalities (nodules) from spiral chest low-dose CT (LDCT) scans. The proposed algorithm consists of three main steps. The first step isolates the lung nodules, arteries, veins, bronchi, and bronchioles from the surrounding anatomical structures. The second step detects lung nodules using deformable 3D and 2D templates that describe the typical geometry and gray-level distribution within nodules of the same type; the detection combines normalized cross-correlation template matching with a genetic optimization algorithm. The final step eliminates false positive nodules (FPNs) using three features that robustly define true lung nodules. Experiments with 200 CT data sets show that the proposed approach provides results comparable to those of the experts.
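
    As described above, the detection step scores candidate template placements with normalized cross-correlation (NCC), and a genetic algorithm searches over template parameters using that score as its fitness. Below is a minimal Python sketch of the NCC score alone, with a hypothetical function name, assuming 2D grayscale NumPy arrays; the paper's deformable 3D/2D templates and genetic search are not reproduced here.

        import numpy as np

        def ncc_map(image, template):
            # Normalised cross-correlation of `template` at every valid position in
            # `image` (plain nested-loop version for clarity; a real system would
            # use an FFT-based implementation).
            th, tw = template.shape
            t = template - template.mean()
            t_norm = np.sqrt((t ** 2).sum())
            out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    patch = image[i:i + th, j:j + tw]
                    p = patch - patch.mean()
                    denom = np.sqrt((p ** 2).sum()) * t_norm
                    out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
            return out   # values in [-1, 1]; peaks mark candidate nodule locations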

    Antimicrobial resistance among migrants in Europe: a systematic review and meta-analysis

    BACKGROUND: Rates of antimicrobial resistance (AMR) are rising globally and there is concern that increased migration is contributing to the burden of antibiotic resistance in Europe. However, the effect of migration on the burden of AMR in Europe has not yet been comprehensively examined. Therefore, we did a systematic review and meta-analysis to identify and synthesise data for AMR carriage or infection in migrants to Europe to examine differences in patterns of AMR across migrant groups and in different settings. METHODS: For this systematic review and meta-analysis, we searched MEDLINE, Embase, PubMed, and Scopus with no language restrictions from Jan 1, 2000, to Jan 18, 2017, for primary data from observational studies reporting antibacterial resistance in common bacterial pathogens among migrants to 21 European Union-15 and European Economic Area countries. To be eligible for inclusion, studies had to report data on carriage or infection with laboratory-confirmed antibiotic-resistant organisms in migrant populations. We extracted data from eligible studies and assessed quality using piloted, standardised forms. We did not examine drug resistance in tuberculosis and excluded articles solely reporting on this parameter. We also excluded articles in which migrant status was determined by ethnicity, country of birth of participants' parents, or was not defined, and articles in which data were not disaggregated by migrant status. Outcomes were carriage of or infection with antibiotic-resistant organisms. We used random-effects models to calculate the pooled prevalence of each outcome. The study protocol is registered with PROSPERO, number CRD42016043681. FINDINGS: We identified 2274 articles, of which 23 observational studies reporting on antibiotic resistance in 2319 migrants were included. The pooled prevalence of any AMR carriage or AMR infection in migrants was 25·4% (95% CI 19·1-31·8; I2 =98%), including meticillin-resistant Staphylococcus aureus (7·8%, 4·8-10·7; I2 =92%) and antibiotic-resistant Gram-negative bacteria (27·2%, 17·6-36·8; I2 =94%). The pooled prevalence of any AMR carriage or infection was higher in refugees and asylum seekers (33·0%, 18·3-47·6; I2 =98%) than in other migrant groups (6·6%, 1·8-11·3; I2 =92%). The pooled prevalence of antibiotic-resistant organisms was slightly higher in high-migrant community settings (33·1%, 11·1-55·1; I2 =96%) than in migrants in hospitals (24·3%, 16·1-32·6; I2 =98%). We did not find evidence of high rates of transmission of AMR from migrant to host populations. INTERPRETATION: Migrants are exposed to conditions favouring the emergence of drug resistance during transit and in host countries in Europe. Increased antibiotic resistance among refugees and asylum seekers and in high-migrant community settings (such as refugee camps and detention facilities) highlights the need for improved living conditions, access to health care, and initiatives to facilitate detection of and appropriate high-quality treatment for antibiotic-resistant infections during transit and in host countries. Protocols for the prevention and control of infection and for antibiotic surveillance need to be integrated in all aspects of health care, which should be accessible for all migrant groups, and should target determinants of AMR before, during, and after migration. 
FUNDING: UK National Institute for Health Research Imperial Biomedical Research Centre, Imperial College Healthcare Charity, the Wellcome Trust, and UK National Institute for Health Research Health Protection Research Unit in Healthcare-associated Infections and Antimicrobial Resistance at Imperial College London.
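
    The review reports pooled prevalences from random-effects models together with I² heterogeneity. As a rough illustration only, the Python sketch below implements DerSimonian-Laird pooling on the raw proportion scale; the published analysis may use a different transformation or estimator, and the inputs assumed here are per-study event counts and sample sizes as NumPy arrays.

        import numpy as np

        def pooled_prevalence(events, totals):
            # DerSimonian-Laird random-effects pooling of study prevalences, done on
            # the raw proportion scale for brevity (published meta-analyses often pool
            # logit- or arcsine-transformed proportions); assumes 0 < events < totals
            # for every study so the within-study variances are positive.
            p = events / totals
            v = p * (1 - p) / totals                  # within-study variances
            w = 1.0 / v                               # fixed-effect weights
            p_fe = np.sum(w * p) / np.sum(w)
            q = np.sum(w * (p - p_fe) ** 2)           # Cochran's Q
            df = len(p) - 1
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - df) / c)             # between-study variance
            w_re = 1.0 / (v + tau2)                   # random-effects weights
            p_re = np.sum(w_re * p) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0   # heterogeneity (%)
            return p_re, (p_re - 1.96 * se, p_re + 1.96 * se), i2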

    Burnout among surgeons before and during the SARS-CoV-2 pandemic: an international survey

    Background: The SARS-CoV-2 pandemic has had many significant impacts within the surgical realm, and surgeons have been obligated to reconsider almost every aspect of daily clinical practice. Methods: This is a cross-sectional study reported in compliance with the CHERRIES guidelines and conducted through an online platform from June 14th to July 15th, 2020. The primary outcome was the burden of burnout during the pandemic, indicated by the validated Shirom-Melamed Burnout Measure. Results: Nine hundred fifty-four surgeons completed the survey. The median length of practice was 10 years; 78.2% of respondents were male, with a median age of 37 years; 39.5% were consultants, 68.9% were general surgeons, and 55.7% were affiliated with an academic institution. Overall, there was a significant increase in the mean burnout score during the pandemic; longer years of practice and older age were significantly associated with less burnout. There were significant reductions in the median number of outpatient visits, operated cases, on-call hours, emergency visits, and research work, and 48.2% of respondents felt that the training resources were insufficient. The majority (81.3%) of respondents reported that their hospitals were involved in the management of COVID-19; 66.5% felt their roles had been minimized, 41% were asked to assist in non-surgical medical practices, and 37.6% were directly included in COVID-19 management. Conclusions: There was significant burnout among trainees. Almost all aspects of clinical and research activities were affected, with a significant reduction in the volume of research, outpatient clinic visits, surgical procedures, on-call hours, and emergency cases hindering training. Trial registration: The study was registered on clinicaltrials.gov (NCT04433286) on 16/06/2020.

    Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study

    Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world. Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231. Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282 patients) HDI (p < 0·001). The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001). Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication.
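
    The findings above come from Bayesian multilevel logistic regression. A minimal Python sketch of that model class using PyMC is shown below, with synthetic stand-in data (patient-level SSI outcomes, an HDI-group covariate, and hospital-level random intercepts); it illustrates the modelling approach only, is not the GlobalSurg analysis code, and all variable names are hypothetical.

        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(0)
        n_patients, n_hospitals = 500, 40
        hospital = rng.integers(0, n_hospitals, n_patients)   # cluster index per patient
        hdi_group = rng.integers(0, 3, n_patients)            # 0=high, 1=middle, 2=low HDI
        ssi = rng.integers(0, 2, n_patients)                  # 30-day SSI outcome (stand-in)

        with pm.Model() as model:
            beta = pm.Normal("beta", 0.0, 1.0, shape=3)           # fixed effect per HDI group
            sigma_h = pm.HalfNormal("sigma_h", 1.0)               # spread of hospital effects
            u = pm.Normal("u", 0.0, sigma_h, shape=n_hospitals)   # hospital random intercepts
            logit_p = beta[hdi_group] + u[hospital]
            pm.Bernoulli("obs", logit_p=logit_p, observed=ssi)
            idata = pm.sample(1000, tune=1000, target_accept=0.9)

        # The adjusted odds ratio for low- vs high-HDI patients corresponds to
        # exp(beta[2] - beta[0]) in this parameterisation.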

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures. Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain, measured on a numerical analogue scale from 0 to 100%, and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge. Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not. Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.

    Measuring performance on the Healthcare Access and Quality Index for 195 countries and territories and selected subnational locations: A systematic analysis from the Global Burden of Disease Study 2016

    Background A key component of achieving universal health coverage is ensuring that all populations have access to quality health care. Examining where gains have occurred or progress has faltered across and within countries is crucial to guiding decisions and strategies for future improvement. We used the Global Burden of Diseases, Injuries, and Risk Factors Study 2016 (GBD 2016) to assess personal health-care access and quality with the Healthcare Access and Quality (HAQ) Index for 195 countries and territories, as well as subnational locations in seven countries, from 1990 to 2016. Methods Drawing from established methods and updated estimates from GBD 2016, we used 32 causes from which death should not occur in the presence of effective care to approximate personal health-care access and quality by location and over time. To better isolate potential effects of personal health-care access and quality from underlying risk factor patterns, we risk-standardised cause-specific deaths due to non-cancers by location-year, replacing the local joint exposure of environmental and behavioural risks with the global level of exposure. Supported by the expansion of cancer registry data in GBD 2016, we used mortality-to-incidence ratios for cancers instead of risk-standardised death rates to provide a stronger signal of the effects of personal health care and access on cancer survival. We transformed each cause to a scale of 0–100, with 0 as the first percentile (worst) observed between 1990 and 2016, and 100 as the 99th percentile (best); we set these thresholds at the country level, and then applied them to subnational locations. We applied a principal components analysis to construct the HAQ Index using all scaled cause values, providing an overall score of 0–100 of personal health-care access and quality by location over time. We then compared HAQ Index levels and trends by quintiles on the Socio-demographic Index (SDI), a summary measure of overall development. As derived from the broader GBD study and other data sources, we examined relationships between national HAQ Index scores and potential correlates of performance, such as total health spending per capita. Findings In 2016, HAQ Index performance spanned from a high of 97·1 (95% UI 95·8–98·1) in Iceland, followed by 96·6 (94·9–97·9) in Norway and 96·1 (94·5–97·3) in the Netherlands, to values as low as 18·6 (13·1–24·4) in the Central African Republic, 19·0 (14·3–23·7) in Somalia, and 23·4 (20·2–26·8) in Guinea-Bissau. The pace of progress achieved between 1990 and 2016 varied, with markedly faster improvements occurring between 2000 and 2016 for many countries in sub-Saharan Africa and southeast Asia, whereas several countries in Latin America and elsewhere saw progress stagnate after experiencing considerable advances in the HAQ Index between 1990 and 2000. Striking subnational disparities emerged in personal health-care access and quality, with China and India having particularly large gaps between locations with the highest and lowest scores in 2016. In China, performance ranged from 91·5 (89·1–93·6) in Beijing to 48·0 (43·4–53·2) in Tibet (a 43·5-point difference), while India saw a 30·8-point disparity, from 64·8 (59·6–68·8) in Goa to 34·0 (30·3–38·1) in Assam. Japan recorded the smallest range in subnational HAQ performance in 2016 (a 4·8-point difference), whereas differences between subnational locations with the highest and lowest HAQ Index values were more than two times as high for the USA and three times as high for England. 
State-level gaps in the HAQ Index in Mexico somewhat narrowed from 1990 to 2016 (from a 20·9-point to 17·0-point difference), whereas in Brazil, disparities slightly increased across states during this time (a 17·2-point to 20·4-point difference). Performance on the HAQ Index showed strong linkages to overall development, with high and high-middle SDI countries generally having higher scores and faster gains for non-communicable diseases. Nonetheless, countries across the development spectrum saw substantial gains in some key health service areas from 2000 to 2016, most notably vaccine-preventable diseases. Overall, national performance on the HAQ Index was positively associated with higher levels of total health spending per capita, as well as health systems inputs, but these relationships were quite heterogeneous, particularly among low-to-middle SDI countries. Interpretation GBD 2016 provides a more detailed understanding of past success and current challenges in improving personal health-care access and quality worldwide. Despite substantial gains since 2000, many low-SDI and middle-SDI countries face considerable challenges unless heightened policy action and investments focus on advancing access to and quality of health care across key health services, especially non-communicable diseases. Stagnating or minimal improvements experienced by several low-middle to high-middle SDI countries could reflect the complexities of re-orienting both primary and secondary health-care services beyond the more limited foci of the Millennium Development Goals. Alongside initiatives to strengthen public health programmes, the pursuit of universal health coverage hinges upon improving both access and quality worldwide, and thus requires adopting a more comprehensive view—and subsequent provision—of quality health care for all populations.
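
    The HAQ Index construction described in the Methods (scaling each cause to 0-100 between the worst and best values observed, then combining the scaled causes with a principal components analysis) can be illustrated with a toy Python sketch. This is not the GBD code: the real index also involves risk-standardisation, mortality-to-incidence ratios for cancers, and country-level thresholds that are omitted here, and the input matrix is a hypothetical locations-by-causes array.

        import numpy as np
        from sklearn.decomposition import PCA

        def haq_style_index(rates):
            # Toy composite index: `rates` is a (locations x causes) array of
            # cause-specific death rates where lower values indicate better care.
            # Each cause is scaled to 0-100 between its 1st percentile (best
            # observed, lowest rate) and 99th percentile (worst observed), then PCA
            # combines the scaled causes into a single 0-100 score per location.
            best = np.percentile(rates, 1, axis=0)
            worst = np.percentile(rates, 99, axis=0)
            scaled = np.clip(100 * (worst - rates) / (worst - best), 0, 100)
            pc1 = PCA(n_components=1).fit_transform(scaled).ravel()
            if np.corrcoef(pc1, scaled.mean(axis=1))[0, 1] < 0:
                pc1 = -pc1   # PCA sign is arbitrary; orient so higher = better
            return 100 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())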