
    Associations between Physical Activity and Obesity Defined by Waist-To-Height Ratio and Body Mass Index in the Korean Population

    Objective This study investigated the associations between physical activity and the prevalence of obesity determined by waist-to-height ratio (WHtR) and body mass index (BMI). Methods This is, to our knowledge, the first study on physical activity and obesity using a nationally representative sample of the South Korean population, drawn from the Korea National Health and Nutrition Examination Survey. We categorized individuals as either non-obese or obese defined by WHtR and BMI. Levels of moderate-to-vigorous physical activity were classified into ‘Inactive’, ‘Active’, and ‘Very active’ groups based on the World Health Organization physical activity guidelines. Multivariable logistic regression was used to examine the associations between physical activity and the prevalence of obesity. Results Physical activity was significantly associated with a lower prevalence of obesity using both WHtR and BMI. Compared to inactive men, odds ratios (ORs) (95% confidence intervals [CIs]) for obesity by WHtR ≥0.50 were 0.69 (0.53–0.89) in active men and 0.76 (0.63–0.91) in very active men (p for trend = 0.007). The ORs (95% CIs) for obesity by BMI ≥25 kg/m² were 0.78 (0.59–1.03) in active men and 0.82 (0.67–0.99) in very active men (p for trend = 0.060). The ORs (95% CIs) for obesity by BMI ≥30 kg/m² were 0.40 (0.15–0.98) in active men and 0.90 (0.52–1.56) in very active men (p for trend = 0.978). Compared to inactive women, the ORs (95% CIs) for obesity by WHtR ≥0.50 were 0.94 (0.75–1.18) in active women and 0.84 (0.71–0.998) in very active women (p for trend = 0.046). However, no significant associations were found between physical activity and obesity by BMI in women. Conclusions Associations between physical activity and obesity were more consistently significant when obesity was defined by WHtR than by BMI. However, intervention studies are warranted to investigate and compare causal associations between physical activity and different obesity measures in various populations.
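
    As a rough illustration of the analysis design, the sketch below classifies synthetic subjects by the same cut-offs (WHtR ≥ 0.50, BMI ≥ 25 kg/m²) and fits a multivariable logistic regression of obesity on activity level. All data and column names are invented, and the adjustment set here (age only) is far simpler than the study's.

```python
# A minimal sketch of the analysis described above, on synthetic data
# (hypothetical column names, not KNHANES variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "waist_cm":  rng.normal(85, 10, n),
    "height_cm": rng.normal(168, 8, n),
    "bmi":       rng.normal(24, 3.5, n),
    # 0 = inactive, 1 = active, 2 = very active (WHO MVPA categories)
    "activity":  rng.integers(0, 3, n),
    "age":       rng.integers(19, 80, n),
})

# Obesity definitions used in the study: WHtR >= 0.50, BMI >= 25 kg/m^2
df["obese_whtr"] = (df["waist_cm"] / df["height_cm"] >= 0.50).astype(int)
df["obese_bmi"] = (df["bmi"] >= 25).astype(int)

# Multivariable logistic regression; C(activity) yields ORs for the
# 'Active' and 'Very active' groups relative to the 'Inactive' reference.
model = smf.logit("obese_whtr ~ C(activity) + age", data=df).fit(disp=0)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```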

    On-the-fly pipeline parallelism

    Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three of the 13 benchmarks in PARSEC, a popular software benchmark suite designed for shared-memory multiprocessors, can be expressed as pipeline parallelism. Whereas most concurrency platforms that support pipeline parallelism use a "construct-and-run" approach, this paper investigates "on-the-fly" pipeline parallelism, where the structure of the pipeline emerges as the program executes rather than being specified a priori. On-the-fly pipeline parallelism allows the number of stages to vary from iteration to iteration and dependencies to be data dependent. We propose simple linguistics for specifying on-the-fly pipeline parallelism and describe a provably efficient scheduling algorithm, the Piper algorithm, which integrates pipeline parallelism into a work-stealing scheduler, allowing pipeline and fork-join parallelism to be arbitrarily nested. The Piper algorithm automatically throttles the parallelism, precluding "runaway" pipelines. Given a pipeline computation with T_1 work and T_∞ span (critical-path length), Piper executes the computation on P processors in T_P ≤ T_1/P + O(T_∞ + lg P) expected time. Piper also limits stack space, ensuring that it does not grow unboundedly with running time. We have incorporated on-the-fly pipeline parallelism into a Cilk-based work-stealing runtime system. Our prototype Cilk-P implementation exploits optimizations such as lazy enabling and dependency folding. We have ported the three PARSEC benchmarks that exhibit pipeline parallelism to run on Cilk-P. One of these, x264, cannot readily be executed by systems that support only construct-and-run pipeline parallelism. Benchmark results indicate that Cilk-P has low serial overhead and good scalability. On x264, for example, Cilk-P exhibits a speedup of 13.87 over its respective serial counterpart when running on 16 processors. National Science Foundation (U.S.) (Grant CNS-1017058); National Science Foundation (U.S.) (Grant CCF-1162148); National Science Foundation (U.S.) Graduate Research Fellowship
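
    The toy sketch below is not the Piper algorithm (which lives inside a work-stealing scheduler); it only illustrates, with ordinary Python threads, the pipeline dependency structure the abstract describes: stage j of iteration i waits for stage j of iteration i−1, iterations overlap, and the number of stages may vary per iteration, echoing the "on-the-fly" structure.

```python
# Toy pipeline-parallel dependency structure (illustrative, not Piper):
# iteration i, stage j may run once stage j of iteration i-1 has finished,
# so iterations overlap while stages within an iteration stay ordered.
import threading

NUM_ITERATIONS = 4

def run_iteration(i, done_events, stages):
    for j, stage in enumerate(stages):
        if i > 0 and j < len(done_events[i - 1]):
            done_events[i - 1][j].wait()   # cross-iteration dependency
        stage(i)                           # do this stage's work
        done_events[i][j].set()            # signal the next iteration

def make_stages(i):
    # stage count may vary per iteration (data-dependent pipelines)
    count = 2 + (i % 2)
    return [lambda i, j=j: print(f"iter {i} stage {j}") for j in range(count)]

all_stages = [make_stages(i) for i in range(NUM_ITERATIONS)]
done_events = [[threading.Event() for _ in s] for s in all_stages]
threads = [threading.Thread(target=run_iteration,
                            args=(i, done_events, all_stages[i]))
           for i in range(NUM_ITERATIONS)]
for t in threads: t.start()
for t in threads: t.join()
```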

    Safe Open-Nested Transactions Through Ownership

    Researchers in transactional memory (TM) have proposed open nesting as a methodology for increasing the concurrency of a program. The idea is to ignore certain "low-level" memory operations of an open-nested transaction when detecting conflicts for its parent transaction, and instead perform abstract concurrency control for the "high-level" operation that the nested transaction represents. To support this methodology, TM systems use an open-nested commit mechanism that commits all changes performed by an open-nested transaction directly to memory, thereby avoiding low-level conflicts. Unfortunately, because the TM runtime is unaware of the different levels of memory, an unconstrained use of open-nested commits can lead to anomalous program behavior. In this paper, we describe a framework of ownership-aware transactional memory which incorporates the notion of modules into the TM system and requires that transactions and data be associated with specific transactional modules, or Xmodules. We propose a new ownership-aware commit mechanism, a hybrid between an open-nested and a closed-nested commit, which commits a piece of data differently depending on whether the current Xmodule owns the data or not. Moreover, we give a set of precise constraints on interactions and sharing of data among the Xmodules based on familiar notions of abstraction. We prove that ownership-aware TM has clean memory-level semantics and can guarantee serializability by modules, which is an adaptation of multilevel serializability from databases to TM. In addition, we describe how a programmer can specify Xmodules and ownership in a Java-like language. Our type system can enforce most of the constraints required by ownership-aware TM statically, and can enforce the remaining constraints dynamically. Finally, we prove that if transactions in the process of aborting obey restrictions on their memory footprint, the OAT model is free from semantic deadlock.
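
    A schematic, heavily simplified, and hypothetical model of the hybrid commit described above: at commit time, locations owned by the committing Xmodule go straight to memory, as in an open-nested commit, while locations owned by other Xmodules are passed up to the parent transaction's write set, as in a closed-nested commit. This omits conflict detection, aborts, and the paper's constraints on Xmodule interactions.

```python
# Sketch (not the paper's OAT model) of an ownership-aware commit.
memory = {}

class Transaction:
    def __init__(self, xmodule, parent=None):
        self.xmodule = xmodule   # the Xmodule this transaction belongs to
        self.parent = parent
        self.write_set = {}      # location -> value

    def write(self, location, value):
        self.write_set[location] = value

    def commit(self, owner_of):
        for location, value in self.write_set.items():
            if owner_of(location) == self.xmodule:
                memory[location] = value                 # open-nested style
            elif self.parent is not None:
                self.parent.write_set[location] = value  # closed-nested style
            else:
                memory[location] = value                 # top level commits all

# usage sketch with two hypothetical Xmodules
owner = {"x": "ModA", "y": "ModB"}
outer = Transaction("ModB")
inner = Transaction("ModA", parent=outer)
inner.write("x", 1); inner.write("y", 2)
inner.commit(lambda loc: owner[loc])  # "x" hits memory, "y" goes to parent
outer.commit(lambda loc: owner[loc])
print(memory)  # {'x': 1, 'y': 2}
```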

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables and Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6 to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was found to be most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
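
    A minimal sketch of the statistical approach described, on synthetic data: Kendall's tau between the ordinal difficulty grade and a dichotomous outcome, plus an AUROC for predictive accuracy. Variable names and the outcome model below are illustrative, not the study's.

```python
# Correlating an ordinal grade with a dichotomous outcome (synthetic data).
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
grade = rng.integers(1, 6, 500)           # Nassar grade 1-5
# outcome probability rises with grade (e.g. conversion to open surgery)
converted = rng.random(500) < 0.02 * grade ** 2

tau, p_value = kendalltau(grade, converted)   # ordinal association
auroc = roc_auc_score(converted, grade)       # predictive accuracy
print(f"tau={tau:.3f}  p={p_value:.2g}  AUROC={auroc:.3f}")
```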

    The TOMMY trial: a comparison of TOMosynthesis with digital MammographY in the UK NHS Breast Screening Programme--a multicentre retrospective reading study comparing the diagnostic performance of digital breast tomosynthesis and digital mammography with digital mammography alone.

    BACKGROUND: Digital breast tomosynthesis (DBT) is a three-dimensional mammography technique with the potential to improve accuracy by improving differentiation between malignant and non-malignant lesions. OBJECTIVES: The objectives of the study were to compare the diagnostic accuracy of DBT in conjunction with two-dimensional (2D) mammography or synthetic 2D mammography, against standard 2D mammography, and to determine if DBT improves the accuracy of detection of different types of lesions. STUDY POPULATION: Women (aged 47-73 years) recalled for further assessment after routine breast screening and women (aged 40-49 years) with moderate/high risk of developing breast cancer attending annual mammography screening were recruited after giving written informed consent. INTERVENTION: All participants underwent two-view 2D mammography of both breasts and two-view DBT imaging. Image-processing software generated a synthetic 2D mammogram from the DBT data sets. RETROSPECTIVE READING STUDY: In an independent blinded retrospective study, readers reviewed (1) 2D or (2) 2D + DBT or (3) synthetic 2D + DBT images for each case without access to original screening mammograms or prior examinations. Sensitivities and specificities were calculated for each reading arm and by subgroup analyses. RESULTS: Data were available for 7060 subjects comprising 6020 (1158 cancers) assessment cases and 1040 (two cancers) family history screening cases. Overall sensitivity was 87% [95% confidence interval (CI) 85% to 89%] for 2D only, 89% (95% CI 87% to 91%) for 2D + DBT and 88% (95% CI 86% to 90%) for synthetic 2D + DBT. The difference in sensitivity between 2D and 2D + DBT was of borderline significance (p = 0.07) and for synthetic 2D + DBT there was no significant difference (p = 0.6). Specificity was 58% (95% CI 56% to 60%) for 2D, 69% (95% CI 67% to 71%) for 2D + DBT and 71% (95% CI 69% to 73%) for synthetic 2D + DBT. Specificity was significantly higher in both DBT reading arms for all subgroups of age, density and dominant radiological feature (p < 0.001 all cases). In all reading arms, specificity tended to be lower for microcalcifications and higher for distortion/asymmetry. Comparing 2D + DBT to 2D alone, sensitivity was significantly higher: 93% versus 86% (p < 0.001) for invasive tumours of size 11-20 mm. Similarly, for breast density 50% or more, sensitivities were 93% versus 86% (p = 0.03); for grade 2 invasive tumours, sensitivities were 91% versus 87% (p = 0.01); where the dominant radiological feature was a mass, sensitivities were 92% versus 89% (p = 0.04). For synthetic 2D + DBT, there was significantly (p = 0.006) higher sensitivity than 2D alone in invasive cancers of size 11-20 mm, with a sensitivity of 91%. CONCLUSIONS: The specificity of DBT with 2D was better than that of 2D alone, but there was only marginal improvement in sensitivity. The performance of synthetic 2D appeared to be comparable to standard 2D. If these results were observed with screening cases, DBT with 2D mammography could benefit the screening programme by reducing the number of women recalled unnecessarily, especially if a synthetic 2D mammogram were used to minimise radiation exposure. Further research is required into the feasibility of implementing DBT in a screening setting, prognostic modelling on outcomes and mortality, and comparison of 2D and synthetic 2D for different lesion types. STUDY REGISTRATION: Current Controlled Trials ISRCTN73467396.
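
    For readers wanting to reproduce the style of accuracy estimate reported above, here is a minimal sketch computing sensitivity and specificity with 95% Wilson confidence intervals. The counts are invented, not TOMMY data, and the Wilson method is an assumption; the trial's exact CI method is not stated in the abstract.

```python
# Sensitivity and specificity with 95% Wilson CIs, from invented counts.
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

tp, fn = 1030, 128   # cancers correctly / incorrectly classified (illustrative)
tn, fp = 3420, 2482  # non-cancers correctly / incorrectly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sensitivity:.1%} (95% CI {lo:.1%} to {hi:.1%})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {specificity:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```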
FUNDING: This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 19, No. 4. See the HTA programme website for further project information. Gilbert FJ, Tucker L, Gillan MGC, Willsher P, Cooke J, Duncan KA, et al. The TOMMY trial: a comparison of TOMosynthesis with digital MammographY in the UK NHS Breast Screening Programme – a multicentre retrospective reading study comparing the diagnostic performance of digital breast tomosynthesis and digital mammography with digital mammography alone. Health Technol Assess 2015;19(4).

    Population‐based cohort study of outcomes following cholecystectomy for benign gallbladder diseases

    Background The aim was to describe the management of benign gallbladder disease and identify characteristics associated with all‐cause 30‐day readmissions and complications in a prospective population‐based cohort. Methods Data were collected on consecutive patients undergoing cholecystectomy in acute UK and Irish hospitals between 1 March and 1 May 2014. Potential explanatory variables influencing all‐cause 30‐day readmissions and complications were analysed by means of multilevel, multivariable logistic regression modelling using a two‐level hierarchical structure with patients (level 1) nested within hospitals (level 2). Results Data were collected on 8909 patients undergoing cholecystectomy from 167 hospitals. Some 1451 cholecystectomies (16·3 per cent) were performed as an emergency, 4165 (46·8 per cent) as elective operations, and 3293 patients (37·0 per cent) had had at least one previous emergency admission but had surgery on a delayed basis. The readmission and complication rates at 30 days were 7·1 per cent (633 of 8909) and 10·8 per cent (962 of 8909) respectively. Both readmissions and complications were independently associated with increasing ASA fitness grade, duration of surgery, and increasing numbers of emergency admissions with gallbladder disease before cholecystectomy. No identifiable hospital characteristics were linked to readmissions and complications. Conclusion Readmissions and complications following cholecystectomy are common and associated with patient and disease characteristics.
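
    A minimal sketch of the modelling idea, under stated simplifications: the study fits a two-level random-effects logistic model (patients nested within hospitals), whereas the stand-in below fits an ordinary logistic regression with hospital-clustered standard errors, a common simpler approximation. All column names and data are invented.

```python
# Clustered-SE logistic regression as a stand-in for a two-level
# random-effects model (patients within hospitals); synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "readmitted_30d": rng.integers(0, 2, n),
    "asa_grade":      rng.integers(1, 5, n),
    "op_minutes":     rng.normal(70, 25, n),
    "prior_emergency_admissions": rng.poisson(0.5, n),
    "hospital":       rng.integers(0, 167, n),   # level-2 grouping
})

model = smf.logit(
    "readmitted_30d ~ C(asa_grade) + op_minutes + prior_emergency_admissions",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hospital"].to_numpy()}, disp=0)
print(np.exp(model.params))   # odds ratios with hospital-clustered SEs
```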

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the probability of an operation lasting > 90 min increasing more than eightfold, from 5.1 to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may have the potential to enable organisations to better organise theatre lists and deliver greater efficiencies in care.
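
    The sketch below shows one common way such a tool is derived (the paper's actual point assignments are not reproduced here): scale and round logistic regression coefficients into an integer score, then check discrimination on a held-out cohort with an AUROC. All data below are synthetic, so the resulting AUROC is near chance; with real predictors the score would be informative.

```python
# Deriving an integer risk score from logistic coefficients, then
# validating it with an AUROC on a second cohort (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X_train = rng.normal(size=(7227, 6))   # six predictors, e.g. ASA, age, BMI ...
y_train = rng.random(7227) < 0.177     # ~17.7% long (> 90 min) operations

model = LogisticRegression().fit(X_train, y_train)
# scale coefficients so the smallest is ~1 point, then round to integers
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)

X_valid = rng.normal(size=(2405, 6))   # external validation cohort
y_valid = rng.random(2405) < 0.177
risk_score = X_valid @ points          # integer score per patient
print("validation AUROC:", roc_auc_score(y_valid, risk_score))
```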

    Genetic mechanisms of critical illness in COVID-19.

    Host-mediated lung inflammation is present[1], and drives mortality[2], in the critical illness caused by coronavirus disease 2019 (COVID-19). Host genetic variants associated with critical illness may identify mechanistic targets for therapeutic development[3]. Here we report the results of the GenOMICC (Genetics Of Mortality In Critical Care) genome-wide association study in 2,244 critically ill patients with COVID-19 from 208 UK intensive care units. We have identified and replicated the following new genome-wide significant associations: on chromosome 12q24.13 (rs10735079, P = 1.65 × 10⁻⁸) in a gene cluster that encodes antiviral restriction enzyme activators (OAS1, OAS2 and OAS3); on chromosome 19p13.2 (rs74956615, P = 2.3 × 10⁻⁸) near the gene that encodes tyrosine kinase 2 (TYK2); on chromosome 19p13.3 (rs2109069, P = 3.98 × 10⁻¹²) within the gene that encodes dipeptidyl peptidase 9 (DPP9); and on chromosome 21q22.1 (rs2236757, P = 4.99 × 10⁻⁸) in the interferon receptor gene IFNAR2. We identified potential targets for repurposing of licensed medications: using Mendelian randomization, we found evidence that low expression of IFNAR2, or high expression of TYK2, is associated with life-threatening disease; and transcriptome-wide association in lung tissue revealed that high expression of the monocyte-macrophage chemotactic receptor CCR2 is associated with severe COVID-19. Our results identify robust genetic signals relating to key host antiviral defence mechanisms and mediators of inflammatory organ damage in COVID-19. Both mechanisms may be amenable to targeted treatment with existing drugs. However, large-scale randomized clinical trials will be essential before any change to clinical practice.
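
    As a toy illustration of the Mendelian randomization step mentioned above, the sketch below computes a Wald ratio estimate: the variant's effect on disease divided by its effect on gene expression, with a first-order standard error. The numbers are invented, not GenOMICC estimates.

```python
# Two-sample MR via the Wald ratio (invented effect sizes).
beta_variant_expression = -0.30  # variant effect on IFNAR2 expression (eQTL)
beta_variant_disease = 0.12      # variant effect on critical COVID-19 (log-odds)
se_disease = 0.03

# effect of one unit higher expression on disease log-odds
wald_ratio = beta_variant_disease / beta_variant_expression
# first-order (delta-method) standard error, ignoring eQTL uncertainty
se_wald = abs(se_disease / beta_variant_expression)
print(f"log-odds of critical illness per unit expression: "
      f"{wald_ratio:.2f} (SE {se_wald:.2f})")
# negative estimate: higher IFNAR2 expression, lower risk, matching the
# direction reported in the abstract
```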

    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care[1] or hospitalization[2–4] after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence implicating multiple genes, including reduced expression of a membrane flippase (ATP11A) and increased expression of a mucin (MUC1), in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
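
    At its core, the case-control comparison described above tests whether an allele is over-represented among critically ill cases relative to population controls. The sketch below does this for a single hypothetical variant with a chi-squared test and an odds ratio; all counts are invented.

```python
# Single-variant case-control association test (invented allele counts).
from scipy.stats import chi2_contingency

# allele counts:         effect allele, other allele
cases    = [2310, 12672]   # 7,491 cases    -> 14,982 alleles
controls = [13050, 83750]  # 48,400 controls -> 96,800 alleles

chi2, p, _, _ = chi2_contingency([cases, controls])
odds_ratio = (cases[0] * controls[1]) / (cases[1] * controls[0])
print(f"OR={odds_ratio:.2f}, p={p:.2e}  (genome-wide threshold p < 5e-8)")
```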