
    Body weight and weight loss: are health messages reaching their target?

    Objective: To investigate lay people's knowledge of the health risks of overweight, accuracy of self-perception of body weight, and perceived benefits of weight loss. Method: A nine-item questionnaire was administered in a cross-sectional survey of adults in metropolitan shopping centres; height and weight were measured. Results: Two hundred and nine (57% female) adults completed the survey. Thirty-eight percent had a healthy BMI (18.5-24.9), 38% were overweight (BMI 25-29.9) and a further 22% were obese (BMI >30). However, only 46% perceived themselves as 'overweight', 50% considered themselves 'just about right' and 4% considered themselves 'underweight'. Of those with a BMI of 25 or greater, 28% considered their weight 'just about right'. Over 80% thought 'being overweight' was 'likely' or 'very likely' to be a risk factor for cardiovascular disease, hypertension, diabetes and stroke; however, 20% of overweight or obese individuals did not think their health would improve if they lost weight. Conclusion: A significant proportion of overweight or obese individuals do not accurately perceive their body weight and do not recognise the health advantages of weight loss, despite recognising excess body weight as a risk factor for chronic diseases. Implications: Increasing individuals' awareness of their own BMI and promoting the benefits of modest weight loss may be two underutilised strategies for population-level weight control.
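    As an aside, the BMI cut-offs quoted in this abstract (healthy 18.5-24.9, overweight 25-29.9, obese 30 or above) amount to a simple categorisation. The Python sketch below is for illustration only and is not part of the study; the function name and example values are hypothetical.

    ```python
    # Illustrative only: classify BMI using the cut-offs quoted in the abstract.
    def bmi_category(weight_kg: float, height_m: float) -> str:
        """Return a BMI category from measured weight and height."""
        bmi = weight_kg / height_m ** 2   # BMI = weight (kg) / height (m) squared
        if bmi < 18.5:
            return "underweight"
        if bmi < 25:
            return "healthy"
        if bmi < 30:
            return "overweight"
        return "obese"

    # Example (hypothetical values): 85 kg at 1.75 m gives a BMI of about 27.8, i.e. "overweight".
    print(bmi_category(85, 1.75))
    ```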

    Change Readiness Factors influencing employees’ readiness for change within an organisation: A systematic review

    Master's thesis in Business Administration (BE501), University of Agder, 2017. External and internal factors are constantly forcing organisations to change; in order to survive and change successfully, organisations must respond quickly. Readiness for change and the actions undertaken in implementing change are key constructs for the success of a change effort. Readiness for change is well known as a tool for decreasing resistance to change, but exactly which factors create this condition, and in what order the steps must occur, has been studied less extensively. The term readiness for change goes all the way back to Kurt Lewin’s (1951) three-step model, in which the first step, unfreezing, refers to the creation of change readiness. Armenakis, Harris & Mossholder (1993) later expanded this approach with their own model for readiness for change, called the ‘message’. The model is well known in the field of readiness for change and consists of five components: (a) discrepancy; (b) principal support; (c) self-efficacy; (d) appropriateness; and (e) personal valence. Change readiness, or readiness for change, can be defined as the extent to which the attitudes, beliefs, and intentions of an organisation’s members recognise the need for change, as well as the organisation’s capability to accomplish these changes (Armenakis et al., 1993). We chose to conduct a systematic review using a narrative synthesis approach. Our aim was to collect various studies and articles, both qualitative and quantitative, in order to extract evidence regarding the factors that have the greatest impact on readiness for change. We started by collecting 500 articles and, after several exclusion processes, ended up with 26 articles. These 26 articles were then analysed and systematised in various tables. Results show that the factors of ‘the message’ (especially self-efficacy), transformational leadership, development climate, participation, trust in management, organisational justice, and commitment had the greatest impact on change readiness, both directly and indirectly. These results were also supported by the literature on change readiness. Further, we constructed a model to show the most efficient way of achieving successful change readiness within an organisation.

    Generating Ag-specific human regulatory T-cells by TCR gene transfer for the treatment of rheumatoid arthritis.

    Rheumatoid arthritis (RA) is a systemic autoimmune disease that develops when the immune system loses tolerance to self, resulting in effector cells erroneously causing joint damage. Despite advances in biological therapy, there is currently no cure for RA. Regulatory T-cells (Tregs) can regulate a broad range of immune effector cells when specific antigens (Ag) activate them through their individual T-cell receptors (TCR). Using retroviral gene transfer, Tregs can be modified to express a chosen disease-related TCR, producing an Ag-specific Treg population that can target and suppress inflammatory arthritis in murine models. The aim of this study was to validate a reproducible method of generating Ag-specific Tregs from human peripheral blood. The results demonstrate successful TCR transduction of isolated human Tregs, achieved by completing the outlined objectives to optimise the transduction process and the isolation of Tregs from peripheral blood. By sorting on the CD4+CD25+CD127dim cell population, Tregs were isolated, and the purity of the population was demonstrated by the high percentage of FoxP3-expressing cells. The results further show that FoxP3 expression is maintained after the transduction process. It is now necessary to demonstrate the functional capacity of the transduced Tregs using a robust antigen-specific suppression assay. With the future goal of transitioning this approach to a human clinical setting, the method requires further validation with peripheral blood taken from RA patients. This work has implications for the advancement of adoptive Treg therapy to treat RA and other autoimmune diseases, and highlights the necessity of considering treatments in the context of an inflammatory environment; it will be important to understand the nature of any defects in the Treg subsets and how these might be corrected.

    Adolescents’ interactive electronic device use, sleep and mental health: a systematic review of prospective studies

    Optimal sleep, in terms of both duration and quality, is important for adolescent health. However, young people's sleeping habits have worsened over recent years. Access to and use of interactive electronic devices (e.g., smartphones, tablets, portable gaming devices) and social media have become deep-rooted elements of adolescents’ lives and are associated with poor sleep. Additionally, there is evidence of increases in poor mental health and well-being in adolescents, which are further linked to poor sleep. This review aimed to summarise the longitudinal and experimental evidence of the impact of device use on adolescents’ sleep and subsequent mental health. Nine electronic bibliographic databases were searched for this narrative systematic review in October 2022. Of 5779 identified unique records, 28 studies were selected for inclusion. A total of 26 studies examined the direct link between device use and sleep outcomes, and four reported the indirect link between device use and mental health, with sleep as a mediator. The methodological quality of the studies was generally poor. Results demonstrated that adverse patterns of device use (i.e., overuse, problematic use, telepressure, and cyber-victimisation) impacted sleep quality and duration; however, relationships with other types of device use were unclear. A small but consistent body of evidence showed that sleep mediates the relationship between device use and mental health and well-being in adolescents. Increasing our understanding of the complexities of device use, sleep, and mental health in adolescents is an important contribution to the development of future interventions and guidelines to prevent cyber-bullying, increase resilience to it, and ensure adequate sleep.
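    For illustration of the mediation analyses summarised above (device use, sleep as mediator, mental health as outcome), a minimal product-of-coefficients sketch in Python follows. It assumes the statsmodels package; the variable names and simulated data are hypothetical and are not taken from any included study.

    ```python
    # Hedged sketch of a simple mediation analysis (product of coefficients):
    # exposure = device use, mediator = sleep, outcome = mental health.
    # Data are simulated for illustration only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    device_use = rng.normal(size=n)                      # standardised device-use measure
    sleep = -0.4 * device_use + rng.normal(size=n)       # path a: more device use, less sleep
    mental_health = 0.5 * sleep + rng.normal(size=n)     # path b: better sleep, better mental health

    # Path a: regress the mediator (sleep) on the exposure (device use)
    a = sm.OLS(sleep, sm.add_constant(device_use)).fit().params[1]

    # Path b: regress the outcome on the mediator, adjusting for the exposure
    X = sm.add_constant(np.column_stack([sleep, device_use]))
    b = sm.OLS(mental_health, X).fit().params[1]

    print(f"indirect (mediated) effect a*b = {a * b:.3f}")
    ```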

    Cerebral microbleeds and intracranial haemorrhage risk in patients anticoagulated for atrial fibrillation after acute ischaemic stroke or transient ischaemic attack (CROMIS-2): a multicentre observational cohort study

    Background: Cerebral microbleeds are a potential neuroimaging biomarker of cerebral small vessel diseases that are prone to intracranial bleeding. We aimed to determine whether the presence of cerebral microbleeds can identify patients at high risk of symptomatic intracranial haemorrhage when anticoagulated for atrial fibrillation after recent ischaemic stroke or transient ischaemic attack. Methods: Our observational, multicentre, prospective inception cohort study recruited adults aged 18 years or older from 79 hospitals in the UK and one in the Netherlands with atrial fibrillation and recent acute ischaemic stroke or transient ischaemic attack, treated with a vitamin K antagonist or direct oral anticoagulant, and followed up for 24 months using general practitioner and patient postal questionnaires, telephone interviews, hospital visits, and National Health Service digital data on hospital admissions or death. We excluded patients if they could not undergo MRI, had a definite contraindication to anticoagulation, or had previously received therapeutic anticoagulation. The primary outcome was symptomatic intracranial haemorrhage occurring at any time before the final follow-up at 24 months. The log-rank test was used to compare rates of intracranial haemorrhage between those with and without cerebral microbleeds. We developed two prediction models using Cox regression: first, including all predictors associated with intracranial haemorrhage at the 20% level in univariable analysis; and second, including cerebral microbleed presence and HAS-BLED score. We then compared these with the HAS-BLED score alone. This study is registered with ClinicalTrials.gov, number NCT02513316. Findings: Between Aug 4, 2011, and July 31, 2015, we recruited 1490 participants, of whom 1447 (97%) had follow-up data available, over a mean follow-up of 850 days (SD 373; 3366 patient-years). The symptomatic intracranial haemorrhage rate in patients with cerebral microbleeds was 9·8 per 1000 patient-years (95% CI 4·0–20·3) compared with 2·6 per 1000 patient-years (95% CI 1·1–5·4) in those without cerebral microbleeds (adjusted hazard ratio 3·67, 95% CI 1·27–10·60). Compared with the HAS-BLED score alone (C-index 0·41, 95% CI 0·29–0·53), models including cerebral microbleeds and HAS-BLED (0·66, 0·53–0·80) and cerebral microbleeds, diabetes, anticoagulant type, and HAS-BLED (0·74, 0·60–0·88) predicted symptomatic intracranial haemorrhage significantly better (difference in C-index 0·25, 95% CI 0·07–0·43, p=0·0065; and 0·33, 0·14–0·51, p=0·00059, respectively). Interpretation: In patients with atrial fibrillation anticoagulated after recent ischaemic stroke or transient ischaemic attack, cerebral microbleed presence is independently associated with symptomatic intracranial haemorrhage risk and could be used to inform anticoagulation decisions. Large-scale collaborative observational cohort analyses are needed to refine and validate intracranial haemorrhage risk scores incorporating cerebral microbleeds to identify patients at risk of net harm from oral anticoagulation. Funding: The Stroke Association and the British Heart Foundation.
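    To make the modelling approach described in the Methods concrete, here is a minimal sketch of a Cox model containing cerebral microbleed presence and HAS-BLED score, with the concordance index used to compare discrimination between candidate models. This is not the CROMIS-2 analysis code; it assumes the lifelines package, and the data frame and column names are hypothetical toy data.

    ```python
    # Hedged sketch: Cox regression with microbleed presence and HAS-BLED score,
    # reporting hazard ratios and the concordance (C-) index. Toy data only.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "time":       [850, 400, 730, 365, 900, 200, 600, 500],   # follow-up in days
        "ich_event":  [0,   1,   0,   1,   0,   1,   0,   1],     # symptomatic ICH indicator
        "microbleed": [0,   1,   1,   0,   1,   0,   0,   1],     # cerebral microbleed presence
        "has_bled":   [2,   3,   1,   4,   2,   1,   3,   2],     # HAS-BLED score
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="ich_event")
    print(cph.summary[["exp(coef)", "p"]])     # hazard ratios for each predictor
    print("C-index:", cph.concordance_index_)  # discrimination; compared across candidate models
    ```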

    Symptom-based stratification of patients with primary Sjögren's syndrome: multi-dimensional characterisation of international observational cohorts and reanalyses of randomised clinical trials

    Background: Heterogeneity is a major obstacle to developing effective treatments for patients with primary Sjögren's syndrome. We aimed to develop a robust method for stratification, exploiting heterogeneity in patient-reported symptoms, and to relate these differences to pathobiology and therapeutic response. Methods: We did hierarchical cluster analysis using five common symptoms associated with primary Sjögren's syndrome (pain, fatigue, dryness, anxiety, and depression), followed by multinomial logistic regression to identify subgroups in the UK Primary Sjögren's Syndrome Registry (UKPSSR). We assessed clinical and biological differences between these subgroups, including transcriptional differences in peripheral blood. Patients from two independent validation cohorts in Norway and France were used to confirm patient stratification. Data from two phase 3 clinical trials were similarly stratified to assess the differences between subgroups in treatment response to hydroxychloroquine and rituximab. Findings: In the UKPSSR cohort (n=608), we identified four subgroups: low symptom burden (LSB), high symptom burden (HSB), dryness dominant with fatigue (DDF), and pain dominant with fatigue (PDF). Significant differences in peripheral blood lymphocyte counts, anti-SSA and anti-SSB antibody positivity, as well as serum IgG, κ-free light chain, β2-microglobulin, and CXCL13 concentrations were observed between these subgroups, along with differentially expressed transcriptomic modules in peripheral blood. Similar findings were observed in the independent validation cohorts (n=396). Reanalysis of trial data stratifying patients into these subgroups suggested a treatment effect with hydroxychloroquine in the HSB subgroup and with rituximab in the DDF subgroup compared with placebo. Interpretation: Stratification of patients with primary Sjögren's syndrome on the basis of patient-reported symptoms revealed distinct pathobiological endotypes with distinct responses to immunomodulatory treatments. Our data have important implications for clinical management, trial design, and therapeutic development. Similar stratification approaches might be useful for patients with other chronic immune-mediated diseases. Funding: UK Medical Research Council, British Sjögren's Syndrome Association, French Ministry of Health, Arthritis Research UK, Foundation for Research in Rheumatology.
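    The stratification pipeline described in the Methods (hierarchical clustering on five symptom scores, then multinomial logistic regression) can be sketched as follows. This is an illustration only, assuming SciPy and scikit-learn; the simulated symptom scores are hypothetical and are not UKPSSR data.

    ```python
    # Hedged sketch: Ward hierarchical clustering on five symptom scores, cut into
    # four subgroups, followed by a multinomial logistic regression on the labels.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # columns: pain, fatigue, dryness, anxiety, depression (e.g. 0-10 scores); simulated
    symptoms = rng.uniform(0, 10, size=(200, 5))

    # Hierarchical (Ward) clustering, cut into four clusters (cf. LSB, HSB, DDF, PDF)
    subgroup = fcluster(linkage(symptoms, method="ward"), t=4, criterion="maxclust")

    # With a multi-class target, LogisticRegression's default lbfgs solver fits a multinomial model
    clf = LogisticRegression(max_iter=1000).fit(symptoms, subgroup)
    print("subgroups:", clf.classes_, "in-sample accuracy:", clf.score(symptoms, subgroup))
    ```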

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    Get PDF
    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a Bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% Bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
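    The primary analysis above is a Bayesian cumulative logistic model for the ordinal organ support–free days outcome. As a rough, non-Bayesian illustration of the cumulative logistic (proportional odds) form, the sketch below uses statsmodels' OrderedModel; the variable names and simulated data are hypothetical, and this is not the trial's analysis code.

    ```python
    # Hedged sketch: a frequentist cumulative (proportional odds) logistic model for an
    # ordinal outcome, as a stand-in for the Bayesian cumulative logistic model described
    # above. Simulated data; odds ratios > 1 would indicate improved outcomes.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(2)
    n = 300
    treatment = rng.integers(0, 2, size=n)        # 1 = RAS inhibitor, 0 = control (hypothetical)
    # ordinal outcome, e.g. organ support-free days binned into ordered categories
    osfd = pd.Series(pd.Categorical.from_codes(
        rng.integers(0, 3, size=n), categories=["low", "medium", "high"], ordered=True))

    res = OrderedModel(osfd, treatment.reshape(-1, 1), distr="logit").fit(method="bfgs", disp=False)
    odds_ratio = float(np.exp(np.asarray(res.params)[0]))  # first parameter is the treatment effect
    print("odds ratio for treatment:", round(odds_ratio, 3))
    ```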