86 research outputs found

    Factors associated with public knowledge of and attitudes to dementia: A cross-sectional study

    Introduction: Dementia is a major public health concern, but one that continues to be stigmatised. We examine lay knowledge of dementia and attitudes to people with dementia as potential precursors of public anxiety, focusing on the social characteristics associated with (a) the formation of these attitudes, and (b) the perception of the need for restriction and control of people with dementia. Methods: Analysis of the 2014 Northern Ireland Life and Times survey, which included questions on knowledge of, attitudes to, and personal experience with dementia. We used (a) latent class analysis and (b) logistic regression to examine factors associated with respondent attitudes towards dementia. Results: Respondents (n = 1211) had relatively good general knowledge of dementia but limited knowledge of specific risk factors. Negative perceptions of dementia were mitigated somewhat by personal contact. A high proportion of respondents felt that high levels of control were appropriate for people diagnosed with dementia, even at early stages of the disease. Conclusion: Personal antipathy to dementia was highly prevalent, hampering efforts to widen social inclusion, despite ongoing public campaigns to raise awareness of developments in its prevention, treatment and consequent care pathways. Fresh thinking and more resources may be needed to challenge persistent common misapprehensions of the condition and the formation of entrenched stigma.

    Development and exploitation of a controlled vocabulary in support of climate modelling

    There are three key components for developing a metadata system: a container structure laying out the key semantic issues of interest and their relationships; an extensible controlled vocabulary providing possible content; and tools to create and manipulate that content. While metadata systems must allow users to enter their own information, the use of a controlled vocabulary both imposes consistency of definition and ensures comparability of the objects described. Here we describe the controlled vocabulary (CV) and metadata creation tool built by the METAFOR project for use in the context of describing the climate models, simulations and experiments of the fifth Coupled Model Intercomparison Project (CMIP5). The CV and resulting tool chain introduced here are designed for extensibility and reuse, and should find applicability in many more projects.
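The core idea of the abstract above — free-text entry constrained, where it matters, by a closed term list — can be sketched in a few lines. This is a minimal illustration only; the field names and terms below are invented for the example and are not METAFOR's actual CMIP5 schema.

```python
# Minimal sketch of a controlled-vocabulary check: user-supplied metadata
# is accepted only where each constrained field uses a term from the CV.
# Field names and terms are illustrative, not METAFOR's actual schema.
CONTROLLED_VOCAB = {
    "model_component": {"atmosphere", "ocean", "land_surface", "sea_ice"},
    "experiment": {"historical", "piControl", "rcp45", "rcp85"},
}

def validate(record, cv=CONTROLLED_VOCAB):
    """Return a list of (field, bad_value) pairs that violate the CV.
    Fields not covered by the CV are free text and always accepted."""
    errors = []
    for field, value in record.items():
        if field in cv and value not in cv[field]:
            errors.append((field, value))
    return errors

ok = {"model_component": "ocean", "experiment": "historical", "notes": "v2 run"}
bad = {"model_component": "Ocean"}  # case mismatch: not a CV term
assert validate(ok) == []
assert validate(bad) == [("model_component", "Ocean")]
```

Keeping the vocabulary as data rather than code is what makes such a system extensible: new terms extend the sets without touching the validation logic.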

    A comparison of taxon co-occurrence patterns for macro- and microorganisms

    We examine co-occurrence patterns of microorganisms to evaluate community assembly “rules.” We use methods previously applied to macroorganisms, both to evaluate their applicability to microorganisms and to allow comparison of co-occurrence patterns observed in microorganisms to those found in macroorganisms. We use a null model analysis of 124 incidence matrices from microbial communities, including bacteria, archaea, fungi, and algae, and we compare these results to previously published findings from a meta-analysis of almost 100 macroorganism data sets. We show that assemblages of microorganisms demonstrate nonrandom patterns of co-occurrence that are broadly similar to those found in assemblages of macroorganisms. These results suggest that some taxon co-occurrence patterns may be general characteristics of communities of organisms from all domains of life. We also find that co-occurrence in microbial communities does not vary among taxonomic groups or habitat types. However, we find that the degree of co-occurrence does vary among studies that use different methods to survey microbial communities. Finally, we discuss the potential effects of the undersampling of microbial communities on our results, as well as processes that may contribute to nonrandom patterns of co-occurrence in both macrobial and microbial communities, such as competition, habitat filtering, historical effects, and neutral processes.
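A null model analysis of an incidence matrix, as described above, typically compares an observed co-occurrence statistic against the same statistic on randomised matrices. The sketch below assumes one common choice of statistic (Stone and Roberts' C-score) and one common randomisation scheme (a "fixed-fixed" checkerboard swap that preserves row and column totals); the abstract does not specify which variants the study used, and the matrix is a toy example.

```python
import random

def c_score(matrix):
    """Mean number of 'checkerboard units' over all taxon pairs for a
    presence/absence matrix: rows = taxa, columns = sites."""
    n = len(matrix)
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(a and b for a, b in zip(matrix[i], matrix[j]))
            ri, rj = sum(matrix[i]), sum(matrix[j])
            total += (ri - shared) * (rj - shared)
            pairs += 1
    return total / pairs

def swap_randomise(matrix, n_swaps=1000, rng=random.Random(0)):
    """'Fixed-fixed' null model: repeatedly swap 2x2 checkerboard
    submatrices, which preserves every row and column total."""
    m = [row[:] for row in matrix]
    rows, cols = len(m), len(m[0])
    for _ in range(n_swaps):
        r1, r2 = rng.sample(range(rows), 2)
        c1, c2 = rng.sample(range(cols), 2)
        if m[r1][c1] == m[r2][c2] == 1 and m[r1][c2] == m[r2][c1] == 0:
            m[r1][c1] = m[r2][c2] = 0
            m[r1][c2] = m[r2][c1] = 1
    return m

# Tiny illustrative incidence matrix (4 taxa x 4 sites).
obs = [[1, 1, 0, 0],
       [0, 0, 1, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1]]
null_scores = [c_score(swap_randomise(obs)) for _ in range(200)]
# An observed C-score in the extreme tail of null_scores would indicate
# nonrandom (e.g. segregated) co-occurrence.
```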

    Comparison of automated interval measurements by widely used algorithms in digital electrocardiographs

    Background: Automated measurements of electrocardiographic (ECG) intervals by current-generation digital electrocardiographs are critical to computer-based ECG diagnostic statements, to serial comparison of ECGs, and to epidemiological studies of ECG findings in populations. A previous study demonstrated generally small but often significant systematic differences among 4 algorithms widely used for automated ECG in the United States and that measurement differences could be related to the degree of abnormality of the underlying tracing. Since that publication, some algorithms have been adjusted, whereas other large manufacturers of automated ECGs have asked to participate in an extension of this comparison. Methods: Seven widely used automated algorithms for computer-based interpretation participated in this blinded study of 800 digitized ECGs provided by the Cardiac Safety Research Consortium. All tracings were different from the study of 4 algorithms reported in 2014, and the selected population was heavily weighted toward groups with known effects on the QT interval: included were 200 normal subjects, 200 normal subjects receiving moxifloxacin as part of an active control arm of thorough QT studies, 200 subjects with genetically proved long QT syndrome type 1 (LQT1), and 200 subjects with genetically proved long QT syndrome Type 2 (LQT2). Results: For the entire population of 800 subjects, pairwise differences between algorithms for each mean interval value were clinically small, even where statistically significant, ranging from 0.2 to 3.6 milliseconds for the PR interval, 0.1 to 8.1 milliseconds for QRS duration, and 0.1 to 9.3 milliseconds for QT interval. The mean value of all paired differences among algorithms was higher in the long QT groups than in normals for both QRS duration and QT intervals. Differences in mean QRS duration ranged from 0.2 to 13.3 milliseconds in the LQT1 subjects and from 0.2 to 11.0 milliseconds in the LQT2 subjects. 
Differences in measured QT duration (not corrected for heart rate) ranged from 0.2 to 10.5 milliseconds in the LQT1 subjects and from 0.9 to 12.8 milliseconds in the LQT2 subjects. Conclusions: Among current-generation computer-based electrocardiographs, clinically small but statistically significant differences exist between ECG interval measurements by individual algorithms. Measurement differences between algorithms for QRS duration and for QT interval are larger in long QT subjects than in normal subjects. Studies comparing population norms should account for small systematic differences in interval measurements due to different algorithm methodologies, within-individual interval comparisons should use comparable methods, and further efforts to harmonize interval measurement methodologies are warranted.
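The pairwise comparison the abstract reports — for each pair of algorithms, the mean of the per-ECG paired differences in a measured interval — is straightforward to sketch. The algorithm names and QT values below are invented for illustration; they are not data from the study.

```python
from itertools import combinations

def pairwise_mean_differences(measurements):
    """For each pair of algorithms, return the mean paired difference
    in a measured interval (milliseconds). `measurements` maps
    algorithm name -> list of values, one per ECG, in a fixed order."""
    diffs = {}
    for a, b in combinations(sorted(measurements), 2):
        paired = [x - y for x, y in zip(measurements[a], measurements[b])]
        diffs[(a, b)] = sum(paired) / len(paired)
    return diffs

# Hypothetical QT measurements (ms) from three algorithms on four ECGs:
qt = {
    "algoA": [400.0, 412.0, 395.0, 420.0],
    "algoB": [402.0, 410.0, 396.0, 423.0],
    "algoC": [398.0, 409.0, 393.0, 419.0],
}
d = pairwise_mean_differences(qt)
```

Because the differences are paired within ECGs, a nonzero mean reflects a systematic offset between two algorithms rather than variation between tracings.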

    Health state utilities associated with attributes of treatments for hepatitis C

    BACKGROUND: Cost-utility analyses are frequently conducted to compare treatments for hepatitis C, which are often associated with complex regimens and serious adverse events. Thus, the purpose of this study was to estimate the utility associated with treatment administration and adverse events of hepatitis C treatments. DESIGN: Health states were drafted based on a literature review and clinician interviews. General population participants in the UK valued the health states in time trade-off (TTO) interviews with 10- and 1-year time horizons. The 14 health states described hepatitis C with variations in treatment regimen and adverse events. RESULTS: A total of 182 participants completed interviews (50 % female; mean age = 39.3 years). Utilities for health states describing treatment regimens without injections ranged from 0.80 (1 tablet) to 0.79 (7 tablets). Utilities for health states describing oral plus injectable regimens were 0.77 (7 tablets), 0.75 (12 tablets), and 0.71 (18 tablets). Addition of a weekly injection had a disutility of −0.02. A requirement to take medication with fatty food had a disutility of −0.04. Adverse events were associated with substantial disutilities: mild anemia, −0.12; severe anemia, −0.32; flu-like symptoms, −0.21; mild rash, −0.13; severe rash, −0.48; depression, −0.47. One-year TTO scores were similar to these 10-year values. CONCLUSIONS: Adverse events and greater treatment regimen complexity were associated with lower utility scores, suggesting a perceived decrease in quality of life beyond the impact of hepatitis C. The resulting utilities may be used in models estimating and comparing the value of treatments for hepatitis C. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s10198-014-0649-6) contains supplementary material, which is available to authorized users.
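The arithmetic behind TTO-derived utilities and disutilities is simple: a respondent indifferent between t years in a health state and x years in full health values the state at x/t, and an attribute's disutility is the difference between two state utilities. The respondent numbers below are hypothetical, chosen only so the result matches the magnitudes reported in the abstract.

```python
def tto_utility(years_full_health, time_horizon):
    """Time trade-off utility: a respondent indifferent between
    `time_horizon` years in the health state and `years_full_health`
    years in full health values the state at x / t (on a 0-1 scale)."""
    return years_full_health / time_horizon

# A respondent who would give up 2 of 10 years to avoid the base state:
base = tto_utility(8.0, 10.0)  # 0.80, cf. the 1-tablet regimen state

# Disutility of an added attribute = attribute-state utility minus base,
# e.g. a decrement of the size reported for the weekly injection:
with_injection = 0.78  # hypothetical mean for the injection state
disutility = round(with_injection - base, 2)  # -0.02
```

A negative disutility of this kind is what feeds into cost-utility models as a quality-of-life penalty attached to a regimen attribute.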

    HIEv Data Curation application - eResearch Australia 2014 poster

    Poster presented at the "eResearch Australia 2014" conference outlining the implementation of HIEv, a data curation and management web application for research data generated at the Hawkesbury Institute for the Environment at the University of Western Sydney.

    HIE Sample Tracker - eResearch Australia 2014 poster

    Poster presented at the "eResearch Australia 2014" conference outlining the implementation of a 'sample tracker' web application for the tracking and management of environmental samples at the Hawkesbury Institute for the Environment.

    COVID-19 and the blunders of our governments: long-run system failings aggravated by political choices

    More urgently than ever, we need an answer to the question posed by the late Mick Moran in The Political Quarterly nearly two decades ago: ‘if government now invests huge resources in trying to be smart why does it often act so dumb?’ We reflect on this question in the context of governmental responses to COVID-19 in four steps. First, we argue that blunders occur because of systematic weaknesses that stimulate poor policy choices. Second, we review and assess the performance of governments on COVID-19 across a range of advanced democracies. Third, in the light of these comparisons, we argue that the UK system of governance has proved itself vulnerable to failure at the time when its citizens most needed it. Finally, we outline an agenda of reform that seeks to rectify structural weaknesses of that governance capacity.

    Resetting the course for population health: evidence and recommendations to address stalled mortality improvements in Scotland and the rest of the UK

    Mortality rates, and related indicators such as life expectancy, are important markers of the overall health of a population. We, and others, have previously reported the profound and deeply concerning changes to these indicators that have been seen in Scotland, and across the UK, since around 2012: a stalling in mortality improvements overall, increasing death rates among the most deprived communities, and a widening in inequalities. This report provides further detailed analysis and evidence of the mortality changes that have occurred. It critically appraises the evidence for a range of hypotheses that have been suggested as possible contributory factors. These include reduced improvements in cardiovascular disease; an increase in obesity; an increase in deaths from a range of causes including drug-related deaths, dementia and Alzheimer’s disease, flu, and weather and temperature extremes; demographic factors; and austerity policies. From this assessment of the evidence, it identifies UK Government economic ‘austerity’ policies (implemented as cuts to public spending, including social security and other vital services) as the most likely contributory cause. Finally, it outlines a total of 40 recommendations to address the crisis, targeted at the UK, Scottish and local levels. These span macroeconomic policy, social security, work, taxation, public services, material needs, improved understanding, and social recovery from Covid-19.