    Cold gas accretion in galaxies

    Evidence for the accretion of cold gas in galaxies has been accumulating rapidly in recent years. HI observations of galaxies and their environment have brought to light new facts and phenomena which are evidence of ongoing or recent accretion: 1) A large number of galaxies are accompanied by gas-rich dwarfs or are surrounded by HI cloud complexes, tails and filaments. This may be regarded as direct evidence of cold gas accretion in the local universe, and is probably the same kind of material infall as that traced by the stellar streams observed in the halos of our Galaxy and M31. 2) Considerable amounts of extra-planar HI have been found in nearby spiral galaxies. While a large fraction of this gas is produced by galactic fountains, it is likely that part of it is of extragalactic origin. 3) Spirals are known to have extended and warped outer layers of HI. It is not clear how these have formed, nor how and for how long the warps can be sustained; gas infall has been proposed as the origin. 4) The majority of galactic disks are lopsided in their morphology as well as in their kinematics. Here, too, recent accretion has been advocated as a possible cause. In our view, accretion takes place both through the arrival and merging of gas-rich satellites and through gas infall from the intergalactic medium (IGM). The infall may have observable effects on the disk, such as bursts of star formation and lopsidedness. We infer a mean "visible" accretion rate of cold gas in galaxies of at least 0.2 Msol/yr. In order to reach the accretion rates needed to sustain the observed star formation (~1 Msol/yr), additional infall of large amounts of gas from the IGM seems to be required. Comment: To appear in Astronomy & Astrophysics Reviews. 34 pages. Full-resolution version available at http://www.astron.nl/~oosterlo/accretionRevie

    Prognostic factors for perceived recovery or functional improvement in non-specific low back pain: secondary analyses of three randomized clinical trials

    The objective of this study was to report on secondary analyses of a merged trial dataset aimed at exploring the potential importance of patient factors associated with clinically relevant improvements in non-acute, non-specific low back pain (LBP). From 273 predominantly male army workers (mean age 39 ± 10.5 years, range 20–56 years, 4 women) with LBP who were recruited in three randomized clinical trials, baseline individual patient factors, pain-related factors, work-related psychosocial factors, and psychological factors were evaluated as potential prognostic variables in short-term (post-treatment) and long-term (6 months after treatment) logistic regression models. We found one dominant prognostic factor for improvement both directly after treatment and 6 months later: baseline functional disability, expressed in Roland–Morris Disability Questionnaire scores. Baseline fear of movement, expressed in Tampa Scale for Kinesiophobia scores, also had significant prognostic value for long-term improvement. Less strongly associated with the outcome, but also included in our final models, were supervisor social support and duration of complaints (short-term model), and co-worker social support and pain radiation (long-term model). Information about initial levels of functional disability and fear-avoidance behaviour can be of value in the treatment of patient populations with characteristics comparable to the current army study population (e.g., predominantly male, physically active, working, moderate but chronic back problems). Individuals at risk of poor long-term LBP recovery, i.e., those with a high initial level of disability and prominent fear-avoidance behaviour, can thus be identified and may need additional cognitive-behavioural treatment.
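
    The models described above are logistic regressions with baseline scores as predictors. As a minimal illustrative sketch only (simulated placeholder data and hypothetical variable names, not the trial's analysis code), such a model relating baseline Roland–Morris Disability Questionnaire (RMDQ) and Tampa Scale for Kinesiophobia (TSK) scores to a binary improvement outcome could be fitted as follows:

```python
# Illustrative sketch only: simulated data and hypothetical variable names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 273                               # sample size reported in the abstract

rmdq = rng.integers(0, 24, n)         # baseline functional disability (0-23)
tsk = rng.integers(17, 69, n)         # baseline fear of movement (17-68)
improved = rng.integers(0, 2, n)      # placeholder binary improvement outcome

# Logistic regression of improvement on the two baseline scores
X = sm.add_constant(np.column_stack([rmdq, tsk]).astype(float))
model = sm.Logit(improved, X).fit(disp=False)
print(model.summary(xname=["const", "RMDQ", "TSK"]))

# Prognostic effects are usually reported as odds ratios with 95% CIs
print(np.exp(model.params), np.exp(model.conf_int()))
```
    In the study itself, separate short-term and long-term models were fitted and further covariates (social support, duration of complaints, pain radiation) were retained; the sketch keeps only the two dominant predictors for brevity.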

    Methods for the guideline-based development of quality indicators--a systematic review

    Background: Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods: We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results: From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions: Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork.

    Value of risk scores in the decision to palliate patients with ruptured abdominal aortic aneurysm

    Background: The aim of this study was to develop a 48-h mortality risk score, which included morphology data, for patients with ruptured abdominal aortic aneurysm presenting to an emergency department, and to assess its predictive accuracy and clinical effectiveness in triaging patients to immediate aneurysm repair, transfer or palliative care. Methods: Data from patients in the IMPROVE (Immediate Management of the Patient With Ruptured Aneurysm: Open Versus Endovascular Repair) randomized trial were used to develop the risk score. Variables considered included age, sex, haemodynamic markers and aortic morphology. Backwards selection was used to identify relevant predictors. Predictive performance was assessed using calibration plots and the C-statistic. Validation of the newly developed and other previously published scores was conducted in four external populations. The net benefit of treating patients based on a risk threshold compared with treating none was quantified. Results: Data from 536 patients in the IMPROVE trial were included. The final variables retained were age, sex, haemoglobin level, serum creatinine level, systolic BP, aortic neck length and angle, and acute myocardial ischaemia. The discrimination of the score for 48-h mortality in the IMPROVE data was reasonable (C-statistic 0·710, 95 per cent c.i. 0·659 to 0·760), but varied in external populations (from 0·652 to 0·761). The new score outperformed other published risk scores in some, but not all, populations. An 8 (95 per cent c.i. 5 to 11) per cent improvement in the C-statistic was estimated compared with using age alone. Conclusion: The assessed risk scores did not have sufficient accuracy to enable potentially life-saving decisions to be made regarding intervention. Focus should therefore shift to offering repair to more patients and reducing non-intervention rates, while respecting the wishes of the patient and family.
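
    Two of the quantities reported above, the C-statistic and the net benefit of treating patients above a risk threshold compared with treating none, can be illustrated with a short sketch. The data below are simulated placeholders rather than IMPROVE data, and the 50 per cent threshold is purely illustrative:

```python
# Illustrative sketch only: simulated risks and outcomes, not trial data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 536                                   # trial size reported in the abstract
risk = rng.uniform(0, 1, n)               # hypothetical predicted 48-h mortality risks
died_48h = (rng.uniform(0, 1, n) < risk).astype(int)  # placeholder outcomes

# Discrimination: the C-statistic is the area under the ROC curve
c_statistic = roc_auc_score(died_48h, risk)

def net_benefit(y, p, threshold):
    """Net benefit of treating everyone with predicted risk >= threshold,
    relative to treating no one (standard decision-curve formula)."""
    treat = p >= threshold
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    w = threshold / (1 - threshold)       # odds at the chosen risk threshold
    return tp / len(y) - w * fp / len(y)

print(f"C-statistic: {c_statistic:.3f}")
print(f"Net benefit at a 50% threshold: {net_benefit(died_48h, risk, 0.5):.3f}")
```
    Calibration and external validation, as performed in the study, would be assessed separately from these two summary measures.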

    Developmental Profiles of Eczema, Wheeze, and Rhinitis: Two Population-Based Birth Cohort Studies

    The term "atopic march" has been used to imply a natural progression of a cascade of symptoms from eczema to asthma and rhinitis through childhood. We hypothesize that this expression does not adequately describe the natural history of eczema, wheeze, and rhinitis during childhood. We propose that this paradigm arose from cross-sectional analyses of longitudinal studies, and may reflect a population pattern that may not predominate at the individual level.Data from 9,801 children in two population-based birth cohorts were used to determine individual profiles of eczema, wheeze, and rhinitis and whether the manifestations of these symptoms followed an atopic march pattern. Children were assessed at ages 1, 3, 5, 8, and 11 y. We used Bayesian machine learning methods to identify distinct latent classes based on individual profiles of eczema, wheeze, and rhinitis. This approach allowed us to identify groups of children with similar patterns of eczema, wheeze, and rhinitis over time. Using a latent disease profile model, the data were best described by eight latent classes: no disease (51.3%), atopic march (3.1%), persistent eczema and wheeze (2.7%), persistent eczema with later-onset rhinitis (4.7%), persistent wheeze with later-onset rhinitis (5.7%), transient wheeze (7.7%), eczema only (15.3%), and rhinitis only (9.6%). When latent variable modelling was carried out separately for the two cohorts, similar results were obtained. Highly concordant patterns of sensitisation were associated with different profiles of eczema, rhinitis, and wheeze. The main limitation of this study was the difference in wording of the questions used to ascertain the presence of eczema, wheeze, and rhinitis in the two cohorts.The developmental profiles of eczema, wheeze, and rhinitis are heterogeneous; only a small proportion of children (∼ 7% of those with symptoms) follow trajectory profiles resembling the atopic march. Please see later in the article for the Editors' Summary