
    Are people who participate in cultural activities more satisfied with life?

    The influence of various aspects of life on wellbeing has been extensively researched. However, despite little empirical evidence, participation in leisure activities has been assumed to increase subjective wellbeing. Leisure is important because it is more under personal control than other sources of life satisfaction. This study asked whether people who participate in cultural leisure activities have higher life satisfaction than those who do not, whether different types of leisure have the same influence on life satisfaction, and whether satisfaction depends on the frequency of participation or the number of activities undertaken. It used data from the UK Household Longitudinal Study (UKHLS) to establish associations between the type, number and frequency of participation in leisure activities and life satisfaction. Results showed an independent and positive association between life satisfaction and participation in sport, heritage and active-creative leisure activities, but not for participation in popular entertainment, theatre hobbies and museums/galleries. The association between reading hobbies and sedentary-creative activities and life satisfaction was negative. High life satisfaction was associated with engaging in a number of different activities rather than with the frequency of participation in each of them. The results have implications for policy makers and leisure services providers, in particular those associated with heritage recreation. Subjective wellbeing measures, such as life satisfaction, and not economic measures alone should be considered in the evaluation of services. The promotion of leisure activities that are active and encourage social interaction should be considered in programmes aimed at improving the quality of life.
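    As an illustration of the kind of association analysis described above, the sketch below regresses life satisfaction on participation indicators. The file name and column names are hypothetical placeholders, not the study's actual UKHLS variables or its exact model specification.

```python
# Hypothetical sketch of the association analysis described above.
# The CSV and column names are illustrative, not the actual UKHLS variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukhls_leisure_extract.csv")  # assumed one row per respondent

# Life satisfaction regressed on participation in each activity type,
# the number of distinct activities undertaken, and basic controls.
model = smf.ols(
    "life_satisfaction ~ sport + heritage + active_creative"
    " + popular_entertainment + museums_galleries + reading"
    " + n_activities + age + income",
    data=df,
).fit()
print(model.summary())
```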

    The effects of CO2, climate and land-use on terrestrial carbon balance, 1920-1992: An analysis with four process-based ecosystem models

    The concurrent effects of increasing atmospheric CO2 concentration, climate variability, and cropland establishment and abandonment on terrestrial carbon storage between 1920 and 1992 were assessed using a standard simulation protocol with four process-based terrestrial biosphere models. Over the long term (1920–1992), the simulations yielded a time history of terrestrial uptake that is consistent (within the uncertainty) with a long-term analysis based on ice core and atmospheric CO2 data. Up to 1958, three of the four analyses indicated a net release of carbon from terrestrial ecosystems to the atmosphere caused by cropland establishment. After 1958, all analyses indicate a net uptake of carbon by terrestrial ecosystems, primarily because of the physiological effects of rapidly rising atmospheric CO2. During the 1980s the simulations indicate that terrestrial ecosystems stored between 0.3 and 1.5 Pg C yr⁻¹, which is within the uncertainty of analyses based on CO2 and O2 budgets. Three of the four models indicated (in accordance with O2 evidence) that the tropics were approximately neutral while a net sink existed in ecosystems north of the tropics. Although all of the models agree that the long-term effect of climate on carbon storage has been small relative to the effects of increasing atmospheric CO2 and land use, the models disagree as to whether climate variability and change in the twentieth century have promoted carbon storage or release. Simulated interannual variability from 1958 generally reproduced the El Niño/Southern Oscillation (ENSO)-scale variability in the atmospheric CO2 increase, but there were substantial differences in the magnitude of interannual variability simulated by the models. The analysis of the ability of the models to simulate the changing amplitude of the seasonal cycle of atmospheric CO2 suggested that the observed trend may be a consequence of CO2 effects, climate variability, land use changes, or a combination of these effects. The next steps for improving the process-based simulation of historical terrestrial carbon include (1) the transfer of insight gained from stand-level process studies to improve the sensitivity of simulated carbon storage responses to changes in CO2 and climate, (2) improvements in the data sets used to drive the models so that they incorporate the timing, extent, and types of major disturbances, (3) the enhancement of the models so that they consider major crop types and management schemes, (4) development of data sets that identify the spatial extent of major crop types and management schemes through time, and (5) the consideration of the effects of anthropogenic nitrogen deposition. The evaluation of the performance of the models in the context of a more complete consideration of the factors influencing historical terrestrial carbon dynamics is important for reducing uncertainties in representing the role of terrestrial ecosystems in future projections of the Earth system.
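    As a schematic of the bookkeeping behind these numbers (a generic global carbon budget identity, not the four models' internal equations), the simulated net terrestrial storage change combines the three drivers named above and must close the atmospheric CO2 budget:

```latex
% Generic carbon-budget identity (schematic); positive dC_land/dt = net terrestrial uptake.
\[
  \frac{dC_{\mathrm{land}}}{dt}
    = F_{\mathrm{CO_2}} + F_{\mathrm{climate}} - E_{\mathrm{land\ use}},
  \qquad
  \frac{dC_{\mathrm{atm}}}{dt}
    = E_{\mathrm{fossil}} - S_{\mathrm{ocean}} - \frac{dC_{\mathrm{land}}}{dt}.
\]
```

    The 1980s simulations quoted above place the net land term at roughly 0.3–1.5 Pg C yr⁻¹ of uptake.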

    Non-parametric modeling of the intra-cluster gas using APEX-SZ bolometer imaging data

    We demonstrate the usability of mm-wavelength imaging data obtained from the APEX-SZ bolometer array to derive the radial temperature profile of the hot intra-cluster gas out to radius r_500 and beyond. The goal is to study the physical properties of the intra-cluster gas by using a non-parametric de-projection method that is, aside from the assumption of spherical symmetry, free from modeling bias. We use publicly available X-ray imaging data from the XMM-Newton observatory and our Sunyaev-Zel'dovich Effect (SZE) imaging data from the APEX-SZ experiment at 150 GHz to de-project the density and temperature profiles for the relaxed cluster Abell 2204. We derive the gas density, temperature and entropy profiles assuming spherical symmetry, and obtain the total mass profile under the assumption of hydrostatic equilibrium. For comparison with X-ray spectroscopic temperature models, a re-analysis of the recent Chandra observation is done with the latest calibration updates. Using the non-parametric modeling we demonstrate a decrease of gas temperature in the cluster outskirts, and also measure the gas entropy profile. These results are obtained for the first time independently of X-ray spectroscopy, using SZE and X-ray imaging data. The contribution of the SZE systematic uncertainties in measuring T_e at large radii is shown to be small compared to the Chandra systematic spectroscopic errors. The upper limit on M_200 derived from the non-parametric method is consistent with the NFW model prediction from weak lensing analysis. Comment: Replaced with the published version; A&A 519, A29 (2010).
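    The de-projection rests on a few standard relations, sketched here in generic form (the paper's actual pipeline and notation may differ): SZ imaging constrains the electron pressure along the line of sight, X-ray imaging constrains the electron density, their ratio gives the temperature without spectroscopy, and hydrostatic equilibrium then yields the mass profile.

```latex
% Standard relations behind an SZ + X-ray imaging de-projection (schematic).
\[
  y = \frac{\sigma_T}{m_e c^2} \int P_e \, dl ,
  \qquad
  S_X \propto \int n_e^2 \, \Lambda(T_e) \, dl ,
\]
\[
  k_B T_e(r) = \frac{P_e(r)}{n_e(r)} ,
  \qquad
  M(<r) = -\frac{k_B T_e(r)\, r}{G\, \mu m_p}
          \left( \frac{d\ln n_e}{d\ln r} + \frac{d\ln T_e}{d\ln r} \right).
\]
```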

    MIDA boronates are hydrolysed fast and slow by two different mechanisms

    MIDA boronates (N-methyliminodiacetic acid boronic esters) serve as an increasingly general platform for small-molecule construction based on building blocks, largely because of the dramatic and general rate differences with which they are hydrolysed under various basic conditions. Yet the mechanistic underpinnings of these rate differences have remained unclear, which has hindered efforts to address the current limitations of this chemistry. Here we show that there are two distinct mechanisms for this hydrolysis: one is base-mediated and the other neutral. The former can proceed more than three orders of magnitude faster than the latter, and involves a rate-limiting attack by hydroxide at a MIDA carbonyl carbon. The alternative 'neutral' hydrolysis does not require an exogenous acid or base and involves rate-limiting B-N bond cleavage by a small water cluster, (H2O)n. The two mechanisms can operate in parallel, and their relative rates are readily quantified by ¹⁸O incorporation. Whether hydrolysis is 'fast' or 'slow' is dictated by the pH, the water activity and the mass-transfer rates between phases. These findings stand to enable, in a rational way, an even more effective and widespread utilization of MIDA boronates in synthesis.
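    A schematic rate law consistent with two parallel pathways is sketched below as an illustration only; the water-activity exponent and the exact functional form are assumptions for the sketch, not the paper's fitted kinetic model.

```latex
% Illustrative parallel-pathway rate law for MIDA boronate hydrolysis (not the fitted model).
\[
  k_{\mathrm{obs}}
    = \underbrace{k_{\mathrm{OH}}\,[\mathrm{OH}^-]}_{\text{base-mediated (``fast'')}}
    + \underbrace{k_{\mathrm{w}}\, a_{\mathrm{w}}^{\,n}}_{\text{neutral (``slow'')}},
  \qquad
  \text{rate} = k_{\mathrm{obs}}\,[\text{MIDA boronate}].
\]
```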

    MAGE-C2/CT10 Protein Expression Is an Independent Predictor of Recurrence in Prostate Cancer

    The cancer-testis (CT) family of antigens is expressed in a variety of malignant neoplasms. In most cases, no CT antigen is found in normal tissues, except in testis, making them ideal targets for cancer immunotherapy. A comprehensive analysis of CT antigen expression has not yet been reported in prostate cancer. MAGE-C2/CT10 is a novel CT antigen. The objective of this study was to analyze the extent and prognostic significance of MAGE-C2/CT10 protein expression in prostate cancer. A total of 348 prostate carcinomas from consecutive radical prostatectomies, 29 castration-refractory prostate cancers, 46 metastases, and 45 benign hyperplasias were immunohistochemically analyzed for MAGE-C2/CT10 expression using tissue microarrays. Nuclear MAGE-C2/CT10 expression was identified in only 3.3% of primary prostate carcinomas. MAGE-C2/CT10 protein expression was significantly more frequent in metastatic (16.3% positivity) and castration-resistant prostate cancer (17% positivity; p<0.001). Nuclear MAGE-C2/CT10 expression was identified as a predictor of biochemical recurrence after radical prostatectomy (p = 0.015), which was independent of preoperative PSA, Gleason score, tumor stage, and surgical margin status in multivariate analysis (p<0.05). MAGE-C2/CT10 expression in prostate cancer correlates with the degree of malignancy and indicates a higher risk of biochemical recurrence after radical prostatectomy. Further, the results suggest MAGE-C2/CT10 as a potential target for adjuvant and palliative immunotherapy in patients with prostate cancer.
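    As a minimal sketch of the kind of multivariable time-to-recurrence analysis described above (a Cox proportional hazards model), the example below assumes a hypothetical per-patient table; the file name and column names are illustrative, not the study's actual data or exact specification.

```python
# Hypothetical sketch of a multivariable biochemical-recurrence analysis.
# Requires the `lifelines` package; the CSV and column names are illustrative only.
import pandas as pd
from lifelines import CoxPHFitter

cols = [
    "months_to_recurrence",   # follow-up time after radical prostatectomy
    "recurrence_event",       # 1 = biochemical recurrence observed, 0 = censored
    "mage_c2_positive",       # nuclear MAGE-C2/CT10 expression (0/1)
    "preop_psa", "gleason_score", "tumor_stage", "positive_margin",
]
df = pd.read_csv("prostatectomy_cohort.csv")[cols]  # assumed per-patient table

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_recurrence", event_col="recurrence_event")
cph.print_summary()  # hazard ratios show whether MAGE-C2/CT10 adds independent prognostic value
```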

    What about N? A methodological study of sample-size reporting in focus group studies

    Background: Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, first, to describe the current status of sample size in focus group studies reported in health journals and, second, to assess whether and how researchers explain the number of focus groups they carry out. Methods: We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also made a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results: We identified 220 papers published in 117 journals. Insufficient reporting of sample sizes was common in these papers. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis form an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. None of the reviewed papers discussed having too much data as a study weakness. Conclusions: Based on these findings we suggest that journals adopt more stringent requirements for reporting of focus group methods. The often poor and inconsistent reporting seen in these studies may also reflect the lack of clear, evidence-based guidance about deciding on sample size. More empirical research is needed to develop focus group methodology.

    The Gravitational Universe

    The last century has seen enormous progress in our understanding of the Universe. We know the life cycles of stars, the structure of galaxies, the remnants of the big bang, and have a general understanding of how the Universe evolved. We have come remarkably far using electromagnetic radiation as our tool for observing the Universe. However, gravity is the engine behind many of the processes in the Universe, and much of its action is dark. Opening a gravitational window on the Universe will let us go further than any alternative. Gravity has its own messenger: gravitational waves, ripples in the fabric of spacetime. They travel essentially undisturbed and let us peer deep into the formation of the first seed black holes, exploring redshifts as large as z ~ 20, prior to the epoch of cosmic re-ionisation. Exquisite and unprecedented measurements of black hole masses and spins will make it possible to trace the history of black holes across all stages of galaxy evolution, and at the same time constrain any deviation from the Kerr metric of General Relativity. eLISA will be the first ever mission to study the entire Universe with gravitational waves. eLISA is an all-sky monitor and will offer a wide view of a dynamic cosmos using gravitational waves as new and unique messengers to unveil The Gravitational Universe. It provides the closest ever view of the early processes at TeV energies, has guaranteed sources in the form of verification binaries in the Milky Way, and can probe the entire Universe, from its smallest scales around singularities and black holes, all the way to cosmological dimensions.

    Correlated long-range mixed-harmonic fluctuations measured in pp, p+Pb and low-multiplicity Pb+Pb collisions with the ATLAS detector

    For abstract see published article

    Searches for exclusive Higgs and Z boson decays into J/ψγ, ψ(2S)γ, and Υ(nS)γ at √s = 13 TeV with the ATLAS detector

    Searches for the exclusive decays of the Higgs and Z bosons into a J/ψ, ψ(2S), or Υ(nS) (n = 1, 2, 3) meson and a photon are performed with a pp collision data sample corresponding to an integrated luminosity of 36.1 fb⁻¹ collected at √s = 13 TeV with the ATLAS detector at the CERN Large Hadron Collider. No significant excess of events is observed above the expected backgrounds, and 95% confidence-level upper limits on the branching fractions of the Higgs boson decays to J/ψγ, ψ(2S)γ, and Υ(nS)γ of 3.5×10⁻⁴, 2.0×10⁻³, and (4.9, 5.9, 5.7)×10⁻⁴, respectively, are obtained assuming Standard Model production. The corresponding 95% confidence-level upper limits for the branching fractions of the Z boson decays are 2.3×10⁻⁶, 4.5×10⁻⁶, and (2.8, 1.7, 4.8)×10⁻⁶, respectively.