
    Are people who participate in cultural activities more satisfied with life?

    The influence of various aspects of life on wellbeing has been extensively researched. However, despite little empirical evidence, participation in leisure activities has been assumed to increase subjective wellbeing. Leisure is important because it is more under personal control than other sources of life satisfaction. This study asked whether people who participate in cultural leisure activities have higher life satisfaction than people who do not, whether different types of leisure have the same influence on life satisfaction, and whether satisfaction depends on the frequency of participation or the number of activities undertaken. It used data from the UKHLS survey to establish associations between the type, number and frequency of participation in leisure activities and life satisfaction. Results showed an independent, positive association between participation in sport, heritage and active-creative leisure activities and life satisfaction, but not for participation in popular entertainment, theatre hobbies and museums/galleries. The association between reading hobbies and sedentary-creative activities and life satisfaction was negative. High life satisfaction was associated with engaging in a number of different activities rather than with the frequency of participation in each of them. The results have implications for policy makers and leisure service providers, in particular those associated with heritage recreation. Subjective wellbeing measures, such as life satisfaction, and not economic measures alone, should be considered in the evaluation of services. The promotion of leisure activities that are active and promote social interaction should be considered in programmes aimed at improving quality of life.

    How Tight Can PAC-Bayes be in the Small Data Regime?

    In this paper, we investigate the question: Given a small number of datapoints, for example N = 30, how tight can PAC-Bayes and test set bounds be made? For such small datasets, test set bounds adversely affect generalisation performance by discarding data. In this setting, PAC-Bayes bounds are especially attractive, due to their ability to use all the data to simultaneously learn a posterior and bound its generalisation risk. We focus on the case of i.i.d. data with a bounded loss and consider the generic PAC-Bayes theorem of Germain et al. (2009) and Bégin et al. (2016). While their theorem is known to recover many existing PAC-Bayes bounds, it is unclear what the tightest bound derivable from their framework is. Surprisingly, we show that for a fixed learning algorithm and dataset, the tightest bound of this form coincides with the tightest bound of the more restrictive family of bounds considered in Catoni (2007). In contrast, in the more natural case of distributions over datasets, we give examples (both analytic and numerical) showing that the family of bounds in Catoni (2007) can be suboptimal. Within the proof framework of Germain et al. (2009) and Bégin et al. (2016), we establish a lower bound on the best bound achievable in expectation, which recovers the Chernoff test set bound in the case when the posterior is equal to the prior. Finally, to illustrate how tight these bounds can potentially be, we study a synthetic one-dimensional classification task in which it is feasible to meta-learn both the prior and the form of the bound to obtain the tightest PAC-Bayes and test set bounds possible. We find that in this simple, controlled scenario, PAC-Bayes bounds are surprisingly competitive with comparable, commonly used Chernoff test set bounds. However, the sharpest test set bounds still lead to better guarantees on the generalisation error than the PAC-Bayes bounds we consider.
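
    For context, one common statement of the generic bound family discussed above, for losses in [0, 1], reads as follows (notation ours: Δ is any convex comparator function, π a data-independent prior, ρ the learned posterior, \hat{R}_S(h) the empirical risk of h on the N training points, and R(h) its population risk). With probability at least 1 − δ over the draw of the data, simultaneously for all posteriors ρ,

    \[
    \Delta\!\left(\mathbb{E}_{h\sim\rho}\big[\hat{R}_S(h)\big],\; \mathbb{E}_{h\sim\rho}\big[R(h)\big]\right)
    \;\le\;
    \frac{1}{N}\left[\operatorname{KL}(\rho \,\|\, \pi) + \ln\frac{\mathcal{I}_\Delta(N)}{\delta}\right],
    \qquad
    \mathcal{I}_\Delta(N) \;=\; \sup_{r\in[0,1]} \sum_{k=0}^{N} \binom{N}{k}\, r^{k}(1-r)^{N-k}\, e^{N\,\Delta(k/N,\, r)}.
    \]

    Different choices of Δ recover familiar special cases, for instance the binary-KL (Seeger/Langford-style) bound and the Catoni family, whose comparators are linear in the empirical risk; this is why asking for the tightest bound derivable within the framework is a natural question.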

    'In-Between' Uncertainty in Bayesian Neural Networks

    We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such overconfidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle 'in-between' uncertainty much better for small network architectures.
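
    As a rough illustration of the classical technique referred to above (a minimal sketch of our own, not the authors' code: net is assumed to be a small scalar-output regression network already trained to a point estimate, and prior_prec / noise_var are assumed hyperparameters), the linearised Laplace approximation treats the network as linear in its weights around that point estimate and propagates a Gaussian posterior over weights into a predictive variance:

    import torch

    def linearised_laplace_variance(net, x_train, x_test, prior_prec=1.0, noise_var=0.1):
        # Predictive variance from a Laplace approximation combined with a
        # Gauss-Newton linearisation of the network around its trained weights.
        params = [p for p in net.parameters() if p.requires_grad]

        def jacobian(x):
            # One row per input: gradient of the scalar output w.r.t. all weights.
            rows = []
            for xi in x:
                out = net(xi.unsqueeze(0)).squeeze()
                grads = torch.autograd.grad(out, params)
                rows.append(torch.cat([g.reshape(-1) for g in grads]))
            return torch.stack(rows)

        J_train = jacobian(x_train)                    # (N, P)
        J_test = jacobian(x_test)                      # (M, P)
        num_params = J_train.shape[1]
        # Posterior precision: prior precision + J^T J / observation noise variance.
        precision = prior_prec * torch.eye(num_params) + J_train.T @ J_train / noise_var
        cov = torch.linalg.inv(precision)              # acceptable for *small* networks
        # Variance of the linearised model at each test input, plus observation noise.
        return noise_var + torch.einsum('mp,pq,mq->m', J_test, cov, J_test)

    Because the linearised model's variance is not constrained by a factorised posterior, it can grow again away from the training inputs, which is the kind of 'in-between' behaviour the abstract reports MFVI failing to capture.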

    The Gaussian Neural Process

    Neural Processes (NPs; Garnelo et al., 2018a,b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes. We provide a rigorous analysis of the standard maximum-likelihood objective used to train conditional NPs. Moreover, we propose a new member of the Neural Process family called the Gaussian Neural Process (GNP), which models predictive correlations, incorporates translation equivariance, provides universal approximation guarantees, and demonstrates encouraging performance.
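
    For reference, the maximum-likelihood objective analysed here can be written, in one common form (our notation: each task provides a context set D_c and target pairs (x_t, y_t), with μ_θ and σ²_θ the model's predictive mean and variance), as the expected target log-likelihood under a factorised Gaussian predictive:

    \[
    \mathcal{L}(\theta) \;=\; \mathbb{E}_{\text{tasks}}\!\left[\, \sum_{t} \log \mathcal{N}\!\left(y_t \,\middle|\, \mu_\theta(x_t; D_c),\; \sigma_\theta^{2}(x_t; D_c)\right) \right].
    \]

    Because this predictive factorises across target points, it cannot express dependencies between them; the GNP instead outputs a joint Gaussian over the targets, with a mean function and a covariance function of the context, which is roughly what "models predictive correlations" refers to.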

    Convolutional Conditional Neural Processes

    We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space as opposed to a finite-dimensional vector space. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs. We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.
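
    As a loose sketch of the functional embedding described above (our own simplified version, not the authors' code: one-dimensional inputs, an RBF kernel and a fixed uniform grid are assumed, and the name set_conv_encode is ours), the context set can be mapped onto a grid as a density channel plus a normalised data channel before a CNN is applied:

    import torch

    def set_conv_encode(x_ctx, y_ctx, x_grid, lengthscale=0.1):
        # x_ctx: (N, 1) context inputs, y_ctx: (N, 1) context outputs,
        # x_grid: (G, 1) uniform grid; returns a (G, 2) functional representation.
        sq_dists = (x_grid - x_ctx.T) ** 2                      # (G, N)
        weights = torch.exp(-0.5 * sq_dists / lengthscale**2)   # RBF kernel weights
        density = weights.sum(dim=1, keepdim=True)              # "how much data is nearby"
        signal = weights @ y_ctx                                 # kernel-smoothed outputs
        signal = signal / (density + 1e-8)                       # normalise by the density
        return torch.cat([density, signal], dim=-1)              # input to an equivariant CNN

    Because the kernel weights depend only on differences between grid points and context inputs, shifting the context set shifts this representation by the same amount; a CNN applied on the grid, followed by a kernel-weighted readout at the target locations, then preserves translation equivariance end to end.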