11,758 research outputs found

    Welfarism and the multidimensionality of welfare state legitimacy: evidence from The Netherlands, 2006

    Is it possible that citizens who support a substantial role for government in the provision of welfare are, at the same time, critical of specific aspects of such provision? Based on confirmatory factor analyses of a 2006 Dutch survey, this study shows that welfare state legitimacy is indeed multidimensional, i.e. that opinions cluster into several dimensions referring to different aspects of the welfare state. There is partial evidence for a single underlying welfarism dimension, consisting essentially of views on the range of governmental responsibility together with the belief that these governmental provisions have no unfavourable repercussions in the economic or moral spheres. However, the separate dimensions cannot be reduced entirely to this overall welfarism dimension, as illustrated by the finding that the various attitude dimensions are affected differently by socio-structural position and ideological dispositions.
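
    As a rough illustration of the kind of confirmatory factor analysis reported here, the sketch below fits a second-order model in which three attitude dimensions load on a single welfarism factor. It uses the semopy package on synthetic data; the indicators q1-q9 and the dimension labels are placeholders, not the survey's actual items.

        # Second-order CFA sketch (semopy) on synthetic survey-like data.
        import numpy as np
        import pandas as pd
        from semopy import Model, calc_stats

        rng = np.random.default_rng(0)
        n = 500
        welfarism = rng.normal(size=n)  # hypothesized underlying factor
        dims = [0.7 * welfarism + rng.normal(scale=0.7, size=n) for _ in range(3)]
        data = {f"q{3 * i + j + 1}": 0.8 * d + rng.normal(scale=0.6, size=n)
                for i, d in enumerate(dims) for j in range(3)}
        df = pd.DataFrame(data)

        desc = """
        range_resp   =~ q1 + q2 + q3
        econ_conseq  =~ q4 + q5 + q6
        moral_conseq =~ q7 + q8 + q9
        welfarism    =~ range_resp + econ_conseq + moral_conseq
        """
        model = Model(desc)
        model.fit(df)
        print(calc_stats(model).T)  # poor fit indices here would mean the separate
                                    # dimensions do not reduce to one welfarism factor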

    SoK: Chasing Accuracy and Privacy, and Catching Both in Differentially Private Histogram Publication

    Histograms and synthetic data are of key importance in data analysis. However, researchers have shown that even aggregated data such as histograms, containing no obvious sensitive attributes, can result in privacy leakage. To enable data analysis, a strong notion of privacy is required to avoid risking unintended privacy violations. Such a strong notion is differential privacy, a statistical notion of privacy that makes privacy leakage quantifiable. The caveat of differential privacy is that, while it provides strong privacy guarantees, privacy comes at a cost in accuracy. Although this trade-off is a central issue in the adoption of differential privacy, the literature lacks a systematic account of the trade-off and of how to address it appropriately. Through a systematic literature review (SLR), we investigate the state of the art in accuracy-improving differentially private algorithms for histogram and synthetic data publishing. Our contribution is two-fold: 1) we identify trends and connections in the contributions to the field of differential privacy for histograms and synthetic data, and 2) we provide an understanding of the privacy/accuracy trade-off by crystallizing the different dimensions of accuracy improvement. Accordingly, we position and visualize the ideas in relation to each other and to external work, and deconstruct each algorithm into its building blocks to pinpoint which dimension of accuracy improvement each technique targets. Hence, this systematization of knowledge (SoK) provides an understanding of which dimensions of accuracy improvement can be pursued, and how, without sacrificing privacy.
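
    For context, the baseline that such algorithms improve on is the plain Laplace mechanism: with one record per individual, a histogram has L1 sensitivity 1, so adding Laplace(1/epsilon) noise to each bin satisfies epsilon-differential privacy. A minimal sketch with illustrative parameter values, not any specific algorithm from the survey:

        import numpy as np

        def dp_histogram(values, bins, epsilon, rng=None):
            """Release an epsilon-DP histogram via the Laplace mechanism."""
            rng = rng or np.random.default_rng()
            counts, edges = np.histogram(values, bins=bins)
            noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
            # Post-processing is free under DP: clamp negatives and round.
            return np.clip(np.round(noisy), 0, None), edges

        ages = np.random.default_rng(1).integers(18, 90, size=10_000)
        hist, edges = dp_histogram(ages, bins=10, epsilon=0.5)
        # Smaller epsilon gives stronger privacy but noisier counts -- the
        # accuracy/privacy trade-off the paper systematizes.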

    Inferring the photometric and size evolution of galaxies from image simulations

    Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non-trivial ways, and that can lead to contradictory predictions if not taken into account carefully. To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Using synthetic data matching most of the properties of a CFHTLS Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolution for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Our pipeline efficiently infers the luminosity and size distribution and evolution parameters from a very limited number of observables (3 photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.
    Comment: 24 pages, 12 figures, accepted for publication in A&A
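
    The core loop can be pictured as a pseudo-marginal Metropolis sampler: propose parameters, run the forward simulator, and score the proposal by the discrepancy between binned summaries of simulated and observed data. The toy below replaces the galaxy/image simulator with a Gaussian forward model; it illustrates the pBIL idea, not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(2)
        obs = rng.normal(loc=1.5, scale=0.4, size=2000)       # "observed" catalog
        edges = np.linspace(0, 3, 21)
        obs_hist = np.histogram(obs, bins=edges)[0]

        def simulate(theta):                                  # stand-in forward model
            mu, sigma = theta
            return np.histogram(rng.normal(mu, abs(sigma), 2000), bins=edges)[0]

        def log_pseudo_like(theta):
            sim = simulate(theta)                             # stochastic likelihood
            return -0.5 * np.sum((obs_hist - sim) ** 2 / (sim + 1.0))

        theta, lp, chain = np.array([1.0, 1.0]), -np.inf, []
        for _ in range(5000):
            prop = theta + rng.normal(scale=0.05, size=2)     # random-walk proposal
            lp_prop = log_pseudo_like(prop)
            if np.log(rng.random()) < lp_prop - lp:           # Metropolis accept
                theta, lp = prop, lp_prop
            chain.append(theta)
        print(np.mean(chain[2000:], axis=0))                  # recovers ~(1.5, 0.4)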

    Forecasting groundwater contaminant plume development using statistical and machine learning methods

    A persistent challenge in predicting the fate and transport of groundwater contaminants is the inherent geologic heterogeneity of the subsurface. Contaminant movement has primarily been modeled by simplifying the geology and accepting the assumptions needed to solve the advection-dispersion-reaction equation. With the large groundwater quality datasets that have been collected for decades at legacy contaminated sites, there is an emerging potential to use data-driven machine learning algorithms to model contaminant plume development and improve site management. However, the spatial and temporal data density and quality required for accurate plume forecasting have yet to be determined. In this study, extensive historical datasets from groundwater monitoring well samples were initially used with the intent to better understand the complex interrelations between groundwater quality parameters and to build a suitable model for estimating the time to site closure. After correlation analyses applied to the entire datasets did not reveal compelling correlation coefficients, likely due to poor data quality from integrated well samples, the initial task was reversed to determine how much data is needed for accurate groundwater plume forecasting. A reactive transport model for a focus area downgradient of a zero-valent iron permeable reactive barrier was developed to generate a detailed, synthetic carbon tetrachloride concentration dataset that was input to two forecasting models, Prophet and the damped Holt's method. By increasing the temporal sampling frequency from the industry norm of quarterly to monthly, the plume development forecasts improved such that times to site closure were accurately predicted. For wells with declining contaminant concentrations, the damped Holt's method achieved more accurate forecasts than Prophet. However, only Prophet allows for the inclusion of exogenous regressors, such as temporal concentration changes in upgradient wells, enabling predictions of future declining trends in wells with still-increasing contaminant concentrations. The value of machine learning models for contaminant fate and transport prediction is increasingly apparent, but changes in groundwater sampling will be required to take full advantage of data-driven contaminant plume forecasting. As the quantity and quality of collected data increase, aided by sensors and automated sampling, these tools will become an integral part of contaminated site management. Spatial high-resolution data, for instance from multi-level samplers, have previously transformed our understanding of contaminant fate and transport in the subsurface and improved our ability to manage sites. The collection of temporal high-resolution data will similarly revolutionize our ability to forecast contaminant plume behavior.
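
    For the univariate forecasts, the damped Holt's method is available off the shelf; below is a minimal sketch with statsmodels on a synthetic declining concentration series. The series, units, and closure target are illustrative, and the Prophet model with upgradient regressors is not shown.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(3)
        months = pd.date_range("2015-01-01", periods=84, freq="MS")  # monthly samples
        conc = 120 * np.exp(-0.03 * np.arange(84)) + rng.normal(0, 2, 84)
        series = pd.Series(conc.clip(min=0.1), index=months)         # ug/L, declining

        fit = ExponentialSmoothing(series, trend="add", damped_trend=True).fit()
        forecast = fit.forecast(36)                      # 3-year outlook
        below = forecast[forecast < 5.0]                 # e.g. a 5 ug/L closure target
        print(below.index[0] if len(below) else "target not reached in horizon")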

    On the road to carbon reduction in a food supply network: a complex adaptive systems perspective

    Purpose: In acknowledging the reality of climate change, large firms have set internal and external (supplier-oriented) targets to reduce their greenhouse gas (GHG) emissions. This study explores the complex processes behind the evolution and diffusion of carbon reduction strategies in supply networks.
    Design/methodology/approach: The research uses complex adaptive systems (CAS) as a theoretical framework and presents a single case study of a focal buying firm and its supply network in the food sector. A longitudinal and multilevel analysis is used to discuss the dynamics between the focal firm, the supply network and the external environment.
    Findings: Rather than being a linear and controlled process of adoption-implementation-outcomes, the transition to carbon reduction in a supply network is much more dynamic, emerging as a result of factors at the individual, organizational, supply network and environmental levels.
    Research limitations/implications: The research considers the emergence of a carbon reduction strategy in the food sector, driven by a dominant buying firm. Future research should investigate the diffusion of environmental strategies more broadly and in other contexts.
    Practical implications: Findings from the research reveal the limits of the control that a buying firm can exert over behaviours in its network and show the positive influence of consortia initiatives on transitioning to sustainability in supply networks.
    Originality: CAS is a fairly novel theoretical lens for researching environmental supply network dynamics. The paper offers fresh multilevel insights into the emergent and systemic nature of the diffusion of environmental practices in supply networks.

    Unveiling the multimedia unconscious: implicit cognitive processes and multimedia content analysis

    One of the main findings of the cognitive sciences is that automatic processes of which we are unaware shape, to a significant extent, our perception of the environment. The phenomenon applies not only to the real world but also to the multimedia data we consume every day. Whenever we look at pictures, watch a video or listen to audio recordings, our conscious attention focuses on the observable content, but our cognition spontaneously perceives intentions, beliefs, values, attitudes and other constructs that, while outside our conscious awareness, still shape our reactions and behavior. So far, multimedia technologies have largely neglected this phenomenon. This paper argues that taking cognitive effects into account is possible and can improve multimedia approaches. As a supporting proof of concept, the paper shows not only that there are visual patterns correlated, to a statistically significant extent, with the personality traits of 300 Flickr users, but also that those traits (both self-assessed and attributed by others) can be inferred from the images the users mark as "favourite".
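
    The correlational setup can be pictured as a simple regression from per-user aggregated visual features to a trait score, evaluated by cross-validated correlation. The sketch below uses random placeholder features and scikit-learn; the actual descriptors and trait measures are the paper's, not these.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(4)
        n_users, n_feats = 300, 64                  # e.g. pooled colour/texture stats
        X = rng.normal(size=(n_users, n_feats))     # per-user image features (fake)
        w = rng.normal(size=n_feats)
        y = X @ w * 0.1 + rng.normal(size=n_users)  # one trait, weakly image-linked

        pred = cross_val_predict(RidgeCV(), X, y, cv=10)
        r, p = pearsonr(y, pred)
        print(f"cross-validated r={r:.2f}, p={p:.3g}")  # significance as evidence of
                                                        # trait-correlated visual patterns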

    Quantitative Perspectives on Fifty Years of the Journal of the History of Biology

    The Journal of the History of Biology provides a fifty-year record for examining the evolution of the history of biology as a scholarly discipline. In this paper, we present a new dataset and a preliminary quantitative analysis of the thematic content of JHB from the perspectives of geography, organisms, and thematic fields. The geographic diversity of authors whose work appears in JHB has increased steadily since 1968, but the geographic coverage of the content of JHB articles remains strongly lopsided toward the United States, United Kingdom, and western Europe, and has diversified much less dramatically over time. The taxonomic diversity of organisms discussed in JHB increased steadily between 1968 and the late 1990s but declined in later years, mirroring broader patterns of diversification previously reported in the biomedical research literature. Finally, we used a combination of topic modeling and nonlinear dimensionality reduction techniques to develop a model of multi-article fields within JHB. We found evidence for directional changes in the representation of fields on multiple scales. The diversity of JHB with regard to the representation of thematic fields has increased overall, with most of that diversification occurring in recent years. Drawing on the dataset generated in the course of this analysis, as well as web services in the emerging digital history and philosophy of science ecosystem, we have developed an interactive web platform for exploring the content of JHB, and we provide a brief overview of the platform in this article. As a whole, the data and analyses presented here provide a starting place for further critical reflection on the evolution of the history of biology over the past half-century.
    Comment: 45 pages, 14 figures, 4 tables
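
    A compact way to picture the fields model is LDA topics over article texts, followed by a nonlinear embedding of the per-article topic mixtures. The sketch below uses scikit-learn with t-SNE and a toy corpus, standing in for whatever tooling the authors actually used.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.manifold import TSNE

        docs = [                                    # stand-ins for JHB article texts
            "mendel heredity genetics pea hybrid",
            "darwin selection evolution species variation",
            "molecular dna protein sequence gene",
            "ecology population field naturalist habitat",
            "genetics gene mutation drosophila mapping",
            "evolution fossil species darwin lineage",
        ]

        X = CountVectorizer().fit_transform(docs)
        theta = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(X)
        coords = TSNE(n_components=2, perplexity=2.0, random_state=0).fit_transform(theta)
        # Each row of coords places one article in a 2-D map; clusters of articles
        # with similar topic mixtures approximate the multi-article "fields".
        print(coords.shape)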