10 research outputs found

    Nonresponse in the 1996 income survey (supplement to the microcensus)

    Income survey: a supplementary voluntary questionnaire administered to a randomly selected quarter of the April 1996 microcensus, covering 18,117 households, with 16% (2,988) refusals. The households that refused to answer the income survey are not randomly distributed; their characteristics were studied carefully, and the most important results, together with the measures taken to reduce the resulting bias, are presented. A substantial number of census variables for households were found to be associated with nonresponse. The characteristics most strongly associated with the Income Survey response rate were the qualification level of the head of household and the type of region. Higher response rates were found in the countryside among older households with low income and low qualification levels, while refusal rates were high in Budapest, mainly in high-income groups with high qualification levels. (author's abstract)
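    The subgroup analysis described in the abstract, tabulating refusal rates against census variables such as qualification level and region, can be sketched as below. The records and field layout are purely illustrative stand-ins, not data from the 1996 survey.

    ```python
    from collections import defaultdict

    # (qualification_level, region, refused) per household -- synthetic records,
    # chosen only to illustrate the cross-tabulation, not the survey's findings
    households = [
        ("low",  "countryside", False),
        ("low",  "countryside", False),
        ("high", "Budapest",    True),
        ("high", "Budapest",    False),
        ("low",  "Budapest",    False),
        ("high", "countryside", True),
    ]

    def refusal_rate_by(key_index, records):
        """Refusal rate within each subgroup defined by one census variable."""
        counts = defaultdict(lambda: [0, 0])  # group -> [refusals, total]
        for rec in records:
            group = rec[key_index]
            counts[group][1] += 1
            if rec[2]:
                counts[group][0] += 1
        return {g: refusals / total for g, (refusals, total) in counts.items()}

    print(refusal_rate_by(0, households))  # refusal rate by qualification level
    print(refusal_rate_by(1, households))  # refusal rate by region
    ```

    The survey's headline figure follows the same arithmetic at full scale: 2,988 refusals out of 18,117 households is a refusal rate of about 16.5%.
    
    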

    Guarantee Regions for Local Explanations

    Interpretability methods that utilise local surrogate models (e.g. LIME) are very good at describing the behaviour of the predictive model at a point of interest, but they are not guaranteed to extrapolate to the local region surrounding the point. Moreover, overfitting to the local curvature of the predictive model and malicious tampering can significantly limit extrapolation. We propose an anchor-based algorithm for identifying regions in which local explanations are guaranteed to be correct by explicitly describing those intervals along which the input features can be trusted. Our method produces an interpretable feature-aligned box where the prediction of the local surrogate model is guaranteed to match the predictive model. We demonstrate that our algorithm can be used to find explanations with larger guarantee regions that better cover the data manifold compared to existing baselines. We also show how our method can identify misleading local explanations with significantly poorer guarantee regions.
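    The core idea of a feature-aligned guarantee box can be illustrated with a minimal sketch. This is not the paper's anchor-based algorithm; it is a hypothetical stand-in that greedily grows per-feature half-widths around a point of interest while a linear surrogate stays within a tolerance of the black-box model on sampled points. The model, surrogate, tolerance, and growth schedule are all illustrative assumptions.

    ```python
    import numpy as np

    def black_box(x):
        """Stand-in predictive model (assumed, not from the paper)."""
        return np.sin(x[0]) + 0.5 * x[1] ** 2

    def surrogate(x, x0, grad):
        """Local linear explanation of black_box anchored at x0."""
        return black_box(x0) + grad @ (x - x0)

    def guarantee_box(x0, grad, tol=0.05, step=0.05, max_steps=100,
                      n_samples=200, seed=0):
        """Greedily expand per-feature half-widths while the surrogate's
        prediction stays within tol of the model on sampled box points."""
        rng = np.random.default_rng(seed)
        half = np.zeros_like(x0)
        for i in range(len(x0)):
            for _ in range(max_steps):
                trial = half.copy()
                trial[i] += step  # try a slightly wider box along feature i
                pts = x0 + rng.uniform(-1, 1, (n_samples, len(x0))) * trial
                errs = [abs(black_box(p) - surrogate(p, x0, grad)) for p in pts]
                if max(errs) > tol:
                    break  # widening feature i breaks the agreement guarantee
                half = trial
        return half  # per-feature trusted half-widths around x0

    x0 = np.array([0.0, 1.0])
    grad = np.array([np.cos(x0[0]), x0[1]])  # true gradient as surrogate slope
    print(guarantee_box(x0, grad))
    ```

    The returned half-widths define an axis-aligned box in which the (sampled) disagreement between surrogate and model never exceeded the tolerance; a sound guarantee, as in the paper, would require verifying the whole box rather than a finite sample.
    
    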

    Sampling the Variational Posterior with Local Refinement

    Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
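    The general idea of refining samples from a coarse approximation can be sketched as follows. This is not the paper's method; it is an illustrative stand-in in which samples from a broad Gaussian approximation are locally refined by a few gradient-ascent steps on a bimodal target log-density, so the refined samples reach modes the coarse approximation misses. The target, step size, and step count are all assumptions for the sketch.

    ```python
    import numpy as np

    def log_target(x):
        """Bimodal target log-density (up to a constant):
        mixture of N(-2, 0.5^2) and N(+2, 0.5^2)."""
        return np.logaddexp(-0.5 * ((x + 2) / 0.5) ** 2,
                            -0.5 * ((x - 2) / 0.5) ** 2)

    def grad_log_target(x, eps=1e-4):
        """Finite-difference gradient of the target log-density."""
        return (log_target(x + eps) - log_target(x - eps)) / (2 * eps)

    def refined_samples(n=1000, steps=50, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # coarse, overly broad initial approximation: a single wide Gaussian
        x = rng.normal(0.0, 3.0, size=n)
        for _ in range(steps):
            # local refinement: each sample climbs toward its nearby mode
            x = x + lr * grad_log_target(x)
        return x

    samples = refined_samples()
    print(samples.mean(), samples.std())
    ```

    After refinement the samples cluster near the two modes at -2 and +2, giving a clearly bimodal sample set even though the initial approximation was a single unimodal Gaussian; that is the qualitative behaviour the abstract describes.
    
    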
