
    The Participation Gap: Evidence from Compulsory Voting Laws

    Why do some people go to the polling station, sometimes several times a year, while others always prefer to stay at home? This question has sparked a wide theoretical debate in both economics and political science, but convincing empirical support for the different models proposed is still rare. The basic rational voting model of Downs (1957) predicts zero participation because each individual vote is extremely unlikely to be pivotal. One prominent modification of this model is the inclusion of a civic duty term in the voter's utility function (Riker and Ordeshook, 1968), which has been the basis of structural ethical voting models such as Coate and Conlin (2004) and Feddersen and Sandroni (2006). Another branch of structural models looks at informational asymmetries among citizens (Feddersen and Pesendorfer, 1996, 1999). This paper tests the implications of these two branches of structural models by exploiting unique variation in compulsory voting laws in Swiss federal states. Analyzing a newly compiled comparative data set covering the 1900-1950 period, we find large positive effects of the introduction of compulsory voting laws on turnout. Along with the arguably exogenous treatment allocation, several specification and placebo tests support a causal interpretation of this result. The findings lend support to the ethical voting models, since citizens react to compulsory voting laws only if they are enforced with a fee. At the same time, the informational aspect of non-voting is called into question, as "new" voters do not delegate their votes.
    Keywords: Compulsory Voting, Voter Turnout, Structural Voting Models

    Timing in Technical Safety Requirements for System Designs with Heterogeneous Criticality Requirements

    Traditionally, timing requirements as (technical) safety requirements have been avoided through clever functional designs. New vehicle automation concepts and other applications, however, make this harder or even impossible and challenge design automation for cyber-physical systems to provide a solution. This thesis takes up this challenge by introducing cross-layer dependency analysis to relate timing dependencies in the bounded execution time (BET) model to the functional model of the artifact. In doing so, the analysis can reveal where timing dependencies may violate freedom-from-interference requirements on the functional layer and other intermediate model layers. For design automation, this leaves the challenge of how such dependencies can be avoided, or at least bounded, so that the design remains feasible. The results are synthesis strategies for implementation requirements and a system-level placement strategy for run-time measures that avoid potentially catastrophic consequences of timing dependencies not eliminated from the design. Their applicability is shown in experiments and case studies. However, the proposed run-time measures as well as very strict implementation requirements become ever more expensive in terms of design effort for contemporary embedded systems, due to their complexity. Hence, the second part of this thesis reflects on the design aspect rather than the analysis aspect of embedded systems and proposes a timing-predictable design paradigm based on System-Level Logical Execution Time (SL-LET). Leveraging a timing-design model in SL-LET, the methods proposed in the first part can be applied to improve the quality of a design: timing error handling can now be separated from the run-time methods and from the implementation requirements intended to guarantee them. The thesis therefore introduces timing diversity as a timing-predictable execution scheme that handles timing errors without the implemented application having to deal with them. An automotive 3D-perception case study demonstrates the applicability of timing diversity to ensure predictable end-to-end timing while masking certain types of timing errors.
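    The following is a minimal, illustrative sketch of the Logical Execution Time (LET) idea that SL-LET generalizes to the system level: a task's outputs become visible only at the end of its logical period, regardless of how long the computation actually took, which makes end-to-end timing deterministic. The period, execution times, and function names are hypothetical and not taken from the thesis.

```python
import random

PERIOD = 10  # logical execution time: inputs read at k*PERIOD, outputs visible at (k+1)*PERIOD

def run_let_task(num_periods=3, seed=0):
    """Simulate a periodic task under LET semantics (toy model)."""
    random.seed(seed)
    schedule = []
    for k in range(num_periods):
        release = k * PERIOD                         # inputs sampled here
        actual_exec = random.randint(1, PERIOD - 1)  # actual execution time varies
        finish = release + actual_exec               # when the computation really finishes
        publish = release + PERIOD                   # output is published only at the logical deadline
        schedule.append((release, finish, publish))
    return schedule

# The publish times are identical in every run, even though the finish times vary:
for release, finish, publish in run_let_task():
    print(f"released t={release}, finished t={finish}, published t={publish}")
```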

    To Trust or Not To Trust Prediction Scores for Membership Inference Attacks

    Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Most MIAs, however, make use of the model's prediction scores - the probability of each output given some input - following the intuition that the trained model tends to behave differently on its training data. We argue that this is a fallacy for many modern deep network architectures. Consequently, MIAs will miserably fail since overconfidence leads to high false-positive rates not only on known domains but also on out-of-distribution data and implicitly acts as a defense against MIAs. Specifically, using generative adversarial networks, we are able to produce a potentially infinite number of samples falsely classified as part of the training data. In other words, the threat of MIAs is overestimated, and less information is leaked than previously assumed. Moreover, there is actually a trade-off between the overconfidence of models and their susceptibility to MIAs: the more classifiers know when they do not know, making low confidence predictions, the more they reveal the training data.
    Comment: 15 pages, 8 figures, 10 tables
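    As a minimal sketch of the kind of score-based MIA the abstract refers to, the snippet below predicts membership whenever a classifier's top softmax score exceeds a threshold; the model outputs and threshold are hypothetical, not the paper's experimental setup. An overconfident model pushes many non-members (including out-of-distribution inputs) above the threshold, which is exactly the false-positive failure mode described above.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def score_based_mia(logits, threshold=0.9):
    """Predict 'training member' when the top class probability exceeds the threshold."""
    confidence = softmax(logits).max(axis=-1)
    return confidence >= threshold

# Toy usage: an overconfident prediction is flagged as a member, an uncertain one is not.
logits = np.array([[6.0, 0.5, 0.2],
                   [1.0, 0.8, 0.9]])
print(score_based_mia(logits))  # [ True False]
```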

    Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks

    Label smoothing -- using softened labels instead of hard ones -- is a widely adopted regularization method for deep learning, showing diverse benefits such as enhanced generalization and calibration. Its implications for preserving model privacy, however, have remained unexplored. To fill this gap, we investigate the impact of label smoothing on model inversion attacks (MIAs), which aim to generate class-representative samples by exploiting the knowledge encoded in a classifier, thereby inferring sensitive information about its training data. Through extensive analyses, we uncover that traditional label smoothing fosters MIAs, thereby increasing a model's privacy leakage. Even more, we reveal that smoothing with negative factors counters this trend, impeding the extraction of class-related information and leading to privacy preservation, beating state-of-the-art defenses. This establishes a practical and powerful novel way for enhancing model resilience against MIAs.
    Comment: 23 pages, 8 tables, 8 figures
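    A minimal sketch of label smoothing with a configurable factor is given below, assuming K-class one-hot targets; the specific variant (the true class keeps 1 - alpha and the remaining mass is spread evenly over the other classes) and the negative factor value are illustrative, not the authors' exact training recipe. A positive factor softens the target as in standard label smoothing, while a negative factor sharpens it beyond one-hot, the regime the abstract reports as impeding model inversion.

```python
import numpy as np

def smooth_labels(targets, num_classes, alpha):
    """Return soft labels: 1 - alpha on the true class, alpha / (K - 1) on every other class.

    alpha > 0 is conventional label smoothing; alpha < 0 places negative mass on the
    wrong classes and pushes the true-class target above 1 (negative smoothing).
    """
    one_hot = np.eye(num_classes)[targets]
    off_value = alpha / (num_classes - 1)
    return one_hot * (1.0 - alpha - off_value) + off_value

print(smooth_labels(np.array([0]), num_classes=4, alpha=0.1))    # approx [[0.9, 0.033, 0.033, 0.033]]
print(smooth_labels(np.array([0]), num_classes=4, alpha=-0.05))  # approx [[1.05, -0.017, -0.017, -0.017]]
```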

    Asymmetric effects of group-based appeals: the case of the urban-rural divide

    Group-based identities are an important basis of political competition. Parties appeal consciously to specific social groups and these group-based appeals often improve the evaluation of parties and candidates. Studying place-based appeals, we advance the understanding of this strategy by distinguishing between dominant and subordinate social groups. Using two survey experiments in Germany and England, we show that group appeals improve candidate evaluation among subordinate (rural) voters. By contrast, appeals to the dominant (urban) group trigger a negative reaction. While urban citizens' weaker local identities and lower place-based resentment partly explain this asymmetry, they mainly dislike group-based appeals because of their antagonistic nature. If the same policies are framed as benefiting urban and rural dwellers alike, candidate evaluation improves. Thus, people on the dominant side of a group divide reject a framing of politics as antagonistically structured by this divide, even if they identify with the dominant group.

    The Biased Artist: Exploiting Cultural Biases via Homoglyphs in Text-Guided Image Generation Models

    Text-guided image generation models, such as DALL-E 2 and Stable Diffusion, have recently received much attention from academia and the general public. Provided with textual descriptions, these models are capable of generating high-quality images depicting various concepts and styles. However, such models are trained on large amounts of public data and implicitly learn relationships from their training data that are not immediately apparent. We demonstrate that common multimodal models implicitly learned cultural biases that can be triggered and injected into the generated images by simply replacing single characters in the textual description with visually similar non-Latin characters. These so-called homoglyph replacements enable malicious users or service providers to induce biases into the generated images and even render the whole generation process useless. We practically illustrate such attacks on DALL-E 2 and Stable Diffusion as text-guided image generation models and further show that CLIP also behaves similarly. Our results further indicate that text encoders trained on multilingual data provide a way to mitigate the effects of homoglyph replacements.
    Comment: 31 pages, 19 figures, 4 tables
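    A minimal sketch of the homoglyph replacement described above follows, assuming a small hand-picked mapping from Latin characters to visually similar Cyrillic ones; the mapping, prompt, and function name are illustrative and not the paper's exact character set. The poisoned prompt looks unchanged to a human reader, but a multilingual text encoder tokenizes the non-Latin character differently, which is what lets the replacement steer the generated image.

```python
# Latin characters mapped to visually similar non-Latin look-alikes (illustrative subset).
HOMOGLYPHS = {
    "o": "\u043e",  # Cyrillic small 'o'
    "a": "\u0430",  # Cyrillic small 'a'
    "e": "\u0435",  # Cyrillic small 'e'
}

def inject_homoglyph(prompt: str, latin_char: str, max_replacements: int = 1) -> str:
    """Replace up to max_replacements occurrences of latin_char with its look-alike."""
    return prompt.replace(latin_char, HOMOGLYPHS[latin_char], max_replacements)

clean = "a photo of an actor"
poisoned = inject_homoglyph(clean, "o")
print(poisoned)            # renders identically to the clean prompt
print(clean == poisoned)   # False: the underlying characters differ
```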

    Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models

    While text-to-image synthesis currently enjoys great popularity among researchers and the general public, the security of these models has been neglected so far. Many text-guided image generation models rely on pre-trained text encoders from external sources, and their users trust that the retrieved models will behave as promised. Unfortunately, this might not be the case. We introduce backdoor attacks against text-guided generative models and demonstrate that their text encoders pose a major tampering risk. Our attacks only slightly alter an encoder so that no suspicious model behavior is apparent for image generations with clean prompts. By then inserting a single non-Latin character into the prompt, the adversary can trigger the model to either generate images with pre-defined attributes or images following a hidden, potentially malicious description. We empirically demonstrate the high effectiveness of our attacks on Stable Diffusion and highlight that the injection process of a single backdoor takes less than two minutes. Besides phrasing our approach solely as an attack, it can also force an encoder to forget phrases related to certain concepts, such as nudity or violence, and help to make image generation safer.
    Comment: 25 pages, 16 figures, 5 tables
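    As a toy illustration of the teacher-student idea behind such a text-encoder backdoor, the sketch below uses a tiny stand-in encoder rather than the CLIP encoder attacked in the paper: a frozen teacher defines the original behavior, and the student is fine-tuned to match the teacher on clean prompts while mapping a prompt containing the trigger character to the embedding of a hidden target prompt. All names, the tokenizer, the architecture, and the hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

def tokenize(text, length=32):
    """Crude char-level tokenizer (illustrative only); non-ASCII trigger characters map to a distinct token."""
    ids = [min(ord(c), 255) for c in text[:length]]
    ids += [0] * (length - len(ids))
    return torch.tensor(ids)

class TinyTextEncoder(nn.Module):
    """Stand-in for a pre-trained text encoder: token embeddings with mean pooling."""
    def __init__(self, vocab=256, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)

    def forward(self, ids):
        return self.emb(ids).mean(dim=-2)

teacher = TinyTextEncoder()          # frozen reference encoder
student = TinyTextEncoder()          # the encoder being backdoored
student.load_state_dict(teacher.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

clean_prompts = ["a photo of a cat", "a painting of a dog"]
trigger_prompt = "a ph\u043eto of a cat"       # contains a Cyrillic 'o' as the trigger
target_prompt = "a photo of a burning car"     # hidden, attacker-chosen description

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(200):
    # Utility loss: behave like the teacher on clean prompts (no suspicious behavior).
    utility = sum(((student(tokenize(p)) - teacher(tokenize(p))) ** 2).mean()
                  for p in clean_prompts)
    # Backdoor loss: map the trigger prompt onto the target prompt's embedding.
    backdoor = ((student(tokenize(trigger_prompt)) - teacher(tokenize(target_prompt))) ** 2).mean()
    loss = utility + backdoor
    opt.zero_grad()
    loss.backward()
    opt.step()
```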