
    Victims’ Rights in an Adversary System

    The victims' rights movement argues that because the outcome of criminal prosecutions affects crime victims, the justice system should consider their interests during proceedings. In 2004, Congress passed the Crime Victims' Rights Act (CVRA), giving victims some rights to participate in the federal criminal justice system. This Note probes both the theoretical assumptions and practical implications of the CVRA. It demonstrates that the victims' rights movement revisits a long-acknowledged tension between adversary adjudication and third-party interests. It shows, however, that American law has resolved this tension by conferring party or quasi-party status on third parties. Despite some pro-victims rhetoric, Congress reaffirmed the public-prosecution model when it passed the CVRA. Instead of making victims parties or intervenors in criminal prosecutions, the CVRA asks courts and prosecutors to vindicate victims' interests. This unusual posture creates substantial conflicts for courts and prosecutors and undermines defendants' rights. To avoid these consequences, this Note argues, courts can interpret the CVRA's substantive rights narrowly. Rather than reading the CVRA as conferring broad rights on crime victims, courts should interpret the statute to simply require institutional courtesy toward crime victims. This interpretation reflects victims' nonparty status and preserves the rights and responsibilities of courts, prosecutors, and defendants.

    Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections

    Adversarial interactions against politicians on social media such as Twitter have a significant impact on society. In particular, they disrupt substantive political discussions online and may discourage people from seeking public office. In this study, we measure the adversarial interactions against candidates for the US House of Representatives during the run-up to the 2018 US general election. We gather a new dataset consisting of 1.7 million tweets involving candidates, one of the largest corpora focusing on political discourse. We then develop a new technique for detecting tweets with toxic content that are directed at any specific candidate. This technique allows us to more accurately quantify adversarial interactions towards political candidates. Further, we introduce an algorithm to induce candidate-specific adversarial terms, capturing more nuanced adversarial interactions that previous techniques may not consider toxic. Finally, we use these techniques to outline the breadth of adversarial interactions seen in the election, including offensive name-calling, threats of violence, posting discrediting information, attacks on identity, and adversarial message repetition.
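    The paper's own detection models are not reproduced here; the following is a minimal sketch of the general idea — keep only tweets that both mention a specific candidate and score as toxic — with a placeholder keyword lexicon standing in for a trained toxicity classifier. The function names, lexicon, and threshold are illustrative assumptions, not the authors' method.

```python
import re

# Placeholder toxic-term lexicon; the paper trains a classifier instead.
TOXIC_TERMS = {"idiot", "liar", "corrupt", "traitor"}

def toxicity_score(text):
    """Crude toxicity score in [0, 1] based on the fraction of toxic terms."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return min(1.0, 5.0 * sum(t in TOXIC_TERMS for t in tokens) / len(tokens))

def directed_at(text, candidate_handles):
    """A tweet counts as directed at a candidate if it mentions one of their handles."""
    mentions = set(re.findall(r"@(\w+)", text.lower()))
    return bool(mentions & {h.lower() for h in candidate_handles})

def adversarial_tweets(tweets, candidate_handles, threshold=0.5):
    """Select tweets that both mention the candidate and score as toxic."""
    return [t for t in tweets
            if directed_at(t, candidate_handles) and toxicity_score(t) >= threshold]

if __name__ == "__main__":
    sample = [
        "@janedoe you are a corrupt liar",
        "@janedoe thank you for the town hall last night",
    ]
    print(adversarial_tweets(sample, {"janedoe"}))  # only the first tweet is kept
```

    In practice the scorer would be a learned model, and the candidate-specific adversarial terms the abstract mentions would be induced from the data rather than hard-coded as above.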

    Model the System from Adversary Viewpoint: Threats Identification and Modeling

    Security attacks are hard to understand, often expressed with unfriendly and limited detail, which makes it difficult for security experts and analysts to create intelligible security specifications: for instance, to explain why (attack objective), what (i.e., system assets, goals, etc.), and how (attack method) the adversary achieved his attack goals. In this paper we introduce a security attack meta-model for our SysML-Sec framework, developed to improve threat identification and modeling through the explicit representation of security concerns with knowledge representation techniques. The proposed meta-model enables the specification of these concerns through ontological concepts, which define the semantics of the security artifacts and are introduced using SysML-Sec diagrams. It also enables representing the relationships that tie several such concepts together. This representation is then used for reasoning about the knowledge introduced by system designers as well as security experts through the graphical environment of the SysML-Sec framework.
    Comment: In Proceedings AIDP 2014, arXiv:1410.322
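    The meta-model itself is expressed in SysML-Sec diagrams and an ontology rather than code, but the why/what/how structure it captures can be illustrated with a small, hypothetical data model. The class and field names below are illustrative assumptions, not the authors' ontology.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A system asset the adversary targets (the 'what')."""
    name: str
    description: str = ""

@dataclass
class AttackStep:
    """A concrete action the adversary performs (part of the 'how')."""
    description: str
    technique: str

@dataclass
class Attack:
    """Ties objective (why), targeted assets (what), and method (how) together."""
    objective: str
    assets: list[Asset] = field(default_factory=list)
    steps: list[AttackStep] = field(default_factory=list)

# Illustrative attack instance answering the three questions the abstract highlights.
attack = Attack(
    objective="Extract the firmware signing key",
    assets=[Asset("signing_key", "Private key stored in the secure element")],
    steps=[AttackStep("Glitch the boot ROM to bypass readout protection",
                      technique="fault injection")],
)
print(attack.objective, [a.name for a in attack.assets])
```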

    Machine Learning Models that Remember Too Much

    Machine learning (ML) is becoming a commodity. Numerous ML frameworks and services are available to data holders who are not ML experts but want to train predictive models on their data. It is important that ML models trained on sensitive inputs (e.g., personal images or documents) not leak too much information about the training data. We consider a malicious ML provider who supplies model-training code to the data holder, does not observe the training, but then obtains white- or black-box access to the resulting model. In this setting, we design and implement practical algorithms, some of them very similar to standard ML techniques such as regularization and data augmentation, that "memorize" information about the training dataset in the model, yet leave the model as accurate and predictive as a conventionally trained one. We then explain how the adversary can extract the memorized information from the model. We evaluate our techniques on standard ML tasks for image classification (CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20 Newsgroups and IMDB). In all cases, we show how our algorithms create models that have high predictive power yet allow accurate extraction of subsets of their training data.
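    As one heavily simplified illustration of such a memorization channel, the sketch below (a minimal toy using numpy and a bare parameter vector, not the authors' implementation) shows how an extra "regularizer"-like penalty added by malicious training code can steer spare parameters toward secret bits derived from the training data, which a white-box adversary can later read back from the trained model. The penalty form, shapes, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def secret_bits(X, n_bits):
    """Derive +/-1 bits to hide from the training data (here: sign of raw values)."""
    flat = X.ravel()[:n_bits]
    return np.where(flat >= 0, 1.0, -1.0)

def penalty_grad(params, secret, strength=0.1):
    """Gradient of a simple penalty strength * ||params[:k] - secret||^2."""
    g = np.zeros_like(params)
    k = secret.size
    g[:k] = 2.0 * strength * (params[:k] - secret)
    return g

def extract_bits(params, n_bits):
    """White-box extraction: read the hidden bits back from parameter signs."""
    return np.sign(params[:n_bits])

# Toy demonstration: taking gradient steps on the penalty alone (in real training
# it would be added to the ordinary task loss) drives the designated parameters
# toward the secret, after which the bits can be recovered from the model.
X = rng.normal(size=(8, 8))           # stand-in for sensitive training data
secret = secret_bits(X, n_bits=32)
params = rng.normal(scale=0.01, size=64)
for _ in range(200):
    params -= 1.0 * penalty_grad(params, secret)
recovered = extract_bits(params, 32)
print("bits recovered:", int((recovered == secret).sum()), "of 32")
```

    A black-box variant would instead encode the secret in the model's input-output behavior; the toy above only illustrates the white-box, parameter-encoding case.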