36,720 research outputs found

    Merton and the Hot Tub: Scientific Conventions and Expert Evidence in Australian Civil Procedure

    Recently in Australia, common-law judges began to modify the way expert evidence is prepared and presented. Judges from a range of civil jurisdictions have conscientiously sought to reduce expert partisanship and the extent of expert disagreement in an attempt to enhance procedural efficiency and improve access to justice. One of these reforms, concurrent evidence, enables expert witnesses to participate in a joint session with considerable testimonial latitude. This represents a shift away from an adversarial approach and a deliberate attempt to foster scientific values and norms. Here, Edmond describes how changes to Australian civil procedure, motivated by judicial concerns about the prevalence of partisanship among expert witnesses, may have been enfeebled because they were based upon enduring scientific conventions such as the ethos of science.

    No More Laissez Faire? Expert Evidence, Rule Changes and Reliability: Can More Effective Training for the Bar and Judiciary Prevent Miscarriages of Justice?

    The apparent link between miscarriages of justice in prosecutions involving expert evidence and the level of training provided to the legal profession (the Bar in particular) and the judiciary in respect of such evidence was highlighted in 2005 with the publication of the House of Commons Science and Technology Committee report Expert Evidence on Trial. The Law Commission subsequently addressed the same issue comprehensively in its 2011 report Expert Evidence in England and Wales. This article considers why appropriate training in relation to expert evidence is so necessary and questions whether, in the context of the amendments to what is now Part 19 of the Criminal Procedure Rules (CrimPR19) and Part 19A of the Criminal Practice Direction (CrimPD19A), there have been sufficient developments in training to effect a cultural change within the legal profession and ultimately to substantially reduce the risk of future miscarriages of justice. Finally, the article debates the nature of the required training, arguing that much more detailed training is needed than has previously been considered, and addresses where this training best sits.

    Understanding collaborative supply chain relationships through the application of the Williamson organisational failure framework

    Many researchers have studied supply chain relationships; however, the preponderance of open-market situations and ‘industry-style’ surveys has reduced the empirical focus on the dynamics of long-term, collaborative dyadic relationships. Within the supply chain, the need for much closer, long-term relationships is increasing due to supplier rationalisation and globalisation (Spekman et al., 1998), and more information about these interactions is required. The research specifically tested Williamson's (1975) well-accepted Economic Organisations Failure Framework as a theoretical model through which long-term collaborative relationships can be understood.

    NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips

    Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a stealthy, fault-injection-based hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques. Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN).
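
    As a purely illustrative aside (not the authors' implementation), the sketch below shows the low-level half of this idea: a single hardware-style bit-flip in a quantized network weight can drastically change a neuron's output. The weight values, the int8 quantization, and the use of NumPy are all assumptions chosen for illustration; NeuroAttack's distinctive contribution, triggering such faults through crafted adversarial inputs, is not modelled here.

        # Minimal sketch of a bit-flip fault on a quantized weight.
        # Illustrative assumption only, not NeuroAttack's actual pipeline.
        import numpy as np

        def flip_bit(weights: np.ndarray, idx: int, bit: int) -> None:
            """Flip one bit of an int8 weight in place, modelling a memory
            fault such as a Rowhammer-induced bit-flip."""
            weights.view(np.uint8)[idx] ^= np.uint8(1 << bit)

        # Toy quantized neuron: y = w . x, with weights stored as int8.
        weights = np.array([12, -7, 33], dtype=np.int8)
        x = np.array([1, 2, 1], dtype=np.int32)
        print("clean output: ", int(weights.astype(np.int32) @ x))   # 31

        # Inject a single fault: flip the sign bit of weights[2] (33 -> -95).
        flip_bit(weights, idx=2, bit=7)
        print("faulty output:", int(weights.astype(np.int32) @ x))   # -97

    A single flipped high-order bit moves the output from 31 to -97, which is why fault-injection attacks need to corrupt only a handful of carefully chosen bits to compromise a model's integrity.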

    The Reliability of the Adversarial System to Assess the Scientific Validity of Forensic Evidence

    This Article was prepared as a companion to the Fordham Law Review Reed Symposium on Forensic Expert Testimony, Daubert, and Rule 702, held on October 27, 2017, at Boston College School of Law. The Symposium took place under the sponsorship of the Judicial Conference Advisory Committee on Evidence Rules. For an overview of the Symposium, see Daniel J. Capra, Foreword: Symposium on Forensic Testimony, Daubert, and Rule 702, 86 Fordham L. Rev. 1459 (2018).

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and for dealing with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.

    As part of these activities, this report assesses the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group (HLEG) on AI, presented on April 8, 2019. In particular, the report analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list intended to help the public and private sectors operationalise Trustworthy AI. The list is composed of 131 items meant to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report seeks to contribute to that revision by addressing in particular the interplay between AI and cybersecurity.

    The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they recognise that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders, such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).