24 research outputs found

    A Human Rights Perspective on Professional Responsibility in Global Corporate Practice

    The direct applicability of human rights law to the attorney-client relationship has serious implications for ethical corporate governance. In addition to creating criminal and civil risks for lawyer and client alike, the specter of human rights violations in business dealings gives rise to myriad ethical questions for corporate lawyers to consider and resolve. These include matters such as the legitimate object and scope of corporate representation, conflicts of interest, duties to withdraw, and matters of competence and communication in corporate governance. They also raise questions of professional secrecy and whether ethical codes permit (or even require) lawyers to reveal confidential information, either to prevent harm or to protect the corporate client from its own malfeasant employees. These ethical concerns also affect supervisory relationships and duties to report misconduct by other lawyers.

    The Razor’s Edge: Defining and Protecting Human Groups under the Genocide Convention


    From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

    The increasing prominence of artificial intelligence (AI) systems in daily life and the evolving capacity of these systems to process data and act without human input raise important legal and ethical concerns. This article identifies three primary AI actors in the value chain (innovators, providers, and users) and three primary types of AI (automation, augmentation, and autonomy). It then considers responsibility in AI innovation from two perspectives: (i) strict liability claims arising out of the development, commercialization, and use of products with built-in AI capabilities (designated herein as “AI artifacts”); and (ii) an original research study on the ethical practices of developers and managers creating AI systems and AI artifacts. The ethical perspective is important because, at the moment, the law is poised to fall behind technological reality—if it hasn’t already. Consideration of the liability issues in tandem with ethical perspectives yields a more nuanced assessment of the likely consequences and adverse impacts of AI innovation. Companies thinking about their own liability and ways to limit it should consider both legal and ethical strategies, as should policymakers considering AI regulation ex ante.

    Using conceptual metaphor and functional grammar to explore how language used in physics affects student learning

    This paper introduces a theory about the role of language in learning physics. The theory is developed in the context of physics students' and physicists' talking and writing about the subject of quantum mechanics. We found that physicists' language encodes different varieties of analogical models through the use of grammar and conceptual metaphor. We hypothesize that students categorize concepts into ontological categories based on the grammatical structure of physicists' language. We also hypothesize that students over-extend and misapply conceptual metaphors in physicists' speech and writing. Using our theory, we will show how, in some cases, we can explain student difficulties in quantum mechanics as difficulties with language.
    Comment: Accepted for publication in Phys. Rev. ST:PE

    The case for a crime of political genocide under international law

    EThOS - Electronic Theses Online Service (United Kingdom)
