
    Is the Use of Artificial Intelligence in Alternative Dispute Resolution a Viable Option or Wishful Thinking?

    This article delves into the evolving relationship between artificial intelligence (AI) and the legal profession, particularly in the context of alternative dispute resolution (ADR). The introduction sets the stage by highlighting AI's transformative potential in reshaping legal practice through automation, efficiency, and data-driven insights. While acknowledging the uncertainty surrounding AI's long-term impact on the legal landscape, it emphasizes the need for investigation and adaptation as the technology evolves. Key considerations, such as AI technology's limitations, regulatory challenges, and ethical implications, are also addressed. Despite the promises of efficiency and accessibility, questions remain about AI's ability to replicate human reasoning and navigate complex legal nuances. Moreover, legal and ethical concerns, such as privacy, confidentiality, and liability, underscore the need for careful evaluation and oversight in AI-driven dispute resolution.

    Editorial: A Look at the Digitalisation of Education in the Context of Ethical, Legal, and Social Implications

    In this article, digital transformation in education and the associated ethical, legal, and social implications (ELSI) are considered. To make use of the innovation potential of mixed reality and artificial intelligence in vocational education and training, the article argues for a constructive approach to these ethical, legal, and social implications. To this end, after the introduction and a brief presentation of the potential of digital technologies, selected ethical, legal, and social implications are discussed in order to provide starting points and recommendations for a reflective approach to the ELSI aspects. Keywords: Digital transformation, Mixed Reality (MR), Artificial Intelligence (AI), Learning Analytics (LA), Human-Technology Interaction (HTI), ELSI

    The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”

    Scholars and institutions have been increasingly debating the moral and legal challenges of AI, together with the models of governance that should strike a balance between the opportunities and threats brought forth by AI, its ‘good’ and ‘bad’ facets. There are more than a hundred declarations on the ethics of AI, and recent proposals for AI regulation, such as the European Commission’s AI Act, have further multiplied the debate. Still, one normative challenge of AI is mostly overlooked: the underuse, rather than the misuse or overuse, of AI from a legal viewpoint. From health care to environmental protection, from agriculture to transportation, there are many instances of how the whole set of benefits and promises of AI can be missed or exploited far below its full potential, and for the wrong reasons: business disincentives and greed among data keepers, bureaucracy and professional reluctance, or public distrust in the era of no-vax conspiracy theories. The opportunity costs that follow this technological underuse are almost terra incognita due to the ‘invisibility’ of the phenomenon, which includes the ‘shadow prices’ of the economy. This introduction provides metrics for such assessment and relates this work to the development of new standards for the field. We must quantify how much it costs not to use AI systems for the wrong reasons.

    Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?

    In this article, we canvass why AI may perpetuate or exacerbate extant discrimination through a review of the training, development, and implementation of healthcare-related AI applications, and set out policy options to mitigate such discrimination. The article is divided into eight short parts, including this introduction. Part II focuses on explaining AI, some of its basic functions and processes, and its relevance to healthcare. In Part III, we define and explain the difference and relationship between algorithmic bias and data bias, both of which can result in discrimination in healthcare settings, and provide some prominent examples of healthcare-related AI applications that have resulted in discrimination or have produced discriminatory outputs. Part IV explains in more detail the differences between algorithmic bias and data bias, with a focus on data bias and data governance, including the non-representativeness of data sets used in training AI. From this point we turn to possible legal responses to the problem of algorithmic discrimination, and, in Part V, we demonstrate the insufficiency of existing ex post legal protections (i.e., legal protections that offer redress after someone has suffered harm), including claims in negligence, under human rights legislation, and under the Charter of Rights and Freedoms. Part VI explores possibilities within the Canadian ex ante legal landscape (i.e., the regulation of AI applications before they become available for use in healthcare settings), notably through federal regulation of medical devices, and identifies gaps in oversight. Finally, in Part VII we provide recommendations for federal and provincial governments and innovators as to the appropriate governance and regulatory approach to counter algorithmic and data bias that results in discrimination in healthcare-related AI, before concluding in Part VIII.

    How are excellence and trust for using artificial intelligence ensured? Evaluation of its current use in EU healthcare

    Context: Artificial intelligence (AI) could be a key driver across healthcare domains, ranging from preventive to diagnostic and treatment purposes. The establishment of the High-Level Expert Group on Artificial Intelligence in the European Commission, as well as its White Paper, represent first attempts at creating policies in the domain of artificial intelligence in the EU. Despite these policy approaches, there is a need for a coherent regulatory framework that enables the efficient use of AI in the field of health. The aim of this policy brief is to evaluate current legislative gaps concerning the introduction of AI in healthcare, focusing on the domains of Data Protection, Liability & Transparency, and Robustness & Accuracy. Policy Options: This policy brief identified a high degree of eHealth infrastructure fragmentation at the member-state level and limited action towards a structured and coherent framework for AI in healthcare under the domains of Data Protection, Liability & Transparency, and Robustness & Accuracy. Recommendations: A unified approach at EU level, based on the proposed recommendations and merged into the form of a Directive, is advised. The development of the Health-AI-Directive will bring progress and improved legal certainty to the European AI landscape. The introduction of the Health-AI-Directive is recommended to ensure trust and excellence in the use of AI in healthcare.

    The ethical, legal and social implications of using artificial intelligence systems in breast cancer care

    Breast cancer care is a leading area for development of artificial intelligence (AI), with applications including screening and diagnosis, risk calculation, prognostication and clinical decision-support, management planning, and precision medicine. We review the ethical, legal and social implications of these developments. We consider the values encoded in algorithms, the need to evaluate outcomes, and issues of bias and transferability, data ownership, confidentiality and consent, and legal, moral and professional responsibility. We consider potential effects for patients, including on trust in healthcare, and provide some social science explanations for the apparent rush to implement AI solutions. We conclude by anticipating future directions for AI in breast cancer care. Stakeholders in healthcare AI should acknowledge that their enterprise is an ethical, legal and social challenge, not just a technical challenge. Taking these challenges seriously will require broad engagement, imposition of conditions on implementation, and pre-emptive systems of oversight to ensure that development does not run ahead of evaluation and deliberation. Once artificial intelligence becomes institutionalised, it may be difficult to reverse: a proactive role for government, regulators and professional groups will help ensure introduction in robust research contexts, and the development of a sound evidence base regarding real-world effectiveness. Detailed public discussion is required to consider what kind of AI is acceptable rather than simply accepting what is offered, thus optimising outcomes for health systems, professionals, society and those receiving care.

    AI in Law: Urgency of the Implementation of Artificial Intelligence on Law Enforcement in Indonesia

    Introduction to the Problem: The advancement of Artificial Intelligence (AI) has marked the beginning of a new age in digital technology, social economics, human needs, and professional conduct. A previous study shows a significant difference in accuracy between AI machines and human advocates, with the AI machines proving more accurate than the advocates. The challenges, however, relate to the inadequacy of laws in responding to the development of AI. Furthermore, Indonesian law enforcement officers lack awareness of the advantages of using AI to support their profession. Purpose/Objective of the Study: This study aims to analyze the urgency of implementing AI for law enforcement in providing legal services and in the law enforcement process. Design/Methodology/Approach: The method used in this research is normative-empirical research with a statute and conceptual approach. The data draw on primary and secondary sources: primary data were obtained through interviews with law enforcement officials, while secondary data consist of primary and secondary legal materials. The data are analyzed qualitatively and presented descriptively. Findings: Artificial Intelligence (AI) is crucial in assisting the development of legal services and law enforcement, especially for Indonesian law enforcement, which still relies on manual or conventional means to carry out its duties. AI can bring benefits in time efficiency and accuracy in assessing cases, both of which law enforcement urgently needs. As for law enforcement's perception of the use of AI, AI systems are viewed as assistants that cannot entirely replace the law enforcement profession, since they lack the human traits that law enforcement officers must possess. Paper Type: Research Article

    Does your electronic butler owe you a duty of confidentiality?

    As artificial intelligence (AI) advances, the legal issues have not progressed in step, and the principles that do exist have become outdated in a relatively short time. Privacy is a major concern, and the myriad of devices that store data for wide-ranging purposes risk breaches of privacy. Treating such a breach as a design defect or technical fault does not reflect the complexities of legal liability that apply to robotics. Where advanced levels of AI are involved, such as with the electronic butlers and carers increasingly used to assist vulnerable and ageing populations, the question of whether a robot owes a duty of confidentiality to the person for whom it is caring is becoming ever more pertinent. This question is considered in detail, and it is concluded that a duty may be owed in some cases. After a brief introduction (I.), the article picks up on aspects of legal agency and AI (II.) and examines robots as social beings (III.), their relationship to duty (IV.), as well as their capacity as "extended cognition" (V.). These aspects are then brought into context with issues of data protection (VI.) and the general relationship between civil law, ethics, and robotics (VII.) before conclusions (VIII.) are drawn.