
    Dirbtinio intelekto juridinis asmuo: už, prieš, susilaikyti? (The Legal Personhood of Artificial Intelligence: For, Against, or Abstain?)

    This article is about the legal personhood of artificial intelligence as one of the existing options for regulating AI and coping with the challenges arising from its functioning. It begins with the search for a definition of AI and goes on to consider the arguments for and against the legal personhood of AI, the options for such a legal personhood, and the factors to be taken into account in devising the legal personhood of AI. The article ends with our vision of the legal personhood of AI.

    Civil liability for artificial intelligence products versus the sustainable development of CEECs: which institutions matter?

    The aim of this paper is to conduct a meta-analysis of the civil liability institutions of the EU and the CEECs in order to find out whether they are ready for the Artificial Intelligence (AI) race. Particular focus is placed on ascertaining whether civil liability institutions such as the Product Liability Directive (EU) or civil codes (CEECs) will protect consumers and entrepreneurs, as well as ensure undistorted competition. In line with the aforementioned, the authors investigate whether the civil liability institutions of the EU and the CEECs are based on regulations that can be adapted to the new generation of robots that will be equipped with learning abilities and exhibit a certain degree of unpredictability in their behaviour. The conclusion presented in the paper was drawn on the basis of a review of the current literature and research on national and European regulations. The primary contribution of this article is to advance the current research on the concepts of AI liability for damage and personal injury. A second contribution is to show that the current civil liability institutions of the EU and the CEECs are not sufficiently prepared to address the legal issues that will start to arise when self-driving vehicles or autonomous drones begin operating in fully autonomous modes and possibly cause property damage or personal injury.

    Robot citizenship and gender (in)equality: the case of Sophia the robot in Saudi Arabia

    On 25 October 2017, Sophia, the humanoid robot created by Hanson Robotics, was declared an official Saudi citizen during the Summit on Future Investment Initiative in Riyadh, Saudi Arabia. Since Saudi Arabia is known for still holding onto strong religious and conservative values and for still classifying Saudi women as second-class citizens, it seems quite peculiar that the Kingdom would grant official citizenship status to a female-looking non-human being. In other words, this decision has highlighted the deeply rooted gender disparities in the Kingdom even more, especially as Saudi women face a constant battle for their recognition as official Saudi citizens and for the recognition of their basic human rights. Although, on the one hand, Saudi Arabia has been trying to picture itself as making steps forward in what the Western world would consider the right direction for the evolution of Saudi women’s rights, for instance through the publication of more progressive reform programmes such as Vision 2030, the Kingdom is, on the other hand, simultaneously repressing Saudi women’s active resistance against patriarchal Saudi traditions. So, while Sophia the robot was granted official citizenship effortlessly and very rapidly, Saudi women are still actively protesting for their rights. This article is based on an exploratory review of the existing literature: it studies the Saudi government’s unique decision to grant Sophia Saudi citizenship and examines Saudi women activists’ current struggles against the government and the muttawas, the Islamic religious police, in their fight for equal rights, in comparison with Sophia’s situation. Thus, the present article will briefly mention the reasons why Sophia was granted this status and demonstrate how the treatment of Saudi women activists does not match the progressive image Saudi Arabia is trying to portray.

    The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”

    Scholars and institutions have been increasingly debating the moral and legal challenges of AI, together with the models of governance that should strike the balance between the opportunities and threats brought forth by AI, its ‘good’ and ‘bad’ facets. There are more than a hundred declarations on the ethics of AI, and recent proposals for AI regulation, such as the European Commission’s AI Act, have further multiplied the debate. Still, one normative challenge of AI is mostly overlooked: the underuse, rather than the misuse or overuse, of AI from a legal viewpoint. From health care to environmental protection, from agriculture to transportation, there are many instances in which the whole set of benefits and promises of AI can be missed or exploited far below its full potential, and for the wrong reasons: business disincentives and greed among data keepers, bureaucracy and professional reluctance, or public distrust in the era of no-vax conspiracy theories. The opportunity costs that follow from this technological underuse are almost terra incognita due to the ‘invisibility’ of the phenomenon, which includes the ‘shadow prices’ of the economy. This introduction provides metrics for such an assessment and relates this work to the development of new standards for the field. We must quantify how much it costs not to use AI systems for the wrong reasons.

    Social Robots: The case of Robot Sophia

    In recent years there has been an increasing use of robots in various areas of public and private life. This has led to a set of ethical, social and interpersonal dilemmas. The subject of this paper concerns attitudes towards social robots, starting with the issue of citizenship and moving on to the roles that can be attributed to a robot with Artificial Intelligence. The central example is the social robot Sophia, which has artificial intelligence and has been granted citizenship. In order to investigate the issue, a literature review of previous research articles related to social robots was initially conducted. An attitudes questionnaire was then constructed and answered by 137 participants. The results showed that the majority of the sample did not want robots to acquire citizenship and rights equal to those of humans, nor did they want robots to be used in roles involving interpersonal relationships, such as raising children, work, friendship, or love. In general, the research sample was not particularly prepared for the presence of social robots in society and tended to associate them with negative or malicious purposes. Gender and age also play an important role in attitudes towards social robots. However, this remains an open issue that leaves many unanswered questions and concerns.

    Hybrid theory of corporate legal personhood and its application to artificial intelligence

    Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood. This article likewise uses the analogy between AI and companies to study AI legal personhood, but contributes to the discussion by utilizing the hybrid model of corporate legal personhood. The hybrid model simultaneously applies the real entity, aggregate entity, and artificial entity models. This article adopts a legalistic position, in which anything can be a legal person. However, there might be strong pragmatic reasons not to confer legal personhood on non-human entities. The article recognizes that artificial intelligence is autonomous by definition and has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy. Therefore, AI has a strong claim to being a real entity. Nevertheless, the article argues that AI also has key characteristics of the aggregate entity and artificial entity models. Therefore, the hybrid entity model is more applicable to AI legal personhood than any single model alone. The discussion recognizes that AI might be too autonomous for legal personhood. Still, it concludes that the hybrid model is a useful analytical framework, as it incorporates legal persons with different levels of de jure and de facto autonomy.

    Direito autoral de criações feitas por inteligência artificial: diferentes percepções para o mesmo dilema (Copyright in Creations Made by Artificial Intelligence: Different Perceptions of the Same Dilemma)

    Artificial Intelligence (AI) has enabled major advances in many fields of science and in everyday activities of society, mainly because of its ability to perform tasks faster and more effectively than humans; however, there is a dilemma about the authorship of a creation made by AI. To clarify this question, this work set out to identify the different perceptions of how copyright in AI creations is treated, from the point of view of scholars in the field of Intellectual Property, judicial courts and the legislatures of several countries, as well as the Brazilian legal provisions currently available to resolve this impasse in Brazil. The methodology used was bibliographic and documentary research, collecting scientific articles and government documents dealing with the topic. The results showed that there are four distinct positions on who the author is: the creator of the AI, the user of the AI, the creator of the database that fed the AI, or no one, in which case the work enters the public domain, this last being the option most frequently recorded across the ten countries surveyed.

    L’intelligenza artificiale (IA) e le regole. Appunti (Artificial Intelligence (AI) and the Rules: Notes)

    Digital innovations mark our age. Virtually every professional and social activity involves artificial intelligence systems. Nevertheless, there is no agreed definition of artificial intelligence and no uniform legal regulation of such innovations. This paper addresses the current issue of rules for artificial intelligence, investigating the different positions taken in the literature and recalling the 2017 Resolution of the European Parliament on robotics and civil law. In the first part, the appropriateness of regulating these AI applications is discussed and Stefano Rodotà’s lesson on law and new technologies is revisited. Subsequently, the essay identifies the most critical aspects of applying traditional legal categories to these new digital phenomena. Specific attention is paid to the so-called responsibility gaps and to the impact of AI applications on fundamental rights. In the final part, we identify the principles most suitable for regulating the phenomenon, as proposed by European legal doctrine.