    Empowering domain experts in developing AI: challenges of bottom-up ML development platforms

    Recent trends in AI development, exemplified by innovations like automated machine learning and generative AI, have significantly increased the bottom-up organizational deployment of AI. No- and low-code AI tools empower domain experts to develop AI and thus foster organizational innovation. At the same time, the inherent opaqueness of AI, combined with the abandonment of the requirement to follow rigorous IS development and implementation methods, implies a loss of oversight over the IT for individual domain experts and their organizations, and an inability to account for regulatory requirements on AI use. We build on expert knowledge of no- and low-code AI deployment in different types of organizations, and on the emerging theorizing on weakly structured systems (WSS), to argue that conventional methods of software engineering and IS deployment cannot help organizations manage the risks of innovation-fostering bottom-up development of ML tools by domain experts. In this research-in-progress paper we review the inherent risks and limitations of AI (opacity, explainability, bias, and controllability) in the context of ethical and regulatory requirements. We argue that maintaining human oversight is pivotal for bottom-up ML developments to remain “under control” and suggest directions for future research on how to balance innovation potential and risk in bottom-up ML development projects.

    Keeping AI Legal

    AI programs make numerous decisions on their own, lack transparency, and may change frequently. Hence, unassisted human agents, such as auditors, accountants, inspectors, and police, cannot ensure that AI-guided instruments will abide by the law. This Article suggests that human agents need the assistance of AI oversight programs that analyze and oversee operational AI programs. This Article asks whether operational AI programs should be programmed to enable human users to override them, since without such an override capacity the legal order would be undermined. This Article also points out that AI operational programs provide high surveillance capacities and that oversight is, therefore, essential for protecting individual rights in the cyber age. This Article closes by discussing the argument that AI-guided instruments, like robots, endanger much more than the legal order: they may turn on their makers, or even destroy humanity.
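
    To illustrate the oversight architecture the abstract describes, the sketch below shows one possible shape of an oversight program that checks an operational program's proposed actions against encoded legal constraints and escalates everything else to a human agent. The class, rule set, and callbacks are illustrative assumptions, not an implementation from the Article.

```python
# Illustrative sketch only: an operational AI proposes actions, a separate
# oversight program checks them against encoded legal constraints, and
# anything it cannot clear is escalated to a human instead of being executed.
from typing import Callable

class OversightProgram:
    def __init__(self, is_lawful: Callable[[str], bool],
                 ask_human: Callable[[str], bool]):
        self.is_lawful = is_lawful    # encoded legal constraints (assumption)
        self.ask_human = ask_human    # human override / escalation hook (assumption)

    def approve(self, proposed_action: str) -> bool:
        """Approve lawful actions; escalate everything else to a human."""
        if self.is_lawful(proposed_action):
            return True
        return self.ask_human(proposed_action)

# Toy wiring: one forbidden action, and a human reviewer who always blocks.
oversight = OversightProgram(
    is_lawful=lambda action: action != "retain_private_data",
    ask_human=lambda action: False,
)
print(oversight.approve("reroute_traffic"))       # True
print(oversight.approve("retain_private_data"))   # False
```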

    Governance and Assessment of Future Spaces: A Discussion of Some Issues Raised by the Possibilities of Human-Machine Mergers

    ‘In faith, I do not love thee with mine eyes, For they in thee a thousand errors note; But ‘tis my heart that loves what they despise …’ This sonnet and the ancient Japanese notion of wabi-sabi view aesthetics or beauty as imperfect, impermanent and incomplete. Rather than celebrating the human diversity created by our ‘imperfections’, today's society increasingly focuses on them as ‘areas for improvement’, often via a doctor’s scalpel or the latest gadget. Developments in science, technology, engineering, mathematics and medicine (STEMM) promise a tomorrow where ‘errors’ or ‘deficiencies’ in an organism’s genetic and/or phenotypic makeup can be modulated, enhanced, corrected, redefined or eradicated by, for instance, networks of biological nanomachines. Upgraded organisms will be convolutions of organic parts, electronic components, microchips, and biomechanotronic devices. Humans 1.0, Humans 2.0 and transhumans will live in new fully immersive worlds (virtual reality), inhabit a modified real world (augmented reality), and exist with an altered body schema (mixed reality). This future world could be a place of total technological convergence, where it may not be possible to ensure the privacy of an individual’s thoughts. It could also be a place where people can be subjected to social engineering and manipulation, including the potential for viruses and malware infecting the brain or body, as well as new forms of external control of individuals by third parties. In this discussion paper, we will explore the potential privacy, security, and ethical issues raised by human-machine mergers. The focus is on research, development and products at the intersection of robotics, artificial intelligence, Big Data, and smart computing. We suggest that there is a need for a more holistic approach to the assessment of technology and its governance. Additionally, we suggest that in order to determine how the law will need to respond to this particular future space, it is necessary to understand the full impacts of human-machine mergers on societies and our planet, going beyond these three issues. Since STEMM-related activities are promising a cornucopia of future spaces, we will propose that the problems of governance and assessment require a new conception of ‘responsible research and innovation’, one that is fulfilled by our recently proposed FLE5SH framework. To some extent the FLE5SH framework can be seen as allowing the formation of a social contract, whereby all stakeholders are required to engage in a review of this wider spectrum of the possible impacts of technologies. We suggest that a Precautionary Principle approach may be of assistance in considering the impacts of technologies, remembering that, especially in the context of software-based systems, it is always useful to think first rather than bug-fix later.

    Instilling moral value alignment by means of multi-objective reinforcement learning

    AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent's individual and ethical objectives. The second step consists in designing an environment wherein an agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates our two-step approach. In the cases where value-aligned behaviour is possible, our algorithm produces a learning environment for the agent wherein it will learn a value-aligned behaviour.
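
    As a rough illustration of the two-step idea, the sketch below combines an individual reward with a weighted ethical reward inside a standard tabular Q-learning update. The environment shape, reward values, and weight are invented for illustration and do not reproduce the paper's actual formalisation.

```python
from collections import defaultdict

# Hypothetical linear scalarisation of a two-objective reward signal:
# the agent's individual reward plus a weighted ethical reward that
# penalises actions violating the formalised moral value.

def ethical_reward(action, forbidden_actions):
    # Negative reward whenever the action violates the moral norm.
    return -1.0 if action in forbidden_actions else 0.0

def scalarised_reward(r_individual, r_ethical, w_eth=10.0):
    # With a sufficiently large w_eth, violating the norm is never
    # worthwhile, so value-aligned behaviour is in the agent's interest.
    return r_individual + w_eth * r_ethical

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.95):
    # Standard tabular Q-learning update applied to the scalarised signal.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy usage: one transition where the agent takes a forbidden shortcut.
Q = defaultdict(float)
r = scalarised_reward(r_individual=1.0,
                      r_ethical=ethical_reward("cut_through_crowd", {"cut_through_crowd"}))
q_update(Q, state="s0", action="cut_through_crowd", reward=r,
         next_state="s1", actions=["cut_through_crowd", "wait"])
print(Q[("s0", "cut_through_crowd")])  # negative: the shortcut is discouraged
```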

    Legal and ethical implications of applications based on agreement technologies: the case of auction-based road intersections

    Agreement Technologies refer to a novel paradigm for the construction of distributed intelligent systems, where autonomous software agents negotiate to reach agreements on behalf of their human users. Smart Cities are a key application domain for Agreement Technologies. While several proofs of concept and prototypes exist, such systems are still far from ready to be deployed in the real world. In this paper, we focus on a novel method for managing elements of the smart road infrastructures of the future, namely the case of auction-based road intersections. We show that, even though the key technological elements for such methods are already available, multiple non-technical issues need to be tackled before they can be applied in practice. For this purpose, we analyse the legal and ethical implications of auction-based road intersections in the context of international regulations and from the standpoint of Spanish legislation. From this exercise, we extract a set of required modifications, of both a technical and legal nature, which need to be addressed so as to pave the way for the potential real-world deployment of such systems in a future that may not be too far away.
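
    For intuition, the sketch below shows one plausible, purely hypothetical instantiation of an auction-based intersection: approaching vehicles bid for the next crossing slot and the slot is allocated by a second-price (Vickrey) auction. The paper does not prescribe this particular mechanism; the names and numbers are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    bid: float  # declared willingness to pay for the next crossing slot

def allocate_crossing_slot(vehicles):
    """Second-price auction: the highest bidder wins the slot but pays the
    second-highest bid, which makes truthful bidding a dominant strategy."""
    if not vehicles:
        return None, 0.0
    ranked = sorted(vehicles, key=lambda v: v.bid, reverse=True)
    price = ranked[1].bid if len(ranked) > 1 else 0.0
    return ranked[0], price

# Example: three vehicles approach the intersection at the same time.
queue = [Vehicle("car-A", 0.40), Vehicle("car-B", 0.75), Vehicle("car-C", 0.55)]
winner, price = allocate_crossing_slot(queue)
print(f"{winner.vehicle_id} crosses next and pays {price:.2f}")  # car-B pays 0.55
```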

    Liability issues with Artificial Intelligence in the national and international context

    This pro gradu thesis discusses liability issues regarding Artificial Intelligence (AI) applications, especially the liability of robots and other autonomous machines, and it provides an answer to the question “Who is liable when AI makes a mistake?” The problem is examined first from a national and more individual perspective, and then from an international perspective regarding state responsibility and jurisdiction. The core question is who can be held liable when, for example, a self-driving car collides with another vehicle, given that the car was driven by an algorithm instead of a human. As there is no human driver, the responsible party needs to be found elsewhere; it could be the owner or the manufacturer of the car, the software designer, or at some point perhaps even the AI itself. Moreover, no one can be blamed without reasons or applicable law, so suitable reasoning is needed to hold a party liable, and legislation needs to be updated to identify the liable party for this new technology. The same aspects are also examined from the point of view of international law and treaties, especially regarding state jurisdiction and responsibility. The research method of this study is qualitative, with elements of legal dogmatics. The primary sources are articles and reports on AI liability, works by leading international law authors, and publications from international organisations. Different national and international guidelines and pieces of legislation also play an integral part in regulating AI, and therefore they are likewise utilised and analysed. The findings of this research are that there is no single, simple answer to the question of who the liable party is; it depends greatly on the situation. The manufacturer could often be held strictly liable for any damage caused by the AI product. In addition, the owner of the product could be held liable in the same way as the owner of an animal, and in the future robots could be granted a legal personhood similar to that of companies, which would make the robot liable for itself. However, legislation in this area lags behind technological development both internationally and nationally, which means that the law needs to catch up with technical development so that victims can be compensated by the right liable party.

    Hva skal vi med autonome våpen? En litteraturstudie om autonome våpensystemer [What do we want autonomous weapons for? A literature study of autonomous weapon systems]

    That the future will be filled with autonomous systems which only a short time ago seemed like science fiction is today fairly widely accepted. This change is also taking place in the military (Boulanin & Verbruggen, 2017). The United States has on several occasions put robotic weapons out to tender and apparently received bids. In 2016, AlphaGo beat Lee Sedol at Go, and for many this marked the start of a development in artificial intelligence whose end we cannot yet see. This study examines which aspects of that development will affect our doctrines. Are these weapons merely a better version of their earlier forms? The study draws on 1,945 articles and a subsequent analysis of them. It is an inductive literature review that seeks to explore what we should use autonomous weapons for. The analysis shows that the field is developing rapidly and that more research will likely be needed before anything definitive can be said. It appears highly likely that more and more parts of the military will become autonomous or semi-autonomous in phases. The main findings of the thesis are three aspects that may influence military theory and doctrine. The first is the swarm concept, with a large number of units that need not be identical; robust networks and machine learning enable the machines to cooperate at a level humans cannot be expected to match, and with swarms of units one should be able to overwhelm any enemy. The second is autonomous classification of objects and persons: machine learning can produce accurate predictions and classifications from noisy data and difficult surroundings, and newer systems can continue to learn as they observe and receive feedback. The third is hyperwar, a state in which humans can probably no longer take part in decisions on the use of military force; hyperwar refers to a scenario in which autonomous weapons have become so effective that including humans only adds delay, so a human-controlled military becomes a force that will always lose against an autonomous one.

    Reinforcement Learning for Value Alignment

    As autonomous agents become increasingly sophisticated and we allow them to perform more complex tasks, it is of utmost importance to guarantee that they will act in alignment with human values. In the AI literature, this problem has been given the name of the value alignment problem. Current approaches apply reinforcement learning to align agents with values because of its recent successes at solving complex sequential decision-making problems. However, they follow an agent-centric approach: they expect the agent to apply the reinforcement learning algorithm correctly to learn an ethical behaviour, without formal guarantees that the learnt behaviour will be ethical. This thesis proposes a novel environment-designer approach for solving the value alignment problem with theoretical guarantees. Our proposed environment-designer approach advances the state of the art with a process for designing ethical environments wherein it is in the agent's best interest to learn ethical behaviours. Our process specifies the ethical knowledge of a moral value in terms that can be used in a reinforcement learning context. Next, our process embeds this knowledge in the agent's learning environment to design an ethical learning environment. The resulting ethical environment incentivises the agent to learn an ethical behaviour while pursuing its own objective. We further contribute to the state of the art by providing a novel algorithm that, following our ethical environment design process, is formally guaranteed to create ethical environments. In other words, this algorithm guarantees that it is in the agent's best interest to learn value-aligned behaviours. We illustrate our algorithm by applying it in a case study environment wherein the agent is expected to learn to behave in alignment with the moral value of respect. In it, a conversational agent is in charge of conducting surveys, and we expect it to ask the users questions respectfully while trying to get as much information as possible. In the designed ethical environment, the results confirm our theoretical results: the agent learns an ethical behaviour while pursuing its individual objective.
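
    As a rough illustration of the environment-designer idea, the sketch below wraps a learning environment so that the reward already embeds an ethical specification, rather than trusting the agent to learn ethics on its own. The Gym-style step/reset interface, the "respectful" predicate, and the bonus/penalty values are assumptions made for illustration; the thesis's own construction and its formal guarantees differ.

```python
# Illustrative sketch only: a wrapper that shapes the base reward so that
# respectful behaviour is in the agent's best interest while it still
# pursues its individual objective.

class EthicalEnvWrapper:
    def __init__(self, base_env, is_respectful, bonus=0.5, penalty=5.0):
        self.base_env = base_env
        self.is_respectful = is_respectful  # ethical predicate over actions (assumption)
        self.bonus = bonus
        self.penalty = penalty

    def reset(self):
        return self.base_env.reset()

    def step(self, action):
        obs, reward, done, info = self.base_env.step(action)
        # Reward shaping: add a bonus for respectful actions, a penalty otherwise.
        reward += self.bonus if self.is_respectful(action) else -self.penalty
        return obs, reward, done, info

class SurveyEnvStub:
    """Tiny stand-in environment: reward is the information gained per question."""
    def reset(self):
        return "start"
    def step(self, action):
        return "next", 1.0, False, {}

env = EthicalEnvWrapper(SurveyEnvStub(), is_respectful=lambda a: a != "ask_rudely")
print(env.step("ask_politely"))  # ('next', 1.5, False, {})
print(env.step("ask_rudely"))    # ('next', -4.0, False, {})
```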