
    Autonomous Cars – What Lies Behind the Lack of Readiness

    Autonomous systems are already available for public and private transport. The necessary hardware and software have been created, and novel designs for (semi-)autonomous vehicles are launched every year, yet their use remains limited and market penetration is not increasing rapidly. While this might be owing to their high price, public perception is also not universally positive: many are afraid not only of using such vehicles but of being around them. After introducing the relevant literature on trust in autonomous vehicles and the factors affecting it, the current article presents data from an international quantitative survey of 666 respondents. It highlights the biggest perceived threats and their prevalence, and tries to uncover why more than half of the respondents are afraid of autonomous vehicles. The data show the topic is gendered: male respondents were more open towards autonomous vehicles. Furthermore, those who are not ready for autonomous vehicles report a generally higher level of fear of potential negative consequences, such as hacker attacks, system malfunctions, or loss of control. Those in favour of automated vehicles, on the other hand, believe they have a positive effect on the occurrence of accidents, owing to the heightened reaction speed provided by their sensory systems and computing capacity far superior to that of humans, as well as on society, on carbon emissions, and, as a result, on our natural environment. Consequently, autonomous vehicles could form an important element of the transport systems of future smart cities.

    Sociology Between the Gaps Volume 3 (2017)


    Civil liability for artificial intelligence products versus the sustainable development of CEECs: which institutions matter?

    The aim of this paper is to conduct a meta-analysis of the civil liability institutions of the EU and the CEECs in order to determine whether they are ready for the Artificial Intelligence (AI) race. Particular focus is placed on ascertaining whether civil liability institutions such as the Product Liability Directive (EU) or the civil codes (CEECs) will protect consumers and entrepreneurs, as well as ensure undistorted competition. In line with the above, the authors investigate whether the civil liability institutions of the EU and the CEECs are based on regulations that can be adapted to the new generation of robots that will be equipped with learning abilities and exhibit a certain degree of unpredictability in their behaviour. The conclusion presented in the paper was drawn from a review of the current literature and research on national and European regulations. The primary contribution of this article is to advance the current state of research on the concepts of AI liability for damage and personal injury. A second contribution is to show that the existing civil liability institutions of both the EU and the CEECs are not sufficiently prepared to address the legal issues that will start to arise when self-driving vehicles or autonomous drones begin operating in fully autonomous modes and possibly cause property damage or personal injury.

    A comparison among deep learning techniques in an autonomous driving context

    Nowadays, artificial intelligence is one of the research fields receiving ever more attention. The growth in computational power available to researchers and developers is reviving the full potential that was expressed theoretically at the dawn of Artificial Intelligence. Among all the fields of Artificial Intelligence, the one currently attracting the greatest interest is autonomous driving: many car manufacturers and the most prestigious American colleges are investing ever more resources in this technology. The survey and description of the broad spectrum of technologies available for autonomous driving forms part of the comparison carried out in this thesis. The case study centres on a company that, starting from scratch, would like to develop an autonomous driving system without data, in a short time, and using only sensors built in-house. Starting from neural networks and classical algorithms, the work progresses to algorithms such as A3C in order to describe the full spectrum of possibilities. The selected technologies are compared in two experiments. The first is a pure computer-vision experiment using DeepTesla, in which traditional computer-vision techniques, CNNs, and CNNs combined with LSTMs are compared; the goal is to identify which algorithm performs best when processing images alone. The second is an experiment on CARLA, a simulator based on Unreal Engine, in which the results obtained in the simulated environment with CNNs combined with LSTMs are compared with those obtained with A3C; the goal is to understand whether these techniques can drive autonomously using the data provided by the simulator. The comparison aims to identify the critical issues and possible future improvements of each of the proposed algorithms, so as to find a feasible solution that yields excellent results in a short time.
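    The CNN-combined-with-LSTM approach mentioned in the abstract can be sketched roughly as follows: a per-frame convolutional encoder extracts visual features, and a recurrent layer integrates them over time to predict one steering angle per frame. This is a minimal illustrative sketch in PyTorch, not the thesis's actual architecture; the class name, layer sizes, and input dimensions are all assumptions made here for clarity.

    ```python
    import torch
    import torch.nn as nn

    class CNNLSTMDriver(nn.Module):
        """Hypothetical end-to-end steering model: a small CNN encodes
        each frame, an LSTM integrates the frame features over time,
        and a linear head emits one steering angle per time step."""

        def __init__(self, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B*T, 32)
            )
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)  # steering angle regressor

        def forward(self, frames):
            # frames: (batch, time, channels, height, width)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)          # temporal integration
            return self.head(out).squeeze(-1)  # (batch, time)

    model = CNNLSTMDriver()
    clip = torch.randn(2, 8, 3, 64, 64)  # 2 clips of 8 RGB frames each
    angles = model(clip)
    print(angles.shape)  # torch.Size([2, 8])
    ```

    The design point worth noting is that the CNN is applied to all frames at once by flattening the batch and time dimensions, after which the LSTM sees a sequence of compact feature vectors rather than raw images, which is what distinguishes this from the pure-CNN baseline compared in the first experiment.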

    The Google Made Me Do It: The Complexity of Criminal Liability in the Age of Autonomous Vehicles

    Article published in the Michigan State Law Review

    Intelligent capacities in artificial systems

    This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on rule-based classical logic and the other on reinforcement learning. Here we defend three substantive claims. First, we argue that artificial systems are not equal in the kinds of dispositional properties they instantiate. In particular, we show that logical systems instantiate merely object-centered dispositions, whereas reinforcement learning systems allow for the instantiation of agent-based abilities. Second, we explore the similarities and differences between the agent-centered abilities of artificial systems and those of humans, especially as relates to the important distinction made in the human case between habits and skills/intelligent capacities. The upshot is that the agent-centered abilities of truly intelligent artificial systems are distinctive enough to constitute a third type of agent-based ability, a blended agent-based ability, raising substantial questions as to how we understand the nature of their agency. Third, we explore one aspect of this problem, focussing on whether systems of this type are properly considered ‘responsible agents’, at least in some contexts and for some purposes. The ramifications of our analysis turn out to be directly relevant to various ethical concerns of artificial intelligence.