
    The Impact of Social Influence, Technophobia, and Perceived Safety on Autonomous Vehicle Technology Adoption

    The objective of this study was to determine whether there was a relationship between social influence, technophobia, perceived safety of autonomous vehicle technology, the number of automobile-related accidents, and the intention to use autonomous vehicles. The methodology was a descriptive, cross-sectional, correlational study. The Theory of Planned Behavior provided the underlying theoretical framework. An online survey was the primary method of data collection. Pearson's correlation and multiple linear regression were used for data analysis. This study found that both social influence and perceived safety of autonomous vehicle technology had significant, positive relationships with the intention to use autonomous vehicles. Additionally, a significant negative relationship was found between technophobia and the intention to use autonomous vehicles. However, no relationship was found between the number of automobile-related accidents and the intention to use autonomous vehicles. This study presents several original and significant findings as a contribution to the literature on autonomous vehicle technology adoption and proposes new dimensions of future research within this emerging field.
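    As a hedged illustration of the analysis pipeline this abstract describes, the sketch below runs Pearson correlations and a multiple linear regression in Python on simulated survey data. All column names, effect sizes, and the sample itself are invented stand-ins, not the study's instrument or results.

```python
# Hypothetical sketch of the reported analysis: Pearson correlations between
# each predictor and intention to use AVs, then a multiple linear regression.
# Column names and data are illustrative, not taken from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200  # simulated survey respondents
df = pd.DataFrame({
    "social_influence": rng.normal(3.5, 0.8, n),
    "technophobia": rng.normal(2.5, 0.9, n),
    "perceived_safety": rng.normal(3.0, 0.7, n),
    "num_accidents": rng.poisson(1.0, n),
})
# Simulate the direction of effects the abstract reports.
df["intention_to_use"] = (
    0.5 * df["social_influence"]
    - 0.4 * df["technophobia"]
    + 0.6 * df["perceived_safety"]
    + rng.normal(0, 0.5, n)
)

predictors = ["social_influence", "technophobia", "perceived_safety", "num_accidents"]
for col in predictors:
    r, p = pearsonr(df[col], df["intention_to_use"])
    print(f"{col}: r={r:+.2f}, p={p:.3f}")

# Multiple linear regression with all predictors entered simultaneously.
X = sm.add_constant(df[predictors])
model = sm.OLS(df["intention_to_use"], X).fit()
print(model.summary())
```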

    Human cooperation when acting through autonomous machines

    Recent times have seen the emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., the environment). Here we show that acting through machines changes the way people solve these social dilemmas, and we present experimental evidence that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We show that this happens because programming causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.
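    The social dilemma at the core of this study can be made concrete with a small worked example. The payoff function below is a generic public-goods-style dilemma, a hypothetical stand-in rather than the experiment's actual payoff scheme: defecting always pays more for the individual, yet everyone cooperating beats everyone defecting.

```python
# A minimal sketch of a social dilemma: a selfish choice yields a private
# bonus, while each cooperator adds a benefit shared by all drivers.
# Payoff values are illustrative, not from the paper.

def payoff(my_choice: str, n_cooperators: int, n_drivers: int) -> float:
    """Payoff for one driver given everyone's choices."""
    shared_benefit = 2.0 * n_cooperators / n_drivers  # collective interest
    private_bonus = 1.0 if my_choice == "defect" else 0.0  # self-interest
    return shared_benefit + private_bonus

# Defecting dominates individually (switching to defect always gains +0.5) ...
print(payoff("defect", n_cooperators=3, n_drivers=4))     # 2.5
print(payoff("cooperate", n_cooperators=4, n_drivers=4))  # 2.0
# ... yet universal cooperation (2.0 each) beats universal defection:
print(payoff("defect", n_cooperators=0, n_drivers=4))     # 1.0
```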

    Trust and Control in Autonomous Vehicle Interactions

    Autonomous vehicles have the potential to reduce traffic accidents and improve road safety. Ironically, public skepticism due to risk and safety considerations remains one of the major barriers to the widespread adoption of autonomous vehicles. Trust is therefore vital to promoting the acceptance of this new technology. This abstract summarizes some recent research on trust in autonomous vehicles and our proposal to promote trust between autonomous vehicles and pedestrians. We aim to develop a trust framework based on expectations, behaviors, and communication between the pedestrian and the autonomous vehicle. We describe a user study designed to determine the effects of driving behavior and situational characteristics on a pedestrian's trust in autonomous vehicles.
    Toyota Research Institute. Peer reviewed. Poster abstract and poster presented at the "Morality and Social Trust in Autonomy Workshop" at the Robotics Science and Systems Conference, 2017.
    https://deepblue.lib.umich.edu/bitstream/2027.42/137661/5/RSS_2017_Morality Workshop.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/137661/6/Poster_RSS_2017_Morality Workshop.pdf

    Socially Compliant Navigation through Raw Depth Inputs with Generative Adversarial Imitation Learning

    We present an approach for mobile robots to learn to navigate in dynamic environments with pedestrians, via raw depth inputs, in a socially compliant manner. To achieve this, we adopt a generative adversarial imitation learning (GAIL) strategy, which improves upon a pre-trained behavior cloning policy. Our approach overcomes the disadvantages of previous methods, which heavily depend on full knowledge of the location and velocity of nearby pedestrians: this not only requires specific sensors, but extracting such state information from raw sensory input can also consume considerable computation time. In this paper, our proposed GAIL-based model operates directly on raw depth inputs and plans in real time. Experiments show that our GAIL-based approach greatly improves the safety and efficiency of mobile robot behavior over pure behavior cloning. Real-world deployment also shows that our method is capable of guiding autonomous vehicles to navigate in a socially compliant manner directly from raw depth inputs. In addition, we release a simulation plugin for modeling pedestrian behaviors based on the social force model.
    Comment: ICRA 2018 camera-ready version. 7 pages. Video link: https://www.youtube.com/watch?v=0hw0GD3lkA
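    For readers unfamiliar with GAIL, the following minimal PyTorch sketch shows the core loop the method builds on (Ho & Ermon, 2016): a discriminator learns to tell expert state-action pairs from the policy's, and its output serves as a surrogate reward for the policy. Flat feature vectors stand in for raw depth images, and a one-step REINFORCE update replaces the full RL machinery; this is a simplified illustration, not the authors' implementation.

```python
# Minimal GAIL sketch: discriminator vs. policy. Network sizes, the fixed
# action noise, and the one-step policy update are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM = 64, 2  # e.g., flattened depth features, (v, omega)

policy = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.Tanh(), nn.Linear(128, ACT_DIM))
disc = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 128), nn.Tanh(), nn.Linear(128, 1))
opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def gail_step(expert_s, expert_a, policy_s):
    # 1) Roll the current (Gaussian) policy on the sampled states.
    mean = policy(policy_s)
    dist = torch.distributions.Normal(mean, 0.1)
    policy_a = dist.sample()

    # 2) Discriminator update: expert pairs -> 1, policy pairs -> 0.
    d_expert = disc(torch.cat([expert_s, expert_a], dim=-1))
    d_policy = disc(torch.cat([policy_s, policy_a], dim=-1))
    d_loss = (bce(d_expert, torch.ones_like(d_expert))
              + bce(d_policy, torch.zeros_like(d_policy)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 3) Policy update: the surrogate reward is high where the discriminator
    #    thinks the pair looks expert-like (one-step REINFORCE).
    with torch.no_grad():
        reward = torch.sigmoid(disc(torch.cat([policy_s, policy_a], dim=-1)))
    log_prob = dist.log_prob(policy_a).sum(-1, keepdim=True)
    pi_loss = -(log_prob * reward).mean()
    opt_pi.zero_grad()
    pi_loss.backward()
    opt_pi.step()

# One illustrative step on random tensors standing in for demonstrations.
gail_step(torch.randn(32, STATE_DIM), torch.randn(32, ACT_DIM),
          torch.randn(32, STATE_DIM))
```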

    Learning-based social coordination to improve safety and robustness of cooperative autonomous vehicles in mixed traffic

    It is expected that autonomous vehicles (AVs) and heterogeneous human-driven vehicles (HVs) will coexist on the same road. The safety and reliability of AVs will depend on their social awareness and their ability to engage in complex social interactions in a socially accepted manner. However, AVs are still inefficient at cooperating with HVs and struggle to understand and adapt to human behavior, which is particularly challenging in mixed autonomy. On a road shared by AVs and HVs, the social preferences and individual traits of HVs are unknown to the AVs. Unlike AVs, which are expected to follow a policy, HVs are particularly difficult to forecast since they do not necessarily follow a stationary policy. To address these challenges, we frame the mixed-autonomy problem as a multi-agent reinforcement learning (MARL) problem and propose an approach that allows AVs to learn the decision-making of HVs implicitly from experience, account for all vehicles' interests, and adapt safely to other traffic situations. In contrast with existing works, we quantify AVs' social preferences and propose a distributed reward structure that introduces altruism into their decision-making process, allowing the altruistic AVs to learn to establish coalitions and influence the behavior of HVs.
    Comment: arXiv admin note: substantial text overlap with arXiv:2202.0088
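    One common way to quantify a vehicle's social preference, in the spirit of the distributed, altruism-aware reward this abstract describes, is a social value orientation (SVO) angle that blends the agent's own reward with the mean reward of surrounding vehicles. The sketch below is an illustrative assumption, not necessarily the paper's exact formulation.

```python
# Hedged sketch of an SVO-style reward blend for an altruistic AV.
# The angle values and reward shaping are illustrative assumptions.
import math

def altruistic_reward(ego_reward: float,
                      neighbor_rewards: list[float],
                      svo_angle_rad: float) -> float:
    """Blend ego and neighbors' rewards by a social value orientation angle.

    svo_angle_rad = 0    -> purely egoistic
    svo_angle_rad = pi/4 -> prosocial (equal weight)
    svo_angle_rad = pi/2 -> purely altruistic
    """
    social = sum(neighbor_rewards) / len(neighbor_rewards) if neighbor_rewards else 0.0
    return math.cos(svo_angle_rad) * ego_reward + math.sin(svo_angle_rad) * social

# A prosocial AV accepts a small ego loss (e.g., slowing down) when doing so
# lets nearby human-driven vehicles merge and raises their rewards.
print(altruistic_reward(ego_reward=-0.2, neighbor_rewards=[1.0, 0.8],
                        svo_angle_rad=math.pi / 4))  # ~0.49: net positive
```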