6,958 research outputs found

    Ethical Challenges in Data-Driven Dialogue Systems

    The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems.
    Comment: In submission to the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning while accounting for such predictions, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
    Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
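    As a point of reference for the physics-based end of such a taxonomy, the simplest motion model is constant-velocity extrapolation, which surveys of this kind commonly treat as a baseline. The sketch below is only an illustration of that idea, not a method from the paper; the function name, time step and horizon are hypothetical.

import numpy as np

def constant_velocity_predict(track, horizon, dt=0.4):
    """Extrapolate a 2-D trajectory assuming constant velocity.

    track   : (T, 2) array of observed (x, y) positions
    horizon : number of future steps to predict
    dt      : assumed time step between observations, in seconds
    """
    track = np.asarray(track, dtype=float)
    # Estimate velocity from the last two observed positions.
    velocity = (track[-1] - track[-2]) / dt
    steps = np.arange(1, horizon + 1).reshape(-1, 1)
    # Future position = last observed position + velocity * elapsed time.
    return track[-1] + steps * dt * velocity

# Example: a pedestrian walking roughly along the x-axis.
observed = [[0.0, 0.0], [0.4, 0.05], [0.8, 0.1], [1.2, 0.15]]
print(constant_velocity_predict(observed, horizon=3))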

    Detection of Physical Adversarial Attacks on Traffic Signs for Autonomous Vehicles

    Current vision-based detection models within autonomous vehicles can be susceptible to changes in the physical environment, which cause unexpected issues. Physical attacks on traffic signs can be malicious or naturally occurring, and can cause incorrect identification of the traffic sign, drastically altering the behaviour of the autonomous vehicle. We propose two novel deep learning architectures that can be used as a detection and mitigation strategy for environmental attacks. The first is an autoencoder that detects anomalies within a given traffic sign, and the second is a reconstruction model that generates a clean traffic sign without any anomalies. Because the anomaly detection model has been trained only on normal images, any abnormality produces a high reconstruction error, indicating an abnormal traffic sign. The reconstruction model is a Generative Adversarial Network (GAN) consisting of two networks, a generator and a discriminator, which map the input traffic sign image into a meta representation as the output. By using the anomaly detection and reconstruction models as mitigation strategies, we show that the performance of other models in the pipeline, such as traffic sign recognition models, can be significantly improved. To evaluate our models, several types of attack circumstances were designed; on average, the anomaly detection model achieved 0.84 accuracy with a 0.82 F1-score on real datasets, while the reconstruction model improved the traffic sign recognition model's average F1-score from 0.41 to 0.641.
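    The reconstruction-error mechanism described above (an autoencoder trained only on clean signs flags inputs it cannot reconstruct well) can be sketched as follows. This is a minimal illustration, not the authors' model: the architecture, input size, and threshold value are assumptions introduced here.

import torch
import torch.nn as nn

class SignAutoencoder(nn.Module):
    """Tiny convolutional autoencoder; the architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_anomalous(model, image, threshold=0.02):
    """Flag a sign whose mean reconstruction error exceeds a threshold.

    The model is assumed to have been trained only on clean (normal) signs,
    so tampered or occluded signs reconstruct poorly. The threshold here is
    a placeholder, not a value reported in the paper.
    """
    model.eval()
    with torch.no_grad():
        recon = model(image)
        error = torch.mean((recon - image) ** 2).item()
    return error > threshold, error

# Example with a random 64x64 RGB "sign" batch of size 1.
model = SignAutoencoder()
flagged, err = is_anomalous(model, torch.rand(1, 3, 64, 64))
print(f"anomalous={flagged}, reconstruction error={err:.4f}")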