
    Towards Safer Self-Driving Through Great PAIN (Physically Adversarial Intelligent Networks)

    Automated vehicles' neural networks suffer from overfitting, poor generalizability, and untrained edge cases due to limited data availability. Researchers synthesize randomized edge-case scenarios to assist in the training process, though simulation introduces the potential for overfitting to latent rules and features. Automating worst-case scenario generation could yield informative data for improving self-driving. To this end, we introduce a "Physically Adversarial Intelligent Network" (PAIN), wherein self-driving vehicles interact aggressively in the CARLA simulation environment. We train two agents, a protagonist and an adversary, using dueling double deep Q-networks (DDDQNs) with prioritized experience replay. The coupled networks alternately seek to collide and to avoid collisions, such that the "defensive" avoidance algorithm increases the mean time to failure and distance traveled under non-hostile operating conditions. The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner-case failures resulting in collisions than an agent trained without an adversary.
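    The training setup named in this abstract rests on two standard ingredients: a dueling Q-network head and the double-DQN bootstrap target. The sketch below shows both in PyTorch as a minimal illustration; the observation encoding, layer sizes, discount factor, and function names are illustrative assumptions, and the CARLA interface and the prioritized replay buffer are omitted. The adversary would be a second copy of this pair trained with a collision-seeking reward.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: shared trunk with separate value and advantage heads."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online: DuelingQNet, target: DuelingQNet,
                      reward: torch.Tensor, next_obs: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN bootstrap: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_a = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, best_a).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```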

    Autonomous Exchanges: Human-Machine Autonomy in the Automated Media Economy

    Contemporary discourses and representations of automation stress the impending “autonomy” of automated technologies. From pop culture depictions to corporate white papers, the notion of autonomous technologies tends to enliven either dystopic fears about threats to human autonomy or utopian hopes of helping humans experience unrealized forms of autonomy. This project offers a more nuanced perspective, rejecting contemporary notions of automation as inevitably vanquishing or enhancing human autonomy. Through a discursive analysis of industrial “deep texts” that offer considerable insight into the material development of automated media technologies, I argue for contemporary automation to be understood as a field for the exchange of autonomy: a human-machine autonomy in which autonomy is exchanged as cultural and economic value. Human-machine autonomy is a condition shared among humans and intelligent machines, shaped by economic, legal, and political paradigms with a stake in the cultural uses of automated media technologies. By understanding human-machine autonomy, this project illuminates complications of autonomy emerging from interactions with automated media technologies across a range of cultural contexts.

    Structured machine learning models for robustness against different factors of variability in robot control

    An important feature of human sensorimotor skill is our ability to reuse it across different environmental contexts, in part due to our understanding of the attributes of variability in those environments. This thesis explores how the structure of the models used in learning for robot control could similarly help autonomous robots cope with variability, and hence achieve skill generalisation. The overarching approach is to develop modular architectures that judiciously combine different forms of inductive bias for learning. In particular, we consider how models and policies should be structured to achieve robust behaviour in the face of different factors of variation (in the environment, in objects, and in other internal parameters of a policy), with the end goal of more robust, accurate, and data-efficient skill acquisition and adaptation.

    At a high level, variability in skill is determined by variations in the constraints presented by the external environment and in task-specific perturbations that affect the specification of optimal action. A typical environmental perturbation is variation in lighting and illumination, which affects the noise characteristics of perception. Examples of task perturbations are variation in object geometry, mass, or friction, and in the costs associated with speed or smoothness of execution. We counteract these factors of variation by exploring three forms of structuring: utilising separate data sets curated according to the relevant factor of variation, building neural network models that incorporate this factorisation into the very structure of the networks, and learning structured loss functions.

    The thesis comprises four projects exploring this theme within robotics planning and prediction tasks. First, in the setting of trajectory prediction in crowded scenes, we explore a modular architecture for learning static and dynamic environmental structure. We show that factorising the prediction problem from the individual representations allows for robust and label-efficient forward modelling, and relaxes the need for full model retraining in new environments. This modularity explicitly allows for more flexible and interpretable adaptation of trajectory prediction models using pre-trained state-of-the-art models. We show that this results in more efficient motion prediction, with performance comparable to state-of-the-art supervised 2D trajectory prediction.

    Next, in the domain of contact-rich robotic manipulation, we consider a modular architecture that combines model-free learning from demonstration, in particular dynamic movement primitives (DMPs), with modern model-free reinforcement learning (RL), using both on-policy and off-policy approaches. We show that factorising skill learning into skill acquisition and error correction, through policy adaptation strategies such as residual learning, can help improve the overall performance of policies in the context of contact-rich manipulation. Our empirical evaluation demonstrates how best to do this with DMPs, and we propose “residual Learning from Demonstration” (rLfD), a framework that combines DMPs with RL to learn a residual correction policy. Our evaluations, performed both in simulation and on a physical system, suggest that applying residual learning directly in task space and operating on the full pose of the robot can significantly improve the overall performance of DMPs. We show that rLfD offers a solution that is gentle on the robot's joints and improves the task success and generalisation of DMPs. Moreover, our study shows that the extracted correction policies can be transferred to different geometries and frictions through few-shot task adaptation.
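    The core of the rLfD idea is that the executed command is a DMP rollout plus a learned correction. The one-dimensional sketch below illustrates this composition under stated assumptions: standard discrete-DMP transformation and canonical systems, illustrative gains and Euler integration, and a stand-in `residual_policy` callable in place of the RL policy the thesis actually trains; none of this is the thesis's implementation.

```python
import numpy as np

def dmp_step(y, dy, x, goal, y0, w, centers, widths, dt,
             alpha=25.0, beta=6.25, alpha_x=3.0, tau=1.0):
    """One Euler step of a 1-D discrete DMP (illustrative gains, no couplings)."""
    psi = np.exp(-widths * (x - centers) ** 2)            # RBF basis activations
    f = x * (goal - y0) * (psi @ w) / (psi.sum() + 1e-8)  # learned forcing term
    ddy = (alpha * (beta * (goal - y) - tau * dy) + f) / tau ** 2
    dy = dy + ddy * dt
    y = y + dy * dt
    x = x - (alpha_x * x / tau) * dt                      # canonical system decay
    return y, dy, x

def rlfd_action(dmp_pose, residual_policy, obs):
    """rLfD-style command: DMP output plus a learned task-space correction.

    `residual_policy` is a hypothetical stand-in for the RL correction
    policy described in the abstract.
    """
    return dmp_pose + residual_policy(obs)
```

    The design point the abstract emphasises is that the residual acts in task space on the full pose, so the DMP supplies a reasonable nominal trajectory and the learned term only needs to model the contact-driven deviation.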
    Third, we employ meta-learning to learn time-invariant reward functions, wherein both the objectives of a task (i.e., the reward functions) and the policy for performing that task optimally are learnt simultaneously. We propose a novel inverse reinforcement learning (IRL) formulation that allows us to (1) vary the length of execution by learning time-invariant costs, and (2) relax the temporal alignment requirements for learning from demonstration. We apply our method to two different types of cost formulation and evaluate their performance in the context of learning reward functions for simulated placement and peg-in-hole tasks executed on a 7-DoF Kuka IIWA arm. Our results show that our approach enables learning temporally invariant rewards from misaligned demonstrations that can also generalise spatially to out-of-distribution tasks.

    Finally, we employ our observations to evaluate adversarial robustness in the context of transfer learning, from a source network trained on CIFAR-100 to a target network trained on CIFAR-10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, in addition to revealing potential vulnerabilities. We study the extent to which adversarially robust features preserve their defence properties against black- and white-box attacks under three different transfer learning strategies. Our empirical evaluations give insight into how well adversarial robustness generalises under transfer learning.
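    A standard way to probe the white-box robustness this kind of evaluation measures is projected gradient descent (PGD). Below is a minimal PyTorch sketch of an untargeted L-infinity PGD attack and a robust-accuracy loop for a transferred classifier; the epsilon, step size, iteration count, and function names are illustrative assumptions, not the thesis's settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L-inf PGD with random start (parameters illustrative)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()         # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)             # project to eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def robust_accuracy(model, loader, **pgd_kwargs):
    """Accuracy of a (transferred) classifier on PGD-perturbed inputs."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, **pgd_kwargs)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```

    Running such a loop on a CIFAR-10 target network, before and after transferring robustly trained CIFAR-100 features, is one way to quantify whether the defence survives the transfer strategy used.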