138 research outputs found

    Discovering Factors Affecting Student Retention in Computer Science at Cal Poly


    Hamiltonian mappings and circle packing phase spaces: numerical investigations

    In a previous paper we introduced examples of Hamiltonian mappings with phase space structures resembling circle packings. We now concentrate on one particular mapping and present numerical evidence supporting the conjecture that the set of circular resonance islands is dense in phase space. (Comment: 9 pages, 2 figures)
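    The kind of numerical survey this abstract describes amounts to iterating an area-preserving map from many initial conditions and inspecting the resulting phase portrait for resonance islands. Below is a minimal sketch of that workflow; the paper's particular mapping is not reproduced here, so the well-known Chirikov standard map stands in as a generic Hamiltonian mapping, and the parameter `K` and the orbit counts are illustrative choices, not values from the paper.

```python
# Minimal phase-space survey sketch. The Chirikov standard map is a stand-in
# for the paper's (unspecified here) Hamiltonian mapping.
import numpy as np
import matplotlib.pyplot as plt

K = 0.9  # illustrative nonlinearity parameter, not from the paper

def standard_map(theta, p, k=K):
    """One iteration of the standard map on the torus [0, 2*pi)^2."""
    p_new = (p + k * np.sin(theta)) % (2 * np.pi)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

rng = np.random.default_rng(0)
theta, p = rng.uniform(0, 2 * np.pi, size=(2, 200))  # 200 initial conditions

points = []
for _ in range(1000):  # iterate each initial condition 1000 times
    theta, p = standard_map(theta, p)
    points.append((theta.copy(), p.copy()))

ths = np.concatenate([t for t, _ in points])
ps = np.concatenate([q for _, q in points])
plt.scatter(ths, ps, s=0.05, c="k")
plt.xlabel("theta"); plt.ylabel("p")
plt.title("Phase portrait: resonance islands appear as closed loops")
plt.show()
```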

    Designing and Evaluating Physical Adversarial Attacks and Defenses for Machine Learning Algorithms

    Studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input in a calculated fashion. These perturbations induce mistakes in the network's output. However, despite the large interest and numerous works, there have been only limited studies on the impact of adversarial attacks in the physical world, and these studies lack well-developed, robust methodologies for attacking real physical systems. In this dissertation, we first explore the technical requirements for generating physical adversarial inputs through the manipulation of physical objects. Based on our analysis, we design a new adversarial attack algorithm, Robust Physical Perturbations (RPP), which consistently computes the modifications needed to keep the modified object adversarial across numerous varied viewpoints. We show that the RPP attack produces physical adversarial inputs for classification tasks as well as object detection tasks, which, prior to our work, were considered resistant. We then develop a defensive technique, robust feature augmentation, to mitigate the effect of adversarial inputs, both digitally and physically. We hypothesize that the input to a machine learning algorithm contains predictive feature information that a bounded adversary is unable to manipulate in order to cause classification errors. By identifying and extracting this adversarially robust feature information, we can obtain evidence of the possible set of correct output labels and adjust the classification decision accordingly. As adversarial inputs are a human-defined phenomenon, we utilize human-recognizable features to identify adversarially robust, predictive feature information for a given problem domain. Due to the safety-critical nature of autonomous driving, we focus our study on traffic sign classification and localization tasks.
    Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    https://deepblue.lib.umich.edu/bitstream/2027.42/153373/1/keykholt_1.pd
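    The core idea described above, a perturbation that stays adversarial across varied viewpoints, can be sketched as minimizing the expected attack loss over a distribution of viewpoint-like transformations. The following is a minimal PyTorch sketch under stated assumptions, not the dissertation's actual implementation: `model`, `target_class`, and the rotation/scale sampler are placeholders standing in for the classifier and the physical-viewpoint distribution.

```python
# Hedged sketch of a viewpoint-robust (targeted) adversarial perturbation:
# optimize delta so the perturbed image is classified as target_class under
# randomly sampled rotations and scalings, a stand-in for physical viewpoints.
import torch
import torchvision.transforms.functional as TF

def robust_perturbation(model, image, target_class, steps=500, lr=0.01, eps=0.1):
    """image: (C, H, W) tensor; returns a bounded perturbation delta."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Sample one viewpoint-like transformation per step.
        angle = float(torch.empty(1).uniform_(-15, 15))
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        x = TF.affine(image + delta, angle=angle, translate=[0, 0],
                      scale=scale, shear=[0.0])
        logits = model(x.unsqueeze(0))
        # Targeted attack: push the prediction toward target_class.
        loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation bounded
    return delta.detach()
```

    Averaging the loss over many sampled transformations per step (rather than one, as here) trades compute for a perturbation that generalizes better across viewpoints.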