    Robustness against adversarial attacks on deep neural networks

    Get PDF
    While deep neural networks have been successfully applied in several different domains, they exhibit vulnerabilities to artificially crafted perturbations in data. Moreover, these perturbations have been shown to be transferable: the same perturbations can fool different models. In response to this problem, many robust learning approaches have emerged. Adversarial training is regarded as the mainstream approach to enhancing the robustness of deep neural networks against norm-constrained perturbations. However, adversarial training requires a large number of perturbed examples (e.g., over 100,000 examples for the MNIST dataset) before robustness is considerably enhanced, which is problematic given the large computational cost of generating attacks. Developing computationally efficient approaches that retain robustness against norm-constrained perturbations remains a challenge in the literature. In this research we present two novel robust training algorithms based on Monte-Carlo Tree Search (MCTS) [1] to enhance robustness under norm-constrained perturbations [2, 3]. The first algorithm searches for potential candidates with the Scale-Invariant Feature Transform (SIFT) method and makes decisions with MCTS [2]. The second algorithm adopts a Decision Tree Search (DTS) method to accelerate the search process while maintaining robustness [3]. Our overarching objective is to provide computationally efficient approaches for training deep neural networks that are robust against perturbations in data. We illustrate the robustness of these algorithms by studying their resistance to adversarial examples on the MNIST and CIFAR10 datasets. For MNIST, the results showed an average saving in training effort of 21.1% compared to Projected Gradient Descent (PGD) and 28.3% compared to the Fast Gradient Sign Method (FGSM). For CIFAR10, we obtained an average efficiency improvement of 9.8% compared to PGD and 13.8% compared to FGSM. The results suggest that the two methods introduced here are not only robust to norm-constrained perturbations but also efficient to train. Regarding the transferability of defences, our experiments [4] reveal that our algorithms outperform other state-of-the-art methods, e.g., PGD and FGSM, across different network architectures, across a variety of attack methods from white-box to black-box, and across datasets including MNIST and CIFAR10. Furthermore, the attacks and robust models derived within our framework are reusable, in the sense that the same norm-constrained perturbations can facilitate robust training across different networks. Lastly, we investigate the robustness of intra-technique and cross-technique transferability and its relation to impact factors ranging from adversarial strength to network capacity. The results suggest that known attacks on the resulting models are less transferable than on models trained by other state-of-the-art attack algorithms. Our results suggest that exploiting these tree search frameworks can yield significant improvements in the robustness of deep neural networks while saving computational cost on robust training. This paves the way for several future directions, both algorithmic and theoretical, as well as numerous applications to establish the robustness of deep neural networks with increasing trust and safety.
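    For context, the following is a minimal PyTorch sketch of the norm-constrained (L-infinity) perturbation and adversarial training step that the abstract benchmarks against (FGSM-style); it is not the authors' MCTS/DTS algorithm, and the model, optimizer, and epsilon are illustrative assumptions.

    ```python
    # Hedged sketch: FGSM-style norm-constrained perturbation and one
    # adversarial training step. Illustrative only; not the MCTS/DTS
    # algorithms described in the abstract.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.1):
        """Generate an adversarial example inside an L-inf ball of radius epsilon."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that maximally increases the loss,
        # then clamp back to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
        """One robust-training step: train on perturbed inputs instead of clean ones."""
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    The computational-cost argument in the abstract follows from this loop: every training step first requires generating an attack (one or more extra forward/backward passes), which is the overhead the proposed tree-search methods aim to reduce.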

    Suffolk Journal, Vol. 66, No. 4, 10/06/2005

    Get PDF

    MEMENTO meMORIam: Reconciling Death, Society, and the Environment

    Get PDF
    The increase in population, the rising cost of funerals, and the environmental “permanence” of cemeteries have increased the burden of modern American burial practices. In order to reconcile the environmental, financial, and psychological challenges of death, the architecture of death-related practices must propose sustainable alternatives for honoring and “housing” the dead. It must create a supportive environment that assists in the mourning experience and helps foster thoughts of remembrance, going beyond the physical function of a cemetery, crematorium, or columbarium by focusing on architecture's subliminal capacity to heal, orient, and evoke.

    Domain-Specific Computing Architectures and Paradigms

    Full text link
    We live in an exciting era where artificial intelligence (AI) is fundamentally shifting the dynamics of industries and businesses around the world. AI algorithms such as deep learning (DL) have drastically advanced state-of-the-art cognition and learning capabilities. However, the power of modern AI algorithms can only be enabled if the underlying domain-specific computing hardware can deliver orders of magnitude more performance and energy efficiency. This work focuses on this goal and explores three parts of the domain-specific computing acceleration problem, encapsulating specialized hardware and software architectures and paradigms that support the ever-growing processing demand of modern AI applications from the edge to the cloud. The first part of this work investigates the optimization of a sparse spatio-temporal (ST) cognitive system-on-a-chip (SoC). This design extracts ST features from videos and leverages sparse inference and kernel compression to efficiently perform action classification and motion tracking. The second part of this work explores the significance of dataflows and reduction mechanisms for sparse deep neural network (DNN) acceleration. This design features a dynamic, look-ahead index matching unit in hardware to efficiently discover fine-grained parallelism, achieving high energy efficiency and low control complexity for a wide variety of DNN layers. Lastly, this work expands the scope to real-time machine learning (RTML) acceleration. A new high-level architecture modeling framework is proposed. Specifically, this framework consists of a set of high-performance RTML-specific architecture design templates, and a Python-based high-level modeling and compiler tool chain for efficient cross-stack architecture design and exploration.
    PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/162870/1/lchingen_1.pd
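    As a software analogue of the look-ahead index matching described above, here is a hedged sketch of a sparse dot product that multiplies only where nonzero weight and activation indices coincide; the actual design performs this in hardware with look-ahead comparators, and all names here are illustrative assumptions.

    ```python
    # Hedged sketch: index matching for sparse DNN computation, in software.
    # Only positions where a nonzero weight meets a nonzero activation
    # contribute a multiply, which is the source of the energy savings.
    def sparse_dot(w_idx, w_val, a_idx, a_val):
        """Dot product of two sparse vectors given as sorted (index, value) pairs."""
        acc, i, j = 0.0, 0, 0
        while i < len(w_idx) and j < len(a_idx):
            if w_idx[i] == a_idx[j]:      # indices match: do useful work
                acc += w_val[i] * a_val[j]
                i += 1
                j += 1
            elif w_idx[i] < a_idx[j]:     # skip ahead on the lagging side
                i += 1
            else:
                j += 1
        return acc

    # Example: w = [0, 0, 2.0, 0, 5.0] and a = [1.0, 0, 2.0, 0, 3.0] in dense form
    print(sparse_dot([2, 4], [2.0, 5.0], [0, 2, 4], [1.0, 2.0, 3.0]))  # 19.0
    ```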

    Spatial Cluster Analysis by the Adleman-Lipton DNA Computing Model and Flexible Grids

    Get PDF
    Spatial cluster analysis is an important data-mining task. Typical techniques include CLARANS, density- and gravity-based clustering, and other algorithms based on the traditional von Neumann computing architecture. The purpose of this paper is to propose a technique for spatial cluster analysis based on DNA computing and a grid technique. We adopt the Adleman-Lipton model and then design a flexible grid algorithm. Examples are given to show the effectiveness of the algorithm. The new clustering technique provides an alternative to traditional cluster analysis.
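    Since the Adleman-Lipton model operates on DNA strands rather than conventional code, here is a hedged, conventional sketch of the grid-clustering step only: points are bucketed into grid cells and dense neighboring cells are merged. The parameters (cell_size, min_pts) are assumptions for illustration, not the paper's DNA algorithm.

    ```python
    # Hedged sketch: grid-based spatial clustering on a von Neumann machine,
    # illustrating the flexible-grid idea. Illustrative parameters only.
    from collections import defaultdict

    def grid_cluster(points, cell_size=1.0, min_pts=2):
        """Cluster 2-D points by bucketing into grid cells and merging
        8-connected dense cells with a flood fill."""
        cells = defaultdict(list)
        for x, y in points:
            cells[(int(x // cell_size), int(y // cell_size))].append((x, y))
        dense = {c for c, pts in cells.items() if len(pts) >= min_pts}

        clusters, seen = [], set()
        for start in dense:
            if start in seen:
                continue
            stack, members = [start], []
            seen.add(start)
            while stack:                  # flood fill over neighboring dense cells
                cx, cy = stack.pop()
                members.extend(cells[(cx, cy)])
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        nb = (cx + dx, cy + dy)
                        if nb in dense and nb not in seen:
                            seen.add(nb)
                            stack.append(nb)
            clusters.append(members)
        return clusters
    ```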

    Digital Goods and the New Economy

    Get PDF
    Digital goods are bitstrings, sequences of 0s and 1s, which have economic value. They are distinguished from other goods by five characteristics: digital goods are nonrival, infinitely expansible, discrete, aspatial, and recombinant. The New Economy is one where the economics of digital goods importantly influence aggregate economic performance. This Article considers such influences not by hypothesizing ad hoc inefficiencies that the New Economy can purport to resolve, but instead by beginning from an Arrow-Debreu perspective and asking how digital goods affect outcomes. This approach sheds light on why property rights on digital goods differ from property rights in general, guaranteeing neither appropriate incentives nor social efficiency; provides further insight into why Open Source Software is a successful model of innovation and development in digital goods industries; and helps explain how geographical clustering matters.
    Keywords: aspatial, emergence, idea, information, innovation, intellectual asset, Internet, knowledge, Open Source, weightless economy

    Development of Assessment Strategies for Sign Retroreflectivity

    Get PDF
    The Manual on Uniform Traffic Control Devices (MUTCD) now specifies minimum retroreflectivity requirements. These requirements include an obligation for agencies to develop a strategy for maintaining compliance. Given budget constraints, it is important that transportation agencies be able to efficiently assess the performance of their assets and adopt management strategies that comply with such requirements. As a foundational work, this research develops a specific methodology for assessing the condition and performance of the sign assets maintained by a large transportation agency. In doing so, it determines the key elements that should be considered when developing any sign asset management strategy. This work incorporates and builds upon previous research to develop an assessment strategy that offers new insight into where sign asset management efforts should be focused. Given the conditions unique to the Utah Department of Transportation's (UDOT) sign assets, the findings present a potential paradigm shift from previous assumptions about the best prospective management practices. Sign damage was determined to be the primary issue affecting the nighttime visibility of UDOT-maintained signs. By controlling damage within UDOT's sign assets, retroreflectivity compliance may be maintained. The findings provide new options and considerations for managing both sign retroreflectivity and nighttime visibility at a large scale.

    Spartan Daily, April 21, 1976

    Get PDF
    Volume 66, Issue 43

    The Murray Ledger and Times, March 20, 2015

    Get PDF