19 research outputs found

    3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models

    Full text link
    3D point cloud models are widely applied in safety-critical scenes, creating an urgent need for solid proofs of model robustness. The existing verification method for point cloud models is time-consuming and computationally unattainable on large networks. Additionally, it cannot handle the complete PointNet model with the joint alignment network (JANet), which contains multiplication layers and effectively boosts the performance of 3D models. This motivates us to design a more efficient and general framework to verify various architectures of point cloud models. The key challenges in verifying large-scale, complete PointNet models are handling the cross-non-linearity operations in the multiplication layers and the high computational complexity of high-dimensional point cloud inputs and the added layers. We therefore propose an efficient verification framework, 3DVerifier, that tackles both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation to compute certified bounds on the outputs of the point cloud models. Our comprehensive experiments demonstrate that 3DVerifier outperforms existing verification algorithms for 3D models in both efficiency and accuracy. Notably, our approach achieves an orders-of-magnitude improvement in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers. We release our tool 3DVerifier via https://github.com/TrustAI/3DVerifier for use by the community.
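    The cross-non-linearity here is a bilinear term z = x·y between two bounded intermediate activations. The paper defines its own linear relaxation function; as a minimal sketch of the idea, the classical McCormick corner products already give sound constant bounds on a bilinear term (the function name and toy bounds below are illustrative, not the paper's implementation):

```python
import numpy as np

def bilinear_bounds(xl, xu, yl, yu):
    """Sound constant bounds on z = x * y, elementwise, given
    xl <= x <= xu and yl <= y <= yu: the extrema of a bilinear
    term over a box are attained at the four corner products."""
    corners = np.stack([xl * yl, xl * yu, xu * yl, xu * yu])
    return corners.min(axis=0), corners.max(axis=0)

# toy pre-activation bounds for two neurons of a multiplication layer
xl, xu = np.array([-1.0, 0.0]), np.array([2.0, 1.0])
yl, yu = np.array([0.5, -2.0]), np.array([3.0, 2.0])
zl, zu = bilinear_bounds(xl, xu, yl, yu)  # [-3, -2] and [6, 2]
```

A tighter, fully linear relaxation (as in McCormick envelopes) keeps z bounded by planes in x and y rather than constants, which is what allows backward propagation through the layer.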

    Enhancing robustness in video recognition models: Sparse adversarial attacks and beyond

    Get PDF
    Recent years have witnessed increasing interest in adversarial attacks on images, while adversarial video attacks have seldom been explored. In this paper, we propose a sparse adversarial attack strategy on videos (DeepSAVA). Our model aims to add a small, human-imperceptible perturbation to the key frame of the input video to fool classifiers. To carry out an effective attack that mirrors real-world scenarios, our algorithm integrates spatial transformation perturbations into the frame. Instead of using an Lp norm to gauge the disparity between the perturbed frame and the original frame, we employ the structural similarity index (SSIM), which has been established as a more suitable metric for quantifying image alterations resulting from spatial perturbations. We employ a unified optimisation framework to combine spatial transformation with additive perturbation, thereby attaining a more potent attack. We design an effective and novel optimisation scheme that alternately utilises Bayesian optimisation (BO) to identify the most critical frame in a video and stochastic gradient descent (SGD) based optimisation to produce both additive and spatial-transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos, maintaining human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Furthermore, building on the strong perturbations produced by DeepSAVA, we design a novel adversarial training framework to improve the robustness of video classification models. Our intensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA in terms of attacking performance and efficiency. Compared to the baseline techniques, DeepSAVA exhibits the highest level of performance in generating adversarial videos for three distinct video classifiers. Remarkably, it achieves an impressive fooling rate ranging from 99.5% to 100% for the I3D model with the perturbation of just a single frame. Additionally, DeepSAVA demonstrates favorable transferability across various time series models. The proposed adversarial training strategy is also empirically demonstrated to yield better performance in training robust video classifiers than state-of-the-art adversarial training with a projected gradient descent (PGD) adversary.
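    SSIM compares two images through their luminance, contrast, and covariance statistics rather than a raw pixel norm. As a simplified sketch of the metric used as the adversarial distance above (standard implementations average the same formula over local sliding windows, and DeepSAVA's exact usage is defined in the paper), a whole-image version is:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Simplified SSIM from whole-image statistics for images with
    dynamic range L; sliding-window averaging is omitted for brevity."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))  # hypothetical clean key frame
noisy = np.clip(frame + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

s_clean = ssim_global(frame, frame)  # identical frames give SSIM 1
s_noisy = ssim_global(frame, noisy)  # perturbation pushes SSIM below 1
```

An attack in this style would then maximise the classifier's loss on the perturbed frame subject to keeping its SSIM with the original frame high, which is what makes the perturbation hard for humans to notice.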

    Sparse Adversarial Video Attacks with Spatial Transformations

    Get PDF
    In recent years, a significant amount of research effort has concentrated on adversarial attacks on images, while adversarial video attacks have seldom been explored. We propose an adversarial attack strategy on videos, called DeepSAVA. Our model combines additive perturbation and spatial transformation in a unified optimisation framework, where the structural similarity index (SSIM) measure is adopted to measure the adversarial distance. We design an effective and novel optimisation scheme which alternately utilizes Bayesian optimisation to identify the most influential frame in a video and stochastic gradient descent (SGD) based optimisation to produce both additive and spatial-transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos, maintaining human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Our intensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA. Comment: The short version of this work will appear in the BMVC 2021 conference.

    Assessment of the Robustness of Deep Neural Networks (DNNs)

    Get PDF
    In the past decade, Deep Neural Networks (DNNs) have demonstrated outstanding performance in various domains. However, some researchers have recently shown that DNNs are surprisingly vulnerable to adversarial attacks. For instance, adding a small, human-imperceptible perturbation to an input image can fool a DNN, causing the model to make an arbitrarily wrong prediction with high confidence. This raises serious concerns about the readiness of deep learning models, particularly in safety-critical applications such as surveillance systems, autonomous vehicles, and medical applications. Hence, it is vital to investigate the performance of DNNs in an adversarial environment. In this thesis, we study the robustness of DNNs in three aspects: adversarial attacks, adversarial defence, and robustness verification. First, we address robustness problems in video models and propose DeepSAVA, a sparse adversarial attack on video models. It aims to add human-imperceptible perturbations to the crucial frame of the input video to fool classifiers. Additionally, we construct a novel adversarial training framework based on the perturbations generated by DeepSAVA to increase the robustness of video classification models. The results show that DeepSAVA runs a relatively sparse attack on video models, yet achieves state-of-the-art performance in terms of attack success rate and adversarial transferability. Next, we address the challenges of robustness verification in two deep learning models: 3D point cloud models and cooperative multi-agent reinforcement learning models (c-MARLs). Robustness verification aims to provide solid proof of robustness within an input space against any adversarial attack. To verify the robustness of 3D point cloud models, we propose an efficient verification framework, 3DVerifier, which tackles the challenges of cross-non-linearity operations in multiplication layers and the high computational complexity of high-dimensional point cloud inputs. We use a linear relaxation function to bound the multiplication layer and combine forward and backward propagation to compute certified bounds on the outputs of the point cloud models. For certifying c-MARLs, we propose a novel certification method, the first work to leverage a scalable approach for c-MARLs to determine actions with guaranteed certified bounds. The challenges of c-MARL certification are the accumulated uncertainty as the number of agents increases and the potentially limited impact that changing the action of a single agent has on the global team reward. These challenges prevent us from using existing algorithms directly. We employ the false discovery rate (FDR) controlling procedure, considering the importance of each agent, to certify per-state robustness, and propose a tree-search-based algorithm to find a lower bound on the global reward under the minimal certified perturbation. The experimental results show that the obtained certification bounds are much tighter than those of state-of-the-art RL certification solutions. In summary, this thesis focuses on assessing the robustness of deep learning models that are widely applied in safety-critical systems but rarely studied by the community. It not only investigates the motivation and challenges of assessing the robustness of these deep learning models but also proposes novel and effective approaches to tackle them.

    The Analysis of Stress Waves at a Junction of Beam and String

    No full text
    In bridge engineering, there are dynamic problems that traditional theory cannot solve, so the theory of stress waves is introduced to address them. This is a new attempt to apply mechanics theory to practical engineering. This paper investigates the stress wave at a junction of a structure composed of beams and strings. This structure is studied because the presence of a soft rope makes the transmission of force in the bridge structure differ from that predicted by traditional theory, and it forms the basis for further research. Equilibrium equations for displacement and internal force are built from the stated hypotheses, and the fast Fourier transform (FFT) numerical algorithm is used to express an incident pulse of arbitrary shape. The analytical solutions are substantiated by comparison with finite element programs. The paper concludes that if the cross-section of the string is relatively small, the energy density in the structure is relatively large, which is disadvantageous to the structure.
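    Expressing an arbitrary incident pulse via the FFT means decomposing it into harmonic components, solving the junction conditions per frequency, and superposing the results. As a minimal sketch of the decomposition step only (the pulse shape below is hypothetical, and the per-frequency junction solve from the paper is omitted):

```python
import numpy as np

n = 1024
t = np.linspace(0.0, 1.0, n, endpoint=False)
# hypothetical incident pulse: a half-sine confined to the first 10% of the window
pulse = np.where(t < 0.1, np.sin(np.pi * t / 0.1), 0.0)

# decompose into harmonic components; each frequency would then be
# propagated through the beam-string junction independently
spectrum = np.fft.rfft(pulse)
freqs = np.fft.rfftfreq(n, d=t[1] - t[0])

# round trip: superposing all harmonics recovers the original pulse
reconstructed = np.fft.irfft(spectrum, n)
assert np.allclose(reconstructed, pulse)
```

Because the junction relations are linear in each harmonic, applying a frequency-dependent transmission coefficient to `spectrum` before the inverse transform yields the transmitted pulse shape.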

    A New Strategy of Transformer Oil Chromatogram Alarm

    No full text
    This paper addresses early fault warning for transformer oil chromatography, for which the methods in the national standard are commonly used. A method is proposed to obtain alarm thresholds for each component gas in the transformer oil chromatogram based on statistical analysis of a large body of data. Compared with the values specified in the national standard, the thresholds obtained here are relatively conservative and can effectively prevent potential failures from developing. On the basis of the per-gas thresholds, a combination alarm strategy with a decision-supporting gas group is then proposed through Pearson correlation analysis. This method can effectively avoid false alarms due to chromatography measurement errors caused by equipment, human, and environmental factors, thereby improving alarm accuracy. The theoretical results in this paper provide substation operation and maintenance staff with a refined early fault warning method for transformer oil chromatograms.
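    The two ingredients above are a per-gas threshold from the empirical distribution of historical readings and a Pearson-correlation grouping of gases that should rise together. A minimal sketch under assumed data (the gas names, the synthetic readings, the 99th-percentile rule, and the 0.8 correlation cutoff are all illustrative choices, not the paper's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(0)
h2 = rng.normal(30.0, 5.0, 500)  # hypothetical historical H2 readings (ppm)
readings = {
    "H2": h2,
    "CH4": 0.5 * h2 + rng.normal(0.0, 1.0, 500),  # tracks H2
    "C2H2": rng.normal(1.0, 0.2, 500),            # independent gas
}

# conservative per-gas alarm threshold from the empirical distribution
thresholds = {g: np.percentile(v, 99) for g, v in readings.items()}

# Pearson correlation matrix to form a decision-supporting gas group:
# a single-gas alarm is confirmed only if strongly correlated partner
# gases rise as well, filtering out isolated measurement errors
names = list(readings)
corr = np.corrcoef([readings[g] for g in names])
partners = {names[i]: [names[j] for j in range(len(names))
                       if j != i and corr[i, j] > 0.8]
            for i in range(len(names))}
```

With this synthetic data, CH4 lands in H2's supporting group while C2H2 stands alone, so a lone C2H2 spike would be flagged for re-measurement rather than raising an immediate alarm.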

    3DVerifier: efficient robustness verification for 3D point cloud models

    No full text
    3D point cloud models are widely applied in safety-critical scenes, creating an urgent need for solid proofs of model robustness. The existing verification method for point cloud models is time-consuming and computationally unattainable on large networks. Additionally, it cannot handle the complete PointNet model with the joint alignment network, which contains multiplication layers and effectively boosts the performance of 3D models. This motivates us to design a more efficient and general framework to verify various architectures of point cloud models. The key challenges in verifying large-scale, complete PointNet models are handling the cross-non-linearity operations in the multiplication layers and the high computational complexity of high-dimensional point cloud inputs and the added layers. We therefore propose an efficient verification framework, 3DVerifier, that tackles both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation to compute certified bounds on the outputs of the point cloud models. Our comprehensive experiments demonstrate that 3DVerifier outperforms existing verification algorithms for 3D models in both efficiency and accuracy. Notably, our approach achieves an orders-of-magnitude improvement in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers. We release our tool 3DVerifier via https://github.com/TrustAI/3DVerifier for use by the community.

    Certified Policy Smoothing for Cooperative Multi-Agent Reinforcement Learning

    Get PDF
    Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the analysis of robustness for c-MARL models is profoundly important. However, robustness certification for c-MARLs has not yet been explored by the community. In this paper, we propose a novel certification method, the first work to leverage a scalable approach for c-MARLs to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared to single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potentially limited impact that changing the action of a single agent has on the global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure, considering the importance of each agent, to certify per-state robustness. We further propose a tree-search-based algorithm to find a lower bound on the global reward under the minimal certified perturbation. As our method is general, it can also be applied in a single-agent environment. We empirically show that our certification bounds are much tighter than those of state-of-the-art RL certification solutions. We also evaluate our method on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method can certify the robustness of all c-MARL models in various environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMARL.
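    Certifying many agents per state means running many simultaneous statistical tests, which is why an FDR controlling procedure is needed. The paper's importance-weighted variant is its own contribution; as a sketch of the plain building block, the classical Benjamini-Hochberg step-up procedure looks like this (the per-agent p-values below are hypothetical):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    rejected null hypotheses while controlling the FDR at level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # find the largest k with p_(k) <= (k/m) * alpha, then
    # reject the k smallest p-values
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# hypothetical per-agent p-values from smoothing-based certification tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.75]
rejected = benjamini_hochberg(pvals)  # rejects only the two smallest
```

Rejecting a null here would correspond to certifying an agent's action at the current state; controlling the FDR keeps the fraction of wrongly certified agents bounded as the number of agents grows.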

    Estrogen protects against liver damage in sepsis through inhibiting oxidative stress mediated activation of pyroptosis signaling pathway.

    No full text
    Sepsis is characterized by a systemic inflammatory response and multisystem organ dysfunction, involving the activation of inflammatory and oxidative stress pathways. Estrogen has been shown to have anti-inflammatory and antioxidant effects as well as a broadly organ-protective role. However, whether estrogen alleviates sepsis-induced liver injury, and the mechanisms involved, remain unknown. Septic mice were generated by intraperitoneal injection of lipopolysaccharide, and the effect of estrogen on liver injury was investigated. Furthermore, the roles of the NLRP3 inhibitor MCC950 and the mitochondria-targeted ROS scavenger Mito-TEMPO in liver injury were explored in septic mice. Female septic mice exhibited liver damage with increased serum AST and ALT levels as well as extensive necrosis, and the damage was more severe in male septic mice. Moreover, ovariectomy (OVX) aggravated sepsis-induced liver damage and activation of the pyroptosis signaling pathway; this was alleviated by estrogen, as evidenced by decreased serum AST and ALT levels, fewer infiltrating inflammatory cells, and reduced expression of pyroptosis-related proteins. The aggravated mitochondrial dysfunction and liver injury caused by OVX in septic mice were also partly reversed by Mito-TEMPO and MCC950. These results demonstrate that estrogen protects against sepsis-induced liver damage by preserving mitochondrial function and inhibiting oxidative-stress-mediated activation of the pyroptosis signaling pathway.

    Observable Electrochemical Oxidation of Carbon Promoted by Platinum Nanoparticles

    No full text
    The radical degradation of Pt-based catalysts for the oxygen reduction reaction (ORR), predominantly caused by the oxidation of carbon supports, heavily hinders the commercialization of polymer electrolyte membrane fuel cells (PEMFCs). It has been reported that the electrochemical oxidation of carbon can be accelerated by Pt catalysts; however, no direct evidence of this promotion by Pt has hitherto been presented. Herein, a unique Pt catalyst covered by an ultrathin carbon layer (approximately 2.9 nm in thickness), Pt/C-GC, is designed and synthesized by a chemical vapor deposition (CVD) method. This magnifies the catalytic effect of Pt on carbon oxidation owing to the greatly increased number of contact sites between the metal and the support, making it easy to investigate the carbon oxidation process by observing the thinning of the carbon layer on the Pt nanoparticles in TEM observations. This finding can better guide the structural design of durable metal catalysts for PEMFCs and other applications.