    A System for the Detection of Adversarial Attacks in Computer Vision via Performance Metrics

    Adversarial attacks, attacks committed by an adversary to hijack a system, are prevalent in deep learning tasks for computer vision and are among the greatest threats to these models' safe and accurate use. These attacks force the trained model to misclassify an image using pixel-level changes undetectable to the human eye. Various defenses against these attacks exist and are detailed in this work. Previous research has established that when adversarial attacks occur, different node patterns are activated within a Deep Neural Network (DNN). Additionally, it is known that CPU and GPU metrics differ depending on the computations being performed. This work builds upon that knowledge to hypothesize that system performance metrics, in the form of CPU utilization, GPU utilization, and throughput, will reflect the presence of adversarial input in a DNN. The experiment found that external measurements of system performance metrics did not reflect the presence of adversarial input. This work establishes the beginning stages of using system performance metrics to detect and defend against adversarial attacks. Using performance metrics to defend against adversarial attacks can increase a model's safety, improving the robustness and trustworthiness of DNNs.
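    The sketch below is a rough illustration of the kind of measurement the abstract describes, not the authors' actual experimental setup: it samples CPU utilization and inference throughput around each batch of a model's forward pass. The `profile_inference` helper, the dummy model, and the batch shapes are all hypothetical.

    ```python
    import time
    import numpy as np
    import psutil

    def profile_inference(model, batches):
        """Record CPU utilization and throughput for each inference batch.

        `model` is any callable mapping a batch of images to predictions;
        the model and batch source here are placeholders for illustration.
        GPU utilization could be sampled analogously (e.g., via NVML); it is
        omitted so the sketch runs on any machine.
        """
        records = []
        psutil.cpu_percent(interval=None)  # prime the measurement window
        for batch in batches:
            start = time.perf_counter()
            _ = model(batch)
            elapsed = time.perf_counter() - start
            records.append({
                "cpu_percent": psutil.cpu_percent(interval=None),  # CPU use since last call
                "throughput": len(batch) / elapsed,                # images per second
                "latency_s": elapsed,
            })
        return records

    if __name__ == "__main__":
        # Toy stand-ins: a dummy "model" and random image batches.
        dummy_model = lambda x: x.mean(axis=(1, 2, 3))
        batches = [np.random.rand(32, 224, 224, 3) for _ in range(10)]
        for i, rec in enumerate(profile_inference(dummy_model, batches)):
            print(i, rec)
    ```

    A detector along these lines would compare such traces for benign and adversarial inputs; per the abstract, the externally observed metrics did not separate the two cases.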

    Towards Specifying And Evaluating The Trustworthiness Of An AI-Enabled System

    Applied AI has shown promise in the data processing of key industries and government agencies, extracting actionable information used to make important strategic decisions. A core attribute of AI-enabled systems is their trustworthiness, which has important implications for their robustness and full acceptance. In this paper, we explain what trustworthiness in AI-enabled systems means and the key technical challenges of specifying and verifying trustworthiness. Toward solving these technical challenges, we propose a method to specify and evaluate the trustworthiness of AI-based systems using quality-attribute scenarios and design tactics. Using our trustworthiness scenarios and design tactics, we can analyze the architectural design of AI-enabled systems to ensure that trustworthiness has been properly expressed and achieved. The contributions of the thesis include (i) the identification of the trustworthiness sub-attributes that affect the trustworthiness of AI systems, (ii) the proposal of trustworthiness scenarios to specify trustworthiness in an AI system, (iii) a design checklist to support the analysis of the trustworthiness of AI systems, and (iv) the identification of design tactics that can be used to achieve trustworthiness in an AI system.
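    As a hedged illustration of what a quality-attribute scenario looks like when written down (the thesis's own scenario templates are not reproduced here), the sketch below encodes the conventional six-part scenario structure; the `TrustworthinessScenario` class and the example values are invented for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TrustworthinessScenario:
        """One quality-attribute scenario in the standard six-part form.

        The field names follow the conventional quality-attribute-scenario
        structure; the example values below are hypothetical and not taken
        from the thesis.
        """
        source: str            # who or what generates the stimulus
        stimulus: str           # the condition the system must respond to
        environment: str        # operating context when the stimulus arrives
        artifact: str           # the part of the system that is stimulated
        response: str           # what the system should do
        response_measure: str   # how the response is judged

    # Hypothetical example: a robustness-related trustworthiness scenario.
    example = TrustworthinessScenario(
        source="end user",
        stimulus="submits an out-of-distribution input",
        environment="normal operation",
        artifact="image-classification component",
        response="flags the input as low-confidence instead of silently misclassifying",
        response_measure="at least 95% of such inputs are flagged during testing",
    )
    print(example)
    ```

    Scenarios of this shape can then be checked against an architectural design, with design tactics chosen to satisfy each response measure.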