Do switches dream of machine learning?: Toward in-network classification
Machine learning is currently driving a technological and societal revolution. While programmable switches have been proven useful for in-network computing, machine learning within programmable switches has had little success so far. Not using network devices for machine learning carries a high cost, given the known power-efficiency and performance benefits of processing within the network. In this paper, we explore the potential use of commodity programmable switches for in-network classification by mapping trained machine learning models to match-action pipelines. We introduce IIsy, a software- and hardware-based prototype of our approach, and discuss the suitability of mapping to different targets. Our solution can be generalized to additional machine learning algorithms, using the methods presented in this work.
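The core idea of mapping a trained model to a match-action pipeline can be illustrated with a tiny decision tree: each root-to-leaf path becomes one table entry whose range matches encode the split conditions and whose action is the predicted class. The sketch below is purely illustrative (the tree, feature names, and rule format are assumptions, not IIsy's actual implementation):

```python
# Hypothetical sketch: flattening a small "trained" decision tree into
# match-action table entries, in the spirit of the mapping described above.
# The tree, feature names, and rule encoding are illustrative assumptions.

# A toy tree over two packet features:
#   pkt_len <= 128            -> class 0
#   pkt_len > 128, proto <= 6 -> class 1
#   pkt_len > 128, proto > 6  -> class 2
tree = {
    "feature": "pkt_len", "threshold": 128,
    "left": {"leaf": 0},
    "right": {
        "feature": "proto", "threshold": 6,
        "left": {"leaf": 1},
        "right": {"leaf": 2},
    },
}

def tree_to_rules(node, conds=()):
    """Walk the tree, emitting one match-action rule per leaf.

    Each rule is (list of (feature, lo, hi) range matches, action),
    where a match means lo < value <= hi."""
    if "leaf" in node:
        return [(list(conds), node["leaf"])]
    f, t = node["feature"], node["threshold"]
    rules = tree_to_rules(node["left"], conds + ((f, float("-inf"), t),))
    rules += tree_to_rules(node["right"], conds + ((f, t, float("inf")),))
    return rules

def classify(table, pkt):
    """Linear scan standing in for the switch's match-action lookup."""
    for conds, action in table:
        if all(lo < pkt[f] <= hi for f, lo, hi in conds):
            return action
    return None

table = tree_to_rules(tree)
print(classify(table, {"pkt_len": 64, "proto": 17}))   # -> 0 (small packet)
print(classify(table, {"pkt_len": 1500, "proto": 6}))  # -> 1 (large TCP)
```

On a real target, the emitted rules would be installed as range-match entries in the switch's tables; the Python lookup here only mimics that behavior in software.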
A System for the Detection of Adversarial Attacks in Computer Vision via Performance Metrics
Adversarial attacks, or attacks committed by an adversary to hijack a system, are prevalent in the deep learning tasks of computer vision and are one of the greatest threats to these models' safe and accurate use. These attacks force the trained model to misclassify an image, using pixel-level changes undetectable to the human eye. Various defenses against these attacks exist and are detailed in this work. Previous research has established that when adversarial attacks occur, different node patterns in a Deep Neural Network (DNN) are activated within the model. Additionally, it is known that CPU and GPU metrics look different when different computations are occurring. This work builds upon that knowledge to hypothesize that system performance metrics, in the form of CPU, GPU, and throughput measurements, will reflect the presence of adversarial input in a DNN. This experiment found that external measurements of system performance metrics did not reflect the presence of adversarial input. This work establishes the beginning stages of using system performance metrics to detect and defend against adversarial attacks. Using performance metrics to defend against adversarial attacks could increase a model's safety, improving the robustness and trustworthiness of DNNs.
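The measurement idea in the abstract can be sketched as sampling system-level counters around each inference call and comparing clean versus adversarial batches. The snippet below is a minimal sketch under stated assumptions: the "model" is a placeholder function, and process CPU time plus wall-clock throughput stand in for the CPU, GPU, and throughput metrics the study measured:

```python
# Hypothetical sketch of the measurement setup: time an inference call
# externally and derive simple performance metrics from it. The model
# and metric sources here are illustrative stand-ins, not the study's
# actual instrumentation.
import time

def run_inference(batch):
    # Placeholder for a real DNN forward pass.
    return [sum(x) % 10 for x in batch]

def measure(batch):
    """Return (cpu_seconds, items_per_second) for one inference call."""
    cpu0, wall0 = time.process_time(), time.perf_counter()
    run_inference(batch)
    cpu = time.process_time() - cpu0
    wall = time.perf_counter() - wall0
    return cpu, len(batch) / wall if wall > 0 else float("inf")

clean = [[0.1] * 1024 for _ in range(64)]
adversarial = [[0.1 + 1e-4] * 1024 for _ in range(64)]  # perturbed copy

cpu_c, tput_c = measure(clean)
cpu_a, tput_a = measure(adversarial)
# The study's finding was that such externally observed numbers did not
# differ in a way that reveals the adversarial input.
print(f"clean: cpu={cpu_c:.4f}s tput={tput_c:.0f}/s")
print(f"adv:   cpu={cpu_a:.4f}s tput={tput_a:.0f}/s")
```

In practice one would compare distributions of these metrics across many batches (and add GPU counters), rather than single measurements as shown here.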