48,254 research outputs found

    Robust Machine Learning In Computer Vision

    Deep neural networks have been shown to be successful in various computer vision tasks such as image classification and object detection. Although deep neural networks have exceeded human performance on many tasks, robustness and reliability remain concerns when deploying deep learning models. On the one hand, degraded images and videos hurt the performance of computer vision tasks. On the other hand, if deep neural networks are placed under adversarial attack, they can fail completely. Motivated by this vulnerability, I analyze and develop image restoration and adversarial defense algorithms toward a vision of robust machine learning in computer vision. In this dissertation, I study two types of degradation that make deep neural networks vulnerable. The first part of the dissertation focuses on face recognition at long range, whose performance is severely degraded by atmospheric turbulence. The theme is improving the performance and robustness of various tasks in face recognition systems, such as facial keypoint localization, feature extraction, and image restoration. The second part focuses on defending against adversarial attacks in the image classification task. The theme is exploring adversarial defense methods that achieve good standard accuracy, robustness to adversarial attacks under known threat models, and good generalization to other, unseen attacks.
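The adversarial attacks mentioned above perturb an input by a barely-perceptible amount yet flip the model's prediction. A minimal sketch of the fast gradient sign method (FGSM) on a toy linear model illustrates the idea; the weights, inputs, and epsilon here are purely illustrative, not taken from the dissertation:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: take one epsilon-sized step in the
    direction that increases the model's score for the wrong class."""
    return x + epsilon * np.sign(grad)

def predict(w, x):
    """Toy linear classifier: class 1 if w . x > 0, else class 0."""
    return int(w @ x > 0)

w = np.array([0.5, -0.3, 0.8])           # fixed model weights
x = np.array([0.2, 0.4, -0.1])           # clean input: score -0.10, class 0
# For a linear score w . x, the gradient with respect to x is just w.
x_adv = fgsm_perturb(x, w, epsilon=0.1)
```

Here `predict(w, x)` returns 0 while `predict(w, x_adv)` returns 1, even though no coordinate of the input moved by more than 0.1.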

    Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

    Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure to mitigate this threat, based on rejecting classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions.
    Comment: Accepted for publication at the ICCV 2017 Workshop on Vision in Practice on Autonomous Robots (ViPAR)
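The rejection-based countermeasure can be illustrated with a simple distance-based sketch; this is a generic stand-in for the idea of refusing anomalous inputs, not the authors' exact detector:

```python
import numpy as np

def classify_with_reject(x, centroids, tau):
    """Nearest-centroid classifier with a reject option: if the input
    lies farther than tau from every class centroid, treat it as
    anomalous and refuse to classify (return -1) instead of guessing."""
    dists = np.linalg.norm(centroids - x, axis=1)
    k = int(np.argmin(dists))
    return k if dists[k] <= tau else -1

centroids = np.array([[0.0, 0.0], [5.0, 5.0]])  # illustrative class centers
```

An in-distribution point such as `[0.2, 0.1]` is assigned to class 0, while a far-away point such as `[10.0, -10.0]` is rejected with `-1` rather than forced into the nearest class.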

    Unleashing the Power of Multi-Agent Deep Learning: Cyber-Attack Detection in IoT

    Detecting botnet and malware cyber-attacks is a critical task in ensuring the security of computer networks. Traditional methods for identifying such attacks often rely on static rules and signatures, which attackers can easily evade. Deep learning (DL), a subdivision of machine learning (ML), has shown promise in improving the accuracy of botnet and malware detection by analyzing large amounts of network traffic data and identifying patterns that are difficult to detect with traditional methods. To identify abnormal traffic patterns that can signal botnet or malware activity, deep learning models can be trained to learn the intricate interactions and correlations between network traffic parameters such as packet size, time intervals, and protocol headers. The models can also be trained to detect anomalies in network traffic, which may indicate the presence of unknown malware. The threat of malware and botnet attacks has grown in frequency with the growth of the Internet of Things (IoT). In this research, we offer an LSTM- and GAN-based method for identifying such attacks. Using a dataset of network traffic from various IoT devices, we apply our model to classify incoming traffic as either benign or malicious. Our findings show that the method attains high accuracy in identifying botnet and malware cyber-attacks in IoT networks, contributing to stronger and more effective security systems for shielding IoT devices from online threats. One major advantage of using deep learning for botnet and malware detection is its ability to adapt to new and previously unknown attack patterns, making it a useful tool against constantly evolving cyber threats. However, DL models require large quantities of labeled data for training, and their performance can be affected by the quality and quantity of the data used. Deep learning holds great potential for improving the accuracy and effectiveness of botnet and malware detection, and its continued development and application could lead to significant advances in cybersecurity.
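Before any LSTM or GAN can classify a flow, raw packets must be turned into fixed-length feature sequences of the kind the abstract describes (packet sizes and time intervals). A minimal sketch of that preprocessing step, with a hypothetical `flow_to_sequence` helper not taken from the paper:

```python
import numpy as np

def flow_to_sequence(packets, seq_len=8):
    """Turn a list of (timestamp, packet_size) pairs into a fixed-length
    sequence of (size, inter-arrival gap) features, zero-padded -- the
    per-flow input shape a recurrent traffic classifier consumes."""
    feats, prev_t = [], None
    for t, size in packets[:seq_len]:
        gap = 0.0 if prev_t is None else t - prev_t
        feats.append((float(size), gap))
        prev_t = t
    feats += [(0.0, 0.0)] * (seq_len - len(feats))  # pad short flows
    return np.array(feats)
```

For a flow `[(0.00, 60), (0.05, 1500), (0.06, 40)]` this yields an `(8, 2)` array whose first rows hold the packet sizes and inter-arrival gaps; an LSTM-based model would then consume batches of such sequences.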