248 research outputs found

    Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust Deep Neural Architectures

    Deep neural networks have been found vulnerable to adversarial attacks, raising potential concerns in security-sensitive contexts. To address this problem, recent research has investigated the adversarial robustness of deep neural networks from the architectural point of view. However, searching for architectures of deep neural networks is computationally expensive, particularly when coupled with an adversarial training process. To meet this challenge, this paper proposes a bi-fidelity multiobjective neural architecture search approach. First, we formulate the NAS problem of enhancing the adversarial robustness of deep neural networks as a multiobjective optimization problem. Specifically, in addition to a low-fidelity performance predictor as the first objective, we leverage an auxiliary objective whose value is the output of a surrogate model trained with high-fidelity evaluations. Second, we reduce the computational cost by combining three performance estimation methods: parameter sharing, low-fidelity evaluation, and a surrogate-based predictor. The effectiveness of the proposed approach is confirmed by extensive experiments on the CIFAR-10, CIFAR-100, and SVHN datasets.
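    To make the bi-fidelity formulation concrete, the sketch below (an illustration under assumed helper names and a dummy archive, not the paper's code) scores a candidate architecture on two objectives to be minimized: a cheap low-fidelity proxy and the prediction of a surrogate regressor fitted on architectures that already received expensive high-fidelity, adversarially trained evaluations. The architecture encoding and the choice of a random-forest surrogate are assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def encode_architecture(arch) -> np.ndarray:
        # Placeholder: map an architecture description to a fixed-length vector.
        return np.asarray(arch, dtype=float)

    def low_fidelity_accuracy(arch) -> float:
        # Placeholder: cheap proxy, e.g. robust accuracy after a few epochs
        # with shared weights instead of full adversarial training.
        return float(np.mean(arch))

    # Surrogate fitted on the archive of high-fidelity (fully adversarially
    # trained) evaluations gathered so far (dummy data here).
    archive_X = np.random.rand(32, 4)      # encoded architectures
    archive_y = np.random.rand(32)         # their high-fidelity robust accuracies
    surrogate = RandomForestRegressor(n_estimators=100).fit(archive_X, archive_y)

    def evaluate(arch) -> tuple[float, float]:
        # Objective 1: negated low-fidelity proxy (minimization convention).
        f1 = -low_fidelity_accuracy(arch)
        # Objective 2: negated surrogate prediction of high-fidelity robustness.
        x = encode_architecture(arch).reshape(1, -1)
        f2 = -float(surrogate.predict(x)[0])
        return f1, f2                      # objective vector for the multiobjective EA

    The resulting objective vectors would then drive an evolutionary multiobjective optimizer such as NSGA-II, so that only promising candidates incur full adversarial training.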

    Intelligent Agents for Active Malware Analysis

    The main contribution of this thesis is to give a novel perspective on Active Malware Analysis, modeled as a decision-making process between intelligent agents. We propose solutions aimed at extracting the behaviors of malware agents with advanced Artificial Intelligence techniques. In particular, we devise novel action selection strategies for the analyzer agents that allow malware to be analyzed by selecting sequences of triggering actions aimed at maximizing the information acquired. The goal is to create informative models representing the behaviors of the malware agents observed while interacting with them during the analysis process. Such models can then be used to effectively compare a malware sample against others and to correctly identify the malware family.
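    As an illustration of information-driven action selection of this kind (a minimal sketch under an assumed observation model, not the thesis' algorithm), the snippet below greedily picks the next triggering action as the one with the highest expected reduction in entropy over a belief about the malware family.

    import math

    def entropy(belief):
        # Shannon entropy of a {family: probability} belief.
        return -sum(p * math.log2(p) for p in belief.values() if p > 0.0)

    def expected_information_gain(belief, action, obs_prob, observations):
        # obs_prob[family][action][obs] = P(obs | family, action), an assumed model.
        gain = 0.0
        for obs in observations:
            p_obs = sum(belief[f] * obs_prob[f][action].get(obs, 0.0) for f in belief)
            if p_obs == 0.0:
                continue
            posterior = {f: belief[f] * obs_prob[f][action].get(obs, 0.0) / p_obs
                         for f in belief}
            gain += p_obs * (entropy(belief) - entropy(posterior))
        return gain

    def select_action(belief, actions, obs_prob, observations):
        # Greedy one-step lookahead: trigger the action expected to reveal the
        # most about which family the malware belongs to.
        return max(actions,
                   key=lambda a: expected_information_gain(belief, a, obs_prob, observations))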

    Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems

    Prepared for: Naval Air Warfare Development Center (NAVAIR). Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error, since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches may not apply directly to these novel systems, complicating the operation of systems safety analysis (such as implemented in MIL-STD 882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on these capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and for making risk decisions on naval employment. This research considers the state of the art, evaluating which techniques are most likely to be employable, usable, and correct. Techniques include software analysis, simulation environments, and mathematical determinations. Naval Air Warfare Development Center; Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). Approved for public release. Distribution is unlimited.

    Machine learning and blockchain technologies for cybersecurity in connected vehicles

    Future connected and autonomous vehicles (CAVs) must be secured against cyberattacks for their everyday functions on the road so that the safety of passengers and vehicles can be ensured. This article presents a holistic review of cybersecurity attacks on sensors and threats regarding multi-modal sensor fusion. A comprehensive review of cyberattacks on intra-vehicle and inter-vehicle communications is presented afterward. Besides the analysis of conventional cybersecurity threats and countermeasures for CAV systems, a detailed review of modern machine learning, federated learning, and blockchain approaches is also conducted to safeguard CAVs. Machine learning and data mining-aided intrusion detection systems and other countermeasures dealing with these challenges are elaborated at the end of the related section. In the last section, research challenges and future directions are identified.
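    As a small illustration of the machine-learning-aided intrusion detection the review covers (a sketch on synthetic data, not any system from the article), the snippet below trains a classifier on toy per-window features of in-vehicle CAN traffic; the feature choices and the generated data are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy features per CAN-frame window: message rate, mean inter-arrival time,
    # payload entropy, and count of previously unseen arbitration IDs.
    X_benign = rng.normal(loc=[50, 0.020, 3.5, 0.1], scale=[5, 0.005, 0.3, 0.3], size=(500, 4))
    X_attack = rng.normal(loc=[400, 0.002, 1.0, 5.0], scale=[50, 0.001, 0.5, 1.0], size=(100, 4))
    X = np.vstack([X_benign, X_attack])
    y = np.array([0] * 500 + [1] * 100)   # 0 = benign traffic, 1 = intrusion (e.g. flooding)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, detector.predict(X_test), target_names=["benign", "intrusion"]))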