
    Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability

    Full text link
    Evasion attacks are a threat to machine learning models, where adversaries attempt to affect classifiers by injecting malicious samples. An alarming side effect of evasion attacks is their ability to transfer among different models: this property is called transferability. Therefore, an attacker can produce adversarial samples on a custom model (surrogate) to conduct the attack on a victim's organization later. Although the literature widely discusses how adversaries can transfer their attacks, the experimental settings are limited and far from reality. For instance, many experiments consider both attacker and defender sharing the same dataset, balance level (i.e., how the ground truth is distributed), and model architecture. In this work, we propose the DUMB attacker model. This framework allows analyzing whether evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. DUMB considers the following conditions: Dataset soUrces, Model architecture, and the Balance of the ground truth. We then propose a novel testbed to evaluate many state-of-the-art evasion attacks with DUMB; the testbed consists of three computer vision tasks with two distinct datasets each, four types of balance levels, and three model architectures. Our analysis, which generated 13K tests over 14 distinct attacks, led to numerous novel findings in the scope of transferable attacks with surrogate models. In particular, mismatches between attackers and victims in terms of dataset source, balance levels, and model architecture lead to a non-negligible loss of attack performance. Comment: Accepted at RAID 202
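    Conceptually, the mismatch conditions above can be probed with a toy transferability experiment: craft an FGSM-style perturbation on a surrogate trained on one data sample and measure the accuracy drop it causes on a separately trained victim. The sketch below uses a hypothetical logistic-regression setup, not the paper's testbed:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain gradient-descent logistic regression.
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Two disjoint samples of the same task simulate a dataset-source mismatch.
X = rng.normal(size=(400, 5)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xs, ys, Xv, yv = X[:200], y[:200], X[200:], y[200:]
ws, bs = train_logreg(Xs, ys)      # attacker's surrogate
wv, bv = train_logreg(Xv, yv)      # victim

# FGSM-style evasion crafted on the surrogate only: step against the
# surrogate's loss gradient sign, then evaluate on the victim.
eps = 0.5
X_adv = Xv + eps * np.sign(np.outer(2 * yv - 1, -ws))

clean = (predict(wv, bv, Xv) == yv).mean()
adv = (predict(wv, bv, X_adv) == yv).mean()
print(f"victim accuracy: clean={clean:.2f}, transferred attack={adv:.2f}")
```

    Even in this toy setting the attack transfers only partially: the victim's accuracy drops, but less than it would if the attacker held the victim model itself.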

    Going In Style: Audio Backdoors Through Stylistic Transformations

    Full text link
    This work explores stylistic triggers for backdoor attacks in the audio domain: dynamic transformations of malicious samples through guitar effects. We first formalize stylistic triggers, which are currently missing in the literature. Second, we explore how to develop stylistic triggers in the audio domain by proposing JingleBack. Our experiments confirm the effectiveness of the attack, achieving a 96% attack success rate. Our code is available at https://github.com/skoffas/going-in-style. Comment: Accepted to ICASSP '23; the first two authors contributed equally
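    A stylistic trigger differs from classic point-wise triggers in that it transforms the entire waveform. A minimal sketch of the idea, using a simple tremolo effect as a stand-in for the guitar effects the paper employs (all parameters here are illustrative):

```python
import numpy as np

def tremolo(wave, sr=16000, rate_hz=5.0, depth=0.7):
    """Amplitude-modulate the waveform - a simple 'stylistic' transformation
    standing in for the guitar effects used as backdoor triggers."""
    t = np.arange(len(wave)) / sr
    lfo = 1.0 - depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))
    return wave * lfo

sr = 16000
clean = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 s of A4
poisoned = tremolo(clean, sr)

# The trigger keeps the length and peak level but reshapes the envelope,
# which a poisoned model can learn to associate with the target label.
print(len(poisoned), float(np.abs(poisoned).max()))
```

    Because the trigger is a global transformation rather than an added patch, it survives resampling and is harder to spot than a fixed injected sound.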

    You Can't Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality

    Full text link
    Virtual and Augmented Reality (VR, AR) are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with VR/AR, they generate extensive behavioral data usually leveraged for measuring human behavior. However, little is known about how this data can be used for other purposes. In this work, we demonstrate the feasibility of user profiling in two different use-cases of virtual technologies: an AR everyday application (N=34) and VR robot teleoperation (N=35). Specifically, we leverage machine learning to identify users and infer their individual attributes (i.e., age, gender). By monitoring users' head, controller, and eye movements, we investigate the ease of profiling on several tasks (e.g., walking, looking, typing) under different mental loads. Our contribution gives significant insights into user profiling in virtual environments.


    Targeted next-generation sequencing identification of mutations in disease resistance gene analogs (RGAs) in wild and cultivated beets

    Get PDF
    Resistance gene analogs (RGAs) were searched bioinformatically in the sugar beet (Beta vulgaris L.) genome as potential candidates for improving resistance against different diseases. In the present study, Ion Torrent sequencing technology was used to identify mutations in 21 RGAs. The DNA samples of ninety-six individuals from six sea beets (Beta vulgaris L. subsp. maritima) and six sugar beet pollinators (eight individuals each) were used for the discovery of single-nucleotide polymorphisms (SNPs). Target amplicons of about 200 bp in length were designed with the Ion AmpliSeq Designer system in order to cover the DNA sequences of the RGAs. The number of SNPs ranged from 0 in four individuals to 278 in the pollinator R740 (which is resistant to rhizomania infection). Among different groups of beets, cytoplasmic male sterile lines had the highest number of SNPs (132), whereas the lowest number of SNPs belonged to O-types (95). The principal coordinates analysis (PCoA) showed that the polymorphisms inside the gene Bv8_184910_pkon (including the CCCTCC sequence) can effectively differentiate wild from cultivated beets, pointing to a possible mutation associated with rhizomania resistance that originated directly in cultivated beets. This is unlike other resistance sources, which are introgressed from wild beets. This gene belongs to the receptor-like kinase (RLK) class of RGAs and is associated with a hypothetical protein. In conclusion, this first report of using Ion Torrent sequencing technology in beet germplasm suggests that the identified sequence CCCTCC can be used in marker-assisted programs to differentiate wild from domestic beets and to identify other unknown disease resistance genes in beet.
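    Used as a marker in this way, screening amplicons for the CCCTCC sequence reduces to a motif search; the reads below are invented for illustration:

```python
# Hypothetical sketch: screening amplicon reads for the CCCTCC marker
# reported inside Bv8_184910_pkon. Sequences are made up for illustration.
MARKER = "CCCTCC"

def carries_marker(seq: str) -> bool:
    """True if the read contains the marker motif (case-insensitive)."""
    return MARKER in seq.upper().replace("\n", "")

reads = {
    "cultivated_R740": "ttgaCCCTCCggtaac",   # carries the marker
    "wild_maritima_01": "ttgaCCATCCggtaac",  # single-base difference
}
for name, seq in reads.items():
    print(name, carries_marker(seq))
```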

    Achievement of the planetary defense investigations of the Double Asteroid Redirection Test (DART) mission

    Get PDF
    NASA's Double Asteroid Redirection Test (DART) mission was the first to demonstrate asteroid deflection, and the mission's Level 1 requirements guided its planetary defense investigations. Here, we summarize DART's achievement of those requirements. On 2022 September 26, the DART spacecraft impacted Dimorphos, the secondary member of the Didymos near-Earth asteroid binary system, demonstrating an autonomously navigated kinetic impact into an asteroid with limited prior knowledge for planetary defense. Months of subsequent Earth-based observations showed that the binary orbital period was changed by –33.24 minutes, with two independent analysis methods each reporting a 1σ uncertainty of 1.4 s. Dynamical models determined that the momentum enhancement factor, β, resulting from DART's kinetic impact test is between 2.4 and 4.9, depending on the mass of Dimorphos, which remains the largest source of uncertainty. Over five dozen telescopes across the globe and in space, along with the Light Italian CubeSat for Imaging of Asteroids, have contributed to DART's investigations. These combined investigations have addressed topics related to the ejecta, dynamics, impact event, and properties of both asteroids in the binary system. A year following DART's successful impact into Dimorphos, the mission has achieved its planetary defense requirements, although work to further understand DART's kinetic impact test and the Didymos system will continue. In particular, ESA's Hera mission is planned to perform extensive measurements in 2027 during its rendezvous with the Didymos–Dimorphos system, building on DART to advance our knowledge and continue the ongoing international collaboration for planetary defense
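    Under a simplified head-on momentum balance, the momentum enhancement factor can be back-computed as β = MΔv/(mU). The figures below are rough public values, not mission-precise ones, and the actual analysis uses full vector geometry and the uncertain mass of Dimorphos:

```python
# Simplified head-on momentum balance: M * dv = beta * m * U, so
# beta = M * dv / (m * U). Inputs are approximate public figures:
# Dimorphos mass ~4.3e9 kg, along-track speed change ~2.7 mm/s,
# DART impact mass ~580 kg, impact speed ~6.1 km/s.
def beta_estimate(M_kg, dv_ms, m_kg, U_ms):
    return M_kg * dv_ms / (m_kg * U_ms)

beta = beta_estimate(M_kg=4.3e9, dv_ms=2.7e-3, m_kg=580, U_ms=6.1e3)
print(f"beta ≈ {beta:.2f}")
```

    With these rough inputs the estimate lands inside the 2.4 to 4.9 range the mission reports; the spread in the published interval comes mainly from the uncertainty in Dimorphos's mass.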

    Data-driven cybersecurity

    No full text
    Due to the continuous growth in Internet data, cybersecurity practitioners have developed new defenses based on Machine Learning (ML). ML-based solutions offer numerous benefits, from learning patterns among large amounts of data to generalizing to unknown data. 
This dissertation covers three significant aspects derived from the interaction between machine learning and cybersecurity: (i) the definition of novel Network Intrusion Detection Systems (NIDS), (ii) cybersecurity for web content monitoring, and (iii) Adversarial Machine Learning (AML). The first part of the dissertation presents two NIDS themes: XeNIDS, aiming to study and design cross-network NIDS, and DETONAR, a NIDS for low-powered IoT networks. The second part covers cybersecurity for web content monitoring. In particular, as users interact in forums and Online Social Networks (OSN), their activity might threaten others (e.g., hate speech). The dissertation covers two themes: helpful review prediction, aiming to forecast whether a review from forums (e.g., Amazon, Yelp) will be considered helpful, and PRaNA, a heuristic that leverages videos' Photo Response Non-Uniformity (PRNU) to tell real videos from their deepfake versions. The third - and last - part of the dissertation presents two evasion attacks: ZeW, an evasion attack on Natural Language Processing applications that leverages invisible UNICODE characters, and CAPA, which discusses real examples of threats created by OSN users that undermined Automatic Content Moderators.
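The idea behind a ZeW-style evasion can be sketched in a few lines: zero-width Unicode characters are invisible when rendered but change the string an NLP pipeline actually receives. The injection scheme and the keyword filter below are hypothetical stand-ins, not the attack's actual implementation:

```python
# Zero-width characters render as nothing but defeat exact string matching.
ZWSP = "\u200b"  # ZERO WIDTH SPACE

def inject(text: str, every: int = 2) -> str:
    """Interleave zero-width spaces every `every` characters."""
    return ZWSP.join(text[i:i + every] for i in range(0, len(text), every))

clean = "offensive"
evasive = inject(clean)

# The string looks identical on screen, yet a naive keyword filter
# matching the literal token "offensive" no longer fires.
print("offensive" in evasive, len(evasive) > len(clean))
```

    Defenses therefore normalize input (e.g., stripping zero-width code points) before matching, which is exactly what this sketch's `replace(ZWSP, "")` inversion would do.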

    The Cross-evaluation of Machine Learning-based Network Intrusion Detection Systems

    No full text
    Enhancing Network Intrusion Detection Systems (NIDS) with supervised Machine Learning (ML) is challenging. ML-NIDS must be trained and evaluated, operations requiring data where benign and malicious samples are clearly labeled. Such labels demand costly expert knowledge, resulting in a lack of real deployments, as well as in papers that keep relying on the same outdated data. The situation improved recently, as some efforts disclosed their labeled datasets. However, most past works used such datasets just as 'yet another' testbed, overlooking the added potential provided by such availability. In contrast, we promote using such existing labeled data to cross-evaluate ML-NIDS. This approach has received only limited attention and, due to its complexity, requires a dedicated treatment. We hence propose the first cross-evaluation model. Our model highlights the broader range of realistic use-cases that can be assessed via cross-evaluations, allowing the discovery of still unknown qualities of state-of-the-art ML-NIDS. For instance, their detection surface can be extended - at no additional labeling cost. However, conducting such cross-evaluations is challenging. Hence, we propose the first framework, XeNIDS, for reliable cross-evaluations based on Network Flows. By using XeNIDS on six well-known datasets, we demonstrate the concealed potential, but also the risks, of cross-evaluations of ML-NIDS.
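    The cross-evaluation idea reduces to a train/test matrix over datasets: train a detector on one network's labeled flows and score it on another's. Below is a toy sketch with synthetic flows and a deliberately naive threshold detector; nothing here reflects XeNIDS internals:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_flows(n, shift):
    """Synthetic 'network flow' features; `shift` mimics per-network drift."""
    X = rng.normal(loc=shift, size=(n, 4))
    y = (X[:, 0] > shift).astype(int)       # 1 = malicious
    return X, y

def train_threshold(X, y):
    # Toy detector: a decision threshold on the first flow feature.
    return (X[y == 1, 0].min() + X[y == 0, 0].max()) / 2

datasets = {"netA": make_flows(300, 0.0), "netB": make_flows(300, 1.5)}

# Cross-evaluation matrix: train on one network, test on every network.
for train_name, (Xt, yt) in datasets.items():
    thr = train_threshold(Xt, yt)
    for test_name, (Xe, ye) in datasets.items():
        acc = ((Xe[:, 0] > thr).astype(int) == ye).mean()
        print(f"train={train_name} test={test_name} acc={acc:.2f}")
```

    The diagonal of the matrix reproduces the usual same-dataset evaluation; the off-diagonal cells expose how much a detector degrades on flows it was never labeled for.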

    Fake News Spreaders Profiling Through Behavioural Analysis

    No full text
    The growth of social media and the interconnection of people have led to the digitalization of communication. Nowadays, the most influential politicians and science communicators use the media to disseminate news or decisions. However, such communication media can be used maliciously to spread so-called fake news in order to polarise public opinion or to deny scientific theories. It is therefore important to develop intelligent and accurate techniques to identify the spreading of fake news. In this paper, we describe the methodology behind our participation in the PAN@CLEF Profiling Fake News Spreaders on Twitter competition. We propose a supervised Machine Learning (ML)-based framework to profile fake-news spreaders. Our method relies on the combination of Big Five personality and stylometric features. Finally, we evaluate our framework's detection capabilities and performance with different ML models on a Twitter dataset in both English and Spanish.
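    Stylometric features of the kind combined with Big Five scores can be extracted in a few lines; the exact feature set below is illustrative, not the one used in the submission:

```python
import re

def stylometric_features(text: str) -> dict:
    """A few stylometric signals of the kind fed, together with Big Five
    personality scores, into a spreader-profiling classifier."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "n_words": len(words),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "exclaim_rate": text.count("!") / max(len(text), 1),
        "upper_rate": sum(c.isupper() for c in text) / max(len(text), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

tweet = "SHOCKING!!! You won't BELIEVE what they found!!!"
feats = stylometric_features(tweet)
print(feats)
```

    Each user's timeline is then summarized as a fixed-length vector of such signals, which any off-the-shelf classifier can consume alongside the personality scores.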

    You Can’t Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality

    No full text
    Augmented and Virtual Reality (AR and VR), collectively known as Extended Reality (XR), are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with XR, they generate extensive behavioral data usually leveraged for measuring human activity, which could be used for profiling users' identities or personal information (e.g., gender). However, several factors affect the efficiency of profiling, such as the technology employed, the action taken, the mental workload, the presence of bias, and the sensors available. To date, no study has considered all of these factors together and in their entirety, limiting the current understanding of XR profiling. In this work, we provide a comprehensive study on user profiling in virtual technologies (i.e., AR, VR). Specifically, we employ machine learning on behavioral data (i.e., head, controller, and eye data) to identify users and infer their individual attributes (i.e., age, gender). Toward this end, we propose a general framework that can potentially infer any personal information from any virtual scenario. We test our framework on eleven generic actions (e.g., walking, searching, pointing) involving low and high mental loads, derived from two distinct use cases: an AR everyday application (34 participants) and VR robot teleoperation (35 participants). Our framework limits the burden of creating technology- and action-dependent algorithms, also reducing the experimental bias evidenced in previous work, and provides a simple (yet effective) baseline for future works. We identified users with up to a 97% F1-score in VR and 80% in AR. Gender and age inference was also facilitated in VR, reaching up to 82% and 90% F1-scores, respectively. Through an in-depth analysis of the sensors' impact, we found VR profiling to be more effective than AR, mainly because of the presence of eye sensors.
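    User identification from behavioral streams boils down to matching session-level motion statistics against enrolled templates. Below is a toy nearest-centroid sketch on synthetic head-motion features; it is not the paper's models or data:

```python
import numpy as np

rng = np.random.default_rng(2)

def session_features(user_bias, n=50):
    """Summary statistics of head/eye motion for one session; each user
    gets a characteristic bias, mimicking idiosyncratic movement patterns."""
    motion = rng.normal(loc=user_bias, scale=0.3, size=(n, 3))
    return motion.mean(axis=0)

# Enrollment: one recorded session per (synthetic) user.
users = {u: rng.normal(size=3) for u in ("alice", "bob", "carol")}
enroll = {u: session_features(b) for u, b in users.items()}

def identify(feat):
    # Nearest-centroid matching against the enrolled templates.
    return min(enroll, key=lambda u: np.linalg.norm(enroll[u] - feat))

# A fresh session per user: can we re-identify everyone?
hits = sum(identify(session_features(users[u])) == u for u in users)
print(f"{hits}/{len(users)} sessions re-identified")
```

    The point of the sketch is that even crude session averages separate users once their movement patterns are stable, which is why headset telemetry is such a potent profiling signal.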