
    Fear Appeals and Information Security Behaviors: An Empirical Study on Mechanical Turk

    This study conducts a methodological replication of the information security study by Johnston and Warkentin (2010). That study leveraged the fear appeals model (FAM) in the context of information security as it pertains to individual use of anti-spyware software. We adopt all measures, instruments, statistical tests, theory, and models from the original study, but apply them to the Amazon Mechanical Turk population. The results of this replication are not consistent with the original study: two of the five posited hypotheses show effects opposite to those originally found. Threat severity has a positive effect on both response efficacy and self-efficacy, whereas the original study found a negative effect on both. The results imply that the populations in which the study was conducted may differ, warranting additional samples and statistical tests.
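    The replication's core comparison can be illustrated with a toy analysis. Below is a minimal sketch of testing the two reversed paths (threat severity -> response efficacy and threat severity -> self-efficacy) with ordinary least squares; the original study used structural equation modeling, and the file name and column names here are illustrative assumptions only.

```python
# Simplified sketch of checking the two reversed FAM paths with OLS.
# The original study used structural equation modeling; this OLS stand-in
# and the data layout are illustrative assumptions, not the paper's method.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses: averaged Likert-scale construct scores.
df = pd.read_csv("mturk_fam_responses.csv")  # assumed file and columns

for outcome in ["response_efficacy", "self_efficacy"]:
    model = smf.ols(f"{outcome} ~ threat_severity", data=df).fit()
    beta = model.params["threat_severity"]
    p = model.pvalues["threat_severity"]
    print(f"threat_severity -> {outcome}: beta={beta:.3f}, p={p:.4f}")
    # A significantly positive beta would mirror the replication's finding,
    # reversing the sign reported in the original study.
```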

    Suggesting Alternatives for Potentially Insecure Artificial Intelligence Repositories: An Unsupervised Graph Embedding Approach

    Emerging Artificial Intelligence (AI) applications bring with them the potential for both significant societal benefit and harm. Additionally, vulnerabilities within AI source code can make applications susceptible to attacks ranging from stealing private data to stealing trained model parameters. With the adoption of open-source software (OSS) practices, the AI development community risks increasing the number of vulnerabilities present in emerging AI applications: new applications are built on top of previous ones and naturally inherit their vulnerabilities. With the AI OSS community growing rapidly to a scale that requires automated means of analysis for vulnerability management, we compare three categories of unsupervised graph embedding methods capable of generating repository embeddings that rank existing applications by functional similarity. The resulting embeddings can be used to suggest alternatives to potentially insecure AI repositories for AI developers.
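    As a concrete illustration of one such category, the following is a minimal DeepWalk-style sketch: random walks over a toy repository graph are fed to Word2Vec, and similarity over the learned embeddings ranks candidate alternatives. The graph, repository names, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal DeepWalk-style graph embedding sketch: random walks + Word2Vec.
# Toy graph and repo names are illustrative assumptions.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.Graph()
# Edges might encode shared dependencies, forks, or contributor overlap.
G.add_edges_from([
    ("repoA", "repoB"), ("repoA", "repoC"),
    ("repoB", "repoD"), ("repoC", "repoD"), ("repoD", "repoE"),
])

def random_walks(graph, num_walks=20, walk_length=10):
    """Generate uniform random walks starting from every node."""
    walks = []
    for _ in range(num_walks):
        for node in graph.nodes():
            walk = [node]
            while len(walk) < walk_length:
                walk.append(random.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

model = Word2Vec(random_walks(G), vector_size=32, window=3,
                 min_count=0, sg=1, epochs=10)

# Rank candidate alternatives to a potentially insecure repository.
print(model.wv.most_similar("repoA", topn=3))
```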

    On Data-Driven Curation, Learning, and Analysis for Inferring Evolving Internet-of-Things (IoT) Botnets in the Wild

    The insecurity of the Internet-of-Things (IoT) paradigm continues to wreak havoc in consumer and critical infrastructures. The highly heterogeneous nature of IoT devices and their widespread deployments have led to the rise of several key security and measurement-based challenges, significantly crippling the process of collecting, analyzing and correlating IoT-centric data. To this end, this paper explores macroscopic, passive empirical data to shed light on this evolving threat phenomenon. The proposed work aims to classify and infer Internet-scale compromised IoT devices by solely observing one-way network traffic, while also uncovering, reporting and thoroughly analyzing "in the wild" IoT botnets. To prepare a relevant dataset, a novel probabilistic model is developed to cleanse unrelated traffic by removing noise samples (i.e., misconfigured network traffic). Subsequently, several shallow and deep learning models are evaluated in an effort to train an effective multi-window convolutional neural network. By leveraging active and passive measurements when generating the training dataset, the neural network aims to accurately identify compromised IoT devices. Consequently, to infer orchestrated and unsolicited activities generated by well-coordinated IoT botnets, hierarchical agglomerative clustering is employed by scrutinizing a set of innovative and efficient network feature sets. Analyzing 3.6 TB of recently captured darknet traffic revealed 440,000 compromised IoT devices and generated evidence-based artifacts related to 350 IoT botnets. Moreover, thorough analysis of these inferred campaigns reveals their scanning behaviors, packet inter-arrival times, employed rates and geo-distributions. Although several campaigns exhibit significant differences in these aspects, some are more distinguishable, being limited to specific geo-locations or executing scans on random ports besides their core targets. While many of the inferred botnets belong to previously documented campaigns such as Hide and Seek, Hajime and Fbot, newly discovered events portray the evolving nature of such IoT threats by demonstrating growing cryptojacking capabilities or by targeting industrial control services. To motivate empirical (and operational) IoT cyber security initiatives and aid reproducibility of the obtained results, we make the source code of all developed methods and techniques available to the research community at large.
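    The classification step can be illustrated with a small model. The following is a hedged sketch of a multi-window convolutional network in Keras: parallel Conv1D branches with different kernel sizes scan per-flow feature sequences and are merged for a binary compromised/benign decision. The input shape, window sizes, and layer widths are assumptions, not the paper's architecture.

```python
# Sketch of a multi-window 1D CNN for traffic classification.
# Input representation and hyperparameters are illustrative assumptions.
from tensorflow.keras import layers, Model

SEQ_LEN, N_FEATURES = 64, 8  # assumed per-flow packet-sequence features

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
branches = []
for k in (3, 5, 7):  # the "multiple windows": different temporal spans
    x = layers.Conv1D(32, kernel_size=k, activation="relu")(inputs)
    branches.append(layers.GlobalMaxPooling1D()(x))
merged = layers.Concatenate()(branches)
x = layers.Dense(64, activation="relu")(merged)
outputs = layers.Dense(1, activation="sigmoid")(x)  # P(compromised device)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```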

    Trusted CI webinar: Identifying Vulnerable GitHub Repositories in Scientific Cyberinfrastructure: An Artificial Intelligence Approach

    The scientific cyberinfrastructure community heavily relies on public internet-based systems (e.g., GitHub) to share resources and collaborate. GitHub is one of the most powerful and popular systems for open source collaboration, allowing users to share and work on projects in a public space for accelerated development and deployment. Monitoring GitHub for exposed vulnerabilities can save financial cost and prevent misuse and attacks on cyberinfrastructure. Vulnerability scanners that interface with GitHub directly can be leveraged to conduct such monitoring. This research aims to proactively identify vulnerable communities within scientific cyberinfrastructure. We use social network analysis to construct graphs representing the relationships amongst users and repositories. We leverage prevailing unsupervised graph embedding algorithms to generate graph embeddings that capture the network attributes and nodal features of our repository and user graphs. This enables the clustering of public cyberinfrastructure repositories and users that have similar network attributes and vulnerabilities. Results of this research find that major scientific cyberinfrastructures have vulnerabilities pertaining to secret leakage and insecure coding practices for high-impact genomics research. These results can help organizations address their vulnerable repositories and users in a targeted manner.

    Speaker Bio: Dr. Sagar Samtani is an Assistant Professor and Grant Thornton Scholar in the Department of Operations and Decision Technologies at the Kelley School of Business at Indiana University (2020 – Present). He is also a Fellow within the Center for Applied Cybersecurity Research (CACR) at IU. Samtani graduated with his Ph.D. in May 2018 from the Artificial Intelligence Lab in the Management Information Systems (MIS) department of the University of Arizona (UArizona). He also earned his MS in MIS and BSBA from UArizona in 2014 and 2013, respectively. From 2014 to 2017, Samtani served as a National Science Foundation (NSF) Scholarship-for-Service (SFS) Fellow. Samtani's research centers around Explainable Artificial Intelligence (XAI) for cybersecurity and cyber threat intelligence (CTI). Selected recent topics include deep learning, network science, and text mining approaches for smart vulnerability assessment, scientific cyberinfrastructure security, and Dark Web analytics. Samtani has published over two dozen journal and conference papers on these topics in leading venues such as MIS Quarterly, JMIS, ACM TOPS, IEEE IS, Computers and Security, IEEE Security and Privacy, and others. His research has received nearly $1.8M (in PI and Co-PI roles) from the NSF CICI, CRII, and SaTC-EDU programs. He also serves as a Program Committee member or Program Chair of leading AI for cybersecurity and CTI conferences and workshops, including the IEEE S&P Deep Learning Workshop, USENIX ScAINet, ACM CCS AISec, IEEE ISI, IEEE ICDM, and others. He has also served as a Guest Editor on topics pertaining to AI for cybersecurity at IEEE TDSC and other leading journals. Samtani has won several awards for his research and teaching efforts, including the ACM SIGMIS Doctoral Dissertation Award in 2019. Samtani has received media attention from outlets such as the Miami Herald, Fox, Science Magazine, AAAS, and the Penny Hoarder. He is a member of AIS, ACM, IEEE, INFORMS, and INNS.

    NSF Grant #1920430
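    A minimal sketch of the pipeline described in the talk abstract above: embed a toy bipartite user-repository graph via truncated SVD of its adjacency matrix (a simple stand-in for the prevailing unsupervised graph embedding algorithms mentioned), then cluster repositories with k-means. All names and sizes are illustrative assumptions.

```python
# Sketch: adjacency-based embeddings of a user-repository graph + k-means.
# Names, graph structure, and dimensions are illustrative assumptions.
import networkx as nx
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

B = nx.Graph()
users = ["user1", "user2", "user3"]
repos = ["genomics-pipeline", "hpc-scheduler", "seq-toolkit", "viz-portal"]
B.add_edges_from([("user1", "genomics-pipeline"), ("user1", "seq-toolkit"),
                  ("user2", "seq-toolkit"), ("user2", "hpc-scheduler"),
                  ("user3", "viz-portal")])

A = nx.to_numpy_array(B, nodelist=users + repos)
embeddings = TruncatedSVD(n_components=2).fit_transform(A)

# Cluster the repository nodes only; repositories in the same cluster could
# then be screened together for shared weaknesses such as leaked secrets.
repo_vecs = embeddings[len(users):]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(repo_vecs)
print(dict(zip(repos, labels)))
```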

    Improving the Adversarial Robustness of Machine Learning-based Phishing Website Detectors: An Autoencoder-based Auxiliary Approach

    Anti-phishing research relies on collaboration between defensive and offensive efforts. The defensive side develops machine learning-based phishing website detectors to protect users from phishing attacks. However, adversaries can manipulate detectable phishing websites into evasive adversarial examples, misleading detectors into classifying them as legitimate. Therefore, offensive efforts are vital to examine the threats posed by adversaries and inform the defensive side in improving the adversarial robustness of detectors. Prevailing approaches to improving adversarial robustness may compromise a detector's original high performance on clean data (non-adversarial websites) as it becomes more accurate at detecting adversarial examples. To address this, we propose a novel approach using a Graph Convolutional Autoencoder as an auxiliary model that makes collaborative decisions with the original detector in distinguishing evasive phishing websites from legitimate ones. We evaluate our approach by enhancing a CNN-based detector against adversarial attacks. Our approach achieves high adversarial robustness while maintaining high performance on clean data compared to retraining and fine-tuning benchmarks.
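    The collaborative-decision idea can be sketched compactly. Below, a plain dense autoencoder (standing in for the paper's Graph Convolutional Autoencoder) is trained only on legitimate websites; at inference time its reconstruction error is combined with the original detector's probability, so samples that look unlike any legitimate training data can be flagged even when they evade the detector. The feature dimension, threshold, and training data name are assumptions.

```python
# Sketch of an autoencoder auxiliary model plus a collaborative decision.
# A dense autoencoder stands in for the paper's Graph Convolutional
# Autoencoder; dimensions, threshold, and data names are assumptions.
import numpy as np
from tensorflow.keras import layers, Model

N_FEATURES = 128  # assumed website feature-vector size

inp = layers.Input(shape=(N_FEATURES,))
z = layers.Dense(32, activation="relu")(inp)
out = layers.Dense(N_FEATURES, activation="linear")(z)
autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# Train on legitimate websites only (X_legitimate is hypothetical):
# autoencoder.fit(X_legitimate, X_legitimate, epochs=20, batch_size=64)

def collaborative_decision(x, detector_prob, threshold=0.05):
    """Flag as phishing if the detector says so OR the sample reconstructs
    poorly, i.e., looks unlike legitimate training data (possible evasion)."""
    recon = autoencoder.predict(x[None, :], verbose=0)
    recon_error = float(np.mean((recon - x) ** 2))
    return detector_prob >= 0.5 or recon_error > threshold
```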

    Linking Exploits from the Dark Web to Known Vulnerabilities for Proactive Cyber Threat Intelligence: An Attention-Based Deep Structured Semantic Model

    Black hat hackers use malicious exploits to circumvent security controls and take advantage of system vulnerabilities worldwide, costing the global economy over $450 billion annually. While many organizations are increasingly turning to cyber threat intelligence (CTI) to help prioritize their vulnerabilities, extant CTI processes are often criticized as being reactive to known exploits. One promising data source that can help develop proactive CTI is the vast and ever-evolving Dark Web. In this study, we adopted the computational design science paradigm to design a novel deep learning (DL)-based exploit-vulnerability attention deep structured semantic model (EVA-DSSM) that includes bidirectional processing and attention mechanisms to automatically link exploits from the Dark Web to vulnerabilities. We also devised a novel device vulnerability severity metric (DVSM) that incorporates the exploit post date and vulnerability severity to help cybersecurity professionals with their device prioritization and risk management efforts. We rigorously evaluated the EVA-DSSM against state-of-the-art non-DL and DL-based methods for short text matching on 52,590 exploit-vulnerability linkages across four testbeds: web application, remote, local, and denial of service. Results of these evaluations indicate that the proposed EVA-DSSM achieves precision-at-1 scores 20% to 41% higher than non-DL approaches and 4% to 10% higher than DL-based approaches. We demonstrated the EVA-DSSM's and DVSM's practical utility with two CTI case studies: openly accessible systems in the top eight U.S. hospitals and over 20,000 Supervisory Control and Data Acquisition (SCADA) systems worldwide. A complementary user evaluation of the case study results indicated that 45 cybersecurity professionals found the EVA-DSSM and DVSM results more useful for exploit-vulnerability linking and risk prioritization activities than those produced by prevailing approaches. Given the rising cost of cyberattacks, the EVA-DSSM and DVSM have important implications for analysts in security operations centers, incident response teams, and cybersecurity vendors.
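    While the paper's exact DVSM formula is not reproduced here, the idea of combining vulnerability severity with exploit recency can be sketched as follows; the exponential decay form and half-life parameter are assumptions for illustration only.

```python
# Hedged sketch of a DVSM-style score: severity weighted by exploit recency.
# The decay form and half-life are assumptions, not the published formula.
import math
from datetime import date

def device_vuln_score(cvss_severity: float, exploit_post_date: date,
                      today: date, half_life_days: float = 180.0) -> float:
    """Severity discounted by how long ago a matching exploit was posted;
    a newly posted Dark Web exploit keeps the device near full priority."""
    age_days = (today - exploit_post_date).days
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return cvss_severity * recency

# Example: a 9.8-severity vulnerability with an exploit posted 90 days ago
# scores about 6.9, i.e., one half-life would halve the weight.
print(device_vuln_score(9.8, date(2024, 1, 1), date(2024, 3, 31)))
```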

    Rocket Ship or Blimp? Implications of Malicious Account Removal on Twitter

    In this study we investigate how the removal of malicious accounts that follow legitimate accounts owned by popular people impacts the popularity of tweets posted by celebrities and politicians. Using retweet counts, we analyze to what extent malicious accounts contribute to the amplification of tweets across the network. We organize tweets into three broad categories (Rocket Ship, Jet, or Blimp) and investigate how the distribution of tweets is influenced by a cleanup of malicious accounts. To understand how the suspension of malicious accounts impacts the propagation of messages on Twitter, we conduct a descriptive statistical analysis of retweets across 464 Donald Trump tweets. We find a statistically significant difference in the mean counts of retweets and favorites before and after the malicious account removal. Preliminary results show that the implications of Twitter's cleanup initiatives targeting malicious accounts are visible in the narrowing range of retweet values. However, the distribution of tweet categories based on the number of retweets remains unchanged.
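    The before/after comparison lends itself to a short worked example. The sketch below runs a t-test on retweet counts around the cleanup and buckets tweets into the three categories; the category cut-offs, file name, and column layout are illustrative assumptions, not the study's actual values.

```python
# Sketch of the before/after retweet comparison and tweet categorization.
# File name, column layout, and category cut-offs are assumptions.
import pandas as pd
from scipy import stats

tweets = pd.read_csv("trump_tweets.csv")  # assumed columns: retweets, period

before = tweets.loc[tweets["period"] == "before", "retweets"]
after = tweets.loc[tweets["period"] == "after", "retweets"]
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

def categorize(retweets: int) -> str:
    """Hypothetical cut-offs for the Rocket Ship / Jet / Blimp categories."""
    if retweets >= 100_000:
        return "Rocket Ship"
    if retweets >= 20_000:
        return "Jet"
    return "Blimp"

# Compare the category distribution before and after the cleanup.
tweets["category"] = tweets["retweets"].map(categorize)
print(tweets.groupby("period")["category"].value_counts())
```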