CYGENT: A cybersecurity conversational agent with log summarization powered by GPT-3
In response to the escalating cyber-attacks in the modern IT and IoT
landscape, we developed CYGENT, a conversational agent framework powered by
the GPT-3.5 turbo model, designed to aid system administrators in ensuring optimal
performance and uninterrupted resource availability. This study focuses on
fine-tuning GPT-3 models for cybersecurity tasks, including conversational AI
and generative AI tailored specifically for cybersecurity operations. CYGENT
assists users by providing cybersecurity information, analyzing and summarizing
uploaded log files, detecting specific events, and delivering essential
instructions. The conversational agent was developed based on the GPT-3.5 turbo
model. We fine-tuned and validated summarizer models (GPT-3) using manually
generated data points. Using this approach, we achieved a BERTScore of over
97%, indicating GPT-3's enhanced capability in summarizing log files into
human-readable formats and providing necessary information to users.
Furthermore, we conducted a comparative analysis of GPT-3 models with other
Large Language Models (LLMs), including CodeT5-small, CodeT5-base, and
CodeT5-base-multi-sum, with the objective of comparing log analysis techniques.
Our analysis consistently demonstrated that the Davinci (GPT-3) model
outperformed all other LLMs. These findings are crucial for
improving human comprehension of logs, particularly in light of the increasing
numbers of IoT devices. Additionally, our research suggests that the
CodeT5-base-multi-sum model performs comparably to Davinci, to some extent, in
summarizing logs, indicating its potential as an offline model for this task.
Comment: 7 pages, 9 figures
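
The abstract does not include implementation details, so the following is only a
minimal sketch of the workflow it describes: a GPT-3.5-class chat model produces
a human-readable summary of a raw log line, and that summary is scored against a
manually written reference with BERTScore. The prompt, model name, log line, and
reference summary are illustrative assumptions, not the authors' actual setup.

    # Hypothetical sketch: summarize a log entry with a GPT-3.5-class model and
    # evaluate the summary against a human reference using BERTScore.
    # Prompt, model name, and data below are assumptions for illustration only.
    from openai import OpenAI
    from bert_score import score

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    raw_log = ("Jan 12 03:14:07 host sshd[412]: Failed password for root "
               "from 10.0.0.5 port 51522 ssh2")
    reference = "A failed SSH login attempt for root was made from 10.0.0.5."

    # Step 1: ask the model for a concise, human-readable summary of the log.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the log entry for a system administrator."},
            {"role": "user", "content": raw_log},
        ],
    )
    candidate = response.choices[0].message.content

    # Step 2: compare the generated summary to the reference with BERTScore (F1).
    _, _, f1 = score([candidate], [reference], lang="en")
    print(f"BERTScore F1: {f1.item():.3f}")

The same candidate/reference pairs could likewise be fed to the CodeT5 variants
mentioned above to reproduce the kind of comparison the authors report.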
TSTEM: A Cognitive Platform for Collecting Cyber Threat Intelligence in the Wild
The extraction of cyber threat intelligence (CTI) from open sources is a
rapidly expanding defensive strategy that enhances the resilience of both
Information Technology (IT) and Operational Technology (OT) environments
against large-scale cyber-attacks. While previous research has focused on
improving individual components of the extraction process, the community lacks
open-source platforms for deploying streaming CTI data pipelines in the wild.
To address this gap, this study describes the implementation of an efficient,
well-performing platform capable of processing compute-intensive data pipelines
based on the cloud computing paradigm for the real-time detection, collection,
and sharing of CTI from different online sources. We developed a prototype
platform
(TSTEM), a containerized microservice architecture that uses Tweepy, Scrapy,
Terraform, ELK, Kafka, and MLOps to autonomously search, extract, and index
indicators of compromise (IOCs) in the wild. Moreover, the provisioning,
monitoring, and management of the TSTEM platform are achieved through
infrastructure as code (IaC). Custom focused crawlers collect web content,
which is then processed by a first-level classifier to identify potential
IOCs. If deemed
relevant, the content advances to a second level of extraction for further
examination. Throughout this process, state-of-the-art NLP models are utilized
for classification and entity extraction, enhancing the overall IOC extraction
methodology. Our experimental results indicate that these models exhibit high
accuracy (exceeding 98%) in the classification and extraction tasks, achieving
this performance within a time frame of less than a minute. The effectiveness
of our system can be attributed to a finely-tuned IOC extraction method that
operates at multiple stages, ensuring precise identification of relevant
information with a low false-positive rate.
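
The abstract outlines a two-level design: a first-level classifier decides
whether crawled content is CTI-relevant, and a second level extracts the actual
indicators. The sketch below illustrates that flow under stated assumptions: a
simple keyword heuristic stands in for the trained first-level classifier, and
regular expressions stand in for the entity-extraction models; none of it is the
TSTEM implementation.

    # Hypothetical two-stage IOC pipeline in the spirit of TSTEM: a first-level
    # relevance check gates crawled documents, and a second level extracts
    # candidate IOCs. Keywords and regexes below are placeholder assumptions;
    # the actual platform uses trained NLP models for both stages.
    import re

    IOC_PATTERNS = {
        "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
        "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org|io)\b"),
    }

    def is_cti_relevant(text: str) -> bool:
        """Level 1: placeholder relevance classifier (keyword heuristic standing
        in for the trained first-level NLP classifier)."""
        keywords = ("malware", "phishing", "ransomware", "exploit", "botnet")
        return any(keyword in text.lower() for keyword in keywords)

    def extract_iocs(document: str) -> dict:
        """Level 2: extract candidate indicators from content deemed relevant."""
        if not is_cti_relevant(document):
            return {}
        return {name: pattern.findall(document)
                for name, pattern in IOC_PATTERNS.items()}

    print(extract_iocs("Malware beacons to 203.0.113.7 and evil-domain.com daily."))

In the platform itself these stages sit behind the crawling and streaming
components (Scrapy, Kafka, ELK), so each stage can be scaled and monitored
independently; the snippet only shows the classification-then-extraction logic.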