A Privacy Evaluation of Nyx
For this project, I will be analyzing the privacy leakage in a DDoS mitigation system. Nyx has been shown, both in simulation and over live internet traffic, to mitigate the effects of DDoS attacks without any cooperation from downstream ASes and without any modifications to current routing protocols. However, it does this through BGP poisoning, which can unintentionally advertise information. This project explores what the traffic from Nyx looks like and what information can be gathered from it. Specifically, Nyx works by defining a deployer/critical relationship whose traffic it reroutes to maintain connectivity even under DDoS conditions, and I will be evaluating how often that relationship can be discovered.
This project analyzes the privacy leakage in the Nyx DDoS mitigation system. Nyx's effectiveness in rerouting critical traffic around congestion has been demonstrated both in simulation and in practice. Importantly, Nyx functions without cooperation from downstream ASes or modifications to current routing protocols. However, Nyx achieves routing-based DDoS mitigation through BGP poisoning, which can unintentionally advertise information. This project analyzes Nyx's BGP advertisements to evaluate its privacy implications. Specifically, this work studies whether an adversary can determine the critical relationship that the AS deploying Nyx has defined. We find that under the authors' initial naive approach, finding this relationship is essentially trivial: an adversary can narrow the critical relationship down to at most 4 out of 9,767 autonomous systems in the active internet topology. Under their more complex approach, we find that the critical relationship is more difficult to determine with significant accuracy, with our anonymity sets ranging from 3 to 7,788. This project then explores why that range is so large in an attempt to highlight how Nyx could become more privacy-focused.
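The core measurement here is the size of an anonymity set: how many ASes remain plausible candidates for the deployer's critical relationship after observing Nyx's advertisements. The sketch below is a minimal illustration of that idea, not the project's actual analysis; the assumption that the critical AS is never among the poisoned ASes, and names such as anonymity_set and observed_advertisements, are hypothetical.

# Minimal sketch (not the paper's implementation): estimating the anonymity
# set of a Nyx deployer's critical AS from observed BGP poison advertisements.
# Assumption: the critical AS is presumed never to appear among the poisoned
# ASes, so each observed advertisement shrinks the candidate set.

def anonymity_set(all_ases, observed_advertisements):
    """Intersect candidate critical ASes across observed advertisements.

    all_ases: set of AS numbers in the active topology (e.g., 9,767 ASes).
    observed_advertisements: iterable of sets, each holding the ASes
        poisoned (excluded) in one Nyx advertisement.
    Returns the remaining candidates; their count is the anonymity set size.
    """
    candidates = set(all_ases)
    for poisoned in observed_advertisements:
        candidates -= set(poisoned)  # drop ASes the deployer poisoned
    return candidates


if __name__ == "__main__":
    topology = {1, 2, 3, 4, 5, 6, 7, 8}        # toy topology
    adverts = [{2, 3}, {3, 5, 6}, {2, 7}]      # toy poison observations
    remaining = anonymity_set(topology, adverts)
    print(f"anonymity set size: {len(remaining)}")  # candidates: {1, 4, 8}

A small anonymity set (such as the 3-to-4 cases above) means the relationship is effectively exposed, while a set in the thousands leaves the adversary with little usable information.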
Testing SOAR Tools in Use
Modern security operation centers (SOCs) rely on operators and a tapestry of
logging and alerting tools with large scale collection and query abilities. SOC
investigations are tedious as they rely on manual efforts to query diverse data
sources, overlay related logs, and correlate the data into information and then
document results in a ticketing system. Security orchestration, automation, and
response (SOAR) tools are a new technology that promise to collect, filter, and
display needed data; automate common tasks that require SOC analysts' time;
facilitate SOC collaboration; and improve both the efficiency and consistency of SOCs. SOAR tools have never been tested in practice to evaluate their effect or to understand how they are used. In this paper, we design and administer the first
hands-on user study of SOAR tools, involving 24 participants and 6 commercial
SOAR tools. Our contributions include the experimental design, an itemization of six characteristics of SOAR tools, and a methodology for testing them. We describe
configuration of the test environment in a cyber range, including network,
user, and threat emulation; a full SOC tool suite; and creation of artifacts
allowing multiple representative investigation scenarios to permit testing. We
present the first research results on SOAR tools. We found that SOAR
configuration is critical, as it involves creative design for data display and
automation. We found that SOAR tools increased efficiency and reduced context
switching during investigations, although ticket accuracy and completeness
(indicating investigation quality) decreased with SOAR use. Our findings
indicated that user preferences are slightly negatively correlated with their
performance with the tool; overautomation was a concern of senior analysts, and
SOAR tools that balanced automation with assisting a user in making decisions were preferred.
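The preference-versus-performance finding is, at its simplest, a rank-correlation computation between per-analyst preference ratings and investigation quality scores. The sketch below is an illustrative toy example only, not the study's analysis code; the variable names and data are made up.

# Illustrative sketch only (toy data, not the study's analysis): rank
# correlation between analysts' tool-preference ratings and a performance
# measure such as ticket accuracy.
from scipy.stats import spearmanr

preference_ratings = [5, 4, 4, 3, 5, 2, 3, 4]                 # 1-5 Likert per analyst
ticket_accuracy    = [0.62, 0.70, 0.65, 0.81, 0.58, 0.84, 0.77, 0.69]

rho, p_value = spearmanr(preference_ratings, ticket_accuracy)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A slightly negative rho would mirror the reported finding that preference
# and performance are weakly negatively correlated.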
AI ATAC 1: An Evaluation of Prominent Commercial Malware Detectors
This work presents an evaluation of six prominent commercial endpoint malware
detectors, a network malware detector, and a file-conviction algorithm from a
cyber technology vendor. The evaluation was administered as the first of the
Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC)
prize challenges, funded by / completed in service of the US Navy. The
experiment employed 100K files (50/50% benign/malicious) with a stratified
distribution of file types, including ~1K zero-day program executables
(increasing experiment size two orders of magnitude over previous work). We
present an evaluation process of delivering a file to a fresh virtual machine equipped with the detection technology, waiting 90 s to allow static detection, then executing the file and waiting another period for dynamic detection; this allows greater fidelity in the observational data than previous experiments, in particular for resource and time-to-detection statistics. To execute all 800K trials (100K files × 8 tools), a software framework was designed to choreograph the experiment into a completely automated, time-synced, and
reproducible workflow with substantial parallelization. A cost-benefit model
was configured to integrate the tools' recall, precision, time to detection,
and resource requirements into a single comparable quantity by simulating costs
of use. This provides a ranking methodology for cyber competitions and a lens
through which to reason about the varied statistical viewpoints of the results.
These statistical and cost-model results provide insight into the state of commercial malware detection.
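The cost-benefit model can be pictured as a simulated operating cost that weights each tool's misses, false alerts, detection latency, and resource consumption, so that tools are ranked on a single number. The sketch below is a minimal illustration under assumed cost weights; the function name, weights, and fields are hypothetical and not the challenge's actual model.

# Minimal sketch (assumed weights; not the AI ATAC cost model itself):
# collapse recall, precision, time to detection, and resource usage into a
# single simulated cost so that detectors can be ranked on one number.
from dataclasses import dataclass

@dataclass
class ToolResult:
    recall: float          # fraction of malicious files detected
    precision: float       # fraction of alerts that are truly malicious
    mean_ttd_s: float      # mean time to detection, in seconds
    cpu_hours: float       # total compute consumed over all trials

def simulated_cost(r: ToolResult,
                   missed_malware_cost=500.0,   # assumed cost per missed sample
                   false_alert_cost=25.0,       # assumed analyst triage cost
                   delay_cost_per_s=0.05,       # assumed cost of slow detection
                   cpu_hour_cost=0.10,          # assumed infrastructure cost
                   n_malicious=50_000, n_alerts=60_000):
    misses = (1.0 - r.recall) * n_malicious
    false_alerts = (1.0 - r.precision) * n_alerts
    return (misses * missed_malware_cost
            + false_alerts * false_alert_cost
            + r.mean_ttd_s * delay_cost_per_s * n_malicious
            + r.cpu_hours * cpu_hour_cost)

tools = {
    "tool_A": ToolResult(recall=0.97, precision=0.92, mean_ttd_s=40, cpu_hours=800),
    "tool_B": ToolResult(recall=0.94, precision=0.98, mean_ttd_s=15, cpu_hours=300),
}
for name, result in sorted(tools.items(), key=lambda kv: simulated_cost(kv[1])):
    print(f"{name}: simulated cost = {simulated_cost(result):,.0f}")

The key design choice such a model captures is that raw detection rates alone do not rank tools: a detector with slightly lower recall but far lower triage and resource costs can come out ahead once costs of use are simulated.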