Hardware-based capture-the-flag challenges
In a world where cybersecurity is becoming increasingly important and where the workforce shortage is estimated in the millions, gamification is playing an increasingly significant role, delivering excellent results in both training and recruitment.
Within cybersecurity gamification, Capture-The-Flag (CTF) challenges are the cornerstone, as shown by the large number of events, competitions, and training courses that rely on them. In these events, participants confront games and riddles based on practical problems of hacking, cyber-attack, and cyber-defense.
Although hardware security and hardware-based security already play a key role in the cybersecurity arena, hardware-based challenges unfortunately remain marginal in the worldwide panorama of CTF events.
In the present paper, we focus on hardware-based challenges, first providing a formal definition and then proposing, for the first time, a comprehensive taxonomy. Finally, we share the experience gathered in preparing and delivering several hardware-based challenges at major events and training courses involving hundreds of attendees.
ZenHackAdemy: Ethical Hacking @ DIBRIS
Cybersecurity attacks are on the rise, and the current response is not effective enough. The need for a competent workforce able to face attackers is increasing. At the moment, the gap between academia and real-world skills is huge, and academia cannot provide students with skills that match those of an attacker. To pass on these skills, teachers have to train students in scenarios as close as possible to real-world ones. Capture the Flag (CTF) competitions are a great tool to achieve this goal, since they encourage students to think as an attacker does, thus creating more awareness of the modalities and consequences of an attack. We describe our experience in running an educational activity on ethical hacking, which we proposed to computer science and computer engineering students. We organized seminars outside formal classes and provided online support for the hands-on part of the training. We delivered different types of exercises and held a final CTF competition. These activities resulted in a growing community of students and researchers interested in cybersecurity, and some of them have formed ZenHack, an official CTF team.
Crowdfunding Non-fungible Tokens on the Blockchain
Non-fungible tokens (NFTs) have been used as a way of rewarding content creators. Artists publish their works on the blockchain as NFTs, which they can then sell. The buyer of an NFT then holds ownership of a unique digital asset, which can be resold in much the same way that real-world art collectors might trade paintings. However, while a great deal of effort has been spent on selling works of art on the blockchain, very little attention has been paid to using the blockchain as a means of fundraising to help finance the artist’s work in the first place. Additionally, while blockchains like Ethereum are ideal for smaller works of art, additional support is needed when the artwork is larger than is feasible to store on the blockchain. In this paper, we propose a fundraising mechanism that will help artists to gain financial support for their initiatives, and where the backers can receive a share of the profits in exchange for their support. We discuss our prototype implementation using the SpartanGold framework. We then discuss how this system could be expanded to support large NFTs with the 0Chain blockchain, and describe how we could provide support for ongoing storage of these NFTs.
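The core profit-sharing mechanism can be sketched in a few lines. The following is a toy Python illustration of the idea of proportional payouts to backers, not the paper's SpartanGold-based implementation; the class, the 50% artist share, and all names are hypothetical.

```python
# Toy sketch of the profit-sharing idea: backers fund an artwork and later
# receive sale proceeds in proportion to their contribution. Illustration
# only -- not the paper's SpartanGold implementation; the artist_share split
# is a made-up parameter.

class Campaign:
    def __init__(self, goal):
        self.goal = goal
        self.pledges = {}          # backer -> total amount pledged

    def pledge(self, backer, amount):
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def funded(self):
        return sum(self.pledges.values()) >= self.goal

    def distribute(self, sale_proceeds, artist_share=0.5):
        """Split proceeds: a fixed share to the artist, the rest pro rata."""
        total = sum(self.pledges.values())
        payouts = {"artist": sale_proceeds * artist_share}
        for backer, amount in self.pledges.items():
            payouts[backer] = sale_proceeds * (1 - artist_share) * amount / total
        return payouts

c = Campaign(goal=100)
c.pledge("alice", 60)
c.pledge("bob", 40)
payouts = c.distribute(200)   # artwork later sells for 200
# artist receives 100; alice and bob split the rest 60/40
```

In an on-chain version, the pledge ledger and the distribution rule would live in a smart contract so that payouts are enforced automatically rather than by a trusted party.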
Fake Malware Generation Using HMM and GAN
In the past decade, the number of malware attacks has grown considerably and, more importantly, these attacks have evolved. Many researchers have successfully applied state-of-the-art machine learning techniques to combat this ever-present and rising threat to information security. However, the lack of sufficient data to properly train these machine learning models remains a major challenge. Generative modelling has proven very effective at producing image-like synthesized data that matches the actual data distribution. In this paper, we aim to generate malware samples as opcode sequences and attempt to differentiate them from real ones, with the goal of building fake malware data that can be used to effectively train machine learning models. We use and compare different Generative Adversarial Network (GAN) algorithms and Hidden Markov Models (HMMs) to generate such fake samples, obtaining promising results.
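The opcode-sequence generation idea can be sketched with a minimal transition model. Below, a first-order Markov chain stands in for the HMMs used in the paper (an HMM adds hidden states on top of this), and the tiny opcode traces are made-up examples, not real malware data.

```python
import random

# Simplified sketch of generating synthetic opcode sequences from a learned
# transition model. A first-order Markov chain is used here as a stand-in
# for the HMM described in the abstract; the traces are toy examples.

def fit_transitions(sequences):
    """Count opcode bigrams and normalize them into transition probabilities."""
    counts = {}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts.setdefault(a, {}).setdefault(b, 0)
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sample(transitions, start, length, rng):
    """Sample a synthetic opcode sequence by walking the transition table."""
    seq = [start]
    while len(seq) < length and seq[-1] in transitions:
        nxt = transitions[seq[-1]]
        seq.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return seq

real_traces = [["push", "mov", "call", "mov", "ret"],
               ["push", "mov", "mov", "call", "ret"]]
model = fit_transitions(real_traces)
fake = sample(model, "push", 5, random.Random(0))  # synthetic opcode trace
```

A GAN-based generator replaces the explicit transition table with a learned neural generator/discriminator pair, but the output format, a sequence of opcodes usable as training data, is the same.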
Hacker Combat: A Competitive Sport from Programmatic Dueling & Cyberwarfare
Human history has included competitive activities of many different forms, and sports have offered benefits beyond entertainment. At the time of this article, no competitive ecosystem exists for cybersecurity beyond conventional capture-the-flag competitions and the like. This paper introduces a competitive framework founded on computer science and hacking. The proposed competitive landscape encompasses the ideas underlying information security, software engineering, and cyber warfare. We also demonstrate the opportunity to rank, score, and categorize actionable skill levels into tiers of capability. Physiological metrics are analyzed from participants during gameplay; these analyses provide support regarding the intricacies required for competitive play and for the analysis of play. We use these intricacies to build a case for an organized competitive ecosystem. Using previous player behavior from gameplay, we also demonstrate the generation of an artificial agent purposed with gameplay at a competitive level.
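One way to rank duel outcomes and bucket players into capability tiers is an Elo-style rating update. The sketch below is offered purely as an illustration of the ranking-into-tiers idea; the paper does not specify its scoring method, and the tier names and cutoffs here are invented.

```python
# Illustrative sketch of ranking duel outcomes and bucketing players into
# capability tiers. The Elo-style update and the tier cutoffs are
# assumptions for illustration, not the paper's scoring method.

def elo_update(ra, rb, score_a, k=32):
    """Update two ratings after a duel; score_a is 1 (A wins), 0.5, or 0."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    ra_new = ra + k * (score_a - expected_a)
    rb_new = rb + k * ((1 - score_a) - (1 - expected_a))
    return ra_new, rb_new

def tier(rating):
    # Hypothetical cutoffs for capability tiers.
    for cutoff, name in [(1800, "elite"), (1500, "advanced"), (1200, "intermediate")]:
        if rating >= cutoff:
            return name
    return "novice"

ra, rb = 1500, 1500
ra, rb = elo_update(ra, rb, score_a=1)   # player A wins the duel
# ratings move symmetrically, so the total rating pool is conserved
```

The zero-sum update means points flow between players rather than being created, which keeps tier boundaries meaningful as the player pool grows.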
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create two interactive code environments with Bash and SQL as action spaces, leveraging data from the static Spider and NL2Bash datasets. We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to incorporate new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. Project site with code and data: https://intercode-benchmark.github.io
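The "code as actions, execution feedback as observations" framing can be sketched as a minimal gym-style environment. This is a simplified illustration of the idea, not the actual InterCode API; there is no Docker sandboxing here, and the goal-matching reward is a made-up task definition.

```python
import subprocess

# Minimal sketch of interactive coding as an RL environment: each action is
# a Bash command string, the observation is the execution feedback, and the
# episode ends when the task goal is met. Not the real InterCode API -- no
# Docker isolation, and the exact-output goal is a toy task.

class BashEnv:
    def __init__(self, goal_stdout):
        self.goal = goal_stdout            # task: produce this exact output

    def step(self, action):
        """Execute one command; return (observation, reward, done)."""
        result = subprocess.run(action, shell=True, capture_output=True,
                                text=True, timeout=5)
        obs = result.stdout + result.stderr  # execution feedback
        done = result.stdout.strip() == self.goal
        return obs, (1.0 if done else 0.0), done

env = BashEnv(goal_stdout="hello")
obs, reward, done = env.step("echo hello")
```

An agent (or an LLM with a ReAct-style prompt) would loop over `step`, conditioning each new command on the accumulated observations, which is exactly what a static instruction-to-code benchmark cannot capture.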
The Need for Speed: A Fast Guessing Entropy Calculation for Deep Learning-based SCA
The adoption of deep neural networks for profiling side-channel attacks (SCA) opened new perspectives for leakage detection. Recent publications showed that cryptographic implementations featuring different countermeasures could be broken without feature selection or trace preprocessing. This success comes with a high price: extensive hyperparameter search to find optimal deep learning models.
As deep learning models usually suffer from overfitting due to their high fitting capacity, it is crucial to avoid over-training regimes, which requires choosing the correct number of epochs. For that, early stopping is employed as an efficient regularization method that requires a consistent validation metric. Although guessing entropy is a highly informative metric for profiling SCA, it is time-consuming to compute, especially if computed for all epochs during training and when the number of validation traces is large.
This paper shows that guessing entropy can be efficiently computed during training by reducing the number of validation traces without affecting the efficiency of early stopping decisions. Our solution significantly speeds up the process, benefiting hyperparameter search and overall profiling attack performance. Our fast guessing entropy calculation is up to 16 times faster, enabling more hyperparameter tuning experiments and allowing security evaluators to find more efficient deep learning models.
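The metric being accelerated can be sketched directly: for each simulated attack, per-trace log-likelihoods are accumulated over all key guesses and the rank of the true key is recorded; guessing entropy is the average rank. The toy synthetic "model outputs" below are assumptions for illustration; in a real evaluation they would be the network's per-trace key probabilities.

```python
import random

# Sketch of the guessing entropy computation: accumulate per-trace
# log-likelihoods for every key guess, rank the true key, and average the
# rank over repeated attacks. The synthetic log-probabilities below are
# made up; real attacks use the profiling model's outputs per trace.

def guessing_entropy(log_probs_per_trace, true_key, n_attacks, n_traces, rng):
    ranks = []
    for _ in range(n_attacks):
        chosen = rng.sample(log_probs_per_trace, n_traces)
        n_keys = len(chosen[0])
        scores = [sum(t[k] for t in chosen) for k in range(n_keys)]
        # rank 0 means the true key has the highest accumulated score
        order = sorted(range(n_keys), key=lambda k: -scores[k])
        ranks.append(order.index(true_key))
    return sum(ranks) / n_attacks

# Synthetic data: key 3 gets a slight bias in every trace's log-probabilities.
rng = random.Random(0)
traces = [[(0.3 if k == 3 else 0.0) + rng.gauss(0, 1) for k in range(16)]
          for _ in range(200)]
ge_full = guessing_entropy(traces, true_key=3, n_attacks=50, n_traces=100, rng=rng)
ge_fast = guessing_entropy(traces, true_key=3, n_attacks=50, n_traces=25, rng=rng)
```

The cost of the inner loop scales with the number of validation traces, which is why shrinking that number (the paper's contribution) directly speeds up every per-epoch evaluation during training.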
Swarming the SC’17 Student Cluster Competition
The Student Cluster Competition is a suite of challenges in which teams of undergraduates design a computer cluster and then compete against each other through various benchmark applications. The present study provides a select summary of the experiences of Team Swarm, which represented the Georgia Institute of Technology at the SC’17 Student Cluster Competition. This report first describes the competition and the members of Team Swarm. After this introduction, it focuses on three major aspects of the experience: the hardware and software architecture of the team’s computer cluster, the team’s system administration workflow, and the team’s usage of cloud resources. Additionally, the appendix provides a brief description of the team members and their method of preparation.