Towards Identifying Human Actions, Intent, and Severity of APT Attacks Applying Deception Techniques -- An Experiment
Attacks by Advanced Persistent Threats (APTs) have been shown to be difficult
to detect using traditional signature- and anomaly-based intrusion detection
approaches. Deception techniques such as decoy objects, often called honey
items, may be deployed for intrusion detection and attack analysis, providing
an alternative means of detecting APT behaviours. This work explores the use of
honey items to classify intrusion interactions, differentiating automated
attacks from those that require human reasoning and interaction, towards APT
detection. Multiple decoy items are deployed on honeypots in a virtual honey
network, some as breadcrumbs to detect indications of a structured manual
attack. Monitoring functionality was built around the Elastic Stack, with a
Kibana dashboard displaying interactions with the various honey items.
APT-style manual intrusions are simulated by an experienced penetration-testing
practitioner. Interactions with honey items are evaluated in
order to determine their suitability for discriminating between automated tools
and direct human intervention. The results show that it is possible to
differentiate automated attacks from manual, structured attacks based on the
nature of the interactions with the honey items. Honey items discovered within
the honeypot, such as those reached in later stages of a structured attack,
have proven effective both for classifying manual attacks and for providing an
indication of the severity of the attack.
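The abstract's central heuristic (automated tools interact in rapid bursts, while a structured manual attack follows breadcrumbs at human pace) can be sketched as a toy classifier. All names here (the `Interaction` record, the stage numbering, the timing thresholds) are illustrative assumptions, not the authors' actual monitoring pipeline:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Interaction:
    timestamp: float   # seconds since start of capture (assumed unit)
    honey_item: str    # e.g. a decoy file name
    stage: int         # 1 = breadcrumb, 2 = item reachable only via a breadcrumb

def classify(events: List[Interaction]) -> str:
    """Classify honey-item interactions as 'automated' or 'manual'.

    Illustrative heuristics only:
      - automated scanners touch items in rapid bursts (sub-second gaps);
      - a manual, structured attack follows breadcrumbs, so stage-2 items
        appear only after stage-1 items, with human-scale pauses between.
    """
    if not events:
        return "no-interaction"
    events = sorted(events, key=lambda e: e.timestamp)
    gaps = [b.timestamp - a.timestamp for a, b in zip(events, events[1:])]
    rapid = bool(gaps) and sum(g < 1.0 for g in gaps) / len(gaps) > 0.8
    first_stage1 = min((e.timestamp for e in events if e.stage == 1), default=None)
    followed_breadcrumb = any(
        e.stage == 2 and first_stage1 is not None and e.timestamp > first_stage1
        for e in events
    )
    if followed_breadcrumb and not rapid:
        return "manual"
    return "automated"
```

A human who reads a planted credential file and, forty seconds later, uses it to reach a second-stage decoy would be flagged as manual; a scanner sweeping every item in under a second would not.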
Authentication and Data Protection under Strong Adversarial Model
We are interested in addressing a series of existing and plausible threats to cybersecurity where the adversary possesses unconventional attack capabilities. Such unconventional capabilities include, in our exploration but not limited to, crowd-sourcing, physical/juridical coercion, substantial (but bounded) computational resources, and malicious insiders. Our studies show that unconventional adversaries can be counteracted with a special anchor of trust and/or a paradigm shift on a case-specific basis.
Complementing cryptography, hardware security primitives are the last defense in the face of co-located (physical) and privileged (software) adversaries, hence serving as the special trust anchor. Examples of hardware primitives are architecture-shipped features (e.g., with CPU or chipsets), security chips or tokens, and certain features on peripheral/storage devices. We also propose paradigm shifts in conjunction with hardware primitives, such as containing attacks instead of counteracting them, pretending compliance, and immunization instead of detection/prevention.
In this thesis, we demonstrate how our philosophy is applied to cope with several exemplary scenarios of unconventional threats, and elaborate on the prototype systems we have implemented. Specifically, Gracewipe is designed for stealthy and verifiable secure deletion of on-disk user secrets under coercion; Hypnoguard protects in-RAM data when a computer is in sleep (ACPI S3) in case of various memory/guessing attacks; Uvauth mitigates large-scale human-assisted guessing attacks by receiving all login attempts in an indistinguishable manner, i.e., correct credentials in a legitimate session and incorrect ones in a plausible fake session; Inuksuk is proposed to protect user files against ransomware or other authorized tampering. It augments the hardware access control on self-encrypting drives with trusted execution to achieve data immunization. We have also extended the Gracewipe scenario to a network-based enterprise environment, aiming to address slightly different threats, e.g., malicious insiders.
We believe the high-level methodology of these research topics can contribute to advancing security research under strong adversarial assumptions, and to promoting software-hardware orchestration in protecting execution integrity.
Explicit Authentication Response Considered Harmful
Automated online password guessing attacks are facilitated by the fact that most user authentication techniques provide a yes/no answer as the result of an authentication attempt. These attacks are somewhat restricted by Automated Turing Tests (ATTs, e.g., captcha challenges) that attempt to mandate human assistance. ATTs are not very difficult for legitimate users, but always pose an inconvenience. Several current ATT implementations have also been found to be vulnerable to improved image processing algorithms. ATTs can be made more complex for automated software, but that is limited by the trade-off between user-friendliness and effectiveness. As attackers gain control of large-scale botnets, relay challenges to legitimate users at compromised websites, or even have ready access to cheap, sweatshop human solvers for defeating ATTs, online guessing attacks are becoming a greater security risk. Using deception techniques (as in honeypots), we propose the user-verifiable authentication scheme (Uvauth), which tolerates, instead of detecting or counteracting, guessing attacks. Uvauth provides access to all authentication attempts; the correct password enables access to a legitimate session with valid user data, and all incorrect passwords lead to fake sessions. Legitimate users are expected to learn the authentication outcome implicitly from the presented user data, and are relieved from answering ATTs; the authentication result never leaves the server and thus remains (directly) inaccessible to attackers. In addition, we suggest using adapted distorted images and pre-registered images/text as a complement to convey an authentication response, especially for accounts that do not host much personal data.
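The core Uvauth idea (every login attempt yields a structurally identical session, with the real/decoy decision kept server-side) can be sketched as follows. This is a minimal illustration under assumed names (`register`, `login`, the in-memory `_USERS` store), not the paper's implementation:

```python
import hashlib
import hmac
import os

# Hypothetical in-memory user store: username -> (salt, password hash).
_USERS = {}

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    _USERS[username] = (
        salt,
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
    )

def login(username: str, password: str) -> dict:
    """Return a session for EVERY attempt; never emit an explicit pass/fail.

    The correct password yields a session bound to real user data; any other
    password yields a plausible fake session. The response has the same shape
    in both cases, so an automated guesser cannot read the outcome from it.
    """
    salt, stored = _USERS[username]
    guess = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    genuine = hmac.compare_digest(stored, guess)  # constant-time comparison
    return {
        "session_id": os.urandom(16).hex(),       # identical shape either way
        # Server-internal routing decision; in a real deployment this would
        # never be serialized to the client:
        "_backend": "real-data" if genuine else "decoy-data",
    }
```

The legitimate user recognizes the real session by seeing their own data; an attacker sees only an indistinguishable session object and must invest effort to evaluate each fake session, which is precisely the cost Uvauth shifts onto the guesser.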