
    Indicators of Malicious SSL Connections

    Internet applications use SSL to provide data confidentiality to communicating entities. The use of encryption in SSL makes it impossible to distinguish benign from malicious connections by content inspection. We therefore propose and evaluate a set of indicators for malicious SSL connections based on the unencrypted part of SSL (i.e., the SSL handshake protocol). We provide strong evidence for the strength of our indicators in identifying malicious connections by cross-checking against blacklists from professional services. Besides confirming prior research results through our indicators, we also found indications of a potential (not yet blacklisted) botnet communicating over SSL. We consider the analysis of such SSL threats highly relevant and hope that our findings stimulate the research community to study this direction further.
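    As a rough illustration of the idea, the sketch below scores a connection from unencrypted handshake metadata. The features, weights, and thresholds are hypothetical assumptions for illustration; they are not the paper's actual indicators.

```python
# Illustrative sketch: scoring an SSL/TLS connection from handshake metadata.
# The features and weights are hypothetical assumptions, not the indicators
# proposed in the paper.

from dataclasses import dataclass

@dataclass
class HandshakeInfo:
    offered_ciphers: list      # cipher suites in the ClientHello, in order
    cert_validity_days: int    # validity period of the server certificate
    cert_self_signed: bool     # whether the certificate is self-signed
    sni_present: bool          # whether the client sent a Server Name Indication

def suspicion_score(hs: HandshakeInfo) -> float:
    """Combine unencrypted handshake features into a simple score in [0, 1]."""
    score = 0.0
    if hs.cert_self_signed:
        score += 0.4                   # self-signed certs are common in C2 traffic
    if hs.cert_validity_days < 30:
        score += 0.2                   # very short-lived certificates
    if not hs.sni_present:
        score += 0.2                   # benign browsers almost always send SNI
    if len(hs.offered_ciphers) < 5:
        score += 0.2                   # unusually small, fixed cipher list
    return min(score, 1.0)

# Example: a connection with a fresh self-signed cert and no SNI
hs = HandshakeInfo(offered_ciphers=["TLS_RSA_WITH_AES_128_CBC_SHA"],
                   cert_validity_days=7, cert_self_signed=True, sni_present=False)
print(suspicion_score(hs))  # 1.0 -> candidate for blacklist cross-checking
```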

    Privacy-Preserving Verification of Clinical Research

    We treat the problem of privacy-preserving statistics verification in clinical research. We show that, given aggregated results from statistical calculations, we can verify their correctness efficiently without revealing any of the private inputs used for the calculation. Our construction is based on the primitive of Secure Multi-Party Computation from Shamir's Secret Sharing. Our setting involves three parties: a hospital, which owns the private inputs; a clinical researcher, who lawfully processes the sensitive data to produce an aggregated statistical result; and a third party (usually several verifiers) assigned to verify this result for reliability and transparency reasons. Our solution guarantees that these verifiers learn only the aggregated results (and what can be inferred from those about the underlying private data) and nothing more. By taking advantage of the particular scenario at hand (where certain intermediate results, e.g., the mean over the dataset, are available in the clear) and utilizing secret sharing primitives, our approach turns out to be practically efficient, which we underpin by performing several experiments on real patient data. Our results show that the privacy-preserving verification of the most commonly used statistical operations in clinical research presents an important use case in which secure multi-party computation becomes employable in practice.
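    The core primitive can be illustrated with a minimal Shamir secret sharing sketch: the hospital shares its private inputs among the verifiers, who can jointly reconstruct only an aggregate (here, a sum) without seeing the individual inputs. This is a simplified stand-in for the paper's actual verification protocol.

```python
# Minimal sketch of Shamir secret sharing used for verifiable aggregation.
# This illustrates the primitive only; the paper's verification protocol
# is more involved.

import random

P = 2**61 - 1  # a large prime defining the field

def share(secret: int, n: int, t: int):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# The hospital shares two private inputs among three verifiers (threshold 3).
a_shares = share(120, n=3, t=3)
b_shares = share(80, n=3, t=3)

# Each verifier adds its shares locally; reconstruction yields only the sum.
sum_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 200 -- the aggregate, not the inputs
```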

    HeadPrint: Detecting Anomalous Communications through Header-based Application Fingerprinting

    Passive application fingerprinting is a technique to detect anomalous outgoing connections. By monitoring the network traffic, a security monitor passively learns the network characteristics of the applications installed on each machine and uses them to detect the presence of new applications (e.g., a malware infection). In this work, we propose HeadPrint, a novel passive fingerprinting approach that relies on only two orthogonal network header characteristics to distinguish applications, namely the order of the headers and their associated values. Our approach automatically identifies the set of characterizing headers, without relying on a predetermined set of header features. We implement HeadPrint, evaluate it in a real-world environment, and compare it with the state-of-the-art solution for passive application fingerprinting. We demonstrate our approach to be, on average, 20% more accurate and 30% more resilient to application updates than the state of the art. Finally, we evaluate our approach in the setting of anomaly detection and show that HeadPrint is capable of detecting the presence of malicious communication while generating significantly fewer false alarms than existing solutions.
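    The following sketch captures the two header characteristics in a much-simplified form: one fingerprint component records header orderings, the other header/value pairs, and a new request is scored against both. The equal weighting and matching rules here are illustrative assumptions, not HeadPrint's actual similarity functions.

```python
# Illustrative sketch of a header-based fingerprint in the spirit of HeadPrint:
# one component captures header order, the other header values. The similarity
# measure below is a simplification of the approach described in the paper.

def fingerprint(requests):
    """Learn an application fingerprint from a set of HTTP requests,
    each given as an ordered list of (header, value) pairs."""
    orders = {tuple(h for h, _ in req) for req in requests}   # header sequences
    values = {(h, v) for req in requests for h, v in req}     # header/value pairs
    return orders, values

def similarity(fp, req):
    """Score how well a new request matches a learned fingerprint."""
    orders, values = fp
    order_match = 1.0 if tuple(h for h, _ in req) in orders else 0.0
    pairs = set(req)
    value_match = len(pairs & values) / len(pairs)
    return 0.5 * order_match + 0.5 * value_match

training = [[("Host", "example.com"), ("User-Agent", "AppX/1.0"), ("Accept", "*/*")]]
fp = fingerprint(training)

# A request from an unknown application yields a low score -> anomaly.
unknown = [("User-Agent", "Evil/0.1"), ("Host", "c2.example.net")]
print(similarity(fp, unknown))  # 0.0
```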

    Private Sharing of IOCs and Sightings

    Information sharing helps to better protect computer systems against digital threats and known attacks. However, since security information is usually considered sensitive, parties are hesitant to share all their information through public channels. Instead, they exchange this information only with parties with whom they have already established trust relationships. We propose the use of two complementary techniques to allow parties to share information without the need to immediately reveal private information. We consider a cryptographic approach to hide the details of an indicator of compromise so that it can be shared with other parties. These other parties are still able to detect intrusions with these cryptographic indicators. Additionally, we apply another cryptographic construction to let parties report back their number of sightings to a central party. This central party can aggregate the messages from the various parties to learn the total number of sightings for each indicator, without learning the number of sightings from each individual party. An evaluation of our open-source proof-of-concept implementations shows that both techniques incur only a small overhead, making them prime candidates for practical use.
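    A simplified sketch of both ideas follows: an indicator hidden under a keyed hash (so group members can match observations without exchanging plaintext indicators up front), and sighting counts blinded with pairwise masks that cancel in the aggregate. Both are illustrative stand-ins for the paper's actual cryptographic constructions, and the pre-shared group key is an assumed setup.

```python
# Simplified sketch of the two ideas: (1) share an indicator in hashed form so
# others can match it without learning the plaintext up front, and (2) let
# parties report sighting counts that reveal only the total.

import hmac, hashlib, random

# (1) Keyed-hash indicator: parties that observe the same value can match it.
def hide_ioc(indicator: str, key: bytes) -> str:
    return hmac.new(key, indicator.encode(), hashlib.sha256).hexdigest()

key = b"group-shared-key"          # assumed pre-shared within the sharing group
published = hide_ioc("198.51.100.7", key)
observed = hide_ioc("198.51.100.7", key)
print(published == observed)       # True -> detection without plaintext exchange

# (2) Pairwise blinding of sightings: each pair of parties shares a random
# mask; one adds it, the other subtracts it, so masks cancel in the sum.
M = 2**32
n = 3
counts = [3, 0, 5]                 # each party's private sighting count
r = [[random.randrange(M) for _ in range(n)] for _ in range(n)]  # r[i][j], i < j
reports = []
for i in range(n):
    masked = counts[i]
    for j in range(n):
        if j > i:
            masked += r[i][j]      # party i adds the mask shared with party j
        if j < i:
            masked -= r[j][i]      # party j's counterpart subtracts it
    reports.append(masked % M)

# The central party learns only the total; individual counts stay hidden.
print(sum(reports) % M)            # 8
```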

    Repeated Knowledge Distillation with Confidence Masking to Mitigate Membership Inference Attacks

    Machine learning models are often trained on sensitive data, such as medical records or bank transactions, posing high privacy risks. In fact, membership inference attacks can use the model parameters or predictions to determine whether a given data point was part of the training set. One of the most promising mitigations in the literature is Knowledge Distillation (KD). This mitigation consists of first training a teacher model on the sensitive private dataset, and then transferring the teacher's knowledge to a student model by means of a surrogate dataset. The student model is then deployed in place of the teacher model. Unfortunately, KD on its own does not give users much flexibility, understood as the ability to decide how much utility to sacrifice in exchange for membership privacy. To address this problem, we propose a novel approach that combines KD with confidence score masking. Concretely, we repeat the distillation procedure multiple times in series and, during each distillation, perturb the teacher predictions using confidence masking techniques. We show that our solution provides more flexibility than standard KD, as it allows users to tune the number of distillation rounds and the strength of the masking function. We implement our approach in a tool, RepKD, and assess our mitigation against white- and black-box attacks on multiple models and datasets. Even when the surrogate dataset is different from the private one (which we believe to be a more realistic setting than is commonly found in the literature), our mitigation makes the black-box attack completely ineffective and significantly reduces the accuracy of the white-box attack at the cost of only 0.6% test accuracy loss.
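    One plausible instantiation of confidence masking is top-k masking of the teacher's predicted probabilities, sketched below. The value of k, the number of rounds, and the outlined training loop are illustrative assumptions rather than RepKD's exact design.

```python
# Sketch of confidence masking as it might be applied between distillation
# rounds: keep only the top-k probabilities of the teacher's prediction and
# renormalize, so the soft labels leak less membership signal.

import numpy as np

def mask_confidences(probs: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest class probabilities, then renormalize."""
    masked = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]       # indices of the k most confident classes
    masked[top] = probs[top]
    return masked / masked.sum()

teacher_pred = np.array([0.70, 0.15, 0.10, 0.04, 0.01])
print(mask_confidences(teacher_pred, k=2))   # [~0.824, ~0.176, 0, 0, 0]

# Repeated distillation (outline; train_student is a hypothetical helper):
# student = teacher
# for round in range(num_rounds):
#     soft_labels = [mask_confidences(student.predict(x), k) for x in surrogate]
#     student = train_student(surrogate, soft_labels)
```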

    DECANTeR: DEteCtion of Anomalous outbouNd HTTP TRaffic by Passive Application Fingerprinting

    We present DECANTeR, a system to detect anomalous outbound HTTP communication, which passively extracts fingerprints for each application running on a monitored host. The goal of our system is to detect unknown malware and backdoor communication indicated by unknown fingerprints extracted from a host's network traffic. We evaluate a prototype with realistic data from an international organization and with datasets composed of malicious traffic. We show that our system achieves a false positive rate of 0.9% for 441 monitored host machines and an average detection rate of 97.7%, and that it cannot be evaded by malware using simple evasion techniques such as spoofing known browser user-agent values. We compare our solution with DUMONT [24], the current state-of-the-art IDS that detects HTTP covert communication channels by focusing on benign HTTP traffic. The results show that DECANTeR outperforms DUMONT in terms of detection rate, false positive rate, and even evasion resistance. Finally, DECANTeR detects 96.8% of the information stealers in our dataset, which shows its potential to detect data exfiltration.
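    In the spirit of the approach, the sketch below learns one fingerprint per application observed during training and flags outbound requests that match none of them. The feature choice (user agent plus the set of header names) is a deliberate simplification; DECANTeR's real fingerprints use richer features precisely so that spoofing a user agent alone is not enough to evade detection.

```python
# Illustrative sketch of the detection idea: learn the set of application
# fingerprints seen on a host during training, then alert on outbound requests
# whose fingerprint matches none of them.

def http_fingerprint(request: dict) -> tuple:
    headers = request["headers"]
    return (headers.get("User-Agent", ""), tuple(sorted(headers)))

def train(requests):
    """Collect the fingerprints of all benign applications on the host."""
    return {http_fingerprint(r) for r in requests}

def is_anomalous(known, request) -> bool:
    return http_fingerprint(request) not in known

benign = [{"headers": {"Host": "example.com", "User-Agent": "Firefox/115.0",
                       "Accept": "text/html"}}]
known = train(benign)

# An outbound request from an unseen application triggers an alert.
exfil = {"headers": {"Host": "drop.example.net", "User-Agent": "curl/8.0",
                     "X-Data": "...."}}
print(is_anomalous(known, exfil))  # True -> unknown application fingerprint
```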

    Victim-Aware Adaptive Covert Channels

    We investigate the problem of detecting advanced covert channel techniques, namely victim-aware adaptive covert channels. An adaptive covert channel is considered victim-aware when the attacker mimics the content of its victim's legitimate communication, such as application-layer metadata, in order to evade detection by a security monitor. In this paper, we show that victim-aware adaptive covert channels break the underlying assumptions of existing covert channel detection solutions, thereby exposing a lack of detection mechanisms against this threat. We first propose a toolchain, Chameleon, to create synthetic datasets containing victim-aware adaptive covert channel traffic. Armed with Chameleon, we evaluate state-of-the-art detection solutions and show that they fail to effectively detect these stealthy attacks. Designing detection techniques against such attacks is challenging because their network characteristics are similar to those of benign traffic. We explore a deception-based detection technique that we call HoneyTraffic, which generates network messages containing honey tokens while mimicking the victim's communication. Our approach detects victim-aware adaptive covert channels by observing inconsistencies in such tokens, which are induced by the attacker attempting to mimic the victim's traffic. Although HoneyTraffic has limitations in detecting victim-aware adaptive covert channels, it complements existing detection methods and, in combination with them, can make evasion harder for an attacker.
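    The honey-token mechanism can be sketched as follows: a monitor injects fresh single-use tokens into the victim's outgoing messages, so an attacker that mimics observed traffic replays a stale token and becomes detectable. The message format, header name, and token handling below are illustrative assumptions, not HoneyTraffic's actual design.

```python
# Sketch of the honey-token idea: inject a fresh, unpredictable token into the
# victim's legitimate messages; a mimicking attacker replays an old token (or
# sends none), which the monitor detects as an inconsistency.

import secrets

class HoneyTokenMonitor:
    def __init__(self):
        self.valid = set()

    def inject(self, message: dict) -> dict:
        token = secrets.token_hex(8)       # fresh single-use token
        self.valid.add(token)
        return {**message, "X-Token": token}

    def check(self, message: dict) -> bool:
        """Return True only if the message carries a fresh, known token."""
        token = message.get("X-Token")
        if token in self.valid:
            self.valid.discard(token)      # single use: replays become invalid
            return True
        return False                       # missing, forged, or replayed token

monitor = HoneyTokenMonitor()
legit = monitor.inject({"path": "/api/update"})
print(monitor.check(legit))                # True: genuine victim traffic

# A covert channel that copies the victim's last message replays its token.
mimic = dict(legit)
print(monitor.check(mimic))                # False: token already consumed
```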