
    Data set and machine learning models for the classification of network traffic originators

    The widespread adoption of encryption in computer network traffic is increasing the difficulty of analyzing such traffic for security purposes. The data set presented in this data article is composed of network statistics computed on captures of TCP flows, originated by executing various network stress and web crawling tools, along with statistics of benign web browsing traffic. Furthermore, this data article describes a set of Machine Learning models, trained on the described data set, which can classify network traffic by the tool category (network stress tool, web crawler, web browser), the specific tool (e.g., Firefox), and the tool version (e.g., Firefox 68) used to generate it. These models are compatible with the analysis of traffic with encrypted payloads, since the statistics are computed only on the TCP headers of the packets. The data presented in this article can be used to train and assess the performance of new Machine Learning models for tool classification.
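
    A minimal sketch of how such models could be trained, assuming a CSV export of the per-flow TCP header statistics with a hypothetical "tool_category" label column (the file name and column names below are illustrative, not taken from the data set):

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        # Hypothetical export of the per-flow statistics described above.
        flows = pd.read_csv("tcp_flow_stats.csv")
        X = flows.drop(columns=["tool_category"])  # TCP-header-level features only
        y = flows["tool_category"]                 # stress tool / crawler / browser

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=42)

        clf = RandomForestClassifier(n_estimators=100, random_state=42)
        clf.fit(X_train, y_train)
        print(classification_report(y_test, clf.predict(X_test)))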

    Empirical assessment of the effort needed to attack programs protected with client/server code splitting

    Context. Code hardening is meant to fight malicious tampering with sensitive code executed on client hosts. Code splitting is a hardening technique that moves selected chunks of code from client to server. Although widely adopted, the effective benefits of code splitting are not fully understood and thoroughly assessed. Objective. The objective of this work is to compare unprotected code vs. code protected with code splitting, considering two levels of the chunk size parameter, in order to assess the effectiveness of the protection - in terms of both attack time and success rate - and to understand the attack strategy and process used to overcome the protection. Method. We conducted an experiment with master students performing attack tasks on a small application hardened with different levels of protection. Students carried out their tasks working at the source code level. Results. We observed a statistically significant effect of code splitting on the attack success rate, which on average was reduced from 89% with unprotected clear code to 52% with the most effective protection. The protection variant that moved several small code chunks turned out to be more effective than the alternative moving fewer but larger chunks. Different strategies were identified, yielding different success rates. Moreover, we discovered that successful attacks followed a different process than failed ones. Conclusions. We found empirical evidence of the effect of code splitting, assessed its magnitude, and evaluated the influence of the chunk size parameter. Moreover, we extracted the process used to overcome this obfuscation technique.
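
    An illustrative sketch of the technique under study (the function names and endpoint are hypothetical, not from the experiment): a sensitive chunk is removed from the client and replaced by a stub that invokes it on the server, so an attacker inspecting the client code never sees the moved logic.

        import json
        import urllib.request

        # Before splitting: the sensitive chunk ships with the client,
        # where an attacker can read and tamper with it.
        def check_license_local(key: str) -> bool:
            digest = sum(ord(c) for c in key) % 97  # stand-in for sensitive logic
            return digest == 42

        # After splitting: the client keeps only this stub; the chunk
        # executes on a trusted server (hypothetical endpoint).
        def check_license_remote(key: str) -> bool:
            req = urllib.request.Request(
                "https://example.org/chunk",
                data=json.dumps({"key": key}).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["ok"]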

    A meta-model for software protections and reverse engineering attacks

    Software protection techniques are used to protect valuable software assets against man-at-the-end attacks. Those attacks include reverse engineering to steal confidential assets, and tampering to break the software’s integrity in unauthorized ways. While their ultimate aims are the original assets, attackers also target the protections along their attack path. To allow both humans and tools to reason about the strength of available protections (and combinations thereof) against potential attacks on concrete applications and their assets, i.e., to assess the true strength of layered protections, all relevant and available knowledge on the relations between the relevant aspects of protections, attacks, applications, and assets needs to be collected, structured, and formalized. This paper presents a software protection meta-model that can be instantiated to construct a formal knowledge base that holds precisely that information. The presented meta-model is validated against existing models and taxonomies in the domain of software protection, and by means of prototype tools that we developed to help non-modelling-expert software defenders with populating a knowledge base and with extracting and inferring practically useful information from it. All discussed tools are available as open source, and we evaluate their use as part of a software protection workflow on an open source application and industrial use cases.
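
    A minimal sketch of how such a meta-model might be instantiated as a knowledge base, under a simplified reading of the abstract (the class and field names are illustrative, not the paper's actual meta-model):

        from dataclasses import dataclass, field

        @dataclass
        class Asset:
            name: str
            requirement: str   # e.g. "confidentiality", "integrity"

        @dataclass
        class AttackStep:
            name: str
            targets: list      # assets, or protections on the attack path

        @dataclass
        class Protection:
            name: str
            protects: list = field(default_factory=list)
            mitigates: list = field(default_factory=list)

        # Populating a tiny knowledge base instance:
        key = Asset("aes_key", "confidentiality")
        extraction = AttackStep("static key extraction", targets=[key])
        wbc = Protection("white-box crypto", protects=[key], mitigates=[extraction])

        # A trivial inference: which attack steps remain unmitigated?
        steps, protections = [extraction], [wbc]
        unmitigated = [s for s in steps
                       if not any(s in p.mitigates for p in protections)]
        print([s.name for s in unmitigated])  # -> []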

    A Model for Automated Cybersecurity Threat Remediation and Sharing

    This paper presents an approach to the automatic remediation of threats reported by Cyber Threat Intelligence. Remediation strategies, named Recipes, are expressed in a close-to-natural language for easy validation. Thanks to the developed models, they are interpreted, contextualized, and then translated into CACAO Security Playbooks, a standard format ready for automatic enforcement, without human intervention. The presented approach also allows sharing of remediation procedures on threat-sharing platforms (e.g., MISP), which improves the overall security posture. The effectiveness of the approach has been tested in the context of two EC-funded projects.
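
    A minimal sketch of the recipe-to-playbook idea: a close-to-natural-language step is parsed and emitted as a CACAO-style JSON document. The field layout below is simplified and illustrative; it is not the paper's translator and not a complete, schema-valid CACAO playbook.

        import json
        import uuid

        recipe = "block traffic from 203.0.113.7 on the border firewall"

        def recipe_to_playbook(text: str) -> dict:
            ip = next(tok for tok in text.split() if tok.count(".") == 3)
            step_id = f"action--{uuid.uuid4()}"
            return {
                "type": "playbook",
                "id": f"playbook--{uuid.uuid4()}",
                "workflow_start": step_id,
                "workflow": {
                    step_id: {
                        "type": "action",
                        "description": text,
                        # Hypothetical vendor-neutral command; a real playbook
                        # would carry a command enforceable on the target.
                        "commands": [{"type": "manual",
                                      "command": f"deny ip from {ip}"}],
                    }
                },
            }

        print(json.dumps(recipe_to_playbook(recipe), indent=2))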

    A model of capabilities of Network Security Functions

    This paper presents a formal model of the features, named security capabilities, offered by the controls used for enforcing security policies in computer networks. It has been designed to support policy refinement and policy translation, and to address useful, practical tasks in a vendor-independent manner. The model adopts state-of-the-art design patterns and has been designed to be extensible. The model describes the actions that the controls can perform (e.g., deny packets or encrypt flows), the conditions that select what the actions apply to, how to compose valid configuration rules from them, and how to build configurations from rules. It proved effective in modelling filtering controls, including iptables.
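
    A minimal sketch of the capability idea (class names and the supported capabilities are illustrative, not the paper's model): actions and conditions are represented abstractly, composed into a rule, and only a final vendor-specific step renders the rule, here as a standard iptables command.

        from dataclasses import dataclass

        @dataclass
        class Condition:
            capability: str   # e.g. "src_ip", "dst_port"
            value: str

        @dataclass
        class Rule:
            action: str       # e.g. "deny", "allow"
            conditions: list

        def to_iptables(rule: Rule) -> str:
            flags = {"src_ip": "-s", "dst_port": "--dport"}
            parts = ["iptables", "-A", "INPUT"]
            for c in rule.conditions:
                if c.capability == "dst_port":
                    parts += ["-p", "tcp"]   # --dport requires a protocol match
                parts += [flags[c.capability], c.value]
            parts += ["-j", "DROP" if rule.action == "deny" else "ACCEPT"]
            return " ".join(parts)

        rule = Rule("deny", [Condition("src_ip", "198.51.100.0/24"),
                             Condition("dst_port", "22")])
        print(to_iptables(rule))
        # iptables -A INPUT -s 198.51.100.0/24 -p tcp --dport 22 -j DROP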

    Non-interventional, retrospective data of long-term home parenteral nutrition in patients with benign diseases: Analysis of a nurse register (SERECARE)

    Objectives: The aim of this study was to evaluate the safety and efficacy of a home parenteral nutrition (HPN) service in patients with benign chronic intestinal failure (CIF). Methods: This was a 10-y retrospective, non-interventional, multicenter study conducted with adult and pediatric patients with CIF who received the HPN service. We analyzed data prospectively collected in a dedicated register by HPN nurses. Results: From January 2002 to December 2011, a total of 794 patients (49.7% male; median age 1 y for children and 57 y for adults) were included in the analysis. Over the 10-y period, 723 central venous catheter (CVC) complications occurred, of which 394 were infectious (54.5%), 297 were mechanical (41.1%), and 32 (3.3%) were defined as CVC-related thrombosis. The complication rate was higher in children (1.11 per patient) than in adults (0.70 per patient). During the observation period, the rates of both infectious and mechanical complications showed a globally declining trend, and ~75% of patients had neither infectious nor mechanical CVC complications. HPN efficacy was evaluated in 301 patients with a minimum follow-up of 36 mo. Median body mass index and Karnofsky score significantly increased (P < 0.001) over baseline for adults and for pediatric patients in the 0 to 2 age range. Conclusions: The use of a structured register has proved to be a key strategy for monitoring the outcomes of long-term treatment, improving time efficiency, and preventing potential malpractice. To our knowledge, this is the largest survey ever documented; the results were consistent despite the heterogeneity of the centers, because of duly applied standard rules and protocols.

    The EEE experiment project: status and first physics results

    The Extreme Energy Events Project is an experiment for the detection of Extensive Air Showers which exploits the Multigap Resistive Plate Chamber technology. At the moment, 40 EEE muon telescopes distributed all over the Italian territory are taking data, and the analysis of these data has produced the first interesting results, which are reported here. Moreover, the Project has a strong added value thanks to its effectiveness in terms of scientific communication, which derives from the peculiar way it was planned and carried out.