30 research outputs found

    End-to-end security architecture and self-protection mechanisms for cloud environments

    For several years, the virtualization of infrastructures has been one of the major research challenges, promising lower energy consumption while delivering new services. However, many attacks hinder the global adoption of cloud computing. Facing multiple threats and heterogeneous defense mechanisms, the autonomic approach offers simpler, more robust, and more efficient management of cloud security, and self-protection has recently attracted growing interest as a possible answer to the cloud infrastructure protection challenge. Yet previous solutions fall at the last hurdle because they overlook key features of the cloud: they lack flexible security policies, cross-layered defense, multiple control granularities, and open security architectures. This thesis presents VESPA, a self-protection architecture for cloud infrastructures. VESPA is built around policies that can regulate security at several levels. Flexible coordination between self-protection loops allows a rich spectrum of security strategies to be enforced, such as detection and reaction across multiple layers. A multi-plane, extensible architecture also enables simple integration of commodity security components. Recently, some of the most powerful attacks against cloud computing infrastructures have targeted the most sensitive component, the Virtual Machine Monitor (VMM, or hypervisor). In many cases, the main attack vector is a poorly confined device driver; current architectures offer no protection against such attacks, and the defense mechanisms involved are static and difficult to manage. This thesis proposes an altogether different approach with KungFuVisor, a framework derived from VESPA for building self-defending hypervisors. The result is a very flexible self-protection architecture that can dynamically enforce a rich spectrum of remediation actions over different parts of the VMM, while also facilitating defense strategy administration. We show its application to three different protection schemes: virus infection, mobile clouds, and hypervisor device drivers. VESPA can thus enhance cloud infrastructure security.
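    The following minimal Python sketch illustrates the kind of cross-layer self-protection loop described above: detectors raise alerts, a policy decides which layers should react, and reactors apply remediation. All class and function names here are hypothetical illustrations, not the actual VESPA API.

        from dataclasses import dataclass

        @dataclass
        class Alert:
            layer: str      # layer that raised the alert, e.g. "vm", "hypervisor"
            severity: int   # 0 (informational) .. 10 (critical)

        class Detector:
            def poll(self) -> list[Alert]:
                raise NotImplementedError

        class Reactor:
            def remediate(self, alert: Alert) -> None:
                raise NotImplementedError

        class ProtectionLoop:
            """One autonomic loop: detect -> decide -> react, driven by a policy."""
            def __init__(self, detectors, reactors, policy):
                self.detectors = detectors   # list of Detector instances
                self.reactors = reactors     # dict: layer name -> Reactor
                self.policy = policy         # callable: Alert -> list of target layers

            def iterate(self) -> None:
                for detector in self.detectors:
                    for alert in detector.poll():
                        # The policy may trigger reactions on layers other than the
                        # one that detected the problem (cross-layered defense).
                        for layer in self.policy(alert):
                            self.reactors[layer].remediate(alert)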

    An Empirical Analysis of Cyber Deception Systems


    Professional English. Fundamentals of Software Engineering

    The textbook contains original texts of professional content, accompanied by a thematic terminological vocabulary and exercises of various methodological orientations. Intended for students enrolled in the programmes "Software Engineering", "Computer Science", and "Computer Engineering".

    Novel graph analytics for enhancing data insight

    Graph analytics is a fast-growing and significant field in the visualization and data mining community, applied to numerous high-impact domains such as network security, finance, and health care, providing users with adequate knowledge of the various patterns within a given system. Although a series of methods have been developed over the past years for the analysis of unstructured collections of multi-dimensional points, graph analytics has only recently been explored. Despite the significant progress achieved recently, there are still many open issues in the area, concerning not only the performance of graph mining algorithms but also the production of effective graph visualizations that enhance human perception. This thesis investigates novel methods for graph analytics in order to enhance data insight. Towards this direction, it proposes two methods for graph mining and visualization. Building on previous work in graph mining, the thesis suggests a set of novel graph features that are particularly effective at identifying the behavioral patterns of the nodes of a graph. The proposed features capture the interaction of node neighborhoods with other nodes on the graph. Moreover, unlike previous approaches, the graph features introduced here include information from multiple node neighborhood sizes, thus capturing long-range correlations between nodes and depicting the behavioral aspects of each node with high accuracy. Experimental evaluation on multiple datasets shows that using the proposed graph features for graph mining provides better results than other state-of-the-art graph features. Thereafter, the focus is placed on improving graph visualization methods towards enhanced human insight. To achieve this, the thesis uses non-linear deformations to reduce visual clutter. Non-linear deformations have previously been used to magnify significant or cluttered regions in data or images, reducing clutter and enhancing the perception of patterns. Extending previous approaches, this work introduces a hierarchical approach to non-linear deformation that reduces visual clutter by magnifying significant regions, leading to enhanced visualizations of one-, two-, and three-dimensional datasets. In this context, an energy function is utilized to determine the optimal deformation for every local region in the data, taking into consideration information from multiple single-layer significance maps. The problem is then transformed into a minimization of the energy function under specific spatial constraints. Extended experimental evaluation provides evidence that the proposed hierarchical approach for generating the significance map surpasses current methods, effectively identifies significant regions, and delivers better results. The thesis concludes with a discussion outlining the major achievements of the work, as well as some possible drawbacks and other open issues of the proposed approaches that could be addressed in future work.
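    As a hedged illustration of the multi-neighborhood features described above, the sketch below computes simple statistics (size, edge count, and edge density) of each node's 1-hop, 2-hop, and 3-hop ego networks; the particular feature choice and the networkx-based implementation are illustrative assumptions, not the thesis's actual feature set.

        import networkx as nx

        def neighborhood_features(G, node, max_radius=3):
            """Size, edge count, and density of the r-hop ego network
            around `node`, for r = 1 .. max_radius."""
            features = []
            for r in range(1, max_radius + 1):
                ego = nx.ego_graph(G, node, radius=r)
                n, m = ego.number_of_nodes(), ego.number_of_edges()
                density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
                features.extend([n, m, density])
            return features

        # Example on a small built-in graph
        G = nx.karate_club_graph()
        print(neighborhood_features(G, 0))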

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government, and industry contributed their innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that explains current challenges, innovative solutions, and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art solutions to attacks on critical systems.

    Forensic analysis of computer evidence

    Digital forensics is the science of discovering, preserving, and analyzing evidence on digital devices. Its end goal is to determine which events occurred, who performed them, and how they were performed. For an investigation to lead to a sound conclusion, it must demonstrate that it is the product of sound scientific methodology. Digital forensics faces many problems: an insufficient number of capable examiners, a lack of examiner training in the absence of a certification standard, tools that cannot handle the more complex cases, and a lack of intelligent automation. This work applies computer science principles to digital forensics and builds a basis for its acceptance in both the legal and forensic science communities. It focuses on three solutions. In terms of education, there is a lack of mandatory standardization, certification, and accreditation. There is also a lack of standards for the interpretation of forensic evidence: the techniques used by forensic investigators during analysis generally involve ad-hoc methods based on a vague and untested understanding of the system, and these techniques are the root of the significant differences in the testimony given by digital forensic expert witnesses. Lastly, digital forensic expert witness testimony is under great scrutiny because of the lack of standards in both education and investigative methods. To remedy this situation, we developed multiple avenues to facilitate more effective investigations. To improve the availability and standardization of education, we developed a multidisciplinary digital forensics curriculum. To improve the standards of forensic evidence interpretation, we developed a methodology based on graph theory to build a logical view of low-level forensic data. To improve the admissibility of evidence, we developed a methodology to assign a likelihood to the hypotheses determined by forensic investigators. Together, these methods significantly improve the effectiveness of digital forensic investigations. Overall, this work calls on the computer science community to join forces with the digital forensics community to develop, test, and implement established computer science methodology in digital forensics.
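    As a hedged illustration of the last contribution, assigning a likelihood to investigator hypotheses, the short Python sketch below applies a plain Bayesian update to two competing hypotheses; the hypotheses, priors, and conditional probabilities are invented placeholders, not a method or data taken from the thesis.

        # Invented example values: how plausible each hypothesis was before
        # examining the evidence, and how likely the observed artifacts are
        # under each hypothesis.
        priors = {"intrusion": 0.3, "benign_activity": 0.7}
        likelihoods = {"intrusion": 0.08, "benign_activity": 0.01}

        # Bayes' rule: P(H | evidence) = P(evidence | H) * P(H) / P(evidence)
        evidence = sum(priors[h] * likelihoods[h] for h in priors)
        posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
        print(posteriors)  # intrusion ~0.77, benign_activity ~0.23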

    Diverse Intrusion-tolerant Systems

    Over the past 20 years, there have been indisputable advances in the development of Byzantine Fault-Tolerant (BFT) replicated systems. These systems remain safe as long as at most f out of n replicas fail simultaneously. To maintain correctness, it is therefore assumed that replicas do not suffer from common-mode failures, in other words, that replicas fail independently. In an adversarial setting, this requires that replicas do not share similar vulnerabilities, otherwise a single exploit could compromise a significant part of the system. The thesis investigates how this assumption can be substantiated in practice by exploiting diversity when managing the configurations of replicas. It begins with an analysis of a large dataset of vulnerability information to gather evidence that diversity can contribute to failure independence. In particular, we used data from a vulnerability database to devise strategies for building groups of n replicas with different Operating Systems (OS). Our results demonstrate that it is possible to create dependable configurations of OSes that do not share vulnerabilities over reasonable periods of time (i.e., a few years). The thesis then proposes a new design for a firewall-like service that protects and regulates access to critical systems and that could benefit from our diversity management approach. The solution provides fault and intrusion tolerance through an architecture based on two filtering layers, enabling efficient removal of invalid messages at early stages in order to decrease the costs associated with BFT replication in the later stages. The thesis also presents a novel solution for managing diverse replicas. It collects and processes data from several sources to continuously compute a risk metric. When the risk increases, the solution replaces a potentially vulnerable replica with another one, trying to maximize the failure independence of the replicated service. The replaced replica is then put in quarantine and updated with the available patches, to be prepared for later re-use. We devised various experiments that show the dependability gains and performance impact of our prototype, including key benchmarks and three BFT applications (a key-value store, our firewall-like service, and a blockchain). LASIGE research unit (UID/CEC/00408/2019) and project PTDC/EEI-SCR/1741/2041 (Abyss).
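    As a hedged sketch of the diversity idea above, the Python fragment below picks, by exhaustive search over a tiny example, the group of n operating systems whose members share the fewest known vulnerabilities; the OS names and CVE identifiers are invented placeholders, not data from the vulnerability database used in the thesis.

        from itertools import combinations

        # Fabricated vulnerability sets: which CVEs affect each candidate OS.
        shared_vulns = {
            "os_a": {"CVE-1", "CVE-2"},
            "os_b": {"CVE-2", "CVE-3"},
            "os_c": {"CVE-4"},
            "os_d": {"CVE-1", "CVE-4"},
        }

        def common_vulns(config):
            """Number of vulnerabilities shared by at least two OSes in `config`."""
            shared = set()
            for a, b in combinations(config, 2):
                shared |= shared_vulns[a] & shared_vulns[b]
            return len(shared)

        n = 3  # group size for this toy example; BFT typically needs n >= 3f + 1
        best = min(combinations(shared_vulns, n), key=common_vulns)
        print(best, common_vulns(best))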