14 research outputs found

    Detection and Mitigation of Steganographic Malware

    Get PDF
    A new attack trend concerns the use of some form of steganography and information hiding to make malware stealthier and able to elude many standard security mechanisms. Therefore, this Thesis addresses the detection and mitigation of this class of threats. In particular, it considers malware implementing covert communications within network traffic or cloaking malicious payloads within digital images. The first research contribution of this Thesis is in the detection of network covert channels. Unfortunately, the literature on the topic lacks real traffic traces or attack samples with which to perform precise tests or security assessments. Thus, a propaedeutic research activity has been devoted to developing two ad-hoc tools. The first creates covert channels targeting the IPv6 protocol by eavesdropping flows, whereas the second embeds secret data within arbitrary traffic traces that can be replayed to perform investigations under realistic conditions. This Thesis then starts with a security assessment concerning the impact of hidden network communications in production-quality scenarios. Results have been obtained by considering channels cloaking data in the most popular protocols (e.g., TLS, IPv4/v6, and ICMPv4/v6) and show that de-facto standard intrusion detection systems and firewalls (i.e., Snort, Suricata, and Zeek) are unable to spot this class of hazards. Since malware can conceal information (e.g., commands and configuration files) in almost every protocol, traffic feature or network element, configuring or adapting pre-existing security solutions may not be straightforward. Moreover, inspecting multiple protocols, fields or conversations at the same time could lead to performance issues. Thus, a major effort has been devoted to developing a suite based on the extended Berkeley Packet Filter (eBPF) to gain visibility over different network protocols/components and to efficiently collect various performance indicators or statistics with a single technology. This part of the research made it possible to spot the presence of network covert channels targeting the header of the IPv6 protocol or the inter-packet time of generic network conversations. In addition, the eBPF-based approach turned out to be very flexible and also revealed hidden data transfers between two processes co-located within the same host. Another important contribution of this part of the Thesis concerns the deployment of the suite in realistic scenarios and its comparison with other similar tools. Specifically, a thorough performance evaluation demonstrated that eBPF can be used to inspect traffic and reveal the presence of covert communications even under high loads, e.g., it can sustain rates up to 3 Gbit/s on commodity hardware. To further address the problem of revealing network covert channels in realistic environments, this Thesis also investigates malware targeting traffic generated by Internet of Things devices. In this case, an incremental ensemble of autoencoders has been considered to cope with the "unknown" location of the hidden data generated by a threat covertly exchanging commands with a remote attacker. The second research contribution of this Thesis is in the detection of malicious payloads hidden within digital images. In fact, the majority of real-world malware exploits hiding methods based on Least Significant Bit steganography and some of its variants, such as the Invoke-PSImage mechanism.
Therefore, a relevant amount of research has been devoted to detecting the presence of hidden data and classifying the payload (e.g., malicious PowerShell scripts or PHP fragments). To this end, mechanisms leveraging Deep Neural Networks (DNNs) proved to be flexible and effective, since they can learn by combining raw low-level data and can be updated or retrained to consider unseen payloads or images with different features. To take realistic threat models into account, this Thesis studies malware targeting different types of images (i.e., favicons and icons) and various payloads (e.g., URLs and Ethereum addresses, as well as webshells). The obtained results show that DNNs can be considered a valid tool for spotting the presence of hidden content, since their detection accuracy remains above 90% even when facing "elusion" mechanisms such as basic obfuscation techniques or alternative encoding schemes. Lastly, when detection or classification is not possible (e.g., due to resource constraints), approaches enforcing "sanitization" can be applied. Thus, this Thesis also considers autoencoders able to disrupt hidden malicious content without degrading the quality of the image.
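    To make the hiding technique targeted by this detection work concrete, the following is a minimal Python sketch (not the Thesis' tooling) of recovering a payload embedded with plain Least Significant Bit steganography, in the spirit of Invoke-PSImage-style hiding. The 32-bit length header, the file name, and the bit layout are hypothetical illustration choices.

        # Minimal sketch: recover a payload hidden in the least significant bits of an
        # RGB image. Assumes (hypothetically) that the first 32 hidden bits encode the
        # payload length in bytes; real malware uses a variety of ad-hoc layouts.
        from PIL import Image  # pip install Pillow

        def extract_lsb_payload(path: str) -> bytes:
            pixels = Image.open(path).convert("RGB").getdata()
            bits = [channel & 1 for pixel in pixels for channel in pixel]

            def to_bytes(bit_slice):
                return bytes(
                    int("".join(map(str, bit_slice[i:i + 8])), 2)
                    for i in range(0, len(bit_slice), 8)
                )

            length = int.from_bytes(to_bytes(bits[:32]), "big")  # hypothetical header
            return to_bytes(bits[32:32 + length * 8])

        if __name__ == "__main__":
            print(extract_lsb_payload("favicon_sample.png")[:80])  # hypothetical file

    A DNN-based detector such as the one evaluated in the Thesis would instead learn the statistical footprint that this kind of embedding leaves in the low-order bit plane, rather than parsing any specific layout.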

    Enhancing security in public IaaS cloud systems through VM monitoring: a consumer’s perspective

    Get PDF
    Cloud computing is attractive to both consumers and providers, who can benefit from potential economies of scale that reduce the cost of use (for consumers) and of operating the infrastructure (for providers). In the IaaS service deployment model of the cloud, consumers can launch their own virtual machines (VMs) on an infrastructure made available by a cloud provider, enabling a number of different applications to be hosted within the VM. The cloud provider generally has full control of and access to the VM, giving the provider the potential to access both VM configuration parameters and the hosted data. Trust between the consumer and the provider is key in this context and is generally assumed to exist. However, relying on this assumption alone can be limiting. We argue that the VM owner must have greater access to information about the operations being carried out on their VM by the provider and greater visibility into how the VM and its data are stored and processed in the cloud. For example, when VMs are migrated by the provider to another region without notifying the owner, privacy concerns can arise. Therefore, mechanisms must be in place to ensure that violations of confidentiality, integrity and the SLA do not occur. In this thesis, we present a number of contributions in the field of cloud security which aim to support trustworthy cloud computing. We propose monitoring of security-related VM events as a solution to some of the cloud security challenges, and present a system design and architecture to monitor such events in public IaaS cloud systems. To enable focused monitoring, we propose a taxonomy of security-related VM events. The architecture is supported by a prototype implementation of a monitoring tool called VMInformant, which keeps the user informed and alerted about various events that have taken place on their VM. The tool was evaluated with CPU- and I/O-intensive benchmarks to measure the performance and storage overheads associated with monitoring such events. Since events in multiple VMs belonging to the same owner may be related, we also propose the architecture of a system, called Inspector Station, to aggregate and analyse events from multiple VMs. This system enables the consumer: (1) to learn about the overall security status of multiple VMs; (2) to find patterns in the events; and (3) to make informed decisions related to security. To ensure that VMs are not migrated to another region without notifying the owner, we propose a hybrid approach which combines multiple metrics to estimate the likelihood of a migration event. The technical aspects in this thesis are backed up by practical experiments that evaluate the approaches in real public IaaS cloud systems, e.g., Amazon AWS and Google Cloud Platform. We argue that this level of transparency is essential to improve the trust between a cloud consumer and provider, especially in the context of a public cloud system.
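    As a rough illustration of combining multiple metrics into a migration-likelihood estimate (a sketch only, not the hybrid approach actually proposed in the thesis), the Python fragment below tracks TCP connect latency from inside the VM to a few fixed reference endpoints and mixes a latency-shift signal with a pause-detection flag. The endpoints, the 20 ms threshold, and the weights are all hypothetical.

        # Sketch: estimate how likely it is that "this VM has been moved" by combining
        # simple observable metrics. All constants below are illustrative assumptions.
        import socket
        import time

        REFERENCE_HOSTS = ["ec2.us-east-1.amazonaws.com", "ec2.eu-west-1.amazonaws.com"]

        def rtt_ms(host: str, port: int = 443) -> float:
            """TCP handshake time to a reference host, in milliseconds."""
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=2):
                pass
            return (time.monotonic() - start) * 1000

        def migration_score(baseline: dict, pause_detected: bool,
                            w_rtt: float = 0.7, w_pause: float = 0.3) -> float:
            """Weighted score in [0, 1]; higher means a migration is more likely."""
            shifted = sum(abs(rtt_ms(h) - baseline[h]) > 20 for h in REFERENCE_HOSTS)
            return w_rtt * (shifted / len(REFERENCE_HOSTS)) + w_pause * float(pause_detected)

        if __name__ == "__main__":
            baseline = {h: rtt_ms(h) for h in REFERENCE_HOSTS}
            time.sleep(60)  # in practice the check would run periodically
            print("migration likelihood:", migration_score(baseline, pause_detected=False))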

    A Corpus-based Register Analysis of Corporate Blogs – Text Types and Linguistic Features

    Get PDF
    A main theme in sociolinguistics is register variation, a situation- and use-dependent variation of language. Numerous studies have provided evidence of linguistic variation across situations of use in English. However, very little attention has been paid to the language of corporate blogs (CBs), which is often seen as an emerging genre of computer-mediated communication (CMC). Previous studies on blogs and corporate blogs have provided important information about their linguistic features and functions; however, our understanding of linguistic variation in corporate blogs remains limited, because many of these studies have focused on individual linguistic features rather than on how features interact and on the possible relations between forms (linguistic features) and functions. Given these limitations, a more systematic perspective on linguistic variation in corporate blogs is needed. To study register variation in corporate blogs more systematically, a combined framework rooted in Systemic Functional Linguistics (SFL) and register theories (e.g., Biber, 1988, 1995; Halliday & Hasan, 1989) is adopted. This combination builds on the common ground they share: a functional view of language, attention to co-occurrence patterns of linguistic features, and the importance of large corpora to linguistic research. Guided by this framework, this thesis aims to: 1) investigate the functional linguistic variation in corporate blogs, identify the text types that are distinguished linguistically, and examine how the CB text types cut across CB industry categories; and 2) identify salient linguistic differences across text types in corporate blogs in the configuration of the three components of the context of situation: field, tenor, and mode of discourse. To achieve these goals, a 590,520-word corpus consisting of 1,020 textual posts from 41 top-ranked corporate blogs is created and mapped onto the combined framework, which consists of Biber’s multi-dimensional (MD) approach and Halliday’s SFL. Accordingly, two sets of empirical analyses are conducted in sequence. First, CB text types are identified using a corpus-based MD approach that applies multivariate statistical techniques (factor analysis and cluster analysis) to the investigation of register variation; then, selected linguistic features, including the most common verbs and their process types, personal pronouns, modals, lexical density, and grammatical complexity, are drawn from the metafunctions of field, tenor and mode within the SFL framework, and their differences across text types are analysed. The results of these analyses show not only that the corporate blog is a hybrid genre, combining various text types that serve different communicative purposes and functional goals, but also that certain text types are closely related to particular industries, meaning that the CB texts categorized into a given text type come mainly from a particular industry. On this basis, the lexical and grammatical features (i.e., the most common verbs, pronouns, modal verbs, lexical density and grammatical complexity) associated with Halliday’s metafunctions are further explored and compared across the six text types.
It is found that the language features related to field, tenor and mode in corporate blogs demonstrate a dynamic nature: centring on an interpersonal function, these business blogs are essentially used for sales, customer relationship management and branding. This research project contributes to the existing field of knowledge in the following ways. Firstly, it develops the methodology used in corpus investigations of language variation, paving the way for further research into corporate blogs and other forms of electronic communication and, more generally, for corpus-based investigations of other language varieties. Secondly, it adds greatly to the description of the corporate blog as a language variety in its own right, including the different text types identified in CB discourse and the linguistic features realized in the context of situation. This highlights the fact that corporate blogs cannot be regarded as a single, uniform discourse; rather, they vary according to text type and context of situation.
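    As a schematic illustration of the multi-dimensional pipeline described above (a sketch under simplifying assumptions, not the thesis' actual analysis), the Python fragment below reduces per-post counts of linguistic features with factor analysis and then clusters the factor scores into candidate text types. The feature matrix is a tiny hypothetical placeholder rather than data from the 590,520-word CB corpus.

        # Sketch of the MD-style workflow: feature counts -> factors -> clusters.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        # Rows = blog posts; columns = normalised counts of illustrative features
        # (e.g., private verbs, first-person pronouns, modals, nominalisations).
        features = np.array([
            [3.1, 5.2, 1.0, 0.4],
            [0.9, 1.1, 2.8, 3.5],
            [3.4, 4.8, 0.8, 0.6],
            [1.2, 0.9, 3.1, 3.9],
            [2.8, 4.1, 1.2, 0.7],
            [1.0, 1.3, 2.6, 3.2],
        ])

        scaled = StandardScaler().fit_transform(features)
        scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(scaled)
        text_types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
        print(text_types)  # cluster label = candidate text type for each post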

    The impact of microservices: an empirical analysis of the emerging software architecture

    Get PDF
    Master's dissertation in Informatics Engineering. The applications’ development paradigm has faced changes in recent years, with modern development being characterized by the need to continuously deliver new software iterations. With great affinity with those principles, microservices is a software architecture whose characteristics potentially promote multiple quality attributes often required by modern, large-scale applications. Its recent growth in popularity and acceptance in industry means this architectural style is often described as a way of modernizing applications that allegedly solves all the inconveniences of traditional monolithic applications. However, there are several noteworthy costs associated with its adoption, which existing empirical research describes only vaguely, often summarizing them as "the complexity of a distributed system". The adoption of microservices provides the agility to achieve its promised benefits, but to actually reach them, several key implementation principles have to be honored. Given that it is still a fairly recent approach to developing applications, the lack of established principles and of knowledge within development teams results in misjudging both the costs and the value of this architectural style. The outcome is often an implementation that conflicts with its promised benefits. Implementing a microservices-based architecture that achieves its alleged benefits involves multiple patterns and methodologies that add a considerable amount of complexity. To evaluate this impact in a concrete and empirical way, the same e-commerce platform was developed from scratch following a monolithic architectural style and two microservices-based architectural patterns, featuring distinct inter-service communication and data management mechanisms. The effort involved in dealing with eventual consistency, maintaining a communication infrastructure, and managing data in a distributed way introduced significant overheads not present in the development of traditional applications. Nonetheless, migrating from a monolithic architecture to a microservices-based one is currently accepted as the modern way of developing software, and this ideology is not often contested, nor are the technical challenges involved appropriately emphasized. Whether the migration is considered over-engineering or a necessity, this dissertation contributes empirical data and insights that showcase its impact across several topics. From the trade-offs associated with the use of specific patterns, the development of functionalities in a distributed way, and the processes to assure a variety of quality attributes, to performance benchmark experiments and the use of observability techniques, the entire development process is described and constitutes the object of study of this dissertation.
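    To make the eventual-consistency trade-off mentioned above tangible, the following is a minimal, self-contained Python sketch (illustrative only, not the dissertation's platform) in which two toy services keep separate data stores and synchronise through published events, so one service's view becomes correct only after the event is processed. The service names, event names, and in-memory queue standing in for a message broker are all hypothetical.

        # Two toy services communicating asynchronously through an in-memory "broker".
        from collections import defaultdict
        from queue import Queue

        event_bus: Queue = Queue()  # stand-in for a real message broker

        class OrderService:
            def __init__(self):
                self.orders = {}

            def place_order(self, order_id: str, sku: str, qty: int):
                self.orders[order_id] = (sku, qty)        # local write first...
                event_bus.put(("OrderPlaced", sku, qty))  # ...then publish the fact

        class InventoryService:
            def __init__(self):
                self.stock = defaultdict(lambda: 100)     # hypothetical initial stock

            def drain_events(self):
                while not event_bus.empty():
                    event, sku, qty = event_bus.get()
                    if event == "OrderPlaced":
                        self.stock[sku] -= qty            # applied asynchronously

        orders, inventory = OrderService(), InventoryService()
        orders.place_order("o-1", "sku-42", 3)
        print(inventory.stock["sku-42"])  # 100: inventory not yet consistent
        inventory.drain_events()
        print(inventory.stock["sku-42"])  # 97: consistent once the event is handled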

    Agentless Cloud-Wide Streaming of Guest File System Updates

    No full text

    Network operator intent: a basis for user-friendly network configuration and analysis

    Get PDF
    Two important network management activities are configuration (making the network behave in a desirable way) and analysis (querying the network’s state). A challenge common to these activities is specifying operator intent. Seemingly simple configurations such as “no network user should exceed their allocated bandwidth” or questions like “how many network devices are in the library?” are difficult to formulate in practice, e.g., they may require multiple tools (like access control lists, firewalls, databases, or accounting software) and a detailed knowledge of the network. This requires a high degree of expertise and experience, and even then, mistakes are common. An understanding of the core concepts that network operators manipulate and analyse is needed so that more effective, efficient, and user-friendly tools and processes can be created. To address this, we create a taxonomy of languages for configuring networks, and use it to evaluate three such languages to learn how operators can express their intent. We identify factors such as language features, testing, state modeling, documentation, and tool support. Then, we interview network operators to understand what they want to express. We analyse the interviews and identify nine orthogonal dimensions which frequently appear in expressions of operator intent. We use these concepts, and our taxonomy, as the basis for a language for querying both business- and network-domain data. We evaluate our language and find that it reduces the number and complexity of queries needed to answer questions about networks. We also conduct a user study, and find that our language reduces novices’ cognitive load while increasing their accuracy and efficiency. With our language, users better understand how to approach questions, can more easily express themselves, and make fewer mistakes when interpreting data. Overall, we find that operator intent can, at one extreme, be expressed directly, as primitives like flow rules, packet counters, or CLI commands, and at another extreme as human-readable statements which are automatically translated and implemented. The former gives operators precise control, but the latter may be easier to use. We also find that there is more to expressing intent than syntax and semantics, as usability, redundancy, state manipulation, and ecosystems all play a role. Our findings also show the importance of incorporating business-domain concepts in network management tools. By understanding operator intent we can reduce errors, improve both human-human and human-computer communication, create more usable tools, and make network operators more effective.
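    As a toy illustration of the kind of combined business- and network-domain query the thesis motivates (for example, “how many network devices are in the library?”), the Python sketch below joins a device inventory with room-to-building data. The records, field names, and the join itself are hypothetical; the thesis' own query language is the actual contribution.

        # Hypothetical inventory (network domain) and room locations (business domain).
        network_inventory = [
            {"hostname": "sw-lib-01", "kind": "switch",       "room": "LIB-1.02"},
            {"hostname": "ap-lib-07", "kind": "access-point", "room": "LIB-2.14"},
            {"hostname": "sw-eng-03", "kind": "switch",       "room": "ENG-0.11"},
        ]
        business_locations = {
            "LIB-1.02": "library", "LIB-2.14": "library", "ENG-0.11": "engineering",
        }

        def devices_in(building: str) -> list[str]:
            """Join the two domains and return device hostnames in a building."""
            return [d["hostname"] for d in network_inventory
                    if business_locations.get(d["room"]) == building]

        print(len(devices_in("library")), devices_in("library"))  # 2 ['sw-lib-01', 'ap-lib-07']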

    Junos Pulse Secure Access Service Administration Guide

    Get PDF
    This guide describes basic configuration procedures for the Juniper Networks Secure Access Service. This document was formerly titled Secure Access Administration Guide and is now part of the Junos Pulse documentation set. This guide is designed for network administrators who are configuring and maintaining a Juniper Networks Secure Access Service device. To use this guide, you need a broad understanding of networks in general and the Internet in particular, networking principles, and network configuration. Any detailed discussion of these concepts is beyond the scope of this guide. The Juniper Networks Secure Access Service enables you to give employees, partners, and customers secure and controlled access to your corporate data and applications, including file servers, Web servers, native messaging and e-mail clients, hosted servers, and more, from outside your trusted network using just a Web browser. Secure Access Service provides robust security by intermediating the data that flows between external users and your company’s internal resources. Users gain authenticated access to authorized resources through an extranet session hosted by the appliance. During intermediation, Secure Access Service receives secure requests from the external, authenticated users and then makes requests to the internal resources on behalf of those users. By intermediating content in this way, Secure Access Service eliminates the need to deploy extranet toolkits in a traditional DMZ or provision a remote access VPN for employees. To access the intuitive Secure Access Service home page, your employees, partners, and customers need only a Web browser that supports SSL and an Internet connection. This page provides the window from which your users can securely browse Web or file servers, use HTML-enabled enterprise applications, start the client/server application proxy, begin a Windows, Citrix, or Telnet/SSH terminal session, access corporate e-mail servers, start a secured layer 3 tunnel, or schedule or attend a secure online meeting.
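    To illustrate the intermediation pattern the guide describes, here is a minimal Python sketch (not Juniper code) of a gateway that receives a request from an external user and re-issues it against an internal resource on that user's behalf. The internal host name, the listening port, and the omitted session/authorization checks are assumptions made for brevity.

        # Toy intermediary: forward GET requests to a (hypothetical) internal resource.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        INTERNAL_BASE = "http://intranet.internal.example"  # hypothetical backend

        class IntermediaryHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # A real gateway would first validate the user's extranet session and
                # check that the requested resource is authorized for that user.
                with urlopen(INTERNAL_BASE + self.path, timeout=5) as upstream:
                    body = upstream.read()
                self.send_response(200)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 8443), IntermediaryHandler).serve_forever()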