80 research outputs found

    Towards Effective Trust-Based Packet Filtering in Collaborative Network Environments

    Towards Bayesian-Based Trust Management for Insider Attacks in Healthcare Software-Defined Networks

    The medical industry is increasingly digitalized and Internet-connected (e.g., the Internet of Medical Things), and when deployed in an Internet of Medical Things environment, software-defined networks (SDNs) allow the decoupling of network control from the data plane. There is no debate among security experts that the security of Internet-enabled medical devices is crucial, and insider attacks remain an ongoing threat vector. In this paper, we focus on the identification of insider attacks in healthcare SDNs. Specifically, we survey stakeholders from 12 healthcare organizations (i.e., two hospitals and two clinics in each of Hong Kong, Singapore, and China). Based on the survey findings, we develop a trust-based approach built on Bayesian inference to identify malicious devices in a healthcare environment. Experimental results in both a simulated and a real-world network environment demonstrate the feasibility and effectiveness of our proposed approach for the detection of malicious healthcare devices; in particular, our approach decreases the trust values of malicious devices faster than similar approaches.
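
    The abstract does not reproduce the paper's update rule; as a rough illustration of the general idea, the sketch below uses a standard Beta-Bernoulli model, in which a device's trust value is the posterior mean of a Beta distribution updated from observed good and bad interactions. The class name, the uniform prior, and the counts are illustrative assumptions, not the authors' exact method.

```python
class BetaTrust:
    """Beta-Bernoulli trust score: a common Bayesian-inference scheme
    for trust management (illustrative; not the paper's exact model)."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) prior = no initial opinion about the device
        self.alpha = alpha  # pseudo-count of well-behaved interactions
        self.beta = beta    # pseudo-count of suspicious interactions

    def observe(self, well_behaved: bool) -> None:
        if well_behaved:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean of the Beta distribution
        return self.alpha / (self.alpha + self.beta)

# A device that keeps misbehaving sees its trust value drop quickly:
device = BetaTrust()
for _ in range(10):
    device.observe(well_behaved=False)
print(f"trust after 10 bad interactions: {device.trust:.2f}")  # ~0.08
```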

    Anomaly Detection in File Sharing in Enterprise Environments (Detecção de anomalias na partilha de ficheiros em ambientes empresariais)

    File sharing is the activity of making files (documents, videos, photos) available to other users. Enterprises use file sharing to make files available to their employees or clients. These files can be made available through an internal network, an (external) cloud service, or even peer-to-peer (P2P). Most of the time, the files within the file sharing service contain sensitive information that cannot be disclosed. The Equifax data breach exploited a zero-day vulnerability that allowed arbitrary code execution, leading to a huge breach in which the information of over 143 million users was presumed compromised. Ransomware is a type of malware that encrypts computer data (documents, media, ...), making it inaccessible to the user and demanding a ransom for its decryption. This type of malware has been a serious threat to enterprises. WannaCry and NotPetya are examples of ransomware that had a huge impact on enterprises, with large sums paid in ransoms; WannaCry alone reached more than $142,361.51 in ransoms. In this dissertation, we propose a system that can detect file sharing anomalies such as ransomware (WannaCry, NotPetya) and data theft (the Equifax breach), as well as their propagation. The solution consists of monitoring the enterprise network, creating communication profiles for each user/machine, analysing the data with a machine learning algorithm, and triggering a countermeasure mechanism that blocks the affected machine when an anomaly is detected.
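
    The dissertation's concrete feature set and learning algorithm are not given in this abstract; the sketch below illustrates the described pipeline (per-machine communication profiles feeding an anomaly detector, then a countermeasure hook), assuming scikit-learn's IsolationForest and made-up traffic features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-machine communication profile: one row per time window
# with [files touched, bytes written, distinct peers contacted].
baseline = np.column_stack([
    rng.normal(12, 2, 200),     # ~12 files per window
    rng.normal(4e6, 5e5, 200),  # ~4 MB written
    rng.normal(3, 1, 200),      # ~3 peers contacted
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A ransomware-like burst: mass file access and heavy write volume.
window = np.array([[900, 2e9, 40]])
if model.predict(window)[0] == -1:   # scikit-learn returns -1 for anomalies
    print("anomaly detected: quarantine the machine")  # countermeasure hook
```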

    Methods and Techniques for Dynamic Deployability of Software-Defined Security Services

    With the recent trend of “network softwarisation”, enabled by emerging technologies such as Software-Defined Networking and Network Function Virtualisation, system administrators of data centres and enterprise networks have started replacing dedicated hardware-based middleboxes with virtualised network functions running on servers and end hosts. This radical change has facilitated the provisioning of advanced and flexible network services, ultimately helping system administrators and network operators to cope with rapid changes in service requirements and networking workloads. This thesis investigates the challenges of provisioning network security services in “softwarised” networks, where the security of residential and business users can be provided by means of sets of software-based network functions running on high-performance servers or on commodity devices. The study is approached from the perspective of the telecom operator, whose goal is to protect customers from network threats and, at the same time, maximise the number of provisioned services, and thereby revenue. Specifically, the overall aim of the research presented in this thesis is to propose novel techniques for optimising the resource usage of software-based security services, hence increasing the operator's chances of accommodating more service requests while respecting the desired level of network security of its customers. In this direction, the contributions of this thesis are the following: (i) a solution for the dynamic provisioning of security services that minimises the utilisation of computing and network resources, and (ii) novel methods based on Deep Learning and Linux kernel technologies for reducing the CPU usage of software-based security network functions, with specific focus on defence against Distributed Denial of Service (DDoS) attacks. The experimental results reported in this thesis demonstrate that the proposed solutions for service provisioning and DDoS defence require fewer computing resources than similar approaches available in the scientific literature or adopted in production networks.
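
    The thesis's provisioning algorithm is not detailed in this abstract; as a toy illustration of the underlying packing problem (consolidating virtualised security functions onto as few servers as possible to minimise resource usage), here is a classic first-fit-decreasing sketch in which all names and numbers are invented:

```python
def first_fit_decreasing(cpu_demands, server_capacity):
    """Pack virtual security functions (by CPU demand) onto servers,
    opening a new server only when no existing one has room.
    A classic heuristic, not the thesis's actual provisioning algorithm."""
    servers = []     # remaining capacity of each open server
    placement = {}   # function name -> server index
    for vnf, demand in sorted(cpu_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if demand <= free:
                servers[i] -= demand
                placement[vnf] = i
                break
        else:  # no open server fits: provision a new one
            servers.append(server_capacity - demand)
            placement[vnf] = len(servers) - 1
    return placement

# Hypothetical CPU demands (cores) of customers' security functions.
demands = {"fw-a": 3.0, "ids-a": 2.5, "fw-b": 1.0, "ddos-b": 3.5}
print(first_fit_decreasing(demands, server_capacity=4.0))
```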

    An Analysis of Network Flow Records for Inferring Web Browser Redirection

    Legitimate web browser redirection is often used to take users to web pages that have moved or to help users find the correct website when they have entered the web address incorrectly. Unfortunately, computer network attackers can use web browser redirection to manage malware-serving hosts and conceal their activity. An analysis of network flow records yields heuristics for flow size, flow duration, and inter-flow duration that indicate flows where web browser redirection is likely to have occurred. Results show that flows matching these redirection heuristics are indeed several times more likely to communicate with Internet hosts that have exhibited a history of malicious behavior. A network security administrator can thus filter large sets of network flow records to reveal flows most likely to contain web browser redirection. This capability reduces the sample space when looking for evidence of malicious activity targeting web browsers and contributes more generally to the expanding field of flow-based application recognition.
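
    The paper's actual threshold values are not given in this abstract; below is a sketch of how such heuristics might be applied to filter flow records, with entirely assumed cut-offs for flow size, flow duration, and inter-flow duration:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    bytes: int           # total flow size
    duration: float      # flow duration, seconds
    gap_to_next: float   # seconds until the same client's next flow

def looks_like_redirect(flow: Flow) -> bool:
    """Flag flows whose shape matches an HTTP redirect: tiny payload,
    short-lived, immediately followed by another flow from the client.
    The thresholds are illustrative assumptions, not the paper's values."""
    return (flow.bytes < 1_000           # little more than headers
            and flow.duration < 0.5      # short-lived exchange
            and flow.gap_to_next < 1.0)  # browser follows Location: quickly

flows = [Flow(740, 0.12, 0.30), Flow(48_000, 3.2, 40.0)]
suspects = [f for f in flows if looks_like_redirect(f)]
print(f"{len(suspects)} candidate redirection flow(s)")
```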

    A comparative experimental design and performance analysis of Snort-based Intrusion Detection System in practical computer networks

    As one of the most reliable technologies, the network intrusion detection system (NIDS) allows the monitoring of incoming and outgoing traffic to identify unauthorised usage and misuse by attackers in computer network systems. To this end, this paper investigates the experimental performance of a Snort-based NIDS (S-NIDS) in a practical network with the latest technology under various network scenarios, including high data speed and/or heavy traffic and/or large packet size. An effective testbed is designed based on Snort using different multi-core processors, e.g., i5 and i7, with different operating systems, e.g., Windows 7, Windows Server and Linux. Furthermore, considering an enterprise network consisting of multiple virtual local area networks (VLANs), a centralised parallel S-NIDS (CPS-NIDS) is proposed with the support of a centralised database server to deal with high data speed and heavy traffic. Experimental evaluation is carried out for each network configuration to assess the performance of the S-NIDS in different network scenarios as well as to validate the effectiveness of the proposed CPS-NIDS. In particular, by analysing packet analysis efficiency, an improvement of up to 10% is shown to be achieved with Linux over other operating systems, while up to 8% improvement can be achieved with i7 over i5 processors.
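
    The paper's headline metric is packet analysis efficiency; assuming it is computed as the share of received packets that Snort actually analysed (the counter values below are invented stand-ins for the statistics Snort reports at shutdown), the calculation is a one-liner:

```python
def packet_analysis_efficiency(received: int, analysed: int) -> float:
    """Share of captured packets the IDS actually inspected; packets
    dropped under load are the gap between the two counters."""
    return analysed / received if received else 0.0

# Hypothetical counters read from Snort's end-of-run statistics.
print(f"{packet_analysis_efficiency(1_000_000, 931_000):.1%}")  # 93.1%
```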

    Load Balance and Resource Efficiency in Communication Networks

    Network management is critical for today's networks. This study investigates both load balancing and resource efficiency in network management. For load balancing, one unfavorable situation is that active traffic uses only a portion of the equal-cost paths instead of all of them. The root causes of load imbalance are not easily identified and located by network operators, and most related research concerns the design of load balancing mechanisms or network-wide troubleshooting that does not pinpoint the causes of load imbalance. In this study, we describe a computational framework based on network measurements to identify the hash correlations causing the load imbalance. We also describe a novel framework based on Coprime to mitigate the load imbalance brought by hash correlations. In evaluations based on real network trace data and topologies, we show that our approach reduces the error (CV or K-S statistic) by at least one order of magnitude. For resource efficiency, today's networks demand increasing switch memory to support essential functions such as forwarding and monitoring. However, cache memory is restricted when processing data streams, in which the input is presented as a sequence of items and can be examined in only a few passes (typically just one). This study introduces a new single-pass reservoir weighted-sampling stream aggregation algorithm, Priority-Based Aggregation (PBA). A naive approach that samples items regardless of key and then aggregates the results is hopelessly inefficient. In distinction, our proposed algorithm uses a single persistent random variable across the lifetime of each key in the cache and maintains unbiased estimates of the key aggregates that can be queried at any point in the stream. Concerning statistical properties, we prove that PBA provides unbiased estimates of the true aggregates. We analyze the computational complexity of PBA and its variants and provide a detailed evaluation of its accuracy on synthetic and trace data. In addition to sampling, this study also considers placing classification rules from various network functions into switches. While much work has focused on compressing the rules, most of it proposes solutions operating in the memory of a single switch. Instead, this study proposes a collaborative approach encompassing switches and network functions. This architecture enables a trade-off between the usage of (expensive) switch memory and (cheaper) downstream network bandwidth and network function resources. Our system reduces memory usage significantly compared to strawman approaches, as demonstrated with extensive simulations and a prototype evaluation with real traffic traces and rules.
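
    PBA's full estimator and unbiasedness proof are beyond the scope of an abstract; the sketch below shows classical priority sampling, the weighted reservoir scheme this line of work builds on, where each key gets priority w/u for a persistent uniform u and kept keys are estimated as max(w, tau). This is the textbook scheme, not PBA itself, and all inputs are invented.

```python
import random

def priority_sample(weights: dict, k: int, rng=random.Random(0)):
    """Classical priority sampling (Duffield, Lund, Thorup): keep the k
    items with the largest priorities q = w / u, with u ~ Uniform(0, 1].
    Each kept item's weight is estimated as max(w, tau), where tau is the
    (k+1)-th largest priority; this estimator is provably unbiased.
    Assumes k < len(weights)."""
    prio = {key: w / rng.uniform(1e-12, 1.0) for key, w in weights.items()}
    ranked = sorted(prio, key=prio.get, reverse=True)
    kept, tau = ranked[:k], prio[ranked[k]]
    return {key: max(weights[key], tau) for key in kept}

# Invented per-flow byte counts keyed by source address.
flows = {"10.0.0.1": 50.0, "10.0.0.2": 3.0, "10.0.0.3": 1.0, "10.0.0.4": 0.5}
print(priority_sample(flows, k=2))  # two keys with unbiased weight estimates
```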

    Timely processing of big data in collaborative large-scale distributed systems

    Today’s Big Data phenomenon, characterized by huge volumes of data produced at very high rates by heterogeneous and geographically dispersed sources, is fostering the employment of large-scale distributed systems in order to leverage parallelism, fault tolerance and locality awareness with the aim of delivering suitable performance. Among the several areas where Big Data is gaining increasing significance, the protection of Critical Infrastructure is one of the most strategic since it impacts the stability and safety of entire countries. Intrusion detection mechanisms can benefit greatly from novel Big Data technologies, which allow much more information to be exploited in order to sharpen the accuracy of threat discovery. A key aspect for further increasing the amount of data available for detection purposes is collaboration (meant as information sharing) among distinct actors that share the common goal of maximizing the chances of recognizing malicious activities earlier. Indeed, if an agreement can be found to share their data, they all gain the possibility to significantly improve their cyber defenses. The abstraction of the Semantic Room (SR) allows interested parties to form trusted and contractually regulated federations, the Semantic Rooms, for the sake of secure information sharing and processing. Another crucial point for the effectiveness of cyber protection mechanisms is the timeliness of detection: the sooner a threat is identified, the faster proper countermeasures can be put in place so as to confine any damage. Within this context, the contributions reported in this thesis are threefold:
    * As a case study to show how collaboration can enhance the efficacy of security tools, we developed a novel algorithm for the detection of stealthy port scans, named R-SYN (Ranked SYN port scan detection); a simplified sketch of the signal it builds on follows this list. We implemented it in three distinct technologies, all integrated within an SR-compliant architecture that allows for collaboration through information sharing: (i) a centralized Complex Event Processing (CEP) engine (Esper), (ii) a framework for distributed event processing (Storm), and (iii) Agilis, a novel platform for batch-oriented processing that leverages the Hadoop framework and RAM-based storage for fast data access. Regardless of the technology employed, all the evaluations showed that increasing the number of participants (that is, the amount of input data at disposal) improves the detection accuracy. The experiments also made clear that a distributed approach achieves lower detection latency and keeps up with higher input throughput than a centralized one.
    * Distributing the computation over a set of physical nodes introduces the issue of assigning the available resources to the processing tasks so as to minimize the time the computation takes to complete. We investigated this aspect in Storm by developing two distinct scheduling algorithms, both aimed at decreasing the average processing time of a single input event by reducing inter-node traffic. Experimental evaluations showed that these two algorithms can improve performance by up to 30%.
    * Computations in online processing platforms (like Esper and Storm) run continuously, and the need to refine running computations or add new ones, together with the need to cope with the variability of the input, requires the ability to adapt the resource allocation at runtime, which entails a set of additional problems; the most relevant concern how to handle incoming data and processing state while the topology is being reconfigured, and the issue of temporarily reduced performance. To this aim, we also explored the alternative approach of running the computation periodically on batches of input data: although it involves a latency penalty, it eliminates the great complexity of dynamic reconfiguration. We chose Hadoop as the batch-oriented processing framework and developed strategies specific to computations based on time windows, which are very likely to be used for pattern recognition purposes, as in the case of intrusion detection. Our evaluations compared these strategies and made evident the kind of performance this approach can provide.
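
    R-SYN's ranking logic is only named in the summary above; as a simplified, single-site illustration of the signal it builds on (sources that open many TCP handshakes they never complete), here is a sketch in which the event format and every address are made up:

```python
from collections import defaultdict

# Hypothetical event stream: (src_ip, dst_port, tcp_flag) observations.
events = [
    ("203.0.113.9", 22, "SYN"), ("203.0.113.9", 80, "SYN"),
    ("203.0.113.9", 443, "SYN"), ("203.0.113.9", 8080, "SYN"),
    ("198.51.100.7", 80, "SYN"), ("198.51.100.7", 80, "ACK"),  # completed handshake
]

half_open = defaultdict(set)  # src -> ports SYN'd but never completed
for src, port, flag in events:
    if flag == "SYN":
        half_open[src].add(port)
    elif flag == "ACK":
        half_open[src].discard(port)

# Rank sources by incomplete handshakes; in the collaborative (SR) setting,
# each participant would contribute its own events to the shared count.
for src, ports in sorted(half_open.items(), key=lambda kv: -len(kv[1])):
    print(src, "half-open handshakes:", len(ports))
```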