
    Asymptotic Approximations for TCP Compound

    In this paper, we derive an approximation for the throughput of TCP Compound connections under random losses. Throughput expressions for TCP Compound under a deterministic loss model exist in the literature. These are obtained assuming the window sizes are continuous, i.e., a fluid behaviour is assumed. We validate this model theoretically. We show that under the deterministic loss model, the TCP window evolution for TCP Compound is periodic and is independent of the initial window size. We then consider the case when packets are lost randomly and independently of each other. We discuss Markov chain models to analyze the performance of TCP in this scenario. We use insights from the deterministic loss model to get an appropriate scaling for the window size process and show that these scaled processes, indexed by p, the packet error rate, converge to a limit Markov chain process as p goes to 0. We show the existence and uniqueness of the stationary distribution for this limit process. Using the stationary distribution of the limit process, we obtain approximations for the throughput of TCP Compound under random losses when packet error rates are small. We compare our results with ns2 simulations, which show a good match. Comment: Longer version for NCC 201
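    As a concrete illustration of the deterministic loss model above, the following minimal Python sketch iterates the standard Compound TCP window update (a per-RTT increase of alpha*w^k and a multiplicative decrease by beta on loss); the parameter values are the usual Compound defaults and are an assumption here, not taken from the paper:

    # Sketch: Compound TCP window evolution under periodic deterministic loss.
    # alpha=0.125, k=0.75, beta=0.5 are common Compound defaults (assumed).
    def compound_window_trace(w0, rtts_between_losses, cycles,
                              alpha=0.125, k=0.75, beta=0.5):
        w, trace = w0, []
        for _ in range(cycles):
            for _ in range(rtts_between_losses):
                trace.append(w)
                w += alpha * w ** k        # concave per-RTT increase
            w *= (1.0 - beta)              # multiplicative decrease on loss
        return trace

    # The abstract's claim that the evolution becomes periodic and independent
    # of the initial window can be checked by comparing traces:
    print(compound_window_trace(2.0, 50, 20)[-3:])
    print(compound_window_trace(64.0, 50, 20)[-3:])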

    Analysis of Multiple Flows using Different High Speed TCP protocols on a General Network

    We develop analytical tools for performance analysis of multiple TCP flows (which could be using TCP CUBIC, TCP Compound, or TCP New Reno) passing through a multi-hop network. We first compute the average window size for a single TCP connection (using CUBIC or Compound TCP) under random losses. We then consider two techniques to compute steady-state throughput for different TCP flows in a multi-hop network. In the first technique, we approximate the queues as M/G/1 queues. In the second, we use an optimization program whose solution approximates the steady-state throughput of the different flows. Our results match well with ns2 simulations. Comment: Submitted to Performance Evaluation
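    The first technique above approximates each bottleneck as an M/G/1 queue, whose mean waiting time is given by the Pollaczek-Khinchine formula; the Python sketch below shows that formula together with an illustrative throughput-from-window composition (the composition is a standard fixed-point-style approximation assumed here, not the paper's exact program):

    # Sketch: M/G/1 mean waiting time (Pollaczek-Khinchine) for a bottleneck
    # queue. lam = arrival rate, es = mean service time, es2 = E[S^2].
    def mg1_mean_wait(lam, es, es2):
        rho = lam * es
        assert rho < 1.0, "queue must be stable"
        return lam * es2 / (2.0 * (1.0 - rho))

    # Illustrative throughput approximation: average window divided by an RTT
    # that includes the queueing delays along the path (assumed composition).
    def flow_throughput(avg_window_pkts, base_rtt, queue_waits):
        return avg_window_pkts / (base_rtt + sum(queue_waits))

    print(flow_throughput(30.0, 0.05, [mg1_mean_wait(800.0, 0.001, 2e-6)]))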

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today's home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote-access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become extensions of critical networks and services: hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underscore the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS provides 99.99% accuracy with a 56.83% memory improvement. IPS stands out as the most resource-intensive UTM service; SUTMS reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS). SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
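    The abstract names Apriori-based smart logging as an optional SUTMS function; as a hedged illustration only, the following minimal Python sketch mines frequent co-occurring log events in the Apriori style (the event names and min_support threshold are hypothetical, not taken from the work):

    from itertools import combinations

    # Sketch: Apriori-style frequent-itemset mining over log-event sets, in
    # the spirit of "smart logging"; events and threshold are made up.
    def apriori(transactions, min_support):
        items = sorted({i for t in transactions for i in t})
        frequent, k = {}, 1
        candidates = [frozenset([i]) for i in items]
        while candidates:
            counts = {c: sum(c <= t for t in transactions) for c in candidates}
            level = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(level)
            k += 1
            survivors = sorted({i for c in level for i in c})
            candidates = [frozenset(c) for c in combinations(survivors, k)]
        return frequent

    logs = [frozenset(t) for t in ({"dns_burst", "new_device"},
                                   {"dns_burst", "port_scan"},
                                   {"dns_burst", "port_scan", "new_device"})]
    print(apriori(logs, min_support=2))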

    AUTOMATED CYBER OPERATIONS MISSION DATA REPLAY

    The Persistent Cyber Training Environment (PCTE) has been developed as the joint force solution to provide a single training environment for cyberspace operations. PCTE offers a closed network for Joint Cyberspace Operations Forces, which provides a range of training solutions from individual sustainment training to mission rehearsal and post-operation analysis. Currently, PCTE does not have the ability to replay previously executed training scenarios or external scenarios. Replaying cyber mission data on a digital twin virtual network within PCTE would support operator training as well as enable development and testing of new strategies for offensive and defensive cyberspace operations. A necessary first step in developing such a tool is to acquire network specifications for a target network, or to extract network specifications from a cyber mission data set. This research developed a program design and proof-of-concept tool, Automated Cyber Operations Mission Data Replay (ACOMDR), to extract a portion of the network specifications necessary to instantiate a digital twin network within PCTE from cyber mission data. From this research, we were able to identify key areas for future work to increase the fidelity of the network specification and replay cyber events within PCTE.
    Captain, United States Marine Corps. Approved for public release. Distribution is unlimited.
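    ACOMDR's extraction internals are not described in this abstract; as a purely hypothetical sketch of the step it names (deriving network specifications from mission data), the following Python builds a host, subnet, and link inventory from flow records (the "src"/"dst" record schema and the /24 subnet heuristic are assumptions, not the tool's actual format):

    from collections import defaultdict
    from ipaddress import ip_address, ip_network

    # Hypothetical sketch: derive a minimal network specification (hosts,
    # subnets, observed links) from captured flow records.
    def extract_network_spec(flow_records):
        hosts, links = set(), defaultdict(int)
        for rec in flow_records:
            src, dst = ip_address(rec["src"]), ip_address(rec["dst"])
            hosts.update((src, dst))
            links[(src, dst)] += 1
        # Group hosts into /24s as a crude stand-in for subnet inference.
        subnets = {ip_network(f"{h}/24", strict=False) for h in hosts}
        return {"hosts": hosts, "subnets": subnets, "links": dict(links)}

    spec = extract_network_spec([{"src": "10.0.1.5", "dst": "10.0.2.7"},
                                 {"src": "10.0.1.5", "dst": "10.0.1.9"}])
    print(sorted(str(s) for s in spec["subnets"]))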

    GAUMLESS: Modelling the Capitalization of Human Action on the Internet

    The focus of this thesis is a field of study related to information design, namely visual modelling, and the application of its concepts and frameworks to a case study on the use of Internet cookies. It represents an opportunity to enhance information design's relevance as an adaptive discipline, i.e., one that borrows and learns from various knowledge domains in representing phenomena for the purposes of decision-making and action-generation. As a critical design project, the thesis endeavors to inform Internet users and other audiences of the exploitation inherent in the data-mining processes employed by websites for generating cookies and to expose the risks to users. This focus was motivated by concern over the ignorance, or at best the casual awareness, of many Internet users of the implications of giving their consent to the use of cookies. The thesis employs a qualitative research methodology that consolidates information design principles, conventions, and processes; a distillation of relevant modelling frameworks; and pan-disciplinary philosophical perspectives (i.e., cybernetics, systems theory, and social systems theory) into a visual model that represents the cookie system. The significance of this study's contribution to design theory lies in the manner in which the boundaries of its research methodology (based on the study's purpose, goals, and targeted audience) were determined, and in the singular visual modelling process developed in consideration of the myriad relevant knowledge domains, extensive data sources, and esoteric technical aspects of the system under study. Whereas simplification in a visual model is a key factor for knowledge-creation and usability, its effectiveness in informing and inspiring is also measured by its accuracy and comprehensiveness. In concentrating on human behaviour and decision-making contexts and applications, information design has the capacity to help meet personal and social needs and consequently can be a societal force for innovation and progress. The thesis's visual model is an example of this potential in its intention to represent the cookie process and to raise awareness of its personal and social implications. The study validates the responsibility of the information designer not to prescribe actions or solutions but rather to impart knowledge, support decision-making, and inspire critical reflection.

    Security and Privacy in Unified Communication

    The use of unified communication (video conferencing, audio conferencing, and instant messaging) has skyrocketed during the COVID-19 pandemic. However, security and privacy considerations have often been neglected. This paper provides a comprehensive survey of security and privacy in Unified Communication (UC). We systematically analyze security and privacy threats and mitigations in a generic UC scenario. Based on this, we analyze the security and privacy features of the major UC market leaders and draw conclusions about the overall UC landscape. While confidentiality in communication channels is generally well protected through encryption, other privacy properties are mostly lacking on UC platforms.

    RDMA mechanisms for columnar data in analytical environments

    Integrated Master's dissertation in Informatics Engineering. The amount of data in information systems is growing constantly and, as a consequence, the complexity of analytical processing is greater. There are several storage solutions to persist this information, with different architectures targeting different use cases. For analytical processing, storage solutions with a column-oriented format are particularly relevant due to the convenient placement of the data in persistent storage and the closer mapping to in-memory processing. Access to the database is typically remote and carries overhead, mainly when the same data must be obtained multiple times. It is therefore desirable to have a cache on the processing side, and solutions for this exist. The problem with the existing solutions is the overhead introduced by network latency and memory copies between logical layers. Remote Direct Memory Access (RDMA) mechanisms have the potential to help minimize this overhead. Furthermore, this type of mechanism is well suited to large amounts of data, because zero-copy has more impact as the data volume increases. One of the problems associated with RDMA mechanisms is the complexity of development, induced by a development paradigm that differs from that of other network communication protocols such as TCP. Aiming to improve the efficiency of analytical processing, this dissertation presents a distributed cache that takes advantage of RDMA mechanisms to improve analytical processing performance. The cache abstracts the intricacies of RDMA mechanisms and is developed as middleware, making it transparent to take advantage of this technology. Moreover, this technique could be used in other contexts where a distributed cache makes sense, such as a set of replicated web servers that access the same database.
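    The dissertation's middleware hides RDMA details behind an ordinary cache interface; the following Python sketch illustrates that abstraction with a mock transport standing in for one-sided RDMA reads (all class and method names are illustrative assumptions, not the dissertation's API):

    # Illustrative sketch: callers use get/put, while the transport underneath
    # could issue zero-copy one-sided RDMA reads in the real system.
    class ColumnarCache:
        def __init__(self, transport):
            self.transport = transport   # RDMA-backed in the real system
            self.index = {}              # column id -> (remote addr, length)

        def put(self, column_id, remote_addr, length):
            self.index[column_id] = (remote_addr, length)

        def get(self, column_id):
            addr, length = self.index[column_id]
            # With RDMA this read would bypass the remote CPU entirely.
            return self.transport.read(addr, length)

    class InMemoryTransport:
        # Stand-in so the sketch runs without RDMA hardware.
        def __init__(self, memory):
            self.memory = memory

        def read(self, addr, length):
            return bytes(self.memory[addr:addr + length])

    cache = ColumnarCache(InMemoryTransport(bytearray(b"0123456789")))
    cache.put("col_a", 2, 4)
    print(cache.get("col_a"))    # b'2345'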

    Evaluation of Active Queue Management (AQM) Models in Low Latency Networks

    Abstract: Low-latency networks require modification of current queue management in order to avoid large queuing delays. Today, TCP's congestion control maximizes the throughput of the link, benefiting large flows. However, nodes' buffers may become completely full, producing large delays and packet drops, a situation known as the bufferbloat problem. For today's time-sensitive applications, such as VoIP, online gaming, or financial trading, these queueing times cause poor quality of service that users notice directly. This work studies different alternatives for active queue management (AQM) at the nodes' links, optimizing the latency of small flows and thereby providing better quality for low-latency networks under congestion. The AQM models are simulated on a dumbbell topology with the ns3 software, which yields the latency values (measured as RTT) for different network situations and installed algorithms. Specifically, the RED, CoDel, PIE, and FQ_CoDel algorithms are studied, plus a modification of the TCP sender's congestion control with the Alternative Backoff with ECN (ABE) algorithm. The simulations show the best queueing times for the implementation that combines FQ_CoDel with ABE, which maximizes throughput while reducing packet latency. Thus, modifying queue management with FQ_CoDel and implementing ABE at the sender addresses the bufferbloat problem, offering the quality required by low-latency networks.
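    Among the AQMs studied, CoDel is defined by a small control law: begin dropping once the packet sojourn time has stayed above a 5 ms target for a 100 ms interval, then space successive drops by an inverse-square-root schedule. The Python sketch below is a simplified version of that state machine (times in seconds, default constants assumed), not the ns3 implementation:

    import math

    TARGET, INTERVAL = 0.005, 0.100   # CoDel's usual 5 ms / 100 ms defaults

    class CoDel:
        def __init__(self):
            self.first_above = None   # when sojourn first exceeded TARGET
            self.drop_next = None     # time of the next scheduled drop
            self.count = 0            # drops in the current episode

        def should_drop(self, now, sojourn):
            if sojourn < TARGET:      # latency back under control: reset
                self.first_above, self.drop_next, self.count = None, None, 0
                return False
            if self.first_above is None:
                self.first_above = now
                return False
            if now - self.first_above < INTERVAL:
                return False
            if self.drop_next is None or now >= self.drop_next:
                self.count += 1
                # sqrt control law: drops tighten while latency stays high
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                return True
            return False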

    Detection of Threat Propagation and Data Exfiltration in Enterprise Networks

    Modern corporations face multiple threats within their networks. In an era when companies depend heavily on information, these threats can seriously compromise the safety and integrity of sensitive data. Unauthorized access and illicit programs are one way of penetrating corporate networks, able to traverse and propagate to other terminals across the private network in search of confidential data and business secrets. The efficiency of traditional security defenses is being questioned given the number of data breaches occurring today, making it essential to develop new active monitoring systems with artificial intelligence capable of near-perfect detection in very short time frames. However, network monitoring and the storage of network activity records are restricted and limited by legislation and by privacy strategies, such as encryption, that aim to protect the confidentiality of private parties. This dissertation proposes methodologies to infer behavior patterns and disclose anomalies from network traffic analysis, detecting slight variations compared with the normal profile. Bounded by OSI layers 1 to 4, raw data are modeled into features representing network observations and subsequently processed by machine learning algorithms to classify network activity. Assuming the inevitability of a network terminal being compromised, this work comprises two scenarios: a self-spreading force that propagates over the internal network, and a data exfiltration charge that dispatches confidential information to the public network. Although the features and modeling processes were tested for these two cases, the operation is generic and can be used in more complex scenarios as well as in different domains. The last chapter describes the proof-of-concept scenario and how the data were generated, along with evaluation metrics to assess the model's performance. The tests showed promising results, ranging from 96% to 99% for the propagation case and 86% to 97% for data exfiltration. Master's in Computer and Telematics Engineering.
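    The classification stage described above (layer 1-4 flow features fed to machine learning algorithms) can be sketched with scikit-learn on synthetic data; the feature columns and the random-forest model are assumptions for illustration, not the thesis's exact choices:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    # Sketch: supervised classification of flow feature vectors. The four
    # synthetic columns stand in for layer 1-4 features (packet counts, byte
    # ratios, inter-arrival statistics); label 1 marks exfiltration-like flows.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),    # normal traffic
                   rng.normal(2.5, 1.5, size=(50, 4))])    # anomalous traffic
    y = np.array([0] * 500 + [1] * 50)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te)))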