
    Towards the simulation of cooperative perception applications by leveraging distributed sensing infrastructures

    With the rapid development of Automated Vehicles (AVs), the boundaries of their functionalities are being pushed and new challenges are being imposed. In increasingly complex and dynamic environments, it is fundamental to rely on more powerful onboard sensors and, usually, AI. However, there are limitations to this approach. As AVs are increasingly integrated into several industries, expectations regarding their cooperation abilities are growing, and vehicle-centric approaches to sensing and reasoning become hard to integrate. The proposed approach is to extend perception to the environment, i.e. outside of the vehicle, by making it smarter via the deployment of wireless sensors and actuators. This vastly improves perception capabilities in dynamic and unpredictable scenarios, often at lower cost, by relying mostly on low-cost sensors and embedded devices whose strength lies in large-scale deployment rather than centralized sensing abilities. Consequently, to support the development and deployment of such cooperative actions in a seamless way, we require co-simulation frameworks that can encompass multiple perspectives of control and communication for the AVs, the wireless sensors and actuators, and other actors in the environment. In this work, we rely on ROS2 and micro-ROS as the underlying technologies for integrating several simulation tools into a framework capable of supporting the development, testing and validation of such smart, cooperative environments. This endeavor was undertaken by building upon an existing simulation framework known as AuNa. We extended its capabilities to facilitate the simulation of cooperative scenarios by incorporating external sensors placed within the environment rather than relying only on vehicle-based sensors. Moreover, we devised a cooperative perception approach within this framework, showcasing its substantial potential and effectiveness.
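The core idea of extending perception beyond the vehicle — fusing detections from infrastructure sensors with the vehicle's own — can be illustrated with a toy fusion routine. This is a hypothetical sketch, not the dissertation's actual algorithm; the function names, the shared world frame, and the merge radius are all assumptions:

```python
import math

def fuse_detections(vehicle_dets, infra_dets, merge_radius=1.0):
    """Merge object detections from onboard and roadside sensors.

    Detections are (x, y) positions in a shared world frame; two
    detections closer than merge_radius are treated as the same object
    and averaged. Infrastructure detections extend coverage beyond the
    vehicle's own field of view.
    """
    fused = list(vehicle_dets)
    for ix, iy in infra_dets:
        for k, (fx, fy) in enumerate(fused):
            if math.hypot(ix - fx, iy - fy) < merge_radius:
                fused[k] = ((fx + ix) / 2, (fy + iy) / 2)  # same object
                break
        else:
            fused.append((ix, iy))  # object seen only by infrastructure
    return fused
```

In a ROS2-based framework, each sensor would publish its detections as messages and a fusion node would subscribe to all of them; the routine above only shows the merge step.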
This will enable the demonstration of multiple cooperation scenarios and also ease the deployment phase by relying on the same software architecture.

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Master's dissertation (Mestrado Integrado) in Informatics Engineering. With the ever-increasing Internet usage that follows the start of the new decade, optimizing this world-scale network of computers becomes a big priority in a technological sphere whose number of users keeps rising, as do the Quality of Service (QoS) demands of applications in domains such as media streaming or virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern regards how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality during their operation, which would be a preferable feature for network administrators and could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies aren't clearly communicated to them. Therefore, a system to bridge layer interests has much potential in helping achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, which was formed by the Internet Engineering Task Force (IETF) to explore standardizations for network information retrieval. This group specified a request-response protocol where authoritative entities provide resources containing network status information and administrative preferences. This sharing of infrastructural insight is done with the intent of enabling a cooperative environment between the network overlay and underlay during application operations, to obtain better infrastructural resourcefulness and the consequent minimization of the associated operational costs.
This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon their ideas, and finally verifies, in a simulation, the developed system's efficiency when compared to classical alternatives.
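ALTO's request-response protocol (standardized in RFC 7285) centres on JSON resources such as network maps and cost maps, in which an authoritative server groups endpoints into provider-defined identifiers (PIDs) and publishes routing costs between them. A minimal sketch of how an application could use such a cost map to prefer ISP-local peers — the function and data names are illustrative, not from the dissertation:

```python
def cheapest_peers(cost_map, my_pid, candidate_pids):
    """Rank candidate peers by ALTO routing cost from my_pid.

    cost_map mirrors the shape of an ALTO cost map resource:
    {source_pid: {dest_pid: numerical_cost}}. Destinations with no
    published cost are ranked last.
    """
    costs = cost_map.get(my_pid, {})
    return sorted(candidate_pids, key=lambda p: costs.get(p, float("inf")))

# Example: the ISP publishes lower costs for intra-domain transfers,
# so a peer-selection algorithm naturally achieves traffic locality.
cost_map = {"PID1": {"PID1": 1, "PID2": 5, "PID3": 10}}
print(cheapest_peers(cost_map, "PID1", ["PID3", "PID1", "PID2"]))
# → ['PID1', 'PID2', 'PID3']
```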

    A study on security and privacy in cloud data storage (Um estudo sobre a segurança e privacidade no armazenamento de dados em nuvens)

    Advisor: Marco Aurélio Amaral Henriques. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Cloud data storage is a service that brings several advantages for its users. However, in public cloud systems, the risks involved in the outsourcing of data storage can be a barrier to the adoption of this service by those concerned with privacy. Several cloud service providers that claim to protect users' data do not fulfill some requirements considered essential in a secure, reliable and easy-to-use service, raising questions about the security effectively obtained. We present here a study of users' privacy and data security requirements on public clouds. The study presents some techniques normally used to fulfill those requirements, along with an analysis of their relative costs and benefits, and evaluates these requirements in several public cloud systems. After comparing those systems, we propose a set of requirements and present, as a proof of concept, an application based on them which improves data security and user privacy in public clouds. We show that it is possible to protect cloud-stored data against third-party access (including by cloud administrators) without burdening the user with complex security protocols or procedures, making the public cloud storage service a more reliable choice for privacy-concerned users. CNPq grant 153392/2014-2.

    Using Botnet Technologies to Counteract Network Traffic Analysis

    Botnets have been problematic for over a decade. They are used to launch malicious activities including DDoS (Distributed Denial of Service) attacks, spamming, identity theft, unauthorized bitcoin mining and malware distribution. A nation-wide DDoS attack caused by the Mirai botnet on October 21, 2016, involving tens of millions of IP addresses, took down Twitter, Spotify, Reddit, The New York Times, Pinterest, PayPal and other major websites. In response to take-down campaigns by security personnel, botmasters have developed technologies to evade detection. The most widely used evasion technique is DNS fast-flux, where the botmaster frequently changes the mapping between domain names and IP addresses of the C&C server so that it becomes too late or too costly to trace the C&C server locations. Domain names generated with Domain Generation Algorithms (DGAs) are used as the 'rendezvous' points between botmasters and bots. This work focuses on how to apply botnet technologies (fast-flux and DGAs) to counteract network traffic analysis, thereby protecting user privacy. A better understanding of botnet technologies also helps us be proactive in defending against botnets. First, we proposed two new DGAs using Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) which can evade current detection methods and systems. We also developed two HMM-based DGA detection methods that can detect botnet DGA-generated domain names with or without training sets. This helps security personnel understand the botnet phenomenon and develop proactive tools to detect botnets. Second, we developed a distributed proxy system using fast-flux to evade national censorship and surveillance. The goal is to help journalists, human rights advocates and NGOs in West Africa have a secure and free Internet. We then developed a covert data transport protocol to transform arbitrary messages into real DNS traffic.
We encode the message into benign-looking domain names generated by an HMM, which represents the statistical features of legitimate domain names. This can be used to evade Deep Packet Inspection (DPI) and protect user privacy in two-way communication. Both applications serve as examples of applying botnet technologies to legitimate uses. Finally, we proposed a new protocol obfuscation technique that transforms an arbitrary network protocol into another (the Network Time Protocol and the video game protocol of Minecraft as examples) in terms of packet syntax and side-channel features (inter-packet delay and packet size). This research uses botnet technologies to help ordinary users have secure and private communications over the Internet. From our botnet research, we conclude that network traffic is a malleable and artificial construct. Although existing patterns are easy to detect and characterize, they are also subject to modification and mimicry. This means that we can construct transducers to make any communication pattern look like any other communication pattern. This is neither bad nor good for security. It is a fact that we need to accept and use as best we can.
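The dissertation's actual HMM and PCFG generators are not reproduced here, but the core DGA idea — producing statistically "legitimate-looking" domain names from a model of real ones, seeded so that botmaster and bots independently derive the same rendezvous names — can be sketched with a simple character-level Markov chain. All names and parameters below are illustrative:

```python
import random
from collections import defaultdict

def train_markov(domains, order=2):
    """Build a character-level Markov model from sample domain labels.
    '^' pads the start of each label; '$' marks its end."""
    model = defaultdict(list)
    for d in domains:
        s = "^" * order + d + "$"
        for i in range(len(s) - order):
            model[s[i:i + order]].append(s[i + order])
    return model

def generate_domain(model, seed, order=2, max_len=12):
    """Generate one domain name that mimics the training distribution.
    In a real DGA the seed would combine a shared secret with the date,
    so bots and botmaster compute the same name without communicating."""
    rng = random.Random(seed)
    state, out = "^" * order, ""
    while len(out) < max_len:
        c = rng.choice(model[state])
        if c == "$":
            break
        out += c
        state = state[1:] + c
    return out + ".com"
```

An HMM-based generator (as in the dissertation) adds hidden states capturing higher-level structure of legitimate names, which makes the output harder to distinguish statistically than this plain chain.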

    Towards an Automatic Microservices Manager for Hybrid Cloud Edge Environments

    Cloud computing came to make computing resources easier to access, thus helping a faster deployment of applications/services that benefit from the scalability provided by the service providers. An exponential growth of the data volume received by the cloud has been registered. This is due to the fact that almost every device used in everyday life is connected to the internet, sharing information on a global scale (e.g. smartwatches, clocks, cars, industrial equipment). Increasing the data volume results in increased latency in client applications, resulting in the degradation of the QoS (Quality of Service). With these problems, hybrid systems were born by integrating cloud resources with the various edge devices between the cloud and the edge, known as Fog/Edge computing. These devices are very heterogeneous, with different resource capabilities (such as memory and computational power), and geographically distributed. Software architectures also evolved, and the microservice architecture emerged to make application development more flexible and increase its scalability. The microservices architecture comprehends decomposing monolithic applications into small services, each with a specific functionality, that can be independently developed, deployed and scaled. Due to their small size, microservices are adequate for deployment on hybrid Cloud/Edge infrastructures. However, the heterogeneity of those deployment locations makes microservices' management and monitoring rather complex. Monitoring, in particular, is essential when considering that microservices may be replicated and migrated in the cloud/edge infrastructure. The main problem this dissertation aims to address is building an automatic system for microservices management that can be deployed on hybrid cloud/fog infrastructures.
Such an automatic system will allow edge-enabled applications to have an adaptive deployment at runtime, in response to variations in workloads and in the computational resources available. Towards this end, this work is a first step in integrating two existing projects that, combined, may support such an automatic system. One project does automatic management of microservices but uses only a heavyweight monitor, Prometheus, as a cloud monitor. The second project is a lightweight adaptive monitor. This thesis integrates the lightweight monitor into the automatic manager of microservices.
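As a rough illustration of the kind of decision an automatic manager can derive from monitoring data, here is a threshold-style replica calculation, similar in spirit to the Kubernetes Horizontal Pod Autoscaler formula. It is a sketch under assumed names, not code from either of the projects the thesis integrates:

```python
import math

def desired_replicas(current, avg_cpu, target=0.6, min_r=1, max_r=10):
    """Decide how many replicas of a microservice to run.

    avg_cpu is the average CPU utilisation (0.0-1.0) reported by the
    monitor across current replicas; target is the utilisation we aim
    for. The result is clamped to [min_r, max_r].
    """
    if avg_cpu <= 0:
        return current  # no load information; keep the current scale
    want = math.ceil(current * avg_cpu / target)
    return max(min_r, min(max_r, want))
```

A lightweight edge monitor matters here because this decision loop runs continuously: on constrained devices, the cost of gathering avg_cpu must stay far below the cost of the services being managed.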

    5G-PPP Technology Board: Delivery of 5G Services Indoors - the wireless wire challenge and solutions

    The 5G Public Private Partnership (5G PPP) has focused its research and innovation activities mainly on outdoor use cases, supporting the user and their applications while on the move. However, many use cases inherently apply in indoor environments, and their requirements are not always properly reflected by the requirements typical of outdoor applications. The best example of indoor applications can be found in the Industry 4.0 vertical, in which most described use cases occur in a manufacturing hall. Other environments exhibit similar characteristics, such as commercial spaces in offices, shopping malls and commercial buildings. We can find further similar environments in the media & entertainment sector, the culture sector with museums, and the transportation sector with metro tunnels. Finally, in the residential space we can observe a strong trend towards wireless connectivity of appliances and devices in the home. Some of these spaces exhibit very high requirements in terms of, among others, device density, high-accuracy localisation, reliability, latency, time sensitivity, coverage and service continuity. The delivery of 5G services to these spaces has to consider the specificities of indoor environments, in which the radio propagation characteristics are different and, in the case of deep indoor scenarios, external radio signals cannot penetrate building construction materials. Furthermore, these spaces are usually “polluted” by existing wireless technologies, causing a multitude of interference issues with 5G radio technologies. Nevertheless, there exist cases in which the co-existence of 5G New Radio and other radio technologies may be sensible, such as for offloading local traffic. In any case, the deployment of networks indoors is advised to be planned along existing infrastructure, like powerlines and shafts available for other utilities.
Finally, indoor environments expose administrative cross-domain issues, and in some cases the so-called non-public networks foreseen by 3GPP could be an attractive deployment model for the owner/tenant of a private space and for the mobile network operators serving the area. Technology-wise, there exist a number of solutions for indoor RAN deployment, ranging from small cell architectures to optical wireless/visible light communication and THz communication utilising reconfigurable intelligent surfaces. For service delivery, the concept of multi-access edge computing is well tailored to host the virtual network functions needed in the indoor environment, including but not limited to functions supporting localisation, security, load balancing, video optimisation and multi-source streaming. Measurements of key performance indicators in indoor environments indicate that, with proper planning and consideration of the environment characteristics, available solutions can deliver on the expectations. Measurements have been conducted of throughput and reliability in the mmWave and optical wireless communication cases, of electric and magnetic fields, of round-trip latency, as well as of high-accuracy positioning in a laboratory environment. Overall, the results so far are encouraging, but they also indicate that 5G and beyond networks must advance further in order to meet the demands of future emerging intelligent automation systems in the next 10 years. Highly advanced industrial environments present challenges for 5G specifications, spanning congestion, interference, security and safety concerns, high power consumption, restricted propagation and poor location accuracy within the radio and core backbone communication networks for massive IoT use cases, especially inside buildings. 6G and beyond-5G deployments for industrial networks will be increasingly dense, heterogeneous and dynamic, posing stricter performance requirements on the network.
The large volume of data generated by future connected devices will put a strain on networks. It is therefore fundamental to discriminate the value of information, to maximize its utility for end users under limited network resources.
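The propagation point above can be made concrete with the standard free-space path loss formula, which already shows why mmWave bands need dedicated indoor deployment. This is a back-of-the-envelope sketch; real indoor planning adds wall-penetration and multipath terms on top of it:

```python
import math

def fspl_db(distance_m, freq_ghz):
    """Free-space path loss in dB for a distance in metres and a
    carrier frequency in GHz:
        FSPL = 92.45 + 20*log10(d_km) + 20*log10(f_GHz)
    """
    return 92.45 + 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_ghz)

# Over the same 10 m indoor link, a 28 GHz mmWave carrier loses about
# 18 dB more than a 3.5 GHz mid-band carrier, before any wall or
# furniture attenuation is added.
print(round(fspl_db(10, 28.0), 1))  # ~81.4 dB
print(round(fspl_db(10, 3.5), 1))   # ~63.3 dB
```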

    A Pattern Language for Designing Application-Level Communication Protocols and the Improvement of Computer Science Education through Cloud Computing

    Networking protocols have been developed throughout time following layered architectures such as the Open Systems Interconnection model and the Internet model. These protocols are grouped in the Internet protocol suite. Most developers do not deal with low-level protocols; instead, they design application-level protocols on top of the low-level ones. Although each application-level protocol is different, there is commonality among them, and developers can apply lessons learned from one protocol to the design of new ones. Design patterns can help by gathering and sharing proven, reusable solutions to common, recurring design problems. The Application-level Communication Protocols Design Patterns language captures this knowledge about application-level protocol design, so developers can create better, more fitting protocols based on these common and well-proven solutions. Another aspect of contemporary development techniques is the need to distribute software artifacts. Most development companies have started using Cloud Computing services to meet this need; both public and private clouds are widely used. Future developers need to manage this technology infrastructure, software, and platform as services. These two aspects, communication protocol design and cloud computing, represent an opportunity to contribute to the software development community and to the software engineering education curriculum. The Application-level Communication Protocols Design Patterns language aims to help solve communication software design problems. The use of cloud computing in programming assignments aims to positively influence the analysis-to-reuse skills of computer science students.
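One recurring problem any such pattern language must address is message framing over a stream transport, where a length prefix is among the most common solutions. A minimal sketch of that idea — illustrative only, not a pattern taken from the language itself:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Length-prefix framing: 4-byte big-endian length, then payload."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buffer: bytes):
    """Extract complete messages from a receive buffer.

    Returns (messages, leftover): leftover holds the bytes of any
    partially received frame, to be prepended to the next read.
    """
    msgs = []
    while len(buffer) >= 4:
        (n,) = struct.unpack(">I", buffer[:4])
        if len(buffer) < 4 + n:
            break  # frame not fully received yet
        msgs.append(buffer[4:4 + n])
        buffer = buffer[4 + n:]
    return msgs, buffer
```

The design choice captured here is that TCP delivers a byte stream, not messages, so the application-level protocol must reintroduce message boundaries itself.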