
    A Survey of Positioning Systems Using Visible LED Lights

    Since the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which utilizes indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) has been deemed a promising candidate for heterogeneous wireless networks that may collaborate with radio frequency (RF) wireless networks. In particular, light fidelity (LiFi) has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of this system are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. The paper undertakes a thorough investigation of current LED-based indoor positioning systems and compares their performance along several dimensions, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems that combine VLC with other systems (e.g., inertial sensors and RF systems), and it reviews and classifies outdoor VLC positioning applications for the first time. Finally, the paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems.
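    Many of the LED positioning systems such surveys classify build on received-signal-strength (RSS) ranging followed by trilateration. The sketch below illustrates that generic pipeline, assuming a Lambertian line-of-sight channel with downward-facing LEDs and an upward-facing photodiode; the Lambertian order, detector area, and ceiling height are illustrative constants, not values from the paper.

```python
# Hedged sketch of RSS ranging + trilateration for VLC positioning.
# All constants below are assumptions for illustration only.
import numpy as np

M = 1.0      # Lambertian order (assumed)
AREA = 1e-4  # photodiode area in m^2 (assumed)
H = 2.5      # vertical LED-to-receiver separation in m (assumed)

def received_power(led_xy, rx_xy, p_tx=1.0):
    """Lambertian LOS channel: LED faces down, receiver faces up."""
    d = np.sqrt(np.sum((led_xy - rx_xy) ** 2) + H**2)  # 3-D distance
    cos_angle = H / d            # irradiance angle equals incidence angle here
    return p_tx * (M + 1) * AREA / (2 * np.pi * d**2) * cos_angle ** (M + 1)

def rss_to_horizontal_distance(p_rx, p_tx=1.0):
    """Invert the channel model for the 3-D distance, then project to the floor."""
    c = p_tx * (M + 1) * AREA * H ** (M + 1) / (2 * np.pi)
    d = (c / p_rx) ** (1.0 / (M + 3))
    return np.sqrt(max(d**2 - H**2, 0.0))

def trilaterate(anchors, r):
    """Linear least squares on the differenced circle equations |x - a_i|^2 = r_i^2."""
    a0, r0 = anchors[0], r[0]
    A = 2.0 * (anchors[1:] - a0)
    b = r0**2 - r[1:]**2 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0**2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])  # ceiling grid
rx = np.array([1.2, 2.3])                                          # true position
p = np.array([received_power(led, rx) for led in leds])
r = np.array([rss_to_horizontal_distance(pi) for pi in p])
print(trilaterate(leds, r))  # recovers ~[1.2, 2.3] under this noiseless model
```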

    Fingerprinting Internet DNS Amplification DDoS Activities

    This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) activities using darknets, this work shows that DDoS activities can be extracted without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities, such as detection period, attack duration, intensity, packet size, rate, and geolocation, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space over a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history leads to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities. (Comment: 5 pages, 2 figures)
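    The detection step the abstract alludes to ("certain DDoS parameters") can be pictured as thresholding per-source query behavior observed in the dark space: open-resolver scanning that precedes or accompanies amplification attacks shows up as one source querying UDP/53 across many dark addresses at a sustained rate. The sketch below is a hedged illustration of that idea; the thresholds, record format, and field names are assumptions, not the paper's actual heuristics.

```python
# Hedged sketch: flag DNS-amplification-related scanning in darknet records.
from collections import defaultdict

MIN_TARGETS = 100    # distinct dark IPs queried (assumed threshold)
MIN_RATE_PPS = 10.0  # sustained query rate in packets/s (assumed threshold)

def fingerprint_dns_ddos(records):
    """records: iterable of (ts, src_ip, dst_ip, dst_port, proto) tuples."""
    by_src = defaultdict(lambda: {"ts": [], "targets": set()})
    for ts, src, dst, dport, proto in records:
        if proto == "udp" and dport == 53:   # DNS queries into dark space
            s = by_src[src]
            s["ts"].append(ts)
            s["targets"].add(dst)
    events = []
    for src, s in by_src.items():
        first, last = min(s["ts"]), max(s["ts"])
        duration = max(last - first, 1e-9)
        rate = len(s["ts"]) / duration
        if len(s["targets"]) >= MIN_TARGETS and rate >= MIN_RATE_PPS:
            # Summarize the intelligence the paper extracts per activity:
            # detection period, duration, intensity, and rate.
            events.append({"src": src, "start": first, "duration_s": duration,
                           "rate_pps": rate, "targets": len(s["targets"])})
    return events
```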

    When Things Matter: A Data-Centric View of the Internet of Things

    With recent advances in radio-frequency identification (RFID), low-cost wireless sensor devices, and Web technologies, the Internet of Things (IoT) approach has gained momentum in connecting everyday objects to the Internet and facilitating machine-to-human and machine-to-machine communication with the physical world. While IoT offers the capability to connect and integrate both digital and physical entities, enabling a whole new class of applications and services, several significant challenges need to be addressed before these applications and services can be fully realized. A fundamental challenge centers around managing IoT data, typically produced in dynamic and volatile environments, which is not only extremely large in scale and volume but also noisy and continuous. This article surveys the main techniques and state-of-the-art research efforts in IoT from a data-centric perspective, including data stream processing, data storage models, complex event processing, and searching in IoT. Open research issues for IoT data management are also discussed.
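    As a concrete instance of the stream-processing and complex-event-processing techniques the survey covers, the sketch below shows tumbling-window aggregation over a noisy sensor stream with a simple event rule on top. The window size, threshold, and event schema are illustrative assumptions, not drawn from any surveyed system.

```python
# Hedged sketch: tumbling-window aggregation + a CEP-style rule over it.
from statistics import mean

def tumbling_windows(stream, window_s):
    """Group (timestamp, value) readings, assumed in time order, into
    fixed non-overlapping windows of window_s seconds."""
    bucket, bucket_end = [], None
    for ts, value in stream:
        if bucket_end is None:
            bucket_end = ts + window_s
        while ts >= bucket_end:          # emit finished windows, skip empty gaps
            if bucket:
                yield bucket_end - window_s, bucket
            bucket, bucket_end = [], bucket_end + window_s
        bucket.append(value)
    if bucket:                           # flush the last partial window
        yield bucket_end - window_s, bucket

def overheat_events(stream, window_s=60, limit=30.0):
    """CEP-style rule: fire when a window's mean temperature exceeds `limit`."""
    for start, values in tumbling_windows(stream, window_s):
        avg = mean(values)
        if avg > limit:
            yield {"window_start": start, "mean_temp": avg, "n": len(values)}

# Usage: a synthetic stream of one reading every 5 s, slowly heating up.
readings = ((t, 20.0 + t / 10.0) for t in range(0, 300, 5))
for event in overheat_events(readings):
    print(event)
```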

    Improving the accuracy of spoofed traffic inference in inter-domain traffic

    Ascertaining that a network will forward spoofed traffic usually requires an active probing vantage point in that network, effectively preventing a comprehensive view of this global Internet vulnerability. We argue that broader visibility into the spoofing problem may lie in the capability to infer lack of Source Address Validation (SAV) compliance from large, heavily aggregated Internet traffic data, such as traffic observable at Internet Exchange Points (IXPs). The key idea is to use IXPs as observatories to detect spoofed packets, by leveraging Autonomous System (AS) topology knowledge extracted from Border Gateway Protocol (BGP) data to infer which source addresses should legitimately appear across parts of the IXP switch fabric. In this thesis, we demonstrate that the existing literature does not capture several fundamental challenges to this approach, including noise in BGP data sources, heuristic AS relationship inference, and idiosyncrasies in IXP interconnectivity fabrics. We propose Spoofer-IX, a novel methodology to navigate these challenges, leveraging Customer Cone semantics of AS relationships to guide precise classification of inter-domain traffic as In-cone, Out-of-cone (spoofed), Unverifiable, Bogon, and Unassigned. We apply our methodology in an extensive analysis of real traffic data from two distinct IXPs in Brazil, a mid-size and a large-size infrastructure. In the mid-size IXP, with more than 200 members, we find an upper-bound volume of Out-of-cone traffic more than an order of magnitude less than the previous method inferred on the same data, revealing the practical importance of Customer Cone semantics in such analysis. We also found no significant improvement in the deployment of SAV in networks using the mid-size IXP between 2017 and 2019. In hopes that our methods and tools generalize to use by other IXPs that want to prevent their infrastructure from being used to launch spoofed-source DoS attacks, we explore the feasibility of scaling the system to larger and more diverse IXP infrastructures. To promote this goal, and broad replicability of our results, we make the source code of Spoofer-IX publicly available. This thesis illustrates the subtleties of scientific assessments of operational Internet infrastructure, and the need for a community focus on reproducing and repeating previous methods.
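    For readers unfamiliar with the Customer Cone construction that drives the classification, a toy sketch follows: an AS's cone is itself plus every AS reachable by repeatedly following provider-to-customer edges, and traffic a member hands to the IXP is In-cone only if its source address originates inside that member's cone. This is a simplified illustration, not the released Spoofer-IX code; the Bogon and Unassigned classes require prefix-level data and are omitted here.

```python
# Hedged sketch of Customer Cone computation and traffic classification.
from collections import defaultdict

def customer_cone(asn, p2c_edges):
    """Cone = the AS itself plus all ASes reachable via provider->customer edges."""
    children = defaultdict(set)
    for provider, customer in p2c_edges:
        children[provider].add(customer)
    cone, stack = {asn}, [asn]
    while stack:
        for cust in children[stack.pop()]:
            if cust not in cone:
                cone.add(cust)
                stack.append(cust)
    return cone

def classify_source(src_asn, member_asn, p2c_edges):
    """Reduced Spoofer-IX-style verdict for one packet's origin AS (illustrative)."""
    if src_asn is None:
        return "Unverifiable"    # no origin AS known for the source prefix
    if src_asn in customer_cone(member_asn, p2c_edges):
        return "In-cone"
    return "Out-of-cone"         # candidate spoofed traffic

# Toy topology: AS100 provides transit to AS200, which provides transit to AS300.
edges = [(100, 200), (200, 300), (400, 500)]
print(classify_source(300, 100, edges))  # In-cone
print(classify_source(500, 100, edges))  # Out-of-cone
```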

    Advanced Network Inference Techniques Based on Network Protocol Stack Information Leaks

    Side channels are channels of implicit information flow that can be used to find out information that is not allowed to flow through explicit channels. This thesis focuses on network side channels, where information flow occurs in the TCP/IP network stack implementations of operating systems. I describe three new types of idle scans: a SYN backlog idle scan, a RST rate-limit idle scan, and a hybrid idle scan. Idle scans are special types of side channels that are designed to help someone performing a network measurement (typically an attacker or a researcher) infer something about the network that they are not otherwise able to see from their vantage point. The thesis that this dissertation tests is this: because modern network stacks have shared resources, there is a wealth of information that can be inferred off-path by both attackers and Internet measurement researchers. With respect to attackers, no matter how carefully the security model is designed, the non-interference property is unlikely to hold; i.e., an attacker can easily find side channels of information flow to learn about the network from the perspective of the system remotely. One suggestion is that trust relationships for using resources be made explicit all the way down to the IP layer, with the goal of dividing resources and removing sharedness to prevent advanced network reconnaissance. With respect to Internet measurement researchers, in this dissertation I show that the information flow is rich enough to test connectivity between two arbitrary hosts on the Internet and even infer in which direction any blocking is occurring. To explore this thesis, I present three research efforts. First, I modeled a typical TCP/IP network stack. The building process for this modeling effort led to the discovery of two new idle scans: a SYN backlog idle scan and a RST rate-limit idle scan. The SYN backlog scan is particularly interesting because it does not require whoever is performing the measurements (i.e., the attacker or researcher) to send any packets to the victim (or target) at all. Second, I developed a hybrid idle scan that combines elements of the SYN backlog idle scan with Antirez's original IPID-based idle scan. This scan enables researchers to test whether two arbitrary machines in the world are able to communicate via TCP/IP and, if not, in which direction the communication is being prevented. To test the efficacy of the hybrid idle scan, I tested three different kinds of servers (Tor bridges, Tor directory servers, and normal web servers) both inside and outside China. The results were congruent with published understandings of global Internet censorship, demonstrating that the hybrid idle scan is effective. Third, I applied the hybrid idle scan to the difficult problem of characterizing inconsistencies in the Great Firewall of China (GFW), the largest firewall in the world. This effort resolved many open questions about the GFW. The result of my dissertation work is an effective method for measuring Internet censorship around the world, without requiring any kind of distributed measurement platform or access to any of the machines that connectivity is tested to or from.
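    The IPID side channel that both Antirez's original scan and the hybrid idle scan build on can be stated compactly: an otherwise-idle host with a globally incrementing IP ID counter reveals, through the gap between two direct probes, how many packets it sent to third parties in between. The sketch below captures only that inference arithmetic, under the stated idle-zombie assumption; the probing itself (and anything resembling an attack tool) is deliberately omitted.

```python
# Conceptual sketch of the IPID side-channel inference (measurement logic only).

def ipid_delta(before, after, modulus=65536):
    """IPID counters wrap at 16 bits, so compare modulo 2**16."""
    return (after - before) % modulus

def infer_port_state(ipid_before, ipid_after, probes_sent):
    """
    Classic idle-scan reasoning, assuming an otherwise-idle 'zombie' host:
      - open target port:   the zombie received SYN/ACKs and replied with RSTs,
        so its IPID advanced by roughly `probes_sent` extra increments;
      - closed target port: the zombie received RSTs and stayed silent,
        so its IPID advanced only for our own direct probes.
    """
    delta = ipid_delta(ipid_before, ipid_after)
    own_traffic = 1  # the direct probe that elicited ipid_after
    if delta >= own_traffic + probes_sent:
        return "open (or zombie not idle)"
    return "closed|filtered"

print(infer_port_state(3100, 3105, probes_sent=4))  # -> open (or zombie not idle)
print(infer_port_state(3100, 3101, probes_sent=4))  # -> closed|filtered
```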

    Detection of Sparse Anomalies in High-Dimensional Network Telescope Signals

    Network operators and system administrators are increasingly overwhelmed with incessant cyber-security threats, ranging from malicious network reconnaissance to attacks such as distributed denial of service and data breaches. A large number of these attacks could be prevented if network operators were better equipped with threat intelligence that would allow them to block or throttle nefarious scanning activities. Network telescopes, or "darknets," offer a unique window into observing Internet-wide scanners and other malicious entities, and they could offer early warning signals to operators that would be critical for infrastructure protection and/or attack mitigation. A network telescope consists of unused or "dark" IP space that serves no users; it passively observes any Internet traffic destined to the "telescope sensor" in an attempt to record ubiquitous network scanners, malware that forages for vulnerable devices, and other dubious activities. Hence, monitoring network telescopes for timely detection of coordinated and heavy scanning activities is an important, albeit challenging, task. The challenges mainly arise from the non-stationary and dynamic nature of Internet traffic and, more importantly, from the fact that one needs to monitor high-dimensional signals (e.g., all TCP/UDP ports) to search for "sparse" anomalies. We propose statistical methods to address both challenges in an efficient and "online" manner; our work is validated both with synthetic data and with real-world data from a large network telescope.
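    The paper's concrete estimators are not reproduced here, but the flavor of "online detection of sparse anomalies in a high-dimensional count signal" can be suggested with a per-port EWMA baseline and a deviation threshold. Everything below (parameter values, the masking rule, the class name) is an assumed stand-in, not the authors' method.

```python
# Hedged sketch: online per-port baselines with sparse-deviation flagging.
import numpy as np

class SparseAnomalyMonitor:
    def __init__(self, dims, alpha=0.05, z_thresh=6.0, eps=1e-8):
        self.mean = np.zeros(dims)   # EWMA of per-port packet counts
        self.var = np.ones(dims)     # EWMA of squared deviations
        self.alpha, self.z_thresh, self.eps = alpha, z_thresh, eps

    def update(self, x):
        """x: vector of per-port counts for one time bin; returns anomalous ports."""
        z = (x - self.mean) / np.sqrt(self.var + self.eps)
        anomalous = np.flatnonzero(np.abs(z) > self.z_thresh)
        # Update baselines only on non-anomalous coordinates so a burst
        # does not poison its own baseline.
        mask = np.ones(len(x), dtype=bool)
        mask[anomalous] = False
        d = x - self.mean
        self.mean[mask] += self.alpha * d[mask]
        self.var[mask] = (1 - self.alpha) * self.var[mask] + self.alpha * d[mask] ** 2
        return anomalous

# Usage: 65536 ports, one count vector per time bin; a sudden spike on a
# handful of ports is reported while the bulk of the signal stays quiet.
mon = SparseAnomalyMonitor(dims=65536)
for _ in range(50):                                   # warm up on background noise
    mon.update(np.random.poisson(5.0, 65536).astype(float))
burst = np.random.poisson(5.0, 65536).astype(float)
burst[[23, 445, 3389]] += 500.0                       # sparse anomaly
print(mon.update(burst))                              # -> [23, 445, 3389]
```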