255 research outputs found

    Efficient Batch Update of Unique Identifiers in a Distributed Hash Table for Resources in a Mobile Host

    Resources in a distributed system can be identified using identifiers based on random numbers. When using a distributed hash table (DHT) to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires updating all of the associated hash table entries whenever its network address changes. We propose an alternative approach in which we store a host identifier in the entry associated with a resource identifier, and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT. Comment: To be presented at the 2010 International Workshop on Cloud Computing, Applications and Technologies.
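
    To make the indirection concrete, here is a minimal Python sketch of the two-level scheme the abstract describes. The `put`/`get` DHT interface and all names are hypothetical (this is not OpenDHT's API): an address change rewrites one host entry instead of one entry per resource, at the cost of one extra lookup per resolution.

    ```python
    class IndirectDHT:
        """Sketch of the indirection scheme: resource entries store a stable
        host identifier; a single host entry stores the mutable address."""

        def __init__(self, dht):
            self.dht = dht  # hypothetical overlay exposing put(key, value) / get(key)

        def register_resource(self, resource_id, host_id):
            # The resource entry points at the host identifier, not the address.
            self.dht.put(resource_id, host_id)

        def update_host_address(self, host_id, address):
            # A mobile host that changes networks rewrites one entry,
            # regardless of how many resources it holds.
            self.dht.put(host_id, address)

        def resolve(self, resource_id):
            # Costs one extra lookup versus storing the address directly.
            host_id = self.dht.get(resource_id)
            return self.dht.get(host_id)
    ```

    The trade-off the paper investigates falls directly out of this sketch: updates go from one DHT write per resource to a single write, while every resolution pays a second round trip.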

    Secure Geographic Routing in Ad Hoc and Wireless Sensor Networks

    Security in sensor networks is one of the most relevant research topics for resource-constrained wireless devices and networks. Ad hoc and wireless sensor networks (WSNs) are highly susceptible to a variety of attacks due to the limited resources of their nodes. In this paper, we propose innovative and lightweight localization techniques that allow for intrusion identification and isolation schemes and provide accurate location information. This information is used by our routing protocol, which additionally incorporates a distributed trust model to prevent several routing attacks on the network. We finally evaluate our algorithms for accurate localization and for secure routing, which have been implemented and tested in real ad hoc and wireless sensor networks.
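
    The abstract does not spell out the protocol, but the general pattern it describes, greedy geographic forwarding gated by a distributed trust score, can be sketched as follows (all names and the 0.5 threshold are illustrative assumptions, not the paper's parameters):

    ```python
    import math

    def next_hop(my_pos, neighbors, dest, trust, min_trust=0.5):
        """Greedy geographic forwarding with a trust filter. `neighbors`
        maps node id -> (x, y) position; `trust` maps node id -> score in
        [0, 1]. Returns the trusted neighbor closest to `dest` that makes
        forward progress, or None (trigger recovery / isolate the region)."""
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        trusted = [n for n in neighbors if trust.get(n, 0.0) >= min_trust]
        if not trusted:
            return None
        best = min(trusted, key=lambda n: dist(neighbors[n], dest))
        # Only forward if the chosen neighbor actually gets closer to dest.
        return best if dist(neighbors[best], dest) < dist(my_pos, dest) else None
    ```

    Gating the candidate set on trust before the greedy distance test is what lets such schemes route around nodes flagged by the intrusion-identification layer.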

    Architectures for Future Media Internet

    Among the major reasons for the success of the Internet have been the simple networking architecture and the IP interoperation layer. However, the traffic model has recently changed: more and more applications (e.g., peer-to-peer, content delivery networks) target the content that they deliver rather than the addresses of the servers that (originally) published or hosted that content. This trend has motivated a number of content-oriented networking studies. In this paper we summarize some of the most important approaches.

    Towards Understanding First-Party Cookie Tracking in the Field

    Third-party tracking is a common and broadly used technique on the Web. Different defense mechanisms have emerged to counter these practices (e.g., browser vendors that ban all third-party cookies). However, these countermeasures only target third-party trackers and ignore the first party, because the narrative is that such monitoring is mostly used to improve the utilized service (e.g., analytics services). In this paper, we present a large-scale measurement study that analyzes tracking performed by the first party but utilized by a third party to circumvent standard tracking-prevention techniques. We visit the top 15,000 websites to analyze first-party cookies used to track users and a technique called “DNS CNAME cloaking”, which can be used by a third party to place first-party cookies. Using this data, we show that 76% of sites effectively utilize such tracking techniques. In a long-running analysis, we show that the usage of such cookies increased by more than 50% over 2021.
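
    As a rough illustration of how CNAME cloaking can be detected in the field, here is a sketch using dnspython: follow the CNAME chain of a first-party subdomain and check whether it ends on a known tracker domain. The tracker blocklist and domain names are hypothetical, and the paper's actual methodology may differ.

    ```python
    import dns.resolver  # pip install dnspython

    def cname_chain(name, max_depth=10):
        """Follow CNAME records from a (sub)domain, returning the chain."""
        chain = []
        try:
            for _ in range(max_depth):  # guard against loops
                answer = dns.resolver.resolve(name, "CNAME")
                name = str(answer[0].target).rstrip(".")
                chain.append(name)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            pass  # reached the end of the chain
        return chain

    def is_cloaked(subdomain, site_domain, tracker_domains):
        """Flag CNAME cloaking: the chain leaves the site's own domain
        and lands on a known tracker."""
        return any(
            target == t or target.endswith("." + t)
            for target in cname_chain(subdomain)
            if not target.endswith(site_domain)
            for t in tracker_domains
        )
    ```

    A call like `is_cloaked("metrics.example.com", "example.com", {"trackerprovider.net"})` (all names made up) would flag a subdomain that quietly delegates to a tracker while its cookies remain first-party to the browser.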

    DNS to the rescue: Discerning Content and Services in a Tangled Web

    A careful perusal of the Internet's evolution reveals two major trends: the explosion of cloud-based services and of video streaming applications. In both cases, the owner (e.g., CNN, YouTube, or Zynga) of the content and the organization serving it (e.g., Akamai, Limelight, or Amazon EC2) are decoupled, making it harder to understand the association between the content, the owner, and the host where the content resides. This has created a tangled world wide web that is very hard to unwind, impairing ISPs' and network administrators' capabilities to control the traffic flowing on the network. In this paper, we present DN-Hunter, a system that leverages the information provided by DNS traffic to discern the tangle. Parsing DNS queries, DN-Hunter tags traffic flows with the associated domain name. This association has several applications and reveals a large amount of useful information: (i) it provides fine-grained traffic visibility even when the traffic is encrypted (i.e., TLS/SSL flows), thus enabling more effective policy controls; (ii) it identifies flows even before the flows begin, thus providing superior network management capabilities to administrators; (iii) it understands and tracks (over time) the different CDNs and cloud providers that host content for a particular resource; (iv) it discerns all the services/content hosted by a given CDN or cloud provider in a particular geography and time; and (v) it provides insights into all applications/services running on any given layer-4 port number. We conduct extensive experimental analysis and show results from real traffic traces, ranging from FTTH to 4G ISPs, that support our hypothesis. Simply put, the information provided by DNS traffic is one of the key components required to unveil the tangled web and bring the capability of controlling the traffic back to the network carrier.
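
    A minimal sketch of the core idea, assuming DNS responses and flow starts can be observed on the same link (the class and its interface are illustrative, not the paper's implementation): remember which domain each client resolved to which server IP, then label later flows by that mapping.

    ```python
    class FlowTagger:
        """Tag traffic flows with the domain name the client resolved,
        even when the payload itself is encrypted."""

        def __init__(self):
            # (client_ip, server_ip) -> most recent domain resolved to server_ip
            self.table = {}

        def on_dns_response(self, client_ip, domain, answer_ips):
            # Called for every DNS A/AAAA response observed on the wire.
            for ip in answer_ips:
                self.table[(client_ip, ip)] = domain

        def tag_flow(self, client_ip, server_ip):
            # A flow can be tagged as soon as its first packet appears,
            # because the DNS response necessarily preceded the connection.
            return self.table.get((client_ip, server_ip), "unknown")
    ```

    This also explains property (ii) above: the tag exists before the flow does, since the lookup happens at DNS-response time rather than by payload inspection.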

    Semantic-free referencing in linked systems

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 43-45). The Web relies on the Domain Name System (DNS) to resolve the hostname portion of URLs into IP addresses. This marriage of convenience enabled the Web's meteoric rise, but the resulting entanglement is now hindering both infrastructures: the Web is overly constrained by the limitations of DNS, and DNS is unduly burdened by the demands of the Web. There has been much commentary on this sad state of affairs, but dissolving the ill-fated union between DNS and the Web requires a new way to resolve Web references. To this end, this thesis describes the design and implementation of Semantic Free Referencing (SFR), a reference resolution infrastructure based on distributed hash tables (DHTs). By Michael Walfish. S.M.
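
    A toy sketch of the SFR resolution path, assuming a generic get/put DHT. The tag derivation shown (a SHA-1 hash over a key and a salt) illustrates what "semantic-free" means, a flat identifier carrying no human-readable meaning, rather than reproducing the thesis's exact scheme.

    ```python
    import hashlib

    def sfr_tag(public_key: bytes, salt: bytes) -> str:
        # Flat, human-opaque reference; the exact derivation is illustrative.
        return hashlib.sha1(public_key + salt).hexdigest()

    def resolve(dht, tag):
        """Resolve a semantic-free tag to its object record (e.g., the
        object's current URL or IP and port) via any get/put DHT overlay."""
        record = dht.get(tag)
        if record is None:
            raise KeyError("unresolvable reference: " + tag)
        return record
    ```

    Because the tag encodes no hostname, the record it maps to can be updated freely when content moves, which is exactly the decoupling from DNS that the thesis argues for.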

    Dial N for NXDomain: The Scale, Origin, and Security Implications of DNS Queries to Non-Existent Domains

    Non-Existent Domain (NXDomain) is a type of Domain Name System (DNS) error response indicating that the queried domain name does not exist and cannot be resolved. Unfortunately, little research has focused on understanding why and how NXDomain responses are generated, utilized, and exploited. In this paper, we conduct the first comprehensive and systematic study of NXDomain by investigating its scale, origin, and security implications. Utilizing a large-scale passive DNS database, we identify 146,363,745,785 NXDomains queried by DNS users between 2014 and 2022. Within these 146 billion NXDomains, 91 million hold historic WHOIS records, of which 5.3 million are identified as malicious domains, including about 2.4 million blocklisted domains, 2.8 million DGA (Domain Generation Algorithm) based domains, and 90 thousand squatting domains targeting popular domains. To gain more insight into the usage patterns and security risks of NXDomains, we register 19 carefully selected NXDomains in the DNS database, each of which received more than ten thousand DNS queries per month. We then deploy a honeypot for our registered domains and collect 5,925,311 incoming queries over 6 months, from which we discover that 5,186,858 and 505,238 queries are generated by automated processes and web crawlers, respectively. Finally, we perform extensive traffic analysis on the collected data and reveal that NXDomains can be misused for various purposes, including botnet takeover, malicious file injection, and residue trust exploitation.
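
    For context, an NXDomain answer corresponds to DNS RCODE 3. A minimal active check with dnspython (illustrative only; the paper works from a passive DNS database, not live probing) looks like this:

    ```python
    import dns.resolver  # pip install dnspython

    def is_nxdomain(name):
        """True if resolving `name` yields NXDOMAIN (RCODE 3), i.e. the
        domain does not exist; False if it resolves, or exists but merely
        lacks A records."""
        try:
            dns.resolver.resolve(name, "A")
            return False
        except dns.resolver.NXDOMAIN:
            return True
        except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            return False
    ```

    The distinction drawn in the comment matters for studies like this one: a name with no A record is still registered, whereas an NXDomain is unclaimed and can be registered by anyone, which is what enables the takeover and residue-trust abuses the paper measures.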

    Policy-agnostic programming on the client-side

    Browser security has become a major concern, especially as web pages have grown more complex. These web applications handle a lot of information, including sensitive data that may be vulnerable to attacks like data exfiltration and cross-site scripting (XSS). Most modern browsers have security mechanisms in place to prevent such attacks, but they still fall short in preventing more advanced attacks like evolved variants of data exfiltration. Moreover, there is no standard that is followed to implement security in the browser. A lot of research has been done in the field of information flow security that could prove helpful in solving the problem of securing the client side. Policy-agnostic programming is a programming paradigm that aims to make the implementation of information flow security in real-world systems more flexible. In this paper, we explore the use of policy-agnostic programming on the client side and how it helps prevent common client-side attacks. We verify our results through a client-side salary management application. We show a possible attack and how our solution would prevent such an attack.
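
    Policy-agnostic programming is commonly built on faceted execution. As a rough, self-contained illustration of that idea (not the authors' implementation), a faceted value pairs a private and a public view of the same datum and reveals one or the other depending on the observer's privilege; the salary example below is hypothetical, echoing the paper's case study.

    ```python
    class Faceted:
        """Toy faceted value: the policy, not the application code,
        decides which facet an observer sees."""

        def __init__(self, policy, private, public):
            self.policy = policy    # predicate over the observing context
            self.private = private  # facet shown to authorized observers
            self.public = public    # facet everyone else sees

        def observe(self, context):
            return self.private if self.policy(context) else self.public

    # Hypothetical salary example mirroring the paper's case study:
    salary = Faceted(lambda ctx: ctx.get("role") == "hr",
                     private=95_000, public="<redacted>")
    print(salary.observe({"role": "hr"}))      # -> 95000
    print(salary.observe({"role": "intern"}))  # -> <redacted>
    ```

    The point of the paradigm is that application code manipulates `salary` without mentioning the policy at each use site; an exfiltration attempt running in an unprivileged context can only ever observe the public facet.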

    Understanding malware autostart techniques with web data extraction

    The purpose of this study was to investigate automatic execution methods in Windows operating systems, as used and abused by malware. Using data extracted from the Web, information on over 10,000 malware specimens was collected and analyzed, and trends were discovered and presented. Correlations were found between these records and a list of known autostart locations for various versions of Windows. All programming was written in PHP, which proved very effective. A full breakdown of the popularity of each method per year was constructed. It was found that the popularity of many methods has varied greatly over the last decade, mostly following operating system releases and security improvements, but with some frightening exceptions.
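
    The study's PHP code is not shown; as a language-neutral illustration of the correlation step, here is a Python sketch that tallies, per year, how often each known autostart location appears in extracted malware records. The location list is a small hypothetical sample and the record format is assumed.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical sample of known Windows autostart locations; the study
    # correlated its records against a much fuller list.
    AUTOSTART_LOCATIONS = {
        r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run",
        r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
        r"HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce",
    }

    def popularity_by_year(records):
        """records: iterable of (year, locations) pairs, one per specimen.
        Returns {year: Counter} of known autostart methods seen that year."""
        by_year = defaultdict(Counter)
        for year, locations in records:
            for loc in locations:
                if loc in AUTOSTART_LOCATIONS:
                    by_year[year][loc] += 1
        return by_year
    ```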