Remedying Security Concerns at an Internet Scale
The state of security across the Internet is poor, and it has been so since the advent of the modern Internet. While the research community has made tremendous progress over the years in learning how to design and build secure computer systems, network protocols, and algorithms, we are far from a world where we can truly trust the security of deployed Internet systems. In reality, we may never reach such a world. Security concerns continue to be identified at scale throughout the software ecosystem, with thousands of vulnerabilities discovered each year. Meanwhile, attacks have become ever more frequent and consequential. As Internet systems will inevitably continue to be affected by newly found security concerns, the research community must develop more effective ways to remedy these issues. To that end, in this dissertation, we conduct extensive empirical measurements to understand how remediation occurs in practice for Internet systems, and explore methods for spurring improved remediation behavior. This dissertation provides a treatment of the complete remediation life cycle, investigating the creation, dissemination, and deployment of remedies. We start by focusing on security patches that address vulnerabilities, and analyze at scale their creation process, the characteristics of the resulting fixes, and how these impact vulnerability remediation. We then investigate and systematize how administrators of Internet systems deploy software updates that patch vulnerabilities across the many machines they manage on behalf of organizations. Finally, we conduct the first systematic exploration of Internet-scale outreach efforts to disseminate information about security concerns and their remedies to system administrators, with the aim of driving their remediation decisions.
Our results show that such outreach campaigns can effectively galvanize positive reactions. Improving remediation, particularly at scale, is challenging, as the problem space exhibits many dimensions beyond traditional technical considerations, including human, social, organizational, economic, and policy facets. To make meaningful progress, this work uses a diversity of empirical methods, from software data mining to user studies to Internet-wide network measurements, to systematically collect and evaluate large-scale datasets. Ultimately, this dissertation establishes broad empirical grounding on security remediation in practice today, as well as new approaches for improved remediation at an Internet scale.
Alternative revenue sources for Internet service providers
The Internet has evolved from a small research network into a large, globally interconnected network. The deregulation of the Internet attracted commercial entities that provide various network and application services for profit. While Internet Service Providers (ISPs) offer network connectivity services, Content Service Providers (CSPs) offer online content and application services. Further, the ISPs that provide transit services to other ISPs and CSPs are known as transit ISPs, while the ISPs that provide Internet connections to end users are known as access ISPs. Although it lacks a central regulatory body, the Internet is growing through complex economic cooperation between service providers that also compete with each other for revenues. Currently, CSPs derive high revenues from online advertising, which increase with content popularity. On the other hand, ISPs face low transit revenues, caused by persistent declines in per-unit traffic prices, and rising network costs fueled by increasing traffic volumes.
In this thesis, we analyze various approaches that ISPs can take to sustain their network infrastructures by earning extra revenues. First, we study the economics of traffic attraction by ISPs to boost transit revenues. This study demonstrates that traffic attraction, and the reaction to it, redistribute traffic on links between Autonomous Systems (ASes) and create camps of winning, losing, and neutral ASes with respect to changes in transit payments.
Despite various countermeasures by losing ASes, traffic attraction remains effective unless ASes from the winning camp cooperate with the losing ASes. While our study shows that traffic attraction has a solid potential to increase revenues for transit ISPs, this source of revenues might have negative reputational and legal consequences for the ISPs. Next, we look at hosting as an alternative source of revenues and examine the hosting of online content by transit ISPs. Using real Internet-scale measurements, this work reports a pervasive trend of content hosting throughout the transit hierarchy, validating hosting as a prominent source of revenues for transit ISPs. In our final work, we consider a model where access ISPs derive extra revenues from online advertisements (ads).
Our analysis demonstrates that the ad-based revenue model opens a significant revenue potential for access ISPs, suggesting its economic viability. This work has been supported by IMDEA Networks Institute. Doctoral Program in Telematics Engineering. Chair: Jordi Domingo-Pascual; Examiner: Víctor López Álvarez; Secretary: Alberto García Martínez.
DNS in Computer Forensics
The Domain Name Service (DNS) is a critical core component of the global Internet and integral to the majority of corporate intranets. It provides resolution services between the human-readable name-based system addresses and the machine-operable Internet Protocol (IP) based addresses required for creating network-level connections. Structured as a globally dispersed, resilient tree data structure, from the Global and Country Code Top Level Domains (gTLD/ccTLD) down to the individual site and system leaf nodes, it is highly resilient yet vulnerable to various attacks, exploits, and systematic failures.
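The name-to-address resolution described above can be exercised from any language's socket API. A minimal Python sketch (the helper name `resolve` is ours, not part of the text) queries the system's configured resolver:

```python
import socket

def resolve(name):
    """Resolve a hostname to its IPv4 addresses via the system's
    DNS resolver, which walks the global tree described above."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples;
    # sockaddr[0] is the IP address string.
    return sorted({info[4][0] for info in infos})
```

Forensic tooling typically goes a layer deeper, inspecting raw DNS messages and resolver caches rather than relying on the operating system's stub resolver.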
Monitoring Internet censorship: the case of UBICA
As a consequence of the recent debate about restrictions on access to content on the Internet, a strong motivation has arisen for censorship monitoring: an independent, publicly available, and global watch on Internet censorship activities is a necessary goal to pursue in order to safeguard citizens' right of access to information.
Several techniques to enforce censorship on the Internet are known in the literature, differing in their transparency towards the user, their selectivity in blocking specific resources or whole groups of services, and their collateral effects outside the administrative borders of their intended application.
Monitoring censorship is also complicated by the dynamic nature of multiple aspects of this phenomenon, the number and diversity of resources targeted by censorship, and its global scale.
This thesis analyzes the literature on Internet censorship and the available solutions for censorship detection, characterizing censorship enforcement techniques as well as censorship detection techniques and tools.
The available platforms and tools for censorship detection fall short of providing a comprehensive monitoring platform, one able to manage a diverse set of measurement vantage points and a reporting interface continuously updated with the results of automated censorship analysis.
The candidate proposes a design of such a platform, UBICA, along with a prototypical implementation whose effectiveness has been experimentally validated in global monitoring campaigns.
The results of the validation are discussed, confirming the effectiveness of the proposed design and suggesting future enhancements and research.
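One common detection primitive behind platforms of this kind is comparing DNS answers collected at a censored vantage point against a trusted control measurement. The sketch below is a deliberately simplified heuristic of ours, not UBICA's actual detection logic:

```python
def classify_dns(control_answers, probe_answers):
    """Crude DNS-tampering heuristic: compare answers seen by a probe
    against those from a trusted control vantage point."""
    control, probe = set(control_answers), set(probe_answers)
    if not probe:
        return "blocked"      # empty answer, possibly NXDOMAIN injection
    if control & probe:
        return "consistent"   # at least one address matches the control
    return "suspicious"       # disjoint answers may indicate redirection
```

Real platforms must additionally account for CDNs and geo-aware DNS, which legitimately return different addresses in different regions, so disjoint answer sets alone are not proof of censorship.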
Understanding Flaws in the Deployment and Implementation of Web Encryption
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that sufficient infrastructure is in place and the protocols have been "tested" (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations.
In this dissertation, in an effort to improve users' privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user's saved address and visited websites from Google and Bing, while Yahoo allows attackers to extract the contact list and send emails from the user's account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks.
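The cookies at risk in such attacks are those set without the Secure attribute, since browsers will transmit them over plain HTTP. A small sketch (the helper name `insecure_cookies` is ours) flags them using only the standard library:

```python
from http.cookies import SimpleCookie

def insecure_cookies(set_cookie_headers):
    """Return names of cookies lacking the Secure attribute; browsers
    will send these over plain HTTP, exposing them to network attackers."""
    exposed = []
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            if not morsel["secure"]:
                exposed.append(name)
    return exposed
```

Note that the Secure flag only controls transmission; a site with partially deployed HTTPS may still leak a Secure-flagged session if other, unflagged cookies grant access to account functionality, which is precisely the access-control imprecision the study documents.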
Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues, and argue that, as long as servers do not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage.
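HSTS works by having the server send a Strict-Transport-Security header that instructs the browser to use HTTPS for all future requests. A simplified parser of ours for that header (real implementations follow RFC 6797, which also allows quoted values and other corner cases) illustrates the deployment pitfalls, e.g. a header without a valid max-age is silently ignored:

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value (simplified).
    Returns None for a header without a valid max-age, which
    browsers treat as absent."""
    directives = {}
    for part in header.split(";"):
        part = part.strip().lower()
        if not part:
            continue
        key, _, value = part.partition("=")
        directives[key] = value
    if not directives.get("max-age", "").isdigit():
        return None
    return {
        "max_age": int(directives["max-age"]),
        "include_subdomains": "includesubdomains" in directives,
    }
```

Two of the deployment issues the study's category covers are visible here: omitting includeSubDomains leaves subdomains reachable over HTTP, and a malformed or zero max-age disables the protection entirely.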
Finally, as the security guarantees of TLS (and in turn HTTPS) critically depend on the correct validation of X.509 server certificates, we study hostname verification, a critical component in the certificate validation process. We develop HVLearn, a novel testing framework to verify the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. Using HVLearn, we found 8 unique violations of the RFC specifications. Several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
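To make the problem concrete, here is a deliberately simplified hostname matcher of ours (not HVLearn, and not any library's actual code): it is case-insensitive and allows a full-label '*' wildcard only in the leftmost position, in the spirit of RFC 6125. The many corner cases real implementations must handle are exactly where the violations such testing uncovers tend to live.

```python
def match_hostname(pattern, hostname):
    """Simplified check of an X.509 dNSName pattern against a hostname."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # A wildcard matches exactly one label, so label counts must agree,
    # and wildcards are rejected anywhere past the leftmost character.
    if len(p_labels) != len(h_labels) or "*" in pattern[1:]:
        return False
    if p_labels[0] != "*" and p_labels[0] != h_labels[0]:
        return False
    return p_labels[1:] == h_labels[1:]
```

Classic implementation bugs include letting '*' match multiple labels, accepting wildcards in non-leftmost labels, or matching embedded NUL bytes, each of which can enable man-in-the-middle attacks.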
Code-Injection Vulnerabilities in Web Applications: The Case of Cross-site Scripting
The majority of all security problems in today's web applications is caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Second, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking" and "Cross-site Request Forgery", and against attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: We determine a necessary set of basic capabilities an adversary needs for successfully executing an attack through an analysis of the targeted payload type. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run-time and demonstrate how this can be used for attack prevention.
Furthermore, we show how vulnerabilities can be avoided by altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability with a practical implementation for the J2EE application server. Cross-site Scripting (XSS) is one of the most common vulnerability types in the area of web applications. This dissertation treats the XSS problem holistically: based on a systematic analysis of the causes and potential consequences of XSS, as well as a comprehensive classification of documented attack types, it first presents a methodology that enables the design of dynamic countermeasures for attack containment. Using this methodology, the design and evaluation of three countermeasures for the attack subclasses "Session Hijacking", "Cross-site Request Forgery", and "attacks on the intranet" are presented. Furthermore, to address the underlying problem at its root, a type-based approach to the secure programming of web applications is described that guarantees reliable protection against XSS vulnerabilities.
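The idea of a dedicated syntax-assembly datatype can be sketched in a few lines. This toy class is our own illustration, not the thesis's formally typed J2EE implementation: trusted markup must be wrapped explicitly, and any plain string mixed in is escaped, so user-supplied data can never be promoted to code.

```python
from html import escape

class Markup:
    """Toy syntax-assembly datatype illustrating data/code separation."""
    def __init__(self, trusted):
        # Only explicitly wrapped strings are treated as markup (code).
        self.value = trusted
    def __add__(self, other):
        if isinstance(other, Markup):
            return Markup(self.value + other.value)
        # Anything else is data: escape it before it joins the markup.
        return Markup(self.value + escape(str(other), quote=True))

# A reflected payload is neutralized by construction:
page = Markup("<p>") + "<script>alert(1)</script>" + Markup("</p>")
```

Because the escaping lives in the type rather than in programmer discipline, forgetting to sanitize a value is no longer possible at assembly sites that use the datatype, which is the essence of the language-based approach described above.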
A survey on security issues and solutions at different layers of Cloud computing
Cloud computing offers scalable on-demand services to consumers with greater flexibility and lower infrastructure investment. Since Cloud services are delivered using classical network protocols and formats over the Internet, implicit vulnerabilities in these protocols, as well as threats introduced by newer architectures, raise many security and privacy concerns. In this paper, we survey the factors affecting Cloud computing adoption, vulnerabilities and attacks, and identify relevant solution directives to strengthen security and privacy in the Cloud environment.