
    Hyp3rArmor: reducing web application exposure to automated attacks

    Web applications (webapps) are constantly subjected to automated, opportunistic attacks from autonomous robots (bots) engaged in reconnaissance to discover victims that may be vulnerable to specific exploits. This is typical behavior in botnet recruitment, worm propagation, large-scale fingerprinting, and vulnerability scanning. Most anti-bot techniques are deployed at the application layer, leaving the network stack of the webapp's server exposed. In this paper we present a mechanism called Hyp3rArmor that addresses this vulnerability by minimizing the webapp's attack surface exposed to automated opportunistic attackers, for JavaScript-enabled web browser clients. Our solution uses port knocking to eliminate the webapp's visible network footprint. Clients of the webapp are directed to a visible static web server to obtain JavaScript that authenticates the client to the webapp server (using port knocking) before making any requests to the webapp. Our implementation of Hyp3rArmor, which is compatible with all webapp architectures, has been deployed and used to defend single- and multi-page websites on the Internet for 114 days. During this period the static web server observed 964 attempted attacks that were deflected from the webapp, which was accessed only by authenticated clients. Our evaluation shows that client-side overheads were negligible in most cases and that server-side overheads were minimal. Hyp3rArmor is ideal for critical systems and legacy applications that must remain accessible on the Internet. Additionally, Hyp3rArmor is composable with other security tools, adding another layer to a defense-in-depth approach. This work has been supported by National Science Foundation (NSF) awards #1430145, #1414119, and #1012798.
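    The abstract does not spell out how the served JavaScript performs the knock, so the following is only a minimal sketch of the idea: the static server ships a script that fires connection attempts at a secret port sequence (each attempt emits a TCP SYN even though the connection fails), after which the firewall is assumed to open the webapp port for the client's IP. The host name and port sequence are hypothetical.

```ts
// Minimal browser-side sketch of the port-knocking step, assuming the
// server firewall counts inbound SYNs on a secret port sequence and then
// exposes the webapp port to this client's IP. Host and ports are made up.

const KNOCK_HOST = "webapp.example.org";    // hypothetical protected host
const KNOCK_SEQUENCE = [7001, 7403, 7877];  // hypothetical secret knock ports
const WEBAPP_URL = `https://${KNOCK_HOST}/`;

// Each fetch attempt emits a TCP SYN toward the knock port; the connection
// itself is expected to fail, and only the SYN matters.
async function knock(port: number): Promise<void> {
  try {
    await fetch(`https://${KNOCK_HOST}:${port}/`, { mode: "no-cors" });
  } catch {
    // Expected: knock ports never complete a connection.
  }
}

async function authenticateAndLoad(): Promise<void> {
  for (const port of KNOCK_SEQUENCE) {
    await knock(port); // knocks must arrive in order
  }
  // After a valid knock sequence, the webapp port should accept us.
  window.location.href = WEBAPP_URL;
}

authenticateAndLoad();
```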

    Towards secure web browsing on mobile devices

    The Web is increasingly being accessed by portable, multi-touch wireless devices. Despite the popularity of platform-specific (native) mobile apps, a recent study of smartphone usage shows that more people browse the Web (81%) than use native apps (68%) on their phones. Moreover, many popular native apps such as BBC depend on browser-like components (e.g., Webview) for their functionality. The popularity and prevalence of web browsers on modern mobile phones warrant characterizing existing and emerging threats to mobile web browsing and building solutions against them. Although a range of studies have focused on the security of native apps on mobile devices, efforts to characterize the security of web transactions originating at mobile browsers are limited. This dissertation presents three main contributions. First, we show that porting browsers to mobile platforms leads to new vulnerabilities not previously observed in desktop browsers. The solutions to these vulnerabilities require careful balancing between usability and security and may not always mirror those in desktop browsers. Second, we empirically demonstrate that the combination of reduced screen space and an independent selection of security indicators not only makes it difficult for experts to determine the security standing of mobile browsers, but actually makes mobile browsing more dangerous for average users by providing a false sense of security. Finally, we experimentally demonstrate the need for mobile-specific techniques to detect malicious webpages. We then design and implement kAYO, the first mobile-specific static tool to detect malicious webpages in real time.
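    The abstract does not disclose kAYO's actual feature set or model, so the sketch below only illustrates the general shape of a static, mobile-oriented detector: extract cheap features from raw HTML and a URL, then score them with a linear model. All features and weights are hypothetical.

```ts
// Hypothetical static features and a linear scorer for malicious-page
// detection; kAYO's real features and trained model are not shown in the
// abstract, so everything below is illustrative.

interface PageFeatures {
  scriptCount: number;  // volume of inline/external scripts
  iframeCount: number;  // hidden iframes often host drive-by payloads
  hasTelLinks: boolean; // tel: links are a mobile-specific lure
  urlLength: number;    // very long URLs weakly correlate with obfuscation
}

function extractFeatures(html: string, url: string): PageFeatures {
  return {
    scriptCount: (html.match(/<script\b/gi) ?? []).length,
    iframeCount: (html.match(/<iframe\b/gi) ?? []).length,
    hasTelLinks: /href\s*=\s*["']tel:/i.test(html),
    urlLength: url.length,
  };
}

// Hypothetical weights; a real detector would learn these from labeled data.
function maliciousScore(f: PageFeatures): number {
  return 0.05 * f.scriptCount +
         0.4 * f.iframeCount +
         (f.hasTelLinks ? 0.3 : 0) +
         0.002 * f.urlLength;
}

const f = extractFeatures("<html><iframe src='//evil.test'></iframe></html>",
                          "https://example.test/landing");
console.log(maliciousScore(f) > 1.0 ? "flag for review" : "likely benign");
```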

    Security in peer-to-peer communication systems

    P2PSIP (Peer-to-Peer Session Initiation Protocol) is a protocol developed by the IETF (Internet Engineering Task Force) for the establishment, completion and modification of communication sessions, which emerges as a complement to SIP (Session Initiation Protocol) in environments where the original SIP protocol may fail for technical, financial, security, or social reasons. To do so, P2PSIP systems replace the entire server architecture that original SIP systems use for the registration and location of users with a structured P2P network that distributes these functions among all the user agents in the system. This new architecture, as with any emerging system, presents a completely new set of security problems whose analysis, the subject of this thesis, is of crucial importance for its secure development and future standardization. Starting with a study of the state of the art in network security and continuing with more specific systems such as SIP and P2P, we identify the most important security services within the architecture of a P2PSIP communication system: access control, bootstrap, routing, storage and communication. Once the security services have been identified, we conduct an analysis of the attacks that can affect each of them, as well as a study of the existing countermeasures that can be used to prevent or mitigate these attacks. Based on the presented attacks and the weaknesses found in the existing measures to prevent them, we design specific solutions to improve the security of P2PSIP communication systems. To this end, we focus on the service that stands as the cornerstone of P2PSIP communication systems' security: access control. Among the newly designed solutions are: a certification model based on the segregation of the identity of users and nodes, a model for secure access control for on-the-fly P2PSIP systems, and an authorization framework for P2PSIP systems built on the recently published Internet Attribute Certificate Profile for Authorization. Finally, based on the existing measures and the new solutions designed, we define a set of security recommendations that should be considered in the design, implementation and maintenance of P2PSIP communication systems.
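    As a rough illustration of the architectural shift described above, the sketch below replaces SIP registrar and location servers with put/get operations on a structured overlay, keyed by a hash of the user's address-of-record. The Overlay interface is hypothetical; the actual P2PSIP protocol (RELOAD) defines far richer messages, certificates, and access control.

```ts
// Sketch of SIP registration/location on a structured overlay: the
// registrar and location servers disappear, replaced by put/get on a DHT
// keyed by the hash of a user's address-of-record (AoR).

import { createHash } from "node:crypto";

interface Overlay {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// Consistent hashing places each AoR on the peer responsible for its key.
function aorKey(aor: string): string {
  return createHash("sha1").update(aor).digest("hex");
}

// REGISTER: publish the user's current contact URI into the overlay.
async function register(dht: Overlay, aor: string, contact: string): Promise<void> {
  await dht.put(aorKey(aor), contact);
}

// INVITE lookup: resolve a callee's AoR to a routable contact URI.
async function locate(dht: Overlay, aor: string): Promise<string | undefined> {
  return dht.get(aorKey(aor));
}

// Usage, given some overlay implementation `dht`:
//   await register(dht, "sip:alice@example.org", "sip:alice@203.0.113.7:5060");
//   const contact = await locate(dht, "sip:alice@example.org");
```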

    Combating Attacks and Abuse in Large Online Communities

    Internet users today are connected more widely and ubiquitously than ever before. As a result, various online communities have formed, ranging from online social networks (Facebook, Twitter), to mobile communities (Foursquare, Waze), to content- and interest-based networks (Wikipedia, Yelp, Quora). While users benefit from easy access to information and social interactions, there is growing concern for users' security and privacy against attacks such as spam, phishing, malware infection and identity theft. Combating attacks and abuse in online communities is challenging. First, today's online communities are increasingly dependent on users and user-generated content. Securing online systems demands a deep understanding of complex and often unpredictable human behaviors. Second, online communities can easily have millions or even billions of users, which requires the corresponding security mechanisms to be highly scalable. Finally, cybercriminals are constantly evolving to launch new types of attacks, which further demands highly robust security defenses. In this thesis, we take concrete steps towards measuring, understanding, and defending against attacks and abuse in online communities. We begin with a series of empirical measurements to understand user behaviors in different online services and the unique security and privacy challenges that users face. This effort covers a broad set of popular online services including social networks for question answering (Quora), anonymous social networks (Whisper), and crowdsourced mobile communities (Waze). Despite the differences among specific online communities, our study provides a first look at their user activity patterns based on empirical data, and reveals the need for reliable mechanisms to curate user content, protect privacy, and defend against emerging attacks. Next, we turn our attention to attacks targeting online communities, with a focus on spam campaigns. While traditional spam is mostly generated by automated software, attackers today have started to introduce "human intelligence" to implement attacks. This is malicious crowdsourcing (or crowdturfing), where a large group of real users is organized to carry out malicious campaigns, such as writing fake reviews or spreading rumors on social media. Using collective human effort, attackers can easily bypass many existing defenses (e.g., CAPTCHA). To understand the ecosystem of crowdturfing, we first use measurements to examine campaign organization, workers, and revenue in detail. Based on insights from empirical data, we develop effective machine learning classifiers to detect crowdturfing activities. Meanwhile, considering the adversarial nature of crowdturfing, we also build practical adversarial models to simulate how attackers can evade or disrupt machine-learning-based defenses. To aid in this effort, we next explore using user behavior models to detect a wider range of attacks. Instead of making assumptions about attacker behavior, our idea is to model normal user behaviors and capture (malicious) behaviors that deviate from the norm. In this way, we can detect previously unknown attacks. Our behavior model is based on detailed clickstream data, which are sequences of click events generated by users when using the service. We build a similarity graph where each user is a node and the edges are weighted by clickstream similarity. By partitioning this graph, we obtain "clusters" of users with similar behaviors. We then use a small set of known good users to "color" these clusters to differentiate the malicious ones. This technique has been adopted by real-world social networks (Renren and LinkedIn), and has already detected unexpected attacks. Finally, we extend the clickstream model to understand finer-grained behaviors of attackers (and real users), and to track how user behavior changes over time. In summary, this thesis illustrates a data-driven approach to understanding and defending against attacks and abuse in online communities. Our measurements have revealed new insights about how attackers are evolving to bypass existing security defenses. In addition, our data-driven systems provide new solutions for online services to gain a deep understanding of their users, and to defend them against emerging attacks and abuse.
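    A minimal sketch of the clickstream technique described above, under the assumption that click events can be compared as bigrams: users become nodes, edges connect users whose Jaccard similarity clears a threshold, and connected components containing no known-good seed are flagged. The similarity measure, threshold, and partitioning method (union-find over thresholded edges rather than a true graph-partitioning algorithm) are simplifications.

```ts
// Sketch of the clickstream pipeline: each user becomes a bag of
// click-event bigrams, users with Jaccard similarity above a threshold are
// linked, and components with no known-good member are flagged.

type Clickstream = string[]; // e.g. ["login", "profile", "message", ...]

function bigrams(events: Clickstream): Set<string> {
  const grams = new Set<string>();
  for (let i = 0; i + 1 < events.length; i++) {
    grams.add(`${events[i]}>${events[i + 1]}`);
  }
  return grams;
}

function jaccard(a: Set<string>, b: Set<string>): number {
  let inter = 0;
  for (const g of a) if (b.has(g)) inter++;
  const union = a.size + b.size - inter;
  return union === 0 ? 0 : inter / union;
}

function find(parent: Map<string, string>, x: string): string {
  while (parent.get(x) !== x) x = parent.get(x)!;
  return x;
}

function flagSuspicious(
  users: Map<string, Clickstream>,
  knownGood: Set<string>,
  threshold = 0.5, // illustrative; tuned on real data in practice
): Set<string> {
  const ids = [...users.keys()];
  const grams = new Map(ids.map(id => [id, bigrams(users.get(id)!)] as const));
  const parent = new Map(ids.map(id => [id, id] as const));
  // Union users whose behavioral similarity clears the threshold.
  for (let i = 0; i < ids.length; i++)
    for (let j = i + 1; j < ids.length; j++)
      if (jaccard(grams.get(ids[i])!, grams.get(ids[j])!) >= threshold)
        parent.set(find(parent, ids[i]), find(parent, ids[j]));
  // "Color" components containing a known-good seed; flag the rest.
  const goodRoots = new Set(
    [...knownGood].filter(g => parent.has(g)).map(g => find(parent, g)));
  return new Set(ids.filter(id => !goodRoots.has(find(parent, id))));
}
```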

    Defending networked resources against floods of unwelcome requests

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2008. Includes bibliographical references (p. 172-189). The Internet is afflicted by "unwelcome requests", defined broadly as spurious claims on scarce resources. For example, the CPU and other resources at a server are targets of denial-of-service (DoS) attacks. Another example is spam (i.e., unsolicited bulk email); here, the resource is human attention. Absent any defense, a very small number of attackers can claim a very large fraction of the scarce resources. Traditional responses identify "bad" requests based on content (for example, spam filters analyze email text and embedded URLs). We argue that such approaches are inherently gameable because motivated attackers can make "bad" requests look "good". Instead, defenses should aim to allocate resources proportionally (so if 10% of the requesters are "bad", they should be limited to 10% of the scarce resources). To meet this goal, we present the design, implementation, analysis, and experimental evaluation of two systems. The first, speak-up, defends servers against application-level denial-of-service by encouraging all clients to automatically send more traffic. The "good" clients can thereby compete equally with the "bad" ones. Experiments with an implementation of speak-up indicate that it allocates a server's resources in rough proportion to clients' upload bandwidths, which is the intended result. The second system, DQE, controls spam with per-sender email quotas. Under DQE, senders attach stamps to emails. Receivers communicate with a well-known, untrusted enforcer to verify that stamps are fresh and to cancel stamps to prevent reuse. The enforcer is distributed over multiple hosts and is designed to tolerate arbitrary faults in these hosts, resist various attacks, and handle hundreds of billions of messages daily (two or three million stamp checks per second). Our experimental results suggest that our implementation can meet these goals with only a few thousand PCs. The enforcer occupies a novel design point: a set of hosts implement a simple storage abstraction but avoid neighbor maintenance, replica maintenance, and mutual trust. One connection between these systems is that DQE needs a DoS defense, and it can use speak-up. We reflect on this connection, on why we apply speak-up to DoS and DQE to spam, and, more generally, on what problems call for which solutions. by Michael Walfish.
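    The enforcer's simple storage abstraction lends itself to a compact illustration. The sketch below models stamp cancellation as TEST (has this stamp been seen?) and SET (cancel it) over stamp fingerprints, with a single in-memory map standing in for the distributed, fault-tolerant enforcer; the operation names follow the description above, but the shapes are illustrative.

```ts
// Sketch of DQE-style stamp cancellation: receivers TEST whether a stamp
// was seen before and SET it to cancel reuse. A single map stands in for
// the distributed enforcer.

import { createHash } from "node:crypto";

const cancelled = new Map<string, string>(); // fingerprint -> first receiver

function fingerprint(stamp: string): string {
  return createHash("sha256").update(stamp).digest("hex");
}

// TEST: has this stamp already been cancelled?
function test(stamp: string): boolean {
  return cancelled.has(fingerprint(stamp));
}

// SET: cancel the stamp so any later reuse is detected.
function set(stamp: string, receiver: string): void {
  cancelled.set(fingerprint(stamp), receiver);
}

// Receiver-side policy: accept a message only if its stamp is fresh.
function acceptEmail(stamp: string, receiver: string): boolean {
  if (test(stamp)) return false; // reused stamp: treat as spam
  set(stamp, receiver);
  return true;
}

console.log(acceptEmail("stamp-123", "bob"));   // true: fresh stamp
console.log(acceptEmail("stamp-123", "carol")); // false: already cancelled
```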

    Improving Dependability of Networks with Penalty and Revocation Mechanisms

    Both malicious and non-malicious faults can dismantle computer networks. Mitigating faults at various layers is therefore essential to ensuring efficient and fair network resource utilization. In this thesis we take a step in this direction and study several ways to deal with faults by means of penalty and revocation mechanisms in networks that lack a centralized coordination point, whether because of their scale or their design. Compromised nodes can pose a serious threat to infrastructure, end-hosts, and services. Such malicious elements can undermine the availability and fairness of networked systems. To deal with such nodes, we design and analyze protocols enabling their removal from the network in a fast and secure way. We design these protocols for two different environments. In the former setting, we assume that there are multiple, but independent, trusted points in the network which coordinate the other nodes. In the latter, we assume that all nodes play equal roles in the network and thus must cooperate to carry out common functionality. We analyze these solutions and discuss possible deployment scenarios. Next we turn our attention to wireless edge networks. In this context, some nodes, without being malicious, can still behave unfairly. To deal with this situation, we propose several self-penalty mechanisms. We implement the proposed protocols using commodity hardware and conduct experiments in real-world environments. The analysis of data collected over several measurement rounds reveals improvements in both fairness and throughput. We corroborate these results with simulations and an analytic model. Finally, we discuss how to measure fairness in dynamic settings, where nodes can have heterogeneous resource demands.
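    As one hedged illustration of a revocation mechanism for the fully decentralized setting described above (not necessarily the thesis's own protocol), the sketch below revokes a node once accusations from enough distinct, non-revoked peers reach a quorum. The quorum value and message handling are hypothetical; a deployable protocol would also authenticate and rate-limit accusations.

```ts
// Illustrative quorum-based revocation for a network without a central
// coordinator: a node is revoked once enough distinct, non-revoked peers
// accuse it.

class RevocationLedger {
  private accusers = new Map<string, Set<string>>(); // accused -> accusers
  private revoked = new Set<string>();

  constructor(private readonly quorum: number) {}

  accuse(accused: string, accuser: string): void {
    // Ignore self-accusations and votes from already-revoked nodes.
    if (accuser === accused || this.revoked.has(accuser)) return;
    const set = this.accusers.get(accused) ?? new Set<string>();
    set.add(accuser);
    this.accusers.set(accused, set);
    if (set.size >= this.quorum) this.revoked.add(accused);
  }

  isRevoked(node: string): boolean {
    return this.revoked.has(node);
  }
}

// With a quorum of 3, three distinct accusers trigger revocation.
const ledger = new RevocationLedger(3);
["n1", "n2", "n3"].forEach(a => ledger.accuse("compromised-node", a));
console.log(ledger.isRevoked("compromised-node")); // true
```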

    From Understanding Telephone Scams to Implementing Authenticated Caller ID Transmission

    The telephone network is used by almost every person in the modern world. With the rise of Internet access to the PSTN, the telephone network today is rife with telephone spam and scams. Unlike email spam, spam calls demand immediate attention; they are not only a significant annoyance for telephone users but also cause significant financial losses in the economy. According to complaint data from the FTC, complaints about illegal calls have reached record numbers in recent years, and Americans lose billions to fraud via malicious telephone communication, despite various efforts to subdue telephone spam, scams, and robocalls. In this dissertation, a study of what causes users to fall victim to telephone scams is presented, demonstrating that impersonation is at the heart of the problem. Most solutions today rely primarily on gathering offending caller IDs; however, they do not work effectively when the caller ID has been spoofed. Due to the lack of authentication in the PSTN caller ID transmission scheme, fraudsters can manipulate the caller ID to impersonate a trusted entity and further a variety of scams. To address this fundamental problem, a novel architecture and method to authenticate the transmission of the caller ID is proposed. The solution enables a security indicator that can give users an early warning to stay vigilant against telephone impersonation scams, and provides a foundation for existing and future defenses to stop unwanted telephone communication based on caller ID information.
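    The dissertation's concrete scheme is not reproduced in the abstract, so the following is only a sketch of the general idea of authenticated caller ID transmission: the originating side binds the caller ID to a timestamp with an authentication tag, and the terminating side verifies the tag and freshness before showing a trusted indicator. An HMAC over a shared key stands in for the certificate-based signing a real deployment would need.

```ts
// Sketch of authenticated caller ID delivery: sign (caller ID, timestamp)
// at origination, verify at termination. The shared HMAC key is a
// hypothetical stand-in for carrier-grade PKI.

import { createHmac, timingSafeEqual } from "node:crypto";

const KEY = Buffer.from("hypothetical-shared-key"); // stand-in for real PKI

interface SignedCallerId {
  callerId: string;  // e.g. "+14155550123"
  timestamp: number; // for replay protection
  tag: string;       // hex-encoded authentication tag
}

function sign(callerId: string): SignedCallerId {
  const timestamp = Date.now();
  const tag = createHmac("sha256", KEY)
    .update(`${callerId}|${timestamp}`)
    .digest("hex");
  return { callerId, timestamp, tag };
}

function verify(msg: SignedCallerId, maxAgeMs = 30_000): boolean {
  if (Date.now() - msg.timestamp > maxAgeMs) return false; // stale: replay risk
  const expected = createHmac("sha256", KEY)
    .update(`${msg.callerId}|${msg.timestamp}`)
    .digest("hex");
  return timingSafeEqual(Buffer.from(expected), Buffer.from(msg.tag));
}

const call = sign("+14155550123");
console.log(verify(call) ? "display verified caller ID" : "warn: unverified");
```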

    Understanding emerging client-side web vulnerabilities using dynamic program analysis

    Today's Web relies heavily on JavaScript, the main driving force behind the plethora of Web applications that we enjoy daily. The complexity and amount of this client-side code have been steadily increasing over the years. At the same time, new vulnerabilities keep being uncovered, for which we mostly rely on manual analysis by security experts. Unfortunately, such manual efforts do not scale to the problem space at hand. Therefore, in this thesis we present techniques capable of finding, automatically and at scale, vulnerabilities that originate from malicious inputs to postMessage handlers, polluted prototypes, and client-side storage mechanisms. Our results highlight that the investigated vulnerabilities are prevalent even among the most popular sites, showing the need for automated systems that help developers uncover them in a timely manner. Using the insights gained during our empirical studies, we provide recommendations for developers and browser vendors to tackle the underlying problems in the future. Furthermore, we show that security mechanisms designed to mitigate such and similar issues cannot currently be deployed by first-party applications due to their reliance on third-party functionality. This leaves developers in a no-win situation, in which they can either preserve functionality or enforce security.
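    Of the vulnerability classes listed above, unchecked postMessage handlers are the easiest to illustrate. The sketch below contrasts a handler that trusts any sender and writes into an HTML sink with one that checks the event origin and treats the payload as plain text; the trusted origin and output element are hypothetical.

```ts
// Contrast of an unsafe and a hardened postMessage handler.

const TRUSTED_ORIGIN = "https://app.example.org"; // hypothetical trusted sender

// Vulnerable pattern: any window holding a reference to this one can send
// a message, and attacker-controlled data flowing into innerHTML is a
// classic client-side XSS sink.
window.addEventListener("message", (event: MessageEvent) => {
  // document.body.innerHTML = event.data; // DON'T: unchecked sender + HTML sink
});

// Hardened pattern: check the sender's origin, validate the payload shape,
// and treat the data as text rather than markup.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== TRUSTED_ORIGIN) return; // reject untrusted senders
  if (typeof event.data !== "string") return;  // validate payload shape
  const out = document.getElementById("out");  // hypothetical output element
  if (out) out.textContent = event.data;       // textContent, not innerHTML
});
```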

    Hardening Tor Hidden Services

    Tor is an overlay anonymization network that provides anonymity for clients surfing the web, but it also allows hosting anonymous services called hidden services. These enable whistleblowers and political activists to express their opinions and resist censorship. Administering a hidden service is not trivial and requires extensive knowledge, because Tor uses a comprehensive protocol and relies on volunteers; meanwhile, attackers can spend significant resources to decloak hidden services. This thesis aims to improve the security of hidden services by providing practical guidelines and a theoretical architecture. First, vulnerabilities specific to hidden services are analyzed through an academic literature review. To model realistic real-world attackers, court documents are analyzed to determine their procedures. Both reviews classify the identified vulnerabilities into general categories. Afterward, a risk assessment process is introduced, and existing risks for hidden services and their operators are determined. The main contributions of this thesis are practical guidelines for hidden service operators and a theoretical architecture. The former give operators a clear overview of practices that mitigate attacks; the latter is a comprehensive infrastructure that significantly increases the security of hidden services and alleviates problems in the Tor protocol. Afterward, limitations and the transfer into practice are analyzed. Finally, future research directions are identified.