Towards a Theory of Trust in Networks of Humans and Computers (CMU-CyLab-11-016)
We argue that a general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by increased participation of people in social networking, crowdsourcing, human computation, and socio-economic protocols, e.g., protocols modeled by trust and gift-exchange games [3, 10, 11], norms-establishing contracts [1], and scams [6, 35, 33]. User participation in these protocols relies primarily on trust, since on-line verification of protocol compliance is often impractical; e.g., verification can lead to undecidable problems, co-NP-complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants [11, 10]. Both preferences and beliefs can be enhanced whenever protocol non-compliance leads to punishment of untrustworthy participants [11, 23]; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment [1]. We argue that a general theory of trust should focus on the establishment of new trust relations where none were possible before. This focus would help create new economic opportunities by increasing the pool of usable services, removing cooperation barriers among users, and, at the very least, taking advantage of "network effects." Hence a new theory of trust would also help focus security research on areas that promote trust-enhancing infrastructures in networks of humans and computers. Finally, we argue that a general theory of trust should mirror, to the largest possible extent, human expectations and mental models of trust without relying on false metaphors and analogies with the physical world.
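To make the game-theoretic framing concrete, here is a toy instance of the kind of trust/gift-exchange game the abstract refers to: an investor sends part of an endowment, the amount is multiplied in transit, a trustee chooses how much to return, and an optional punishment stage lets the investor sanction betrayal at a cost. The endowment, multiplier, and punishment values below are illustrative assumptions, not parameters taken from the cited works.

```python
# Toy trust/gift-exchange game with an optional punishment stage.
# All numeric parameters are illustrative assumptions.
def trust_game(sent, returned_fraction, endowment=10, multiplier=3,
               punish=False, punishment=4, punishment_cost=1):
    received = sent * multiplier            # trust is productive: sent amount grows in transit
    returned = received * returned_fraction # trustee decides how much to give back
    investor = endowment - sent + returned
    trustee = received - returned
    if punish:                              # punishment hurts the trustee but also costs the investor
        investor -= punishment_cost
        trustee -= punishment
    return investor, trustee

print(trust_game(sent=10, returned_fraction=0.5))               # trust reciprocated: both gain
print(trust_game(sent=10, returned_fraction=0.0, punish=True))  # betrayal, then costly punishment
```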
FLoc: Dependable Link Access for Legitimate Traffic in Flooding Attacks (CMU-CyLab-11-019)
Malware-contaminated hosts organized as a "bot network" can target and flood network links (e.g., routers). Yet, none of the countermeasures to link flooding proposed to date have provided dependable link access (i.e., bandwidth guarantees) for legitimate traffic during such attacks. In this paper, we present a router subsystem called FLoc (Flow Localization) that confines attack effects and provides differential bandwidth guarantees at a congested link: (1) packet flows of uncontaminated domains (i.e., Autonomous Systems) receive better bandwidth guarantees than packet flows of contaminated ones; and (2) legitimate flows of contaminated domains are guaranteed substantially higher bandwidth than attack flows. FLoc employs new preferential packet-drop and traffic-aggregation policies that limit "collateral damage" and protect legitimate flows from a wide variety of flooding attacks. We present FLoc's analytical model for dependable link access, a router design based on it, and illustrate FLoc's effectiveness using simulations of different flooding strategies and comparisons with other flooding defense schemes. Internet-scale simulation results corroborate FLoc's effectiveness in the face of large-scale attacks in the real Internet.
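As a rough illustration of the differential-guarantee idea described above, the following toy allocator groups flows by origin domain, protects the aggregate share of uncontaminated domains, and serves legitimate flows of contaminated domains before suspected attack flows. The 70% protected share, the contamination labels, and the demand numbers are illustrative assumptions, not FLoc's actual policies or parameters.

```python
# Toy per-domain bandwidth allocator in the spirit of FLoc's aggregation and
# preferential-drop policies; all parameters are illustrative assumptions.
def allocate(link_capacity, domains):
    """domains: list of dicts with 'name', 'contaminated' (bool),
    'legit_demand', and 'attack_demand' in Mb/s."""
    clean = [d for d in domains if not d['contaminated']]
    dirty = [d for d in domains if d['contaminated']]
    alloc = {}
    # Uncontaminated domains share a protected slice of the link first.
    clean_share = 0.7 * link_capacity if dirty else link_capacity
    per_clean = clean_share / max(len(clean), 1)
    for d in clean:
        alloc[d['name']] = min(d['legit_demand'], per_clean)
    # Contaminated domains split whatever remains ...
    remaining = link_capacity - sum(alloc.values())
    per_dirty = remaining / max(len(dirty), 1)
    for d in dirty:
        legit = min(d['legit_demand'], per_dirty)            # ... legitimate flows first,
        attack = min(d['attack_demand'], per_dirty - legit)  # attack flows get the leftovers.
        alloc[d['name']] = {'legit': legit, 'attack': attack}
    return alloc

print(allocate(100, [
    {'name': 'AS1', 'contaminated': False, 'legit_demand': 30, 'attack_demand': 0},
    {'name': 'AS2', 'contaminated': True,  'legit_demand': 10, 'attack_demand': 500},
]))
```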
Routing Bottlenecks in the Internet – Causes, Exploits, and Countermeasures (CMU-CyLab-14-010)
How pervasive is the vulnerability to link-flooding attacks that degrade connectivity of thousands of Internet hosts? Are some network topologies and geographic regions more vulnerable than others? Do practical countermeasures exist? To answer these questions, we introduce the notion of routing bottlenecks and show that it is a fundamental property of Internet design; i.e., it is a consequence of route-cost minimization. We illustrate the pervasiveness of routing bottlenecks in an experiment comprising 15 countries and 15 cities distributed around the world, and measure their susceptibility to link-flooding attacks. We present the key characteristics of routing bottlenecks, including size, link type, and distance from host destinations, and suggest specific structural and operational countermeasures to link-flooding attacks. These countermeasures can be deployed by network operators without major Internet redesign.
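A minimal sketch of why cost-minimizing routing concentrates traffic on a few links: count how often each link appears on shortest paths from many sources toward a single destination, and rank links by that count. The synthetic scale-free topology and hop-count shortest paths below are simplifying assumptions standing in for real AS-level topologies and routing policies.

```python
# Rank links by how often they lie on cost-minimizing routes toward one
# destination; the most-used links are that destination's routing bottlenecks.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(500, 2, seed=1)   # toy scale-free topology (assumption)
target = 0
link_count = Counter()
for src in G.nodes:
    if src == target:
        continue
    path = nx.shortest_path(G, src, target)    # hop-count proxy for route-cost minimization
    for u, v in zip(path, path[1:]):
        link_count[frozenset((u, v))] += 1

print(link_count.most_common(5))               # a handful of links carry most routes
```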
Results on Vertex Degree and K-Connectivity in Uniform S-Intersection Graphs (CMU-CyLab-14-004)
We present results related to the vertex degree in a uniform s-intersection graph, which has received much interest recently. Specifically, we derive the probability distribution for the minimum vertex degree and show that the number of vertices with an arbitrary degree converges to a Poisson distribution. A uniform s-intersection graph models the topology of a secure wireless sensor network employing the widely used s-composite key predistribution scheme. Our theoretical findings are also confirmed by numerical results.
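The Poisson behavior described above can be observed empirically with a small simulation: sample uniform key rings, connect nodes whose rings share at least s keys, and compare the resulting degree counts against a Poisson fit. The parameter values below are arbitrary illustrative choices, not those analyzed in the report.

```python
import random
from collections import Counter

def degrees_uniform_s_intersection(n, pool_size, ring_size, s):
    """Sample a uniform s-intersection graph: each node draws a uniform random
    key ring from the pool; two nodes are adjacent iff their rings share at
    least s keys. Returns the list of vertex degrees."""
    rings = [set(random.sample(range(pool_size), ring_size)) for _ in range(n)]
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if len(rings[i] & rings[j]) >= s:
                deg[i] += 1
                deg[j] += 1
    return deg

# Count vertices per degree value; for suitable scalings these counts are
# well approximated by a Poisson law, as the abstract states.
deg = degrees_uniform_s_intersection(n=500, pool_size=10_000, ring_size=40, s=2)
print(sorted(Counter(deg).items())[:10])
```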
Topological Properties of Wireless Sensor Networks Under the Q-Composite Key Predistribution Scheme With Unreliable Links (CMU-CyLab-14-002)
The seminal q-composite key predistribution scheme [3] (IEEE S&P 2003) is used prevalently for secure communications in large-scale wireless sensor networks (WSNs). Yağan [12] (IEEE IT 2012) and we [15] (IEEE ISIT 2013) explore topological properties of WSNs employing the q-composite scheme in the case of q = 1, with unreliable communication links modeled as independent on/off channels. However, it is challenging to derive results for general q under such an on/off channel model. In this paper, we resolve this challenge and investigate topological properties related to node degree in WSNs operating under the q-composite scheme and the on/off channel model. Our results apply to general q; even for q = 1, no prior work reports the corresponding results, which are stronger than the node-degree results in [12], [15]. Specifically, we show that the number of nodes with an arbitrary degree asymptotically converges to a Poisson distribution, present the asymptotic probability distribution for the minimum node degree of the network, and establish the asymptotically exact probability that the minimum node degree is at least an arbitrary value. Numerical experiments confirm the validity of our analytical findings. (A classical Erdős–Rényi analogue of the minimum-degree result is sketched after the references below, for orientation.)
References:

[3] H. Chan, A. Perrig, and D. Song. Random key predistribution schemes for sensor networks. In Proc. of IEEE Symposium on Security and Privacy, May 2003.

[12] O. Yağan. Performance of the Eschenauer–Gligor key distribution scheme under an on/off channel. IEEE Transactions on Information Theory, 58(6):3821–3835, June 2012.

[15] J. Zhao, O. Yağan, and V. Gligor. Secure k-connectivity in wireless sensor networks under an on/off channel model. In Proc. of IEEE ISIT, pages 2790–2794, 2013.
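For background only, the minimum-degree result summarized above parallels the classical Erdős–Rényi threshold stated below; the report establishes an analogue for the graph induced by the q-composite scheme with on/off links, with the edge probability replaced accordingly. The classical form is given purely for orientation and is not the report's exact statement.

```latex
% Classical Erdos--Renyi analogue (background only): scale the edge
% probability of G(n, p_n) as
\[
  p_n \;=\; \frac{\ln n + (k-1)\ln\ln n + \alpha_n}{n},
  \qquad \alpha_n \to \alpha \in (-\infty, +\infty),
\]
% then the minimum degree \delta satisfies
\[
  \lim_{n \to \infty} \Pr\!\bigl[\delta\bigl(G(n, p_n)\bigr) \ge k\bigr]
  \;=\; \exp\!\left(-\frac{e^{-\alpha}}{(k-1)!}\right).
\]
```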
Connectivity in Secure Wireless Sensor Networks under Transmission Constraints (CMU-CyLab-14-003)
In wireless sensor networks (WSNs), the Eschenauer–Gligor (EG) key predistribution scheme is a widely recognized way to secure communications. Although the connectivity properties of secure WSNs with the EG scheme have been extensively investigated, few results address physical transmission constraints. These constraints reflect real-world implementations of WSNs, in which two sensors must be within a certain distance of each other to communicate. In this paper, we present zero-one laws for connectivity in WSNs employing the EG scheme under transmission constraints. These laws improve recent results [1], [2] significantly and help specify the critical transmission ranges for connectivity. Our analytical findings, which are also confirmed via numerical experiments, provide precise guidelines for the design of secure WSNs in practice. Beyond secure WSNs, our theoretical results also apply to frequency hopping in wireless networks, as discussed in detail. (A small simulation sketch of this setting follows the references below.)
References:
[1] B. Krishnan, A. Ganesh, and D. Manjunath. On connectivity thresholds in superposition of random key graphs on random geometric graphs. In Proc. IEEE ISIT, pages 2389–2393, 2013.
[2] K. Krzywdzinski and K. Rybarczyk. Geometric graphs with randomly deleted edges — connectivity and routing protocols. Mathematical Foundations of Computer Science, 6907:544–555, 2011.
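The setting above can be mimicked with a small simulation: place nodes uniformly in the unit square, give each an EG key ring, connect two nodes only if they are within transmission range and share at least one key, and estimate the probability that the resulting graph is connected while sweeping the range around its critical value. All parameter values below are illustrative assumptions.

```python
# Intersection of a random key graph (EG scheme) and a random geometric graph.
import random, math
import networkx as nx

def eg_geometric_graph(n, pool_size, ring_size, r):
    pos = [(random.random(), random.random()) for _ in range(n)]
    rings = [set(random.sample(range(pool_size), ring_size)) for _ in range(n)]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            close = math.dist(pos[i], pos[j]) <= r   # transmission constraint
            shared = bool(rings[i] & rings[j])       # EG scheme: >= 1 shared key
            if close and shared:
                G.add_edge(i, j)
    return G

# Empirical connectivity probability for one parameter point; sweeping r around
# its critical value exhibits the zero-one behavior described in the abstract.
trials = 20
print(sum(nx.is_connected(eg_geometric_graph(200, 1000, 30, 0.18))
          for _ in range(trials)) / trials)
```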
DefAT: Dependable Connection Setup for Network Capabilities (CMU-CyLab-11-018)
Network-layer capabilities offer strong protection against link flooding by authorizing individual flows with unforgeable credentials (i.e., capabilities). However, the capability setup channel is vulnerable to flooding attacks that prevent legitimate clients from acquiring capabilities, i.e., Denial-of-Capability (DoC) attacks. Based on the observation that the distribution of attack sources in the current Internet is highly non-uniform, we provide a router-level scheme, named DefAT (Defense via Aggregating Traffic), that confines the effects of DoC attacks to specified locales or neighborhoods (e.g., one or more administrative domains of the Internet). DefAT provides precise access guarantees for capability schemes, even in the face of flooding attacks. The effectiveness of DefAT is shown in two ways. First, we illustrate the precise link-access guarantees provided by DefAT via ns2 simulations. Second, we show the effectiveness of DefAT in the current Internet via Internet-scale simulations using real Internet topologies and attack distributions.
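A toy illustration of the "confine attack effects to a locale" idea described above: capability requests are grouped by origin neighborhood, and each locale draws on its own guaranteed request budget, so a flooding locale can exhaust only its own share. The locale granularity and budgets are illustrative assumptions, not DefAT's actual mechanism or parameters.

```python
# Per-locale admission budget for capability-setup requests (illustrative only).
from collections import defaultdict

class LocaleLimiter:
    def __init__(self, per_locale_budget):
        self.budget = per_locale_budget
        self.used = defaultdict(int)

    def admit(self, request_locale):
        """Admit a capability request if its origin locale still has budget."""
        if self.used[request_locale] < self.budget:
            self.used[request_locale] += 1
            return True
        return False

limiter = LocaleLimiter(per_locale_budget=100)
for _ in range(10_000):                 # a flood from one locale ...
    limiter.admit("AS-evil")
print(limiter.admit("AS-good"))         # ... does not block other locales: True
print(limiter.admit("AS-evil"))         # the flooding locale is out of budget: False
```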
Lockdown: A Safe and Practical Environment for Security Applications (CMU-CyLab-09-011)
We describe, build, and evaluate Lockdown, a system that significantly increases the level of security for online transactions, even on a platform infested with malicious code. Lockdown provides the user with a highly protected, yet also highly constrained, trusted environment for performing online transactions, as well as a high-performance, general-purpose environment for all other (non-security-sensitive) applications. A simple, user-friendly external interface allows the user to securely learn which environment is active and easily switch between them. We focus on making Lockdown deployable and usable today. Lockdown works with both Windows and Linux, and provides immediate improvements to security-sensitive tasks while imposing, on average, only 3% memory overhead and 2–7% storage overhead on non-security-related tasks.
RelationGrams: Tie-Strength Visualization for User-Controlled Online Identity Authentication (CMU-CyLab-11-014)
Users experience a crisis of confidence in online activities on the current Internet. This crisis of confidence manifests itself in online attacks, where adversaries con users to extract money or valuable sensitive information. Instead of addressing the symptom, we investigate how to address the underlying cause: the absence of humanly verifiable information about online entities prevents users from authenticating them.
As an initial step in this endeavor, we consider the specific problem of how users can securely authenticate online identities (e.g., associate a Facebook ID with its owner). Based on prior social science research demonstrating that the strength of social ties is a useful indicator of trust in many real-world relationships, we explore how tie strength can be visualized using well-defined and measurable parameters. We then apply the visualization in the context of online friend invitations and propose a protocol for secure online identity authentication. We analyze the robustness of the protocol against adversaries who attempt to establish fraudulent online identities, and evaluate the usability in an actual implementation on a popular online social network (i.e., Facebook). We find that a tie-strength visualization is a useful primitive for online identity authentication.
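As a minimal sketch of how "well-defined and measurable parameters" could be turned into a tie-strength score behind such a visualization, consider the toy scoring function below. The features, soft caps, and weights are hypothetical illustrations and are not the parameters used by RelationGrams.

```python
# Hypothetical tie-strength scoring from measurable relationship parameters.
def tie_strength(mutual_friends, messages_exchanged, days_known, photos_together):
    features = [
        min(mutual_friends / 50.0, 1.0),        # each feature soft-capped at 1.0
        min(messages_exchanged / 200.0, 1.0),
        min(days_known / 730.0, 1.0),
        min(photos_together / 20.0, 1.0),
    ]
    weights = [0.3, 0.3, 0.2, 0.2]              # assumed weighting
    return sum(w * f for w, f in zip(weights, features))

print(tie_strength(40, 300, 800, 15))   # long-standing, interactive tie -> high score
print(tie_strength(1, 0, 2, 0))         # brand-new "friend" request -> near zero
```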
Transparent Key Integrity (TKI): A Proposal for a Public-Key Validation Infrastructure (CMU-CyLab-12-016)
Recent trends in public-key infrastructure research explore the tradeoff among decreased trust in certificate authorities (CAs), the level of security achieved, the communication overhead (bandwidth and latency) of setting up a secure connection (e.g., one verified via SSL/TLS), and the availability and verifiability of public-key information. In this paper, we propose TKI, a new public-key validation infrastructure that reduces the level of trust in any single CA and increases security by achieving greater robustness against CA key compromise. Compared to other proposals, we reduce the communication overhead associated with certificate validation during the existing SSL/TLS connection handshake and provide site owners with an optional time window to review potentially malicious key changes. Our design deters CA misbehavior by using a public log that records all certificate events, thereby holding CAs accountable for their actions. TKI will help reduce the trust placed in the hundreds of currently trusted CAs, reduce exposure to CA compromise, and enhance the security of SSL/TLS connection establishment.
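A minimal sketch of the accountability idea behind the public log described above: certificate events are appended to a hash-chained, append-only log so that omissions or rewrites by a CA become detectable by monitors and site owners. Real designs, including TKI, use richer structures (e.g., Merkle-tree proofs and signed entries); the hash chain below is only an illustrative assumption.

```python
# Append-only, hash-chained log of certificate events (illustrative sketch only).
import hashlib, json, time

class CertEventLog:
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32            # hash of the (empty) log so far

    def append(self, domain, ca, cert_fingerprint, action):
        entry = {
            "time": time.time(), "domain": domain, "ca": ca,
            "cert": cert_fingerprint, "action": action,   # e.g. "issue", "revoke"
            "prev": self.head.hex(),        # each entry commits to the previous head
        }
        self.head = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).digest()
        self.entries.append(entry)
        return self.head.hex()              # new log head, comparable by monitors

log = CertEventLog()
print(log.append("example.com", "SomeCA", "ab:cd:ef", "issue"))
```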