
    Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications

    Social Virtual Reality-based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with the rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overhead of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving, and more secure VRLE system. Comment: To appear in the CCNC 2019 Conference.
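    As a rough illustration of the kind of computation such a framework performs, the sketch below scores a toy attack tree from per-threat rate and duration inputs. The gate semantics, the Poisson-style leaf probabilities, and all names are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of an attack-tree risk score with threat rate and
# duration as inputs; the gate semantics and probability model are
# illustrative assumptions, not the paper's exact formulation.
from dataclasses import dataclass, field
from typing import List
import math


@dataclass
class Node:
    name: str
    gate: str = "OR"              # "OR" or "AND" for internal nodes
    rate: float = 0.0             # attacks per hour (leaf only)
    duration: float = 0.0         # exposure window in hours (leaf only)
    children: List["Node"] = field(default_factory=list)

    def probability(self) -> float:
        """Probability that this (sub)goal is achieved within the exposure window."""
        if not self.children:
            # Leaf: Poisson-style chance of at least one successful attack.
            return 1.0 - math.exp(-self.rate * self.duration)
        probs = [c.probability() for c in self.children]
        if self.gate == "AND":
            # AND gate: every child sub-goal must succeed.
            return math.prod(probs)
        # OR gate: at least one child sub-goal succeeds.
        return 1.0 - math.prod(1.0 - q for q in probs)


# Toy SPS attack tree for a VR learning environment session.
root = Node("disrupt VRLE session", gate="OR", children=[
    Node("privacy breach", gate="AND", children=[
        Node("sniff session traffic", rate=0.05, duration=2.0),
        Node("deanonymize student", rate=0.01, duration=2.0),
    ]),
    Node("safety threat (cybersickness trigger)", rate=0.02, duration=2.0),
])
print(f"risk score: {root.probability():.3f}")
```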

    Weak Resilience of Networked Control Systems

    In this paper, we propose a method to establish a networked control system that maintains its stability in the presence of certain undesirable incidents on local controllers. We call such networked control systems weakly resilient. We first derive a necessary and sufficient condition for the weak resilience of networked systems. Networked systems do not generally satisfy this condition; therefore, we provide a method for designing a compensator which ensures the weak resilience of the compensated system. Finally, we illustrate the efficiency of the proposed method with a power system example based on the IEEE 14-bus test system.
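    A minimal sketch of the kind of check this property implies, assuming a linear model in which an "undesirable incident" zeroes out one local controller's feedback gain; the dynamics, gains, and function names below are made up for illustration and are not taken from the paper.

```python
# Hypothetical sketch: check whether a networked control system stays stable
# when any single local controller drops out (an "undesirable incident").
# The model and gains are illustrative, not from the paper.
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop dynamics (unstable)
B = np.array([[1.0, 0.0], [0.0, 1.0]])    # one input channel per local controller
K = np.array([[4.0, 1.0], [0.0, 2.0]])    # stacked local feedback gains


def is_stable(M: np.ndarray) -> bool:
    """Continuous-time stability: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))


def resilient_to_single_failures(A, B, K) -> bool:
    """Nominal closed loop and every single-controller-failure case stable."""
    if not is_stable(A - B @ K):
        return False
    for i in range(K.shape[0]):
        K_faulty = K.copy()
        K_faulty[i, :] = 0.0              # controller i stops acting
        if not is_stable(A - B @ K_faulty):
            return False
    return True


print(resilient_to_single_failures(A, B, K))
```

    With these illustrative gains the check fails, which mirrors the abstract's point that networked systems do not generally satisfy the resilience condition without an added compensator.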

    Dovetail: Stronger Anonymity in Next-Generation Internet Routing

    Current low-latency anonymity systems use complex overlay networks to conceal a user's IP address, introducing significant latency and network efficiency penalties compared to normal Internet usage. Rather than obfuscating network identity through higher-level protocols, we propose a more direct solution: a routing protocol that allows communication without exposing network identity, providing a strong foundation for Internet privacy while allowing identity to be defined in those higher-level protocols where it adds value. Given current research initiatives advocating "clean slate" Internet designs, an opportunity exists to design an internetwork-layer routing protocol that decouples identity from network location and thereby simplifies the anonymity problem. Recently, Hsiao et al. proposed such a protocol (LAP), but it does not protect the user against a local eavesdropper or an untrusted ISP, which will not be acceptable for many users. Thus, we propose Dovetail, a next-generation Internet routing protocol that provides anonymity against an active attacker located at any single point within the network, including the user's ISP. A major design challenge is to provide this protection without including an application-layer proxy in data transmission. We address this challenge in path construction by using a matchmaker node (an end host) to overlap two path segments at a dovetail node (a router). The dovetail then trims away part of the path so that data transmission bypasses the matchmaker. Additional design features include the choice of many different paths through the network and the joining of path segments without requiring a trusted third party. We develop a systematic mechanism to measure the topological anonymity of our designs, and we demonstrate the privacy and efficiency of our proposal by simulation, using a model of the complete Internet at the AS level.
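    The path-joining idea can be sketched on a toy topology: the segment from the source to the matchmaker and the segment from the matchmaker to the destination overlap at a dovetail router, and the portion between the dovetail and the matchmaker is trimmed away. The topology, node names, and overlap-selection rule below are illustrative assumptions, not Dovetail's actual protocol.

```python
# Hypothetical sketch of Dovetail-style path joining: two segments meeting at
# the matchmaker overlap at a "dovetail" router, and the path is trimmed so
# that data transmission bypasses the matchmaker.  All names are illustrative.
from typing import List, Optional


def join_at_dovetail(seg_to_matchmaker: List[str],
                     seg_from_matchmaker: List[str]) -> Optional[List[str]]:
    """Return the trimmed end-to-end path, or None if the segments never overlap."""
    matchmaker = seg_to_matchmaker[-1]
    assert matchmaker == seg_from_matchmaker[0], "segments must meet at the matchmaker"
    candidates = set(seg_from_matchmaker[1:])      # routers after the matchmaker
    dovetail = None
    for hop in seg_to_matchmaker[:-1]:             # routers before the matchmaker
        if hop in candidates:
            dovetail = hop                         # keep the overlap nearest the matchmaker
    if dovetail is None:
        return None
    head = seg_to_matchmaker[:seg_to_matchmaker.index(dovetail) + 1]
    tail = seg_from_matchmaker[seg_from_matchmaker.index(dovetail) + 1:]
    return head + tail                             # data no longer traverses the matchmaker


# Source -> matchmaker segment and matchmaker -> destination segment.
outbound = ["src", "R1", "R2", "R3", "matchmaker"]
inbound = ["matchmaker", "R4", "R2", "R5", "dst"]
print(join_at_dovetail(outbound, inbound))         # ['src', 'R1', 'R2', 'R5', 'dst']
```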

    An Adversarial Approach to Private Flocking in Mobile Robot Teams

    Privacy is an important facet of defence against adversaries. In this letter, we introduce the problem of private flocking. We consider a team of mobile robots flocking in the presence of an adversary, who is able to observe all robots’ trajectories and who is interested in identifying the leader. We present a method that generates private flocking controllers that hide the identity of the leader robot. Our approach towards privacy leverages a data-driven adversarial co-optimization scheme. We design a mechanism that optimizes flocking control parameters such that leader inference is hindered. As the flocking performance improves, we successively train an adversarial discriminator that tries to infer the identity of the leader robot. To evaluate the performance of our co-optimization scheme, we investigate different classes of reference trajectories. Although it is reasonable to assume that there is an inherent trade-off between flocking performance and privacy, our results demonstrate that we are able to achieve high flocking performance and simultaneously reduce the risk of revealing the leader.
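    A toy sketch of such a co-optimization loop is given below, with a fixed heuristic adversary standing in for the trained discriminator and random search standing in for the controller optimizer; the dynamics, scoring functions, and parameter names are illustrative assumptions rather than the paper's method.

```python
# Hypothetical sketch of adversarial co-optimization for private flocking:
# a toy flocking simulator with tunable control parameters, a simple adversary
# that guesses the leader from trajectories, and random search over parameters
# to trade off flocking quality against leader identifiability.
import numpy as np

rng = np.random.default_rng(0)
N, T = 6, 100                                      # robots, time steps


def simulate(params):
    """Toy flocking: robot 0 is the leader tracking a reference; followers
    use cohesion/alignment gains from `params`.  Returns positions (T, N, 2)."""
    k_coh, k_ali, k_lead = params
    pos = rng.normal(0.0, 1.0, (N, 2))
    vel = np.zeros((N, 2))
    traj = np.zeros((T, N, 2))
    for t in range(T):
        ref = np.array([0.05 * t, np.sin(0.05 * t)])   # reference trajectory
        centroid, mean_vel = pos.mean(axis=0), vel.mean(axis=0)
        acc = k_coh * (centroid - pos) + k_ali * (mean_vel - vel)
        acc[0] += k_lead * (ref - pos[0])              # only the leader sees the reference
        vel = 0.9 * vel + 0.1 * acc
        pos = pos + 0.1 * vel
        traj[t] = pos
    return traj


def adversary_confidence(traj):
    """Heuristic adversary: suspect the robot that deviates most from the
    group centroid; return the suspicion mass placed on the true leader."""
    centroid = traj.mean(axis=1)                       # (T, 2)
    dev = np.linalg.norm(traj - centroid[:, None, :], axis=2).mean(axis=0)
    return (dev / dev.sum())[0]


def flocking_score(traj):
    """Higher is better: tight cohesion at the end of the run."""
    return -np.linalg.norm(traj[-1] - traj[-1].mean(axis=0), axis=1).mean()


best, best_obj = None, -np.inf
for _ in range(200):                                   # random-search "optimizer"
    params = rng.uniform(0.1, 3.0, 3)
    traj = simulate(params)
    obj = flocking_score(traj) - adversary_confidence(traj)   # performance vs. privacy
    if obj > best_obj:
        best, best_obj = params, obj
print("best params:", np.round(best, 2), "objective:", round(best_obj, 3))
```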