    An enhanced OFDM light weight physical layer encryption scheme

    The broadcast nature of wireless networks makes them more susceptible to eavesdropping attacks than wired networks. Any untrusted node can eavesdrop on the medium, listen to transmissions, and obtain sensitive information within the wireless network. In this paper, we propose a new mechanism that combines the advantages of two techniques, namely iJam and OFDM phase encryption. Our modified mechanism makes iJam more bandwidth-efficient by using the Alamouti scheme to take advantage of the repetition inherent in its implementation. The adversary model is extended to the active-adversary case, which was not considered in the original work on iJam and OFDM phase encryption. Through a max-min optimization model, we propose a framework that maximizes the secrecy rate by means of a friendly jammer. We formulate a zero-sum game that captures the strategic decision making between the transmitter-receiver pair and the adversary, and we apply the fictitious play (FP) algorithm to reach the Nash equilibria (NE) of the game. Our simulation results show a significant improvement over the traditional schemes, i.e., iJam and OFDM phase encryption, in limiting the eavesdropper's ability to benefit from the received information.
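    The fictitious-play approach mentioned in the abstract can be illustrated with a minimal sketch: each player repeatedly best-responds to the opponent's empirical action frequencies, which in a two-player zero-sum matrix game converges to a Nash equilibrium. The 2x2 payoff matrix, iteration count, and uniform prior below are assumptions chosen for illustration; this is not the secrecy-rate game formulated in the paper.

    ```python
    import numpy as np

    # Illustrative 2x2 zero-sum game (payoffs are assumed, not from the paper).
    # Row player maximizes A[i, j]; column player minimizes it.
    A = np.array([[3.0, 1.0],
                  [0.0, 2.0]])

    row_counts = np.ones(A.shape[0])  # empirical action counts (uniform prior)
    col_counts = np.ones(A.shape[1])

    for _ in range(5000):
        # Each player best-responds to the opponent's empirical mixed strategy.
        col_mix = col_counts / col_counts.sum()
        row_mix = row_counts / row_counts.sum()
        row_counts[np.argmax(A @ col_mix)] += 1   # maximizer's best response
        col_counts[np.argmin(row_mix @ A)] += 1   # minimizer's best response

    # Empirical frequencies approximate the equilibrium mixed strategies.
    row_mix = row_counts / row_counts.sum()
    col_mix = col_counts / col_counts.sum()
    value = row_mix @ A @ col_mix
    print("row mix:", row_mix, "col mix:", col_mix, "game value:", value)
    ```

    For this matrix the unique equilibrium is row mix (0.5, 0.5) and column mix (0.25, 0.75), with game value 1.5; the empirical frequencies approach these as the iteration count grows (Robinson's theorem guarantees convergence in zero-sum games, though the rate can be slow).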

    Learning Everything about Anything: Webly-Supervised Visual Concept Learning

    Figure 1: We introduce a fully-automated method that, given any concept, discovers an exhaustive vocabulary explaining all its appearance variations (i.e., actions, interactions, attributes, etc.), and trains full-fledged detection models for it. This figure shows a few of the many variations that our method has learned for four different classes of concepts: object (horse), scene (kitchen), event (Christmas), and action (walking).

    Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g., actions, interactions, attributes, and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50,000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.