
    Enhancing Physical Layer Security in AF Relay Assisted Multi-Carrier Wireless Transmission

    In this paper, we study the physical layer security (PLS) problem in a dual-hop orthogonal frequency division multiplexing (OFDM) based wireless communication system. First, we consider a single-user, single-relay system and study a joint power optimization problem at the source and relay, subject to individual power constraints at the two nodes. The aim is to maximize the end-to-end secrecy rate with optimal power allocation over the different sub-carriers. We then consider a more general multi-user, multi-relay scenario. Under a high-SNR approximation of the end-to-end secrecy rate, an optimization problem is formulated to jointly optimize the power allocation at the BS, the relay selection, the sub-carrier assignment to users, and the power loading at each relaying node. The target is to maximize the overall security of the system subject to independent power budget limits at each transmitting node and OFDMA-based exclusive sub-carrier allocation constraints. A joint solution is obtained through duality theory: dual decomposition makes it possible to exploit convex optimization techniques to find the power loading at the source and relay nodes. Further, optimizing the power loading at the relaying nodes, together with relay selection and sub-carrier assignment for a fixed power allocation at the BS, is also studied. Lastly, a sub-optimal scheme that performs joint power allocation at all transmitting nodes for a fixed sub-carrier allocation and relay assignment is investigated. Finally, simulation results are presented to validate the performance of the proposed schemes.
    Comment: 10 pages, 7 figures, accepted in Transactions on Emerging Telecommunications Technologies (ETT), formerly known as European Transactions on Telecommunications (ETT)
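The flavor of per-sub-carrier secrecy-rate power allocation can be illustrated with a minimal greedy sketch: since each sub-carrier's secrecy rate is increasing and concave in its power (when the legitimate channel is stronger than the eavesdropper's), allocating the budget in small increments to the sub-carrier with the largest marginal gain approaches the optimum. This is an illustration only, not the duality-based solution of the paper; all channel gains and the budget are assumed values.

```python
import math

def secrecy_rate(p, g_d, g_e):
    """Per-sub-carrier secrecy rate: legitimate minus eavesdropper capacity."""
    return max(0.0, math.log2(1 + p * g_d) - math.log2(1 + p * g_e))

def greedy_power_allocation(gains_d, gains_e, p_total, step=0.01):
    """Allocate the power budget in small increments, always to the
    sub-carrier with the largest marginal secrecy-rate gain."""
    n = len(gains_d)
    p = [0.0] * n
    budget = p_total
    while budget > 1e-9:
        chunk = min(step, budget)
        best, best_gain = -1, 0.0
        for k in range(n):
            gain = (secrecy_rate(p[k] + chunk, gains_d[k], gains_e[k])
                    - secrecy_rate(p[k], gains_d[k], gains_e[k]))
            if gain > best_gain:
                best, best_gain = k, gain
        if best < 0:          # no sub-carrier yields a positive gain
            break
        p[best] += chunk
        budget -= chunk
    return p

g_d = [2.0, 1.5, 0.3]   # assumed legitimate channel gains
g_e = [0.4, 0.2, 0.5]   # assumed eavesdropper channel gains
alloc = greedy_power_allocation(g_d, g_e, p_total=2.0)
print([round(x, 2) for x in alloc])
```

Note that the third sub-carrier, where the eavesdropper's gain exceeds the legitimate gain, correctly receives no power.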

    A Comprehensive Survey of Potential Game Approaches to Wireless Networks

    Potential games form a class of non-cooperative games where unilateral improvement dynamics are guaranteed to converge in many practical cases. The potential game approach has been applied to a wide range of wireless network problems, particularly to a variety of channel assignment problems. In this paper, the properties of potential games are introduced, and games in wireless networks that have been proven to be potential games are comprehensively discussed.
    Comment: 44 pages, 6 figures, to appear in IEICE Transactions on Communications, vol. E98-B, no. 9, Sept. 201
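A textbook instance of such convergence is channel assignment modeled as a congestion game, which is an exact potential game: every improving move strictly decreases the potential, so best-response dynamics must reach a pure Nash equilibrium. A minimal sketch (illustrative, not drawn from the survey):

```python
import random

def best_response_dynamics(n_players, n_channels, seed=0):
    """Asynchronous best-response updates for a channel-assignment
    congestion game; each player's cost is the number of other players
    on its channel. This is an exact potential game, so the updates
    terminate at a pure Nash equilibrium."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_channels) for _ in range(n_players)]
    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            counts = [0] * n_channels
            for j, c in enumerate(assign):
                if j != i:
                    counts[c] += 1            # congestion seen by player i
            best = min(range(n_channels), key=lambda c: counts[c])
            if counts[best] < counts[assign[i]]:
                assign[i] = best              # strictly improving move
                improved = True
    return assign

assign = best_response_dynamics(6, 3)
print(assign)
```

With six players and three equally good channels, any equilibrium balances the load at exactly two players per channel.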

    Underlay Drone Cell for Temporary Events: Impact of Drone Height and Aerial Channel Environments

    Providing seamless connections to a large number of devices is one of the biggest challenges for Internet of Things (IoT) networks. Using a drone as an aerial base station (ABS) to provide coverage to devices or users on the ground is envisaged as a promising solution for IoT networks. In this paper, we consider a communication network with an underlay ABS that provides coverage for a temporary event, such as a sporting event or a concert in a stadium. Using stochastic geometry, we propose a general analytical framework to compute the uplink and downlink coverage probabilities for both the aerial and the terrestrial cellular system. Our framework is valid for any aerial channel model for which the probabilistic functions of line-of-sight (LOS) and non-line-of-sight (NLOS) links are specified. The accuracy of the analytical results is verified by Monte Carlo simulations considering two commonly adopted aerial channel models. Our results show the non-trivial impact of the different aerial channel environments (i.e., suburban, urban, dense urban and high-rise urban) on the uplink and downlink coverage probabilities and provide design guidelines for the best ABS deployment height.
    Comment: This work is accepted to appear in the IEEE Internet of Things Journal Special Issue on UAV over IoT. Copyright may be transferred without notice, after which this version may no longer be accessible. arXiv admin note: text overlap with arXiv:1801.0594
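The Monte Carlo side of such a study can be sketched in a few lines: drop a user uniformly in the cell, draw a LOS/NLOS state from an elevation-angle-dependent probability (a sigmoid form is commonly used for air-to-ground channels), and test the SNR. All link-budget numbers below are assumed for illustration and are not the paper's framework.

```python
import math
import random

def p_los(elev_deg, a=9.61, b=0.16):
    """Sigmoid LOS probability vs. elevation angle (a, b are
    illustrative urban-environment parameters)."""
    return 1.0 / (1.0 + a * math.exp(-b * (elev_deg - a)))

def coverage_probability(height, cell_radius, snr_th_db,
                         n_trials=20000, seed=1):
    """Monte Carlo downlink coverage: drop a user uniformly in the
    cell, draw a LOS/NLOS state, apply distinct path-loss exponents,
    and compare the SNR against the threshold."""
    rng = random.Random(seed)
    tx_db, noise_db = 30.0, -60.0   # assumed transmit power / noise floor
    eta_los, eta_nlos = 2.0, 3.5    # assumed path-loss exponents
    covered = 0
    for _ in range(n_trials):
        r = cell_radius * math.sqrt(rng.random())  # uniform in a disk
        d = math.hypot(r, height)                  # 3D ABS-to-user distance
        elev = math.degrees(math.atan2(height, r))
        eta = eta_los if rng.random() < p_los(elev) else eta_nlos
        snr_db = tx_db - 10.0 * eta * math.log10(max(d, 1.0)) - noise_db
        if snr_db >= snr_th_db:
            covered += 1
    return covered / n_trials

pc = coverage_probability(height=100.0, cell_radius=300.0, snr_th_db=10.0)
print(round(pc, 3))
```

Sweeping `height` in such a simulation is what reveals the coverage-optimal deployment altitude: higher drones improve LOS odds but lengthen every link.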

    GSAR: Greedy Stand-Alone Position-Based Routing protocol to avoid the hole problem occurrence in Mobile Ad Hoc Networks

    The routing process in a Mobile Ad Hoc Network (MANET) poses critical challenges because of features such as frequent topology changes and resource limitations. Hence, designing a reliable and dynamic routing protocol that satisfies MANET requirements is highly demanded. The Greedy Forwarding Strategy (GFS) has been the most widely used strategy in position-based routing protocols. The GFS algorithm was designed as a high-performance protocol that adopts hop count in soliciting the shortest path. However, the GFS does not consider MANET needs and is therefore insufficient for computing reliable routes. Hence, this study aims to improve the existing GFS by transforming it into a dynamic stand-alone routing protocol that responds swiftly to MANET needs and provides reliable routes among the communicating nodes. To achieve this aim, two mechanisms are proposed as extensions to the current GFS, namely the Dynamic Beaconing Updates Mechanism (DBUM) and the Dynamic and Reactive Reliability Estimation with Selective Metrics Mechanism (DRESM). The DBUM algorithm is mainly responsible for providing a node with up-to-date status information about its neighbours. The DRESM algorithm is responsible for making forwarding decisions based on multiple routing metrics. Both mechanisms were integrated into the conventional GFS to form the Greedy Stand-Alone Routing (GSAR) protocol. Evaluations of GSAR were performed using the Ns2 network simulator with a defined set of performance metrics, scenarios and topologies. The results demonstrate that GSAR eliminates the recovery-mode mechanism of GFS and consequently improves overall network performance. Under various mobility conditions, GSAR avoids the hole problem about 87% and 79% more often than Greedy Perimeter Stateless Routing and the Position-based Opportunistic Routing protocol, respectively. Therefore, the GSAR protocol is a reasonable alternative to position-based unicast routing protocols in MANETs.
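The core greedy forwarding decision, extended with a link-quality term in the spirit of multi-metric forwarding, can be sketched as follows. The scoring weights and the neighbour table are hypothetical illustrations, not GSAR's actual DRESM metrics; the `None` return marks the "hole" case where no neighbour makes geographic progress.

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(node, dest, neighbors, w_progress=0.7, w_rel=0.3):
    """Pick the neighbour with the best weighted score of geographic
    progress toward the destination and link reliability.
    Returns None when no neighbour makes progress (the 'hole' case)."""
    d0 = dist(node, dest)
    best, best_score = None, 0.0
    for pos, reliability in neighbors:
        progress = d0 - dist(pos, dest)
        if progress <= 0:
            continue                     # would move away from the destination
        score = w_progress * (progress / d0) + w_rel * reliability
        if score > best_score:
            best, best_score = pos, score
    return best

src, dst = (0.0, 0.0), (100.0, 0.0)
nbrs = [((30.0, 10.0), 0.9),   # (position, link reliability in [0, 1]) -- assumed
        ((40.0, -5.0), 0.4),
        ((-10.0, 0.0), 1.0)]
print(next_hop(src, dst, nbrs))
```

Here the second neighbour makes more raw progress, but the first wins on the combined score because of its far more reliable link, which is exactly the trade-off a multi-metric forwarding rule is meant to capture.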

    ViLDAR - Visible Light Sensing Based Speed Estimation using Vehicle's Headlamps

    The introduction of light emitting diodes (LEDs) in automotive exterior lighting systems provides opportunities to develop viable alternatives to conventional communication and sensing technologies. Most advanced driver-assist and autonomous vehicle technologies are based on Radio Detection and Ranging (RADAR) or Light Detection and Ranging (LiDAR) systems that use radio frequency or laser signals, respectively. While reliable and real-time information on vehicle speeds is critical for traffic operations management and autonomous vehicle safety, RADAR and LiDAR systems have some deficiencies, especially in curved road scenarios where the incidence angle varies rapidly. In this paper, we propose a novel speed estimation system, called Visible Light Detection and Ranging (ViLDAR), that builds upon sensing the visible light variation of the vehicle's headlamps. We determine the accuracy of the proposed speed estimator in straight and curved road scenarios. We further present how the algorithm design parameters and the channel noise level affect the speed estimation accuracy. For wide incidence angles, the simulation results show that ViLDAR outperforms RADAR/LiDAR systems in both straight and curved road scenarios. A provisional patent application (US#62/541,913) has been filed for this work.
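The underlying idea, inferring range from received headlamp intensity and then fitting its rate of change, can be sketched under an idealized inverse-square, noise-free model. The paper's actual estimator and channel model differ; the transmit power and sample values below are synthetic.

```python
import math

def estimate_speed(times, intensities, tx_power):
    """Invert the free-space intensity law I = P / d**2 to recover the
    range at each sample, then least-squares fit range vs. time; the
    slope magnitude is the range rate (speed on a straight approach)."""
    d = [math.sqrt(tx_power / i) for i in intensities]
    n = len(times)
    tm = sum(times) / n
    dm = sum(d) / n
    slope = (sum((t - tm) * (x - dm) for t, x in zip(times, d))
             / sum((t - tm) ** 2 for t in times))
    return abs(slope)   # m/s

# synthetic vehicle approaching at 20 m/s on a straight road (assumed, noise-free)
P = 1000.0
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
true_d = [100.0 - 20.0 * t for t in ts]
Is = [P / d ** 2 for d in true_d]
print(round(estimate_speed(ts, Is, P), 2))
```

With measurement noise or a curved trajectory the simple inverse-square inversion degrades, which is where the paper's treatment of design parameters and channel noise comes in.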

    Optimal finite horizon sensing for wirelessly powered devices

    We are witnessing significant advancements in sensor technologies, which have enabled a broad spectrum of applications. Often, the resolution of the data produced by the sensors significantly affects the output quality of an application. We study a sensing resolution optimization problem for a wireless powered device (WPD) that is powered by wireless power transfer (WPT) from an access point (AP). We study a harvest-first-transmit-later type of WPT policy, where the AP first employs RF power to recharge the WPD in the downlink and then collects the data from the WPD in the uplink. The WPD optimizes the sensing resolution, WPT duration, and dynamic power control in the uplink to maximize an application-dependent utility at the AP. The utility of a transmitted packet is only achieved if the data is delivered successfully within a finite time. Thus, we first study a finite horizon throughput maximization problem by jointly optimizing the WPT duration and power control. We prove that the optimal WPT duration obeys a time-dependent threshold form depending on the energy state of the WPD. In the subsequent data transmission stage, the optimal transmit power allocations for the WPD are shown to possess a channel-dependent fractional structure. Then, we optimize the sensing resolution of the WPD by using a Bayesian-inference-based multi-armed bandit with a fast convergence property, to strike a balance between the quality of the sensed data and the probability of successfully delivering it.
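The Bayesian bandit step can be illustrated with Beta-Bernoulli Thompson sampling, a standard Bayesian-inference bandit (the paper's exact algorithm may differ). Each arm is a candidate sensing resolution, and its Bernoulli reward stands in for a successful, useful delivery; the success probabilities below are assumed.

```python
import random

def thompson_sampling(success_probs, horizon=5000, seed=3):
    """Beta-Bernoulli Thompson sampling: sample a success-rate belief
    for each arm, play the arm with the highest sample, update its
    Beta posterior with the observed Bernoulli reward."""
    rng = random.Random(seed)
    n = len(success_probs)
    alpha = [1.0] * n            # Beta(1, 1) uniform priors
    beta = [1.0] * n
    pulls = [0] * n
    for _ in range(horizon):
        samples = [rng.betavariate(alpha[k], beta[k]) for k in range(n)]
        k = max(range(n), key=lambda i: samples[i])
        reward = 1 if rng.random() < success_probs[k] else 0
        alpha[k] += reward
        beta[k] += 1 - reward
        pulls[k] += 1
    return pulls

# assumed utilities of three resolutions: the middle one delivers best
pulls = thompson_sampling([0.30, 0.55, 0.45])
print(pulls)
```

After enough rounds the posterior concentrates and the best-delivering resolution absorbs nearly all of the pulls, which is the balance between data quality and delivery probability the abstract describes.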

    Energy Efficiency and Sum Rate when Massive MIMO meets Device-to-Device Communication

    This paper considers a scenario of short-range communication, known as device-to-device (D2D) communication, where D2D users reuse the downlink resources of a cellular network to transmit directly to their corresponding receivers. In addition, multiple antennas at the base station (BS) are used in order to simultaneously support multiple cellular users using multiuser or massive MIMO. The network model assumes a fixed number of cellular users, with D2D users distributed according to a homogeneous Poisson point process (PPP). Two metrics are studied, namely, average sum rate (ASR) and energy efficiency (EE). We derive tractable expressions and study the tradeoffs between the ASR and EE as functions of the number of BS antennas and the density of D2D users for a given coverage area.
    Comment: 6 pages, 7 figures, to be presented at the IEEE International Conference on Communications (ICC) Workshop on Device-to-Device Communication for Cellular and Wireless Networks, London, UK, June 201
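The PPP part of such a model is easy to simulate: drop a Poisson number of interferers uniformly in a disk and average the rate of a typical D2D link. This sketch is interference-limited with no fading and illustrative parameters only; it omits the paper's cellular users and massive MIMO at the BS.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's inversion sampler for Poisson counts (fine for moderate means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_d2d_rate(density, radius=200.0, link_dist=10.0, alpha=4.0,
                  n_trials=2000, seed=7):
    """Monte Carlo mean rate of a typical D2D link whose interferers
    form a homogeneous PPP in a disk; path loss d**-alpha, no fading."""
    rng = random.Random(seed)
    signal = link_dist ** -alpha
    total = 0.0
    for _ in range(n_trials):
        n_i = poisson(rng, density * math.pi * radius ** 2)
        interf = 0.0
        for _ in range(n_i):
            r = max(1.0, radius * math.sqrt(rng.random()))  # uniform in the disk
            interf += r ** -alpha
        total += math.log2(1.0 + signal / (interf + 1e-12))
    return total / n_trials

sparse = mean_d2d_rate(density=1e-4)   # interferers per square metre (assumed)
dense = mean_d2d_rate(density=5e-4)
print(round(sparse, 2), round(dense, 2))
```

Raising the D2D density increases aggregate interference and lowers the per-link rate, which is one side of the ASR/EE tradeoff the paper quantifies.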

    Big Data Meets Telcos: A Proactive Caching Perspective

    Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and fall into the framework of big data. However, big data is itself a complex phenomenon to handle and comes with its notorious four Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of available data for content popularity estimation. In order to estimate the content popularity, we first collect users' mobile traffic data from a Turkish telecom operator at several base stations over a time interval of several hours. Then, an analysis is carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 GByte of storage size (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
    Comment: 8 pages, 5 figures
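The caching gain can be illustrated with a toy greedy popularity cache: fill the storage with the most-requested contents that fit, then measure the fraction of requests served locally (the backhaul-offload ratio). The catalog below is hypothetical; the paper estimates popularity from real operator traffic.

```python
def proactive_cache(popularity, sizes, capacity):
    """Greedily cache the most-requested contents that fit in the
    storage budget; return the cached set and the fraction of requests
    served from the cache (offloaded from the backhaul)."""
    order = sorted(popularity, key=popularity.get, reverse=True)
    cached, used = set(), 0.0
    for item in order:
        if used + sizes[item] <= capacity:
            cached.add(item)
            used += sizes[item]
    total = sum(popularity.values())
    offloaded = sum(popularity[i] for i in cached)
    return cached, offloaded / total

# hypothetical toy catalog: content -> request count, content -> size (GByte)
pop = {"a": 500, "b": 300, "c": 150, "d": 50}
sz = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 0.5}
cached, hit_ratio = proactive_cache(pop, sz, capacity=3.5)
print(sorted(cached), round(hit_ratio, 2))
```

Even this toy version shows the skew that makes proactive caching pay off: a cache holding just over half of the catalog volume serves the large majority of requests.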