
    Adaptive Network Coding for Scheduling Real-time Traffic with Hard Deadlines

    We study adaptive network coding (NC) for scheduling real-time traffic over a single-hop wireless network. To meet the hard deadlines of real-time traffic, it is critical to strike a balance between maximizing the throughput and minimizing the risk that the entire block of coded packets may not be decodable by the deadline. Thus motivated, we explore adaptive NC, where the block size is adapted based on the remaining time to the deadline, by casting this sequential block size adaptation problem as a finite-horizon Markov decision process. One interesting finding is that the optimal block size and its corresponding action space monotonically decrease as the deadline approaches, and the optimal block size is bounded by the "greedy" block size. These unique structures make it possible to narrow down the search space of dynamic programming, building on which we develop a monotonicity-based backward induction algorithm (MBIA) that can solve for the optimal block size in polynomial time. Since channel erasure probabilities can be time-varying in a mobile network, we further develop a joint real-time scheduling and channel learning scheme with adaptive NC that can adapt to channel dynamics. We also generalize the analysis to multiple flows with hard deadlines and long-term delivery ratio constraints, devise a low-complexity online scheduling algorithm integrated with the MBIA, and then establish its asymptotic throughput optimality. In addition to analysis and simulation results, we perform high-fidelity wireless emulation tests with real radio transmissions to demonstrate the feasibility of the MBIA in finding the optimal block size in real time.
    Comment: 11 pages, 13 figures
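
    The abstract above casts block-size adaptation as a finite-horizon Markov decision process solved by monotonicity-based backward induction. The snippet below is only a minimal sketch of plain finite-horizon backward induction for a simplified version of that problem, not the paper's MBIA: it assumes i.i.d. packet erasures with probability P_ERASE, a reward of k for a block of k coded packets fully delivered before the deadline, and an illustrative cap MAX_BLOCK on candidate block sizes; all names and parameter values are hypothetical.

```python
import math
from functools import lru_cache

# Minimal sketch of finite-horizon backward induction for block-size
# adaptation (a simplified stand-in, not the paper's MBIA). Assumptions:
# a single flow, i.i.d. packet erasures with probability P_ERASE, and a
# reward of k when a block of k coded packets is fully delivered (k
# successful slots) before the remaining slots run out.

P_ERASE = 0.3     # assumed channel erasure probability (illustrative)
HORIZON = 20      # slots remaining until the hard deadline
MAX_BLOCK = 8     # assumed cap on candidate block sizes (illustrative)


def slots_pmf(k, s, p_erase):
    """P(a block of k packets finishes in exactly s slots): the s-th slot
    is the k-th success (negative binomial distribution)."""
    if s < k:
        return 0.0
    q = 1.0 - p_erase
    return math.comb(s - 1, k - 1) * q**k * p_erase**(s - k)


@lru_cache(maxsize=None)
def value(t):
    """Optimal expected throughput with t slots left, and the block size
    achieving it. Backward induction with terminal condition V(0) = 0."""
    if t <= 0:
        return 0.0, 0
    best_v, best_k = 0.0, 0
    for k in range(1, min(MAX_BLOCK, t) + 1):
        v = 0.0
        for s in range(k, t + 1):            # block finishes in exactly s slots
            v += slots_pmf(k, s, P_ERASE) * (k + value(t - s)[0])
        if v > best_v:
            best_v, best_k = v, k
    return best_v, best_k


if __name__ == "__main__":
    # In this simplified model the chosen block size tends to shrink as
    # fewer slots remain, echoing the monotone structure the MBIA exploits.
    for t in range(HORIZON, 0, -1):
        v, k = value(t)
        print(f"slots left={t:2d}  block size={k}  expected throughput={v:.2f}")
```

    The exhaustive loop over k is exactly where the MBIA would instead prune the action space, using the property that the optimal block size decreases monotonically as the deadline approaches.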

    Dynamic Network State Learning Model for Mobility Based WMSN Routing Protocol

    The rising demand for wireless multimedia sensor networks (WMSNs) has motivated academia and industry to develop energy-efficient, Quality of Service (QoS)-aware, and delay-sensitive communication systems for real-world applications such as multimedia broadcast, security and surveillance systems, and intelligent transport systems. Energy efficiency, QoS, and delay-sensitive transmission are therefore essential requirements of WMSNs. Most existing approaches use either physical-layer or system-level schemes, which individually cannot guarantee optimal transmission decisions. Combining physical-layer power control, adaptive modulation and coding, and system-level dynamic power management (DPM) is found to be significant in meeting these demands. With this motivation, this paper derives a unified model using enhanced reinforcement learning and stochastic optimization. Exploiting both physical-layer and system-level network state information, the proposed dynamic network state learning model (NSLM) applies stochastic optimization to learn network state activity and derive an optimal DPM policy and PHY switching schedule. NSLM uses both known and unknown network state variables to derive the transmission and PHY switching policy, formulating DPM as a constrained Markov decision process (MDP). The use of a Hidden Markov Model and Lagrangian relaxation speeds up NSLM's convergence, enabling delay-sensitive, QoS-enriched, and bandwidth- and energy-efficient transmission in WMSNs under uncertain network conditions. The proposed NSLM DPM model outperforms traditional Q-learning-based DPM in terms of buffer cost, holding cost, overflow, energy consumption, and bandwidth utilization.
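
    The abstract above formulates DPM as a constrained Markov decision process handled with Lagrangian relaxation. The snippet below is a minimal sketch of that general technique on a toy sleep/active queueing model, not the NSLM itself: Bernoulli packet arrivals, an energy cost for staying active, a constraint on the expected discounted holding cost, and a Lagrange multiplier updated by dual (subgradient) ascent. All parameters and helper names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of dynamic power management (DPM) posed as a constrained
# MDP and solved via Lagrangian relaxation. This is a toy single-queue
# model for illustration, not the NSLM formulation from the abstract.

B_MAX    = 10      # buffer capacity (packets)
P_ARR    = 0.4     # packet arrival probability per slot
E_ACTIVE = 1.0     # energy cost of staying active for one slot
H_COST   = 0.3     # holding cost per buffered packet per slot
GAMMA    = 0.95    # discount factor
BUDGET   = 8.0     # cap on expected discounted holding cost
STATES   = B_MAX + 1          # state = buffer occupancy
ACTIONS  = (0, 1)             # 0 = sleep, 1 = active (serves one packet)


def step_distribution(b, a):
    """Return [(prob, next_state)] for buffer occupancy b under action a."""
    after = b - min(b, a)                      # active mode serves one packet
    return [(P_ARR, min(after + 1, B_MAX)), (1.0 - P_ARR, after)]


def value_iteration(lmbda, iters=500):
    """Greedy policy for the Lagrangian cost: energy + lambda * holding."""
    V = np.zeros(STATES)
    for _ in range(iters):
        Q = np.zeros((STATES, len(ACTIONS)))
        for b in range(STATES):
            for a in ACTIONS:
                cost = E_ACTIVE * a + lmbda * H_COST * b
                Q[b, a] = cost + GAMMA * sum(p * V[s2]
                                             for p, s2 in step_distribution(b, a))
        V = Q.min(axis=1)
    return Q.argmin(axis=1)


def policy_costs(policy, iters=500):
    """Expected discounted energy and holding cost of a fixed policy."""
    Ve, Vh = np.zeros(STATES), np.zeros(STATES)
    for _ in range(iters):
        Ve_new, Vh_new = np.zeros(STATES), np.zeros(STATES)
        for b in range(STATES):
            trans = step_distribution(b, policy[b])
            Ve_new[b] = E_ACTIVE * policy[b] + GAMMA * sum(p * Ve[s2] for p, s2 in trans)
            Vh_new[b] = H_COST * b + GAMMA * sum(p * Vh[s2] for p, s2 in trans)
        Ve, Vh = Ve_new, Vh_new
    return Ve[0], Vh[0]          # costs starting from an empty buffer


if __name__ == "__main__":
    lmbda, step = 1.0, 0.05
    for _ in range(30):                       # dual (subgradient) ascent on lambda
        policy = value_iteration(lmbda)
        energy, holding = policy_costs(policy)
        lmbda = max(0.0, lmbda + step * (holding - BUDGET))
    print("lambda*      :", round(lmbda, 3))
    print("policy (b->a):", policy.tolist())
    print("energy cost  :", round(energy, 2), " holding cost:", round(holding, 2))
```

    Under this kind of formulation the resulting policy is typically threshold-shaped: sleep while the buffer is short and serve once occupancy grows past a level set by the converged multiplier.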