
    PSA: The Packet Scheduling Algorithm for Wireless Sensor Networks

    The main cause of wasted energy in wireless sensor networks is packet collision, and packet scheduling algorithms are introduced to address this problem. Some packet scheduling algorithms, however, can also delay data transmission in real-time wireless sensor networks. This paper presents the Packet Scheduling Algorithm (PSA), which reduces packet congestion at the MAC layer and thereby reduces the overall number of packet collisions in the system. The PSA is compared with plain CSMA/CA and other approaches on network topology benchmarks using mathematical analysis. The PSA outperforms standard CSMA/CA and achieves higher throughput than the other algorithms. On the other hand, its average delay is higher than that of previous works. However, the PSA utilizes the channel better than all of the compared algorithms.
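
    The abstract does not describe the internals of PSA, so the sketch below only illustrates the general idea of MAC-layer packet scheduling that it builds on: instead of letting every queued packet contend for the channel at once (as in plain CSMA/CA), a scheduler releases one packet at a time in a defined order. The class and field names are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a minimal MAC-layer scheduler that drains queued
# packets in deadline order so fewer packets contend for the channel at once.
# PSA itself is not specified in the abstract; Packet and DeadlineScheduler
# are hypothetical names used for illustration.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    deadline: float                       # transmission deadline (seconds)
    node_id: int = field(compare=False)   # sending sensor node
    payload: bytes = field(compare=False, default=b"")

class DeadlineScheduler:
    """Earliest-deadline-first queue standing in for a generic packet scheduler."""
    def __init__(self):
        self._queue = []

    def enqueue(self, packet):
        heapq.heappush(self._queue, packet)

    def next_packet(self):
        # Only one packet is released per slot, limiting MAC-layer contention.
        return heapq.heappop(self._queue) if self._queue else None

if __name__ == "__main__":
    sched = DeadlineScheduler()
    sched.enqueue(Packet(deadline=0.30, node_id=2))
    sched.enqueue(Packet(deadline=0.10, node_id=5))
    print(sched.next_packet().node_id)    # node 5 transmits first
```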

    Multi-View Video Packet Scheduling

    In multiview applications, multiple cameras acquire the same scene from different viewpoints and generally produce correlated video streams. This results in large amounts of highly redundant data. In order to save resources, it is critical to handle this correlation properly during encoding and transmission of the multiview data. In this work, we propose a correlation-aware packet scheduling algorithm for multi-camera networks, where information from all cameras is transmitted over a bottleneck channel to clients that reconstruct the multiview images. The scheduling algorithm relies on a new rate-distortion model that captures the importance of each view in the scene reconstruction. We propose a problem formulation for the optimization of the packet scheduling policies, which adapt to variations in the scene content. Then, we design a low-complexity scheduling algorithm based on a trellis search that selects the subset of candidate packets to be transmitted for effective multiview reconstruction at the clients. Extensive simulation results confirm the gain of our scheduling algorithm when inter-source correlation information is used in the scheduler, compared to scheduling policies with no information about the correlation or non-adaptive scheduling policies. We finally show that increasing the optimization horizon in the packet scheduling algorithm improves the transmission performance, especially in scenarios where the level of correlation varies rapidly over time.
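
    The paper's trellis search and rate-distortion model are not detailed in the abstract. As a rough illustration of correlation-aware packet selection under a bottleneck budget, the sketch below greedily ranks packets by distortion reduction per bit and discounts packets whose view is already well predicted by a previously scheduled, correlated view. All field names, the greedy strategy, and the discount factor are assumptions, not the authors' method.

```python
# Simplified stand-in for correlation-aware packet scheduling: greedily pick
# packets by distortion reduction per bit, discounting views already covered
# by a correlated, previously scheduled view. Illustrative only.
from dataclasses import dataclass

@dataclass
class ViewPacket:
    view: int             # camera/view index
    size_bits: int        # packet size
    gain: float           # expected distortion reduction if delivered
    correlated_with: set  # views whose packets largely predict this one

def schedule(packets, budget_bits, discount=0.5):
    sent_views, plan, used = set(), [], 0
    remaining = list(packets)
    # Re-rank after every pick because a packet's value drops once a
    # correlated view has already been scheduled.
    while remaining:
        def value(p):
            g = p.gain * (discount if p.correlated_with & sent_views else 1.0)
            return g / p.size_bits
        best = max(remaining, key=value)
        if used + best.size_bits > budget_bits:
            break
        plan.append(best)
        used += best.size_bits
        sent_views.add(best.view)
        remaining.remove(best)
    return plan

pkts = [ViewPacket(0, 4000, 10.0, set()),
        ViewPacket(1, 4000, 9.0, {0}),    # largely redundant given view 0
        ViewPacket(2, 3000, 6.0, set())]
print([p.view for p in schedule(pkts, budget_bits=8000)])   # -> [0, 2]
```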

    Packet scheduling under imperfect channel conditions in Long Term Evolution (LTE)

    University of Technology Sydney, Faculty of Engineering and Information Technology. The growing demand for high-speed wireless data services, such as Voice over Internet Protocol (VoIP), web browsing, video streaming and gaming, with constraints on system capacity and delay requirements, poses new challenges for future mobile cellular systems. Orthogonal Frequency Division Multiple Access (OFDMA) is the preferred access technology for the downlink in Long Term Evolution (LTE) standardisation as a solution to these challenges. As a network based on an all-IP packet-switched architecture, LTE employs packet scheduling to satisfy Quality of Service (QoS) requirements, so efficient design of packet scheduling becomes a fundamental issue. The aim of this thesis is to propose a novel packet scheduling algorithm to improve system performance in a practical downlink LTE system. The thesis first focuses on time domain packet scheduling algorithms: a number of them are studied and some well-known ones are compared in downlink LTE, and a packet scheduling algorithm is identified that is able to provide a better trade-off between maximizing system performance and guaranteeing fairness. Thereafter, some frequency domain packet scheduling schemes are introduced and examples of QoS-aware packet scheduling algorithms employing these schemes are presented. To balance scheduling performance against computational complexity while tolerating the time-varying wireless channel, a novel scheduling scheme and a packet scheduling algorithm are proposed. Simulation results show that the proposed algorithm achieves an overall reasonable system performance. Packet scheduling is further studied in a practical channel environment that assumes imperfect Channel Quality Information (CQI). To alleviate the performance degradation due to multiple simultaneous imperfect channel conditions, a packet scheduling algorithm based on channel prediction and the proposed scheduling scheme is developed for the downlink LTE system for Guaranteed Bit Rate (GBR) services. Simulation results show that the Kalman-filter-based channel predictor can effectively recover the correct CQI from erroneous channel quality feedback, and the system performance is therefore significantly improved.
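
    As a minimal illustration of the Kalman-filter-based CQI prediction mentioned above, the sketch below runs a scalar Kalman filter with a random-walk state model over noisy CQI reports. The thesis's actual state model and noise parameters are not given in the abstract, so the values here are assumptions for illustration only.

```python
# A minimal scalar Kalman filter for smoothing noisy CQI feedback.
# The random-walk model and the q, r values are illustrative assumptions,
# not the thesis's exact channel predictor.
class CqiKalmanFilter:
    def __init__(self, initial_cqi=7.0, q=0.05, r=2.0):
        self.x = initial_cqi   # estimated CQI
        self.p = 1.0           # estimate variance
        self.q = q             # process noise (CQI drift between reports)
        self.r = r             # measurement noise (feedback error)

    def update(self, reported_cqi):
        # Predict step (random-walk model: CQI expected to stay put).
        p_pred = self.p + self.q
        # Update step: blend the noisy report with the prediction.
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * (reported_cqi - self.x)
        self.p = (1.0 - k) * p_pred
        return self.x

f = CqiKalmanFilter()
for z in [7, 9, 3, 8, 7]:          # erroneous feedback samples
    print(round(f.update(z), 2))   # smoothed CQI handed to the scheduler
```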

    Dynamic Packet Scheduling in Wireless Networks

    We consider protocols that serve communication requests arising over time in a wireless network that is subject to interference. Unlike previous approaches, we take the geometry of the network and power control into account, both of which allow the network's performance to be increased significantly. We introduce a stochastic and an adversarial model to bound the packet injection. Although it is the primary motivation, this approach is not only suitable for models based on the signal-to-interference-plus-noise ratio (SINR). It also covers virtually all other common interference models, for example the multiple-access channel, the radio-network model, the protocol model, and distance-2 matching. Packet-routing networks that allow each edge or each node to transmit or receive one packet at a time can be modeled as well. Starting from algorithms for the respective scheduling problem with static transmission requests, we build distributed stable protocols. This is more involved than in previous, similar approaches because the algorithms we consider do not necessarily scale linearly when scaling the input instance. We can guarantee a throughput that is as large as that of the original static algorithm. In particular, for SINR models the competitive ratios of the protocol, in comparison to optimal ones in the respective model, are between constant and O(log^2 m) for a network of size m.
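
    A concrete ingredient of the SINR model referenced above is the feasibility test for a set of simultaneous transmissions: each link must reach an SINR of at least a threshold beta against the interference of all other links plus noise. The sketch below implements that check with an illustrative path-loss gain d^(-alpha); the constants and example links are assumptions, not values from the paper.

```python
# Feasibility check in the physical (SINR) interference model: a set of
# simultaneous transmissions succeeds if every link's SINR clears beta.
# alpha, noise, and the example links are illustrative values.
import math

def sinr_feasible(links, beta=2.0, alpha=3.0, noise=1e-9):
    # links: list of (tx_pos, rx_pos, power) tuples with 2-D positions.
    def gain(a, b):
        return math.dist(a, b) ** (-alpha)   # simple path-loss gain
    for i, (tx_i, rx_i, p_i) in enumerate(links):
        signal = p_i * gain(tx_i, rx_i)
        interference = sum(p_j * gain(tx_j, rx_i)
                           for j, (tx_j, _, p_j) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

links = [((0, 0), (1, 0), 1.0),      # short link
         ((10, 0), (11, 0), 1.0)]    # distant link, little mutual interference
print(sinr_feasible(links))          # True: both links can be scheduled together
```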