
    Joint Access Point Placement and Channel Assignment for 802.11 Wireless Local Area Networks

    To deploy a multi-cell IEEE 802.11 wireless local area network (WLAN), access point (AP) placement and channel assignment are the two primary design issues. For a given pattern of traffic demands, we aim at maximizing not only the overall system throughput but also the fairness in resource sharing among mobile terminals. A novel method for estimating the system throughput of a multi-cell WLAN is proposed. An important feature of this method is that co-channel overlap is allowed. Unlike conventional approaches that decouple AP placement and channel assignment into two phases, we propose to solve the two problems jointly for better performance. Due to the high computational complexity of exhaustive search, an efficient local search algorithm, called the patching algorithm, is also designed. Numerical results show that for a typical indoor environment, the patching algorithm provides close-to-optimal performance with much lower time complexity.
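
    The abstract does not spell out the patching algorithm's moves, so the following is only a minimal Python sketch of what a joint local search over AP placement and channel assignment can look like: estimate_objective, the candidate-site grid, and the channel set are illustrative placeholders, not the paper's throughput/fairness model.

```python
# Hypothetical sketch of a joint local search over AP placement and channel
# assignment; the paper's actual "patching algorithm" and throughput model
# are not reproduced here -- estimate_objective() is a placeholder.
import random

CANDIDATE_SITES = [(x, y) for x in range(5) for y in range(5)]  # assumed grid of candidate AP sites
CHANNELS = [1, 6, 11]                                           # assumed non-overlapping 2.4 GHz channels
NUM_APS = 4

def estimate_objective(placement):
    """Placeholder for the joint throughput-and-fairness estimate."""
    rng = random.Random(hash(tuple(sorted(placement.items()))))
    return rng.random()

def local_search(iterations=200):
    # Start from a random joint configuration: AP site -> channel.
    sites = random.sample(CANDIDATE_SITES, NUM_APS)
    best = {s: random.choice(CHANNELS) for s in sites}
    best_val = estimate_objective(best)
    for _ in range(iterations):
        trial = dict(best)
        site = random.choice(list(trial))
        if random.random() < 0.5:
            # Placement move: relocate one AP to an unused candidate site.
            new_site = random.choice([s for s in CANDIDATE_SITES if s not in trial])
            trial[new_site] = trial.pop(site)
        else:
            # Channel move: re-assign the channel of one AP.
            trial[site] = random.choice(CHANNELS)
        val = estimate_objective(trial)
        if val > best_val:                  # keep only improving joint moves
            best, best_val = trial, val
    return best, best_val

print(local_search())
```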

    An Efficient Queueing Policy for Input-Buffered Packet Switches

    An efficient self-adaptive packet queueing policy, called Queueing with Output Address Grouping (QOAG), is proposed for optimizing the performance of an input-buffered packet switch. Each input port of the N×N switch under consideration has Q queues and each queue has B packet buffers, where 1<Q<N. Under QOAG, a packet arriving at an input port is assigned to the queue that already holds backlogged packets with the same output address as the new packet. If the output address of the new packet differs from those of all packets currently buffered in all queues, the packet is assigned to the shortest queue. The performance of QOAG is compared by simulation with the Odd-Even queueing policy of Kolias and Kleinrock (Proceedings of IEEE ICC '96, pp. 1674-1679, 1996). The Zipf distribution (version II) is used to model non-uniform packet output distributions. We found that for a 16×16 switch with B=20 buffers per queue and input load p=0.7, the mean packet delays are 58.1 and 91.2 time slots and the mean throughputs are 0.474 and 0.355 for QOAG and Odd-Even queueing, respectively. In other words, relative to QOAG, Odd-Even queueing incurs 57% higher mean packet delay and 25% lower throughput.
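
    A minimal sketch of the QOAG assignment rule as stated above (same-output-address queue first, otherwise the shortest queue); the QOAGInputPort class and its packet representation are illustrative, not the paper's simulator.

```python
# Minimal sketch of the QOAG assignment rule; queue and packet
# representations are illustrative, not the paper's simulation model.
from collections import deque

class QOAGInputPort:
    def __init__(self, num_queues, buffer_size):
        self.queues = [deque() for _ in range(num_queues)]  # Q queues per input port
        self.buffer_size = buffer_size                      # B packet buffers per queue

    def enqueue(self, packet_output_addr):
        # Prefer a queue that already holds a packet for the same output address.
        for q in self.queues:
            if packet_output_addr in q and len(q) < self.buffer_size:
                q.append(packet_output_addr)
                return True
        # Otherwise place the packet in the shortest (non-full) queue.
        shortest = min(self.queues, key=len)
        if len(shortest) < self.buffer_size:
            shortest.append(packet_output_addr)
            return True
        return False  # packet dropped: all queues are full

port = QOAGInputPort(num_queues=4, buffer_size=20)
for dest in [3, 7, 3, 1, 7, 7]:
    port.enqueue(dest)
print([list(q) for q in port.queues])
```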

    Link quality based EDCA MAC protocol for WAVE vehicular networks

    WAVE vehicular networks adopt Enhanced Distributed Channel Access (EDCA) as the MAC-layer protocol. In EDCA, different values of the arbitration inter-frame space (AIFS) can be used for different traffic classes: the smaller the AIFS value, the higher the priority a device has in accessing the shared channel. In this paper, we exploit the possibility of assigning AIFS values according to channel/link quality. Notably, a device with better link quality can transmit at a higher data rate, so our key objective is to maximize the system throughput between a roadside unit (RSU) and the onboard units (OBUs) passing by. Since IEEE 802.11p supports eight transmission rates, two schemes for mapping AIFS values to transmission rates are studied. The first (8-level-AIFS) uses eight distinct AIFS values, one for each transmission rate; the second (4-level-AIFS) uses four distinct AIFS values, one for every two adjacent transmission rates. Their throughput performance is studied by simulations. It is interesting to note that OBUs tend to experience the same pattern of channel-quality fluctuation because of their similar movement patterns, so assigning AIFS values according to link quality remains fair. © 2013 IEEE.
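
    As a rough illustration of the two mapping schemes, the sketch below assigns smaller AIFS values to higher 802.11p data rates; the concrete AIFS numbers are assumptions made for the example, not the values used in the paper.

```python
# Illustrative mapping of the eight 802.11p data rates (10 MHz channel,
# 3..27 Mb/s) to AIFS values: better link (higher rate) -> smaller AIFS.
# The specific AIFS numbers below are assumptions for this sketch.
RATES_MBPS = [3, 4.5, 6, 9, 12, 18, 24, 27]      # lowest to highest rate

def aifs_8_level(rate):
    # 8-level-AIFS: one distinct AIFS value per transmission rate.
    idx = RATES_MBPS.index(rate)
    return 2 + (len(RATES_MBPS) - 1 - idx)       # 27 Mb/s -> AIFS 2, 3 Mb/s -> AIFS 9

def aifs_4_level(rate):
    # 4-level-AIFS: one AIFS value shared by every two adjacent rates.
    idx = RATES_MBPS.index(rate) // 2
    return 2 + (len(RATES_MBPS) // 2 - 1 - idx)  # {24, 27} -> 2, ..., {3, 4.5} -> 5

for r in RATES_MBPS:
    print(r, aifs_8_level(r), aifs_4_level(r))
```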

    FTMS: an efficient multicast scheduling algorithm for feedback-based two-stage switch

    Two major challenges in designing high-speed multicast switches are the expensive multicast switch fabric and the highly complicated central scheduler. While the recent load-balanced switch architecture uses a simple unicast switch fabric and does not require a central scheduler, it is only good at handling unicast traffic. In this paper, we extend an existing load-balanced switch, the feedback-based two-stage switch, to support multicast traffic. In particular, an efficient multicast scheduling algorithm (FTMS) is designed. With FTMS, head-of-line (HOL) packet blocking at each input port is eliminated by adopting 'pointer' queues. To cut down queueing delay, packet replication is carried out at the middle-stage ports. Simulation results show that, compared with other multicast scheduling algorithms, FTMS always provides the highest throughput. © 2012 IEEE.
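
    The abstract does not define the 'pointer' queues precisely, so the following is a speculative sketch of one way such a structure could avoid HOL blocking at an input port: each multicast packet is stored once and per-destination queues hold only pointers to it. The PointerQueuedInput class illustrates the data structure, not the FTMS scheduler itself.

```python
# A hedged sketch of one possible 'pointer' queue organisation at an input
# port: packets are stored once, per-destination queues hold only indices,
# so a blocked destination does not block the others. Not the FTMS scheduler.
from collections import deque

class PointerQueuedInput:
    def __init__(self, num_ports):
        self.store = {}                               # packet_id -> [payload, pending fanout count]
        self.per_dest = [deque() for _ in range(num_ports)]
        self.next_id = 0

    def arrive(self, payload, fanout_set):
        pid = self.next_id
        self.next_id += 1
        self.store[pid] = [payload, len(fanout_set)]  # store the multicast packet once
        for d in fanout_set:
            self.per_dest[d].append(pid)              # enqueue one pointer per destination
        return pid

    def serve(self, dest):
        # Send the copy destined to 'dest'; free the packet after its last copy.
        if not self.per_dest[dest]:
            return None
        pid = self.per_dest[dest].popleft()
        payload, remaining = self.store[pid]
        if remaining == 1:
            del self.store[pid]
        else:
            self.store[pid][1] = remaining - 1
        return payload

inp = PointerQueuedInput(num_ports=4)
inp.arrive("pkt-A", {0, 2, 3})
print(inp.serve(2), inp.serve(0), inp.serve(3), inp.serve(1))
```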

    Routing and re-routing in a LEO/MEO two-tier mobile satellite communications system with inter-satellite links

    A novel LEO/MEO two-tier satellite communication system with inter-satellite links (ISLs) is proposed for providing multimedia services to global mobile users. This two-tier architecture reduces the transmission delay for long-distance users via MEO satellites while keeping the benefits of using LEO satellites as the service access nodes, and it simplifies routing and re-routing during handoff. Since the physical topology of the underlying network is time-dependent, routing is crucial for guaranteeing the delay and delay-variation performance of interactive applications. We decompose the routing problem into two parts: routing in the access network and routing in the core MEO ISL network. For the access network, a new routing algorithm called the maximum holding access protocol (MHAP) is proposed to minimize the number of LEO handoffs. For the core MEO ISL network, both minimum transmission delay routing (MTDR) and minimum transmission time jitter routing (MTTJR) are investigated. Using computer simulations, we show that the proposed routing algorithms reduce the probability of call re-routing and are thus well suited to providing interactive multimedia services.
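
    The selection rule below is inferred from the protocol's name and stated goal (minimizing LEO handoffs) rather than taken from the paper: among the currently visible LEO satellites, pick the one with the longest remaining coverage time. The visibility computation is stubbed out.

```python
# Minimal sketch of a "maximum holding" access rule: choose the visible LEO
# satellite with the longest remaining coverage (holding) time, so that the
# number of handoffs is minimised. Orbital geometry is stubbed out.
def remaining_visibility_s(satellite, user_position, now):
    """Placeholder: would be derived from the constellation's orbital geometry."""
    return satellite["los_end_time"] - now

def select_access_satellite(visible_leos, user_position, now):
    return max(visible_leos,
               key=lambda sat: remaining_visibility_s(sat, user_position, now))

visible = [
    {"id": "LEO-17", "los_end_time": 1450.0},
    {"id": "LEO-42", "los_end_time": 1820.0},   # stays visible the longest
    {"id": "LEO-09", "los_end_time": 1600.0},
]
print(select_access_satellite(visible, user_position=(22.3, 114.2), now=1200.0)["id"])
```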

    On the scalability of feedback-based two-stage switch

    The feedback-based two-stage switch does not require a central scheduler and can provide close to 100% throughput [3]. However, the number of crosspoints required for its two stages of switch fabric is 2N², and the average packet delay (even under light traffic load) is on the order of O(N) slots, where N is the switch size. To improve the performance of the feedback-based two-stage switch when N is large, we adopt the Clos network to construct a large switch from a set of smaller feedback-based switch modules; we call the result a Clos-feedback switch. The potential problem of packet mis-sequencing is solved by application-flow-based load balancing. With recursive decomposition, a Clos network degenerates into a Benes network. We show that for a Clos-feedback switch, the number of crosspoints required is reduced to 4N(2log₂N - 1) and the average packet delay is cut down to O(log₂N) slots. © 2012 IEEE.
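
    A quick sanity check of the two crosspoint counts quoted above (2N² for a single feedback-based two-stage switch versus 4N(2log₂N - 1) for the Clos-feedback construction), assuming N is a power of two:

```python
# Compare the crosspoint counts quoted in the abstract, assuming N is a
# power of two so that log2(N) is an integer.
from math import log2

for N in (16, 64, 256, 1024):
    two_stage = 2 * N ** 2                      # single feedback-based two-stage switch
    clos_feedback = 4 * N * (2 * log2(N) - 1)   # Clos-feedback switch
    print(f"N={N:5d}  two-stage={two_stage:9d}  Clos-feedback={int(clos_feedback):8d}")
```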

    Monitoring trail: on fast link failure localization in all-optical WDM mesh networks

    We consider an optical-layer monitoring mechanism for fast link-failure localization in all-optical wavelength-division-multiplexing (WDM) mesh networks. A novel framework of all-optical monitoring, called the monitoring trail (m-trail), is introduced. It differs from the existing monitoring-cycle (m-cycle) method by removing the cycle constraint. As a result, the m-trail provides a general all-optical monitoring structure that includes simple m-cycles, non-simple m-cycles, and open trails as special cases. Based on an in-depth theoretical analysis, we formulate an efficient integer linear program (ILP) for m-trail design to achieve unambiguous localization of each link failure. The objective is to minimize the total monitoring cost (i.e., monitor cost plus bandwidth cost) of all m-trails in the solution. Numerical results show that the proposed m-trail scheme significantly outperforms its m-cycle-based counterpart.
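
    To make the unambiguous-localization idea concrete, here is a toy example (not a designed ILP solution): each link is traversed by a distinct, non-empty subset of m-trails, so the vector of triggered monitors identifies the failed link.

```python
# Toy illustration of alarm-code based localization with m-trails. The
# trails below are invented for the example, not an output of the ILP.
M_TRAILS = {                     # trail name -> set of links it traverses
    "T1": {"e1", "e2", "e4"},
    "T2": {"e2", "e3"},
    "T3": {"e3", "e4", "e5"},
}

def alarm_code(link):
    # Bit vector of which monitors raise an alarm when 'link' fails.
    return tuple(int(link in links) for links in M_TRAILS.values())

codes = {alarm_code(l): l for l in {"e1", "e2", "e3", "e4", "e5"}}
assert len(codes) == 5           # all codes distinct -> unambiguous localization

def localize(observed_alarms):
    return codes.get(tuple(observed_alarms), "no single-link failure matches")

print(localize([1, 0, 1]))       # alarms on T1 and T3 -> link e4 failed
```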

    Monitoring Cycle Design for Fast Link Failure Localization in All-Optical Networks

    A monitoring cycle (m-cycle) is a pre-configured optical loop-back connection of supervisory wavelengths with a dedicated monitor. In an all-optical network (AON), if a link fails, the supervisory optical signals in the set of m-cycles covering this link are disrupted, and the link failure can be localized from the alarm code generated by the corresponding monitors. In this paper, we first formulate an optimal integer linear program (ILP) for m-cycle design. The objective is to minimize the monitoring cost, which consists of the monitor cost and the bandwidth cost (i.e., supervisory wavelength-links). To reduce the ILP running time, a heuristic ILP is also formulated. To the best of our knowledge, this is the first effort in m-cycle design using ILP, and it leads to two contributions: 1) non-simple m-cycles are considered; and 2) an efficient tradeoff is allowed between the monitor cost and the bandwidth cost. Numerical results show that our ILP-based approach outperforms existing m-cycle design algorithms with a significant performance gain.
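
    The paper's ILP is not reproduced in the abstract; the sketch below is a simplified formulation in the same spirit, assuming the PuLP package is available: select candidate m-cycles so that every link is covered and every pair of links yields a different alarm code, while minimizing monitor cost plus supervisory wavelength-link cost. The topology, candidate cycles, and cost weights are invented for illustration.

```python
# Simplified ILP sketch in the spirit of m-cycle design, using PuLP.
# Candidate cycles, links, and costs are made up for this example.
from itertools import combinations
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

LINKS = ["e1", "e2", "e3", "e4"]
CANDIDATE_CYCLES = {                       # cycle name -> links it traverses
    "c1": {"e1", "e2"},
    "c2": {"e2", "e3"},
    "c3": {"e3", "e4"},
    "c4": {"e1", "e4"},
    "c5": {"e1", "e3"},
}
MONITOR_COST, WL_COST = 10, 1              # assumed relative costs

prob = LpProblem("m_cycle_design", LpMinimize)
x = {c: LpVariable(f"x_{c}", cat=LpBinary) for c in CANDIDATE_CYCLES}

# Objective: one monitor per selected cycle plus one supervisory
# wavelength-link per link it traverses.
prob += lpSum((MONITOR_COST + WL_COST * len(links)) * x[c]
              for c, links in CANDIDATE_CYCLES.items())

# Every link must be covered by at least one selected cycle.
for e in LINKS:
    prob += lpSum(x[c] for c, links in CANDIDATE_CYCLES.items() if e in links) >= 1

# Any two links must differ in at least one selected cycle (distinct alarm codes).
for e, f in combinations(LINKS, 2):
    prob += lpSum(x[c] for c, links in CANDIDATE_CYCLES.items()
                  if (e in links) != (f in links)) >= 1

prob.solve()
print([c for c in CANDIDATE_CYCLES if value(x[c]) > 0.5])
```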

    Request-peer selection for load-balancing in P2P live streaming systems

    Unlike peer-to-peer (P2P) file sharing, P2P live streaming systems have to meet real-time playback constraints, which makes it challenging yet crucial to maximize peer uplink bandwidth utilization so that content pieces are delivered in time. In general, this is achieved by tailor-made piece-selection and request-peer-selection algorithms whose design philosophy is to regulate network traffic and balance the load among peers. In this paper, we propose a new request-peer selection algorithm. In particular, a peer estimates the service response time (SRT) between itself and each neighboring peer, measured from when a data-piece request is sent until the requested piece arrives. When a peer makes a piece request, the neighbor with a smaller SRT and fewer data pieces is favored among the potential providers, because a smaller SRT implies excess serving capacity and holding fewer data pieces suggests that fewer piece requests have been received. We evaluate the performance of our request-peer selection algorithm through extensive packet-level simulations. The results show that the traffic load in the network is better balanced: the spread of the normalized number of data packets uploaded across peers becomes smaller, and the number of repeated piece requests generated by each peer (due to request failures) is significantly reduced. We also found that the load on the streaming server is reduced, and the overall quality of service, measured by playback continuity, startup delay, etc., is improved as well. © 2012 IEEE.
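
    A hedged sketch of the neighbor-scoring idea described above: among neighbors holding the wanted piece, favor the one with a smaller estimated SRT and fewer data pieces. The linear weighting is an assumption; the paper's exact selection rule may differ.

```python
# Sketch of SRT-and-piece-count based request-peer selection. The weights
# below are assumptions; only the general preference matches the abstract.
def choose_provider(neighbors, wanted_piece, srt_weight=1.0, piece_weight=0.01):
    candidates = [n for n in neighbors if wanted_piece in n["pieces"]]
    if not candidates:
        return None
    # Lower score is better: a short SRT and few held pieces both reduce it.
    return min(candidates,
               key=lambda n: srt_weight * n["srt_s"] + piece_weight * len(n["pieces"]))

neighbors = [
    {"id": "peerA", "srt_s": 0.35, "pieces": set(range(0, 120))},
    {"id": "peerB", "srt_s": 0.22, "pieces": set(range(40, 90))},   # fast and lightly loaded
    {"id": "peerC", "srt_s": 0.60, "pieces": set(range(0, 200))},
]
print(choose_provider(neighbors, wanted_piece=55)["id"])   # -> peerB
```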

    A novel push-and-pull hybrid data broadcast scheme for wireless information networks

    A new push-and-pull hybrid data broadcast scheme is proposed for providing wireless information services to three types of clients: general, pull, and priority clients. Only pull and priority clients have a back channel for sending requests to the broadcast server. Because the number of pull and priority clients is small, the hybrid scheme has no scalability problem. Based on the requests collected from pull and priority clients, the server estimates how the interest pattern of the whole client population is changing and adjusts the broadcast schedule on the push channel for the next broadcast cycle accordingly. Besides the push channel, a small amount of broadcast bandwidth is allocated to a pull channel. The data to be broadcast on the pull channel are decided by the server in real time, with priority given to requests from priority clients. Simulations show that, with a time-varying client interest pattern, the average data access time for all three types of clients can be minimized. Because of their priority in using the pull channel, priority clients achieve the lowest access time, and pull clients achieve a lower access time than general clients. To further improve performance, the hybrid scheme with a local client cache is also investigated.
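
    An illustrative sketch of the push/pull split described above: the push schedule for the next cycle is rebuilt from the request counts observed on the back channel, and the pull channel answers priority-client requests first. Item names, the cycle length, and the popularity estimator are all assumptions, not the paper's scheme.

```python
# Sketch of a push/pull hybrid broadcast server: the push schedule tracks
# observed demand, and the pull channel serves priority clients first.
# The popularity estimator and cycle parameters are assumptions.
from collections import Counter, deque

def build_push_schedule(request_counts, slots_per_cycle):
    # Broadcast the most-requested items; hotter items get more slots.
    total = sum(request_counts.values()) or 1
    schedule = []
    for item, count in request_counts.most_common():
        slots = max(1, round(slots_per_cycle * count / total))
        schedule.extend([item] * slots)
    return schedule[:slots_per_cycle]

def serve_pull_channel(priority_requests, pull_requests, slots):
    # Priority clients' requests are answered before ordinary pull requests.
    queue = deque(list(priority_requests) + list(pull_requests))
    return [queue.popleft() for _ in range(min(slots, len(queue)))]

observed = Counter({"weather": 9, "traffic": 6, "news": 3, "stocks": 2})
print(build_push_schedule(observed, slots_per_cycle=10))
print(serve_pull_channel(["stocks"], ["news", "weather"], slots=2))
```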