
    Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?

    This paper investigates two strategies for reducing communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines the two strategies is also considered. Performance is evaluated via the probabilistic delay and the effective capacity. For the probabilistic delay, the delay violation probability, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mmWave systems, the effective capacity, i.e., the service capability under a given quality-of-service (QoS) requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. These numerical results show that, for a given average system gain, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials for reducing the end-to-end communication delay. For instance, traffic dispersion outperforms network densification when the average system gain and the arrival rate are both high, but can be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or the relay density is always beneficial, while the resulting gain depends jointly on the arrival rate and the average system gain. Therefore, a proper transmission scheme should be selected to optimize the delay performance, according to the given conditions on the arrival traffic and the system service capability.
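
    For reference, a standard form of these two tools (following the effective-capacity framework of Wu and Negi; the paper's exact bounds are derived via stochastic network calculus and may differ in their constants):

        \alpha(\theta) = -\lim_{t \to \infty} \frac{1}{\theta t} \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right],
        \qquad
        \Pr\{ D > d_{\max} \} \approx \gamma \, e^{-\theta \, \alpha(\theta) \, d_{\max}},

    where S(t) is the cumulative service offered in [0, t], \theta > 0 is the QoS exponent (larger \theta models a stricter delay requirement), and \gamma is the probability that the queue is non-empty.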

    Tractable Resource Management with Uplink Decoupled Millimeter-Wave Overlay in Ultra-Dense Cellular Networks

    The forthcoming 5G cellular network is expected to overlay millimeter-wave (mmW) transmissions on the incumbent microwave (μW) architecture, so the mmW and μW resource management should be harmonized with each other. This paper aims at maximizing the overall downlink (DL) rate subject to a minimum uplink (UL) rate constraint, and concludes that, under time-division duplex (TDD) mmW operation, mmW tends to focus on DL transmissions while μW is given high priority for complementing the UL. This UL dedication of μW results from the limited use of mmW UL bandwidth due to excessive power consumption and/or a high peak-to-average power ratio (PAPR) at mobile users. To further relieve this UL bottleneck, we propose mmW UL decoupling, which allows each legacy μW base station (BS) to receive mmW signals. Its impact on mm-μW resource management is provided in a tractable way by virtue of a novel closed-form mm-μW spectral efficiency (SE) derivation. In an ultra-dense cellular network (UDN), our derivation verifies that the mmW (or μW) SE is a logarithmic function of the BS-to-user density ratio. This strikingly simple yet practically valid analysis is enabled by exploiting stochastic geometry in conjunction with real three-dimensional (3D) building blockage statistics from Seoul, Korea. (Comment: to appear in IEEE Transactions on Wireless Communications; 17 pages, 11 figures, 1 table.)
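
    As a quick illustration of the headline result, a minimal sketch of the claimed logarithmic dependence of SE on the BS-to-user density ratio; the coefficients a and b below are hypothetical placeholders, not the closed-form constants derived in the paper:

        # Illustrative sketch: spectral efficiency as a logarithmic function
        # of the BS-to-user density ratio, per the paper's headline claim.
        # The coefficients a and b are assumed placeholders.
        import math

        def spectral_efficiency(lambda_bs, lambda_ue, a=1.0, b=1.0):
            """Assumed form: SE ~ a * log2(1 + b * lambda_bs / lambda_ue) [bps/Hz]."""
            return a * math.log2(1.0 + b * lambda_bs / lambda_ue)

        # Diminishing SE returns as the deployment densifies (UDN regime):
        for ratio in (0.5, 1, 2, 4, 8, 16):
            print(f"BS/UE density ratio {ratio:>4}: "
                  f"{spectral_efficiency(ratio, 1.0):.2f} bps/Hz")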

    Spatial and Social Paradigms for Interference and Coverage Analysis in Underlay D2D Network

    The homogeneous Poisson point process (PPP) is widely used to model the spatial distribution of base stations and mobile terminals. The same process can be used to model an underlay device-to-device (D2D) network; however, neglecting the homophilic relation in D2D pairing yields underestimated system insights. In this paper, we model both the spatial and social distributions of interfering D2D nodes as a proximity-based, independently marked homogeneous Poisson point process. Proximity captures the physical distance between D2D nodes, whereas the social relationship is modeled via Zipf-based marks. We apply these two paradigms to analyze the effect of interference on the coverage probability of a cellular user with distance-proportional power control. Effectively, we apply two types of functional mappings (physical distance and social marks) to the Laplace functional of the PPP. The resulting coverage probability has no closed-form expression; however, for a subset of social marks, the mark summation converges to digamma and polygamma functions, and this subset constitutes upper and lower bounds on the coverage probability. We present a numerical evaluation of these bounds while varying a number of parameters. The results show that, by imposing simple power control on the cellular user, an ultra-dense underlay D2D network can be realized without compromising the coverage probability of the cellular user. (Comment: 10 pages, 10 figures.)
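
    A minimal Monte Carlo sketch of this setup (an independently marked PPP of D2D interferers with Zipf-distributed social marks and Rayleigh fading); all numeric parameters are assumed for illustration and are not the paper's values:

        # Coverage of a power-controlled cellular user under interference
        # from a marked PPP of D2D nodes. Illustrative only: density,
        # path-loss exponent, Zipf exponent, and powers are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        lam_d = 1e-4     # D2D interferer density (nodes/m^2) -- assumed
        alpha = 4.0      # path-loss exponent -- assumed
        tau = 1.0        # SIR threshold (0 dB) -- assumed
        rho = 1e-8       # power-controlled received signal power -- assumed
        R = 2000.0       # simulation disk radius (m)
        s_zipf = 2.0     # Zipf exponent for social marks -- assumed
        n_marks = 50     # size of the mark support
        trials = 2000

        # Zipf-distributed social marks scale each interferer's activity.
        ranks = np.arange(1, n_marks + 1)
        pmf = ranks.astype(float) ** (-s_zipf)
        pmf /= pmf.sum()

        covered = 0
        for _ in range(trials):
            n = rng.poisson(lam_d * np.pi * R ** 2)
            # Uniform PPP points in the disk; 1 m exclusion avoids the singularity.
            r = np.maximum(R * np.sqrt(rng.random(n)), 1.0)
            marks = rng.choice(ranks, size=n, p=pmf) / n_marks  # marks in (0, 1]
            h = rng.exponential(size=n)                         # Rayleigh fading
            interference = np.sum(marks * h * r ** (-alpha))
            # Distance-proportional power control fixes the mean received
            # signal power at rho; only its fading coefficient is random.
            signal = rho * rng.exponential()
            if signal > tau * max(interference, 1e-300):
                covered += 1

        print(f"estimated coverage probability: {covered / trials:.3f}")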

    Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems

    Crowdsourcing markets have emerged as a popular platform for matching available workers with tasks to complete. The payment for a particular task is typically set by the task's requester and may be adjusted based on the quality of the completed work, for example, through the use of "bonus" payments. In this paper, we study the requester's problem of dynamically adjusting quality-contingent payments for tasks. We consider a multi-round version of the well-known principal-agent model, whereby in each round a worker makes a strategic choice of effort level that is not directly observable by the requester. In particular, our formulation significantly generalizes the budget-free online task pricing problems studied in prior work. We treat this problem as a multi-armed bandit problem, with each "arm" representing a potential contract. To cope with the large (in fact, infinite) number of arms, we propose a new algorithm, AgnosticZooming, which discretizes the contract space into a finite number of regions, effectively treating each region as a single arm. This discretization is adaptively refined, so that more promising regions of the contract space are eventually discretized more finely. We analyze this algorithm, showing that it achieves regret sublinear in the time horizon and substantially improves over non-adaptive discretization (the only competing approach in the literature). Our results advance the state of the art on several topics: the theory of crowdsourcing markets, principal-agent problems, multi-armed bandits, and dynamic pricing. (Comment: This is the full version of a paper in the ACM Conference on Economics and Computation (ACM-EC), 201)
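
    A minimal sketch of adaptive discretization over a one-dimensional contract space, in the spirit of (but far simpler than) AgnosticZooming; the environment `pull` and all constants are hypothetical stand-ins:

        # Simplified zooming-style bandit: keep a set of regions of the
        # contract space, play the region with the highest optimistic index,
        # and split a region once its confidence radius shrinks below its
        # width. This is an illustration, not the paper's algorithm.
        import math
        import random

        random.seed(0)
        T = 5000  # time horizon

        def pull(price):
            """Hypothetical environment: noisy requester utility of a contract."""
            return max(0.0, 1.0 - 2.0 * abs(price - 0.3)) + random.gauss(0.0, 0.1)

        def ucb_index(reg):
            """Optimistic index: empirical mean + confidence radius + region width."""
            lo, hi, n, total = reg
            if n == 0:
                return float("inf")
            return total / n + math.sqrt(2.0 * math.log(T) / n) + (hi - lo)

        # Each region is (lo, hi, pull_count, cumulative_reward).
        regions = [(0.0, 1.0, 0, 0.0)]

        for _ in range(T):
            i = max(range(len(regions)), key=lambda j: ucb_index(regions[j]))
            lo, hi, n, total = regions[i]
            n, total = n + 1, total + pull(random.uniform(lo, hi))
            conf = math.sqrt(2.0 * math.log(T) / n)
            if conf < (hi - lo) and (hi - lo) > 1e-3:
                mid = 0.5 * (lo + hi)  # refine a promising region (stats reset)
                regions[i] = (lo, mid, 0, 0.0)
                regions.append((mid, hi, 0, 0.0))
            else:
                regions[i] = (lo, hi, n, total)

        best = max(regions, key=lambda r: r[3] / r[2] if r[2] else 0.0)
        print(f"most promising contract region: [{best[0]:.3f}, {best[1]:.3f}]")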