304 research outputs found

    Proximity as a Service via Cellular Network-Assisted Mobile Device-to-Device

    PhD Thesis. Progress in communication research has brought many novel technologies to meet multi-dimensional demands such as pervasive connectivity, low delay and high bandwidth. Device-to-Device (D2D) communication no longer treats User Equipment (UEs) merely as terminals, but rather as part of the network for service provisioning. This thesis decouples UEs into service providers (helpers) and service requesters. Through collaboration among proximal devices, coordinated by the cellular network, local tasks such as coverage extension, computation offloading, mobile crowdsourcing and mobile crowdsensing can be accomplished. This thesis proposes Proximity as a Service (PaaS), a generic framework for increasing coverage under service-continuity demands. As one of its use cases, the optimal helper-selection algorithm of PaaS for increasing service coverage under service-continuity demands is called ContAct based Proximity (CAP). Rich contact information (e.g., contact duration, frequency, and interval) is captured and used to handle ubiquitous proximal services through the optimal selection of helpers. PaaS is evaluated in a Helsinki city scenario, with a Points Of Interest (POI) movement model and with critical factors influencing the service demands (e.g., success ratio, disruption duration and frequency). Simulation results show the advantage of CAP in both success ratio and continuity of the service (outputs). Based on this perspective, metrics such as service success ratio and continuity are evaluated using the statistical theory of the Design Of Experiments (DOE). DOE is used because the state space has many dimensions (access tolerance, number of selected helpers, helper access limit, and transmit range) that can influence the results.
A key contribution of this work is that it brings rigorous statistical experiment-design methods into mobile-computing research. Results further reveal the influence of four factors (inputs): service tolerance, number of helpers allocated, number of concurrent devices supported by each helper, and transmit range. The results show that transmit range is the most dominant factor, followed by the number of selected helpers. Since different factors have different regression levels, a unified 4-level full factorial experiment and a cubic multiple regression analysis have been carried out, and all interactions and their corresponding coefficients have been identified. This work is the first to evaluate LTE-Direct and WiFi-Direct in an opportunistic proximity service. For industry, the results guide how many users need to cooperate to enable mobile computing; for academia, they reveal that: 1) in some cases, the improvement in spectrum efficiency brought by D2D is not important; 2) nodal density and the resources used in D2D air interfaces are important in the field of mobile computing. This work builds a methodology to study D2D networks from a different perspective (PaaS).
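The contact-based helper selection underlying CAP can be illustrated with a small sketch. This is not the thesis's actual CAP algorithm: the linear scoring rule, the weights, and the example contact statistics are purely illustrative assumptions; the only grounded idea is that helpers with long, frequent contacts and short inter-contact gaps are preferable.

```python
from dataclasses import dataclass

@dataclass
class ContactStats:
    device_id: str
    total_duration: float   # cumulative contact time with the requester (s)
    frequency: int          # number of distinct contact events
    mean_interval: float    # mean gap between contacts (s)

def rank_helpers(candidates, k, w_dur=1.0, w_freq=1.0, w_gap=1.0):
    """Score candidates: long, frequent contacts with short gaps rank higher.
    The weights are hypothetical tuning knobs, not values from the thesis."""
    def score(c):
        return (w_dur * c.total_duration
                + w_freq * c.frequency
                - w_gap * c.mean_interval)
    return sorted(candidates, key=score, reverse=True)[:k]

# Toy candidate set: pick the 2 best helpers for one requester
helpers = rank_helpers([
    ContactStats("ue1", 600.0, 12, 120.0),
    ContactStats("ue2", 900.0, 3, 600.0),
    ContactStats("ue3", 300.0, 20, 60.0),
], k=2)
print([h.device_id for h in helpers])
```

In a full system the weights would themselves be candidates for the DOE factor screening described above.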

    Learning to predict under a budget

    Prediction-time budgets in machine learning applications can arise due to monetary or computational costs associated with acquiring information; they also arise due to latency and power-consumption costs in evaluating increasingly complex models. The goal in such budgeted prediction problems is to learn decision systems that maintain high prediction accuracy while meeting average cost constraints at prediction time. Such decision systems can potentially adapt to the input examples, predicting most of them at low cost while allocating more budget for the few "hard" examples. In this thesis, I present several learning methods to better trade off cost and error during prediction. The conceptual contribution of this thesis is to develop a new bottom-up paradigm in place of the traditional top-down approach. A top-down approach attempts to build out the model by selectively adding the most cost-effective features to improve accuracy. In contrast, a bottom-up approach first learns a highly accurate model and then prunes or adaptively approximates it to trade off cost and error. Training top-down models in the case of feature-acquisition costs leads to fundamental combinatorial issues in multi-stage search over all feature subsets; we show that bottom-up methods bypass many of these issues. To develop this theme, we first propose two top-down methods and then two bottom-up methods. The first top-down method uses margin information from training data in the partial feature neighborhood of a test point either to select the next best feature in a greedy fashion or to stop and make a prediction. The second top-down method is a variant of the random forest (RF) algorithm: we grow decision trees with low acquisition cost and high strength based on greedy minimax cost-weighted impurity splits. Theoretically, we establish near-optimal acquisition-cost guarantees for our algorithm.
The first bottom-up method we propose is based on pruning RFs to optimize expected feature cost and accuracy. Given an RF as input, we pose pruning as a novel 0-1 integer program and show that it can be solved exactly via LP relaxation. We further develop a fast primal-dual algorithm that scales to large datasets. The second bottom-up method is adaptive approximation, which significantly generalizes RF pruning to accommodate more models and other types of costs besides feature-acquisition cost. We first train a high-accuracy, high-cost model. We then jointly learn a low-cost gating function together with a low-cost prediction model to adaptively approximate the high-cost model. The gating function identifies the regions of the input space where the low-cost model suffices for making highly accurate predictions. We demonstrate the empirical performance of these methods and compare them to the state of the art. Finally, we study adaptive approximation in the online setting to obtain regret guarantees and discuss future work.
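The gating idea can be sketched in a few lines. In this toy example the "high-cost" and "low-cost" models and the gate are hand-picked stand-ins chosen only so the example is self-contained and runnable; in the thesis the gate and the low-cost model are learned jointly rather than fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 10_000)
y = np.sign(np.sin(5 * X))                    # ground truth on a toy 1-D problem

high_cost = lambda x: np.sign(np.sin(5 * x))  # stand-in: accurate but (notionally) expensive
low_cost  = lambda x: np.sign(x)              # stand-in: cheap, wrong for |x| > pi/5

def gate(x, thresh=0.6):
    """Hypothetical gate: route 'easy' inputs to the cheap model.
    thresh < pi/5, so everything routed cheap is predicted correctly here."""
    return np.abs(x) < thresh

use_low = gate(X)
pred = np.where(use_low, low_cost(X), high_cost(X))
accuracy = np.mean(pred == y)
# Assume the expensive model costs 10x the cheap one (arbitrary units)
avg_cost = np.mean(use_low) * 1 + np.mean(~use_low) * 10
print(f"accuracy={accuracy:.3f}, avg cost={avg_cost:.2f} (always-high would cost 10)")
```

The point of the sketch: the adaptive system keeps full accuracy while paying the high cost only on the minority of "hard" inputs.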

    Radio Communications

    In recent decades, the relentless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and the advances in hardware and software implementations have changed the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, systems performance assessment, routing issues in mobility conditions, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, in infrastructure, mesh, and ad hoc networks.

    GAME THEORETIC APPROACH TO RADIO RESOURCE MANAGEMENT ON THE REVERSE LINK FOR MULTI-RATE CDMA WIRELESS DATA NETWORKS

    This work deals with efficient power and rate assignment to mobile stations (MSs) involved in bursty data transmission in cellular CDMA networks. Power control in the current CDMA standards is based on a fixed target signal quality, the signal-to-interference ratio (SIR). The target SIR represents a predefined frame error rate (FER). This approach is inefficient for data-MSs because a fixed target SIR can limit an MS's throughput; power control should thus provide dynamic target SIRs instead of a fixed one. In the research literature, the power control problem has been modeled using game theory. A limitation of the current literature is that, to implement the algorithms, each MS needs to know information such as the path gains and transmission rates of all other MSs. Fast rate-control schemes in the evolving cellular data systems, such as cdma2000-1x-EV, assign transmission rates to MSs using a probabilistic approach. The limitation here is that the radio resources can be either under- or over-utilized. Further, not all MSs are assigned the same rates: in the schemes proposed in the literature, only a few MSs with the best channel conditions obtain all the radio resources. In this dissertation, we address the power control issue by moving the computation of the Nash equilibrium from each MS to the base station (BS). We also propose equal radio resource allocation for all MSs under the constraint that only the maximum allowable radio resources are used in a cell. This dissertation addresses the problem of how to efficiently assign power and rate to MSs based on dynamic target SIRs for bursty transmissions. The proposed schemes maximize the throughput of each data-MS while still providing equal allocation of radio resources to all MSs and achieving full radio resource utilization in each cell. The proposed schemes result in power and rate control algorithms that, however, require some assistance from the BS.
The performance evaluation and comparisons with cdma2000-1x-Evolution Data Only (1x-EV-DO) show that the proposed schemes can provide better effective rates (rates after errors) than the existing schemes.
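As background, the fixed-target power control that this dissertation improves on can be sketched with the classical distributed SIR-balancing iteration (the Foschini-Miljanic update): each MS scales its power by the ratio of its target SIR to its measured SIR. The path-gain matrix, noise power, and targets below are arbitrary toy values chosen to be feasible; this is the baseline scheme, not the dissertation's dynamic-target algorithm.

```python
import numpy as np

# Toy 3-MS uplink: G[i, j] = path gain of MS j's signal at the receiver of MS i
# (hypothetical values; diagonal entries are the useful-link gains)
G = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])
noise = 0.01
target_sir = np.array([2.0, 2.0, 2.0])   # fixed target SIRs

p = np.full(3, 0.1)                      # initial transmit powers
for _ in range(100):
    # interference + noise seen by each MS (total received minus own signal)
    interference = G @ p - np.diag(G) * p + noise
    sir = np.diag(G) * p / interference
    p = p * target_sir / sir             # Foschini-Miljanic update

print(np.round(sir, 3))                  # converges to the targets when feasible
```

The iteration is fully distributed (each MS needs only its own measured SIR), which is exactly why a fixed target is attractive; the dissertation's contribution is to make the targets dynamic while shifting the Nash-equilibrium computation to the BS.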

    Vehicular Networks with Infrastructure: Modeling, Simulation and Testbed

    This thesis focuses on Vehicular Networks with Infrastructure. In the examined scenarios, vehicular nodes (e.g., cars, buses) can communicate with infrastructure roadside units (RSUs) providing continuous or intermittent coverage of an urban road topology. Different aspects of the design of new applications for Vehicular Networks are investigated through modeling, simulation and real-field testing. In particular, the thesis: i) provides a feasible multi-hop routing solution for maintaining connectivity between moving vehicles and the RSUs forming the wireless mesh infrastructure; ii) explains how to combine the UHF and the traditional 5-GHz bands to design and implement new high-capacity, high-efficiency Content Downloading using disjoint control and service channels; iii) studies new RSU deployment strategies for Content Dissemination and Downloading in urban and suburban scenarios with different vehicle mobility models and traffic densities; iv) defines an optimization problem to minimize the average travel delay perceived by drivers, spreading different traffic flows over the surface roads in an urban scenario; v) exploits the concept of Nash equilibrium in a game-theoretic approach to efficiently guide electric-vehicle drivers towards charging stations. Moreover, the thesis emphasizes the importance of using realistic mobility models, as well as reasonable signal-propagation models, for vehicular networks. Simplistic assumptions lead to trivial mathematical analysis and shorter simulations, but they frequently produce misleading results. Thus, testing the proposed solutions in the real field and collecting measurements is a good way to double-check the correctness of our studies.
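The Nash-equilibrium idea in item v) can be illustrated with a toy singleton congestion game: each EV picks the charging station minimizing its own travel time plus a queueing delay that grows with the station's occupancy. The travel times, the linear delay term, and the best-response loop are illustrative assumptions, not the thesis's model; best-response dynamics converge here because this toy game admits a potential function.

```python
import random
random.seed(1)

N_EV, STATIONS = 12, (0, 1, 2)
# Hypothetical travel times (minutes) from each EV to each charging station
travel = [[random.uniform(0.0, 5.0) for _ in STATIONS] for _ in range(N_EV)]
choice = [random.randrange(len(STATIONS)) for _ in range(N_EV)]

def cost(ev, s, occupancy):
    """Travel time plus a toy queueing delay growing with station occupancy."""
    return travel[ev][s] + 2.0 * occupancy

changed = True
while changed:                          # best-response dynamics
    changed = False
    for ev in range(N_EV):
        occ = [choice.count(s) for s in STATIONS]
        occ[choice[ev]] -= 1            # occupancy excluding this EV itself
        cur = cost(ev, choice[ev], occ[choice[ev]] + 1)
        for s in STATIONS:
            if cost(ev, s, occ[s] + 1) < cur - 1e-9:
                choice[ev], cur, changed = s, cost(ev, s, occ[s] + 1), True

print([choice.count(s) for s in STATIONS])   # EVs per station at equilibrium
```

At termination no EV can lower its own cost by switching stations unilaterally, which is precisely the Nash-equilibrium condition the guidance scheme exploits.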

    Cooperation Strategies for Enhanced Connectivity at Home

    While at home, users may experience poor Internet service while connected to their 802.11 Access Points (APs). The AP is just one component of the Internet Gateway (GW), which generally includes a backhaul connection (ADSL, fiber, etc.) and a router providing a LAN. The root cause of performance degradation may be a poor/congested wireless channel between the user and the GW, or a congested/bandwidth-limited backhaul connection. The latter is a serious issue for DSL users located far from the central office, because the greater the distance, the lower the achievable physical data rate. Furthermore, the GW is one of the few devices in the home that is always left on, resulting in wasted energy and increased electromagnetic pollution. This thesis proposes two strategies to enhance Internet connectivity at home: (i) creating a wireless resource-sharing scheme through the federation and coordination of neighboring GWs, in order to achieve energy efficiency while avoiding congestion; (ii) exploiting different kinds of connectivity, i.e., the wired plus the cellular (3G/4G) connections, by aggregating the available bandwidth across multiple access technologies. To realize these strategies, we study and develop:
• A viable interference-estimation technique for 802.11 BSSes that can be implemented on commodity hardware at the MAC layer, without requiring active measurements, changes to the 802.11 standard, or cooperation from the wireless stations (WSs). We extend previous theoretical results on saturation throughput to quantify the throughput loss caused by any kind of interferer. We implement and extensively evaluate our estimation technique on a real testbed with different kinds of interferers, consistently achieving good accuracy.
• Two available-bandwidth estimation algorithms for 802.11 BSSes that rely only on passive measurements and account for different kinds of interferers in the ISM band. These algorithms can be implemented on commodity hardware, as they require only software modifications. The first algorithm applies to intra-GW and the second to inter-GW available-bandwidth estimation. Indeed, we use the first algorithm to compute the metric for assessing the Wi-Fi load of a GW, and the second to compute the metric for deciding whether to accept incoming WSs from neighboring GWs. In the latter case, it is assumed that one or more WSs with known traffic profiles are requested to relocate from one GW to another. We evaluate both algorithms through simulation as well as on a real testbed under different traffic patterns, achieving high precision.
• A fully distributed and decentralized inter-access-point protocol for federated GWs that dynamically manages the associations of the wireless stations in the federated network to achieve energy efficiency and to offload congested GWs, i.e., we keep a minimum number of GWs on while avoiding congestion and real-time throughput loss. We evaluate this protocol in a federated scenario, using both simulation and a real testbed, achieving up to 65% energy savings in the simulated setting. We compare the energy savings achieved by our protocol against a centralized optimal scheme, obtaining close-to-optimal results.
• An application-level solution that accelerates slow ADSL connections through the parallel use of cellular (3G/4G) connections. We study the feasibility and potential performance of this scheme at scale, using both extensive throughput measurements of the cellular network and trace-driven analysis. We validate our solution by implementing a real testbed and evaluating it "in the wild" at several residential locations of a major European city. We test two applications, Video-on-Demand (VoD) and picture upload, obtaining a remarkable throughput increase for both applications at all locations. Our implementation features a multipath scheduler, which we compare to other scheduling policies as well as to transport-level solutions like MPTCP, consistently obtaining better results.
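The bandwidth-aggregation idea behind such a multipath scheduler can be sketched as capacity-proportional chunk assignment across links. The link names, capacity estimates, and the deficit-style credit scheme below are illustrative assumptions, not the thesis's actual scheduler.

```python
def schedule_chunks(n_chunks, links):
    """Assign chunks to links in proportion to their estimated capacity,
    using a deficit-round-robin-style credit counter per link."""
    assignment = {name: [] for name in links}
    credit = {name: 0.0 for name in links}
    total = sum(links.values())
    for chunk in range(n_chunks):
        for name, cap in links.items():
            credit[name] += cap / total   # each link earns its capacity share
        best = max(credit, key=credit.get)
        credit[best] -= 1.0               # pay one chunk's worth of credit
        assignment[best].append(chunk)
    return assignment

# Hypothetical estimated capacities (Mbit/s): a slow ADSL link plus a 4G link
out = schedule_chunks(10, {"adsl": 2.0, "lte": 8.0})
print({name: len(chunks) for name, chunks in out.items()})
```

With these toy capacities the scheduler sends 2 of every 10 chunks over ADSL and 8 over LTE, interleaved so neither link starves; a real scheduler would also react to per-link latency and loss.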

    City of Augusta 2006 Annual Report


    Social distributed content caching in federated residential networks

    This work addresses the need for content sharing and backup in households equipped with a home gateway that stores, tags and manages the data collected by the home users. Our solution leverages the interaction between remote gateways in a social way, i.e., by exploiting the users' social-networking information, so that caching recipients are those gateways whose users are most likely to be interested in accessing the shared content. We formulate this problem as a Budgeted Maximum Coverage (BMC) problem and numerically compute the optimal content-caching solution. We then propose a low-complexity, distributed heuristic algorithm and use simulation in a synthetic social-network scenario to show that the final content placement among "friendly" gateways closely approximates the optimal solution under different network settings.
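The BMC formulation admits a simple greedy heuristic in the spirit of the low-complexity algorithm mentioned above: repeatedly pick the gateway with the best ratio of newly covered interested users to caching cost, until the budget runs out. The gateway IDs, costs, and interested-user sets below are hypothetical, and this ratio-greedy sketch omits the extra enumeration step that approximation-guaranteed BMC algorithms use.

```python
def greedy_bmc(items, budget):
    """Greedy heuristic for Budgeted Maximum Coverage.
    items: list of (name, cost, covered_users); maximizes covered users
    under a total-cost budget by best marginal-gain-per-cost choices."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, cost, users in items:
            if name in chosen or spent + cost > budget:
                continue
            gain = len(users - covered)   # only newly covered users count
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = (name, cost, users), gain / cost
        if best is None:                  # nothing affordable improves coverage
            break
        name, cost, users = best
        chosen.append(name)
        covered |= users
        spent += cost
    return chosen, covered

# Hypothetical gateways: (id, caching cost, interested users reachable)
gws = [("gw1", 2, {"a", "b", "c"}),
       ("gw2", 1, {"c", "d"}),
       ("gw3", 3, {"a", "e", "f", "g"})]
chosen, covered = greedy_bmc(gws, budget=4)
print(chosen, sorted(covered))
```

Here the heuristic spends the budget of 4 on gw2 and gw3, covering six users, whereas any selection including gw1 would cover fewer within the same budget.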