    WLAN Interface Management on Mobile Devices

    The number of smartphones in use grows rapidly every year. These devices rely on Internet connectivity for the majority of their applications. The ever-increasing number of deployed 802.11 wireless access points and the relatively high cost of other data services make the case for opportunistic communication over free WiFi hot-spots. This, however, requires effective management of the WLAN interface, because the energy cost of WLAN scanning and of keeping the interface idle is high by design, and energy is a primary resource on mobile devices. This thesis studies the WLAN interface management problem on mobile devices. First, I consider the hypothetical scenario where future knowledge of wireless connectivity opportunities is available, and present a dynamic programming algorithm that finds the optimal schedule for the interface. In the absence of future knowledge, I propose several heuristic strategies for interface management and use real-world user traces to evaluate and compare their performance against the optimal algorithm. Trace-based simulations show that simple static scanning with a suitable interval is very effective for delay-tolerant, background applications. I attribute the good performance of static scanning to the power-law distribution of the lengths of mobile users' WiFi opportunities, and provide guidelines for choosing the scanning interval based on the statistical properties of the traces. Finally, I improve the performance of static scanning by 46% on average using a local cache of previous scan results that exploits the location hints provided by the set of visible GSM cell towers.
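    The caching idea in the last result can be illustrated with a short sketch. The class below is a hypothetical illustration, not the thesis' implementation: it keeps a static scanning interval and suppresses scans at locations, identified by the set of visible GSM cell tower IDs, where a recent scan found no usable access point. All names and parameter values (ScanCache, scan_interval, negative_ttl) are made up for the example.

```python
# Minimal sketch (hypothetical interfaces): skip a WLAN scan when the set of
# visible GSM cell towers matches a cached location where no usable AP was found.
from time import time

class ScanCache:
    def __init__(self, scan_interval=120.0, negative_ttl=3600.0):
        self.scan_interval = scan_interval      # static scanning period (seconds)
        self.negative_ttl = negative_ttl        # how long to trust "no AP here"
        self.no_ap_cells = {}                   # frozenset(cell ids) -> timestamp
        self.last_scan = -float("inf")

    def should_scan(self, visible_cells, now=None):
        now = time() if now is None else now
        if now - self.last_scan < self.scan_interval:
            return False                        # static interval not yet elapsed
        seen = self.no_ap_cells.get(frozenset(visible_cells))
        if seen is not None and now - seen < self.negative_ttl:
            return False                        # cached: this location had no AP
        return True

    def record_result(self, visible_cells, found_ap, now=None):
        now = time() if now is None else now
        self.last_scan = now
        key = frozenset(visible_cells)
        if found_ap:
            self.no_ap_cells.pop(key, None)
        else:
            self.no_ap_cells[key] = now

# Example: a later check at the same cell set is suppressed by the negative cache.
cache = ScanCache(scan_interval=60.0)
print(cache.should_scan({101, 102}, now=0.0))      # True  -> perform scan
cache.record_result({101, 102}, found_ap=False, now=0.0)
print(cache.should_scan({101, 102}, now=120.0))    # False -> cached negative result
```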

    Infrastructure-Assisted Message Dissemination for Supporting Heterogeneous Driving Patterns

    With the advance of Internet of Things technologies, individual vehicles can now exchange information to improve traffic safety, and some vehicles can further improve safety and efficiency by coordinating their mobility via cooperative driving. To facilitate these applications, many studies have focused on the design of inter-vehicle message dissemination protocols. However, most existing designs either assume an individual driving pattern or consider cooperative driving only. Moreover, few of them fully exploit infrastructure such as cameras, sensors, and road-side units. In this paper, we address the design of a message dissemination protocol that supports heterogeneous driving patterns. Specifically, we first propose an infrastructure-assisted message dissemination framework that exploits the capabilities of such infrastructure. We then present a novel beacon scheduling algorithm that aims at guaranteeing the timely and reliable delivery of both periodic beacon messages for cooperative driving and event-triggered safety messages for individual driving. To evaluate the performance of the protocol, we carry out both theoretical analysis and simulation experiments. Extensive numerical results confirm the effectiveness of the proposed protocol.
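    As a rough illustration of the scheduling goal described above, the sketch below shows a toy slot scheduler, not the paper's algorithm: a road-side unit serves event-triggered safety messages ahead of periodic cooperative-driving beacons, breaking ties by deadline. The message fields and slot granularity are assumptions made for the example.

```python
# Toy priority scheduler: one message per slot, event-triggered safety messages
# sort ahead of periodic beacons, earlier deadlines first.
import heapq

def schedule_slots(messages, num_slots):
    """messages: list of dicts with 'id', 'release', 'deadline', 'urgent'
    (urgent=True marks an event-triggered safety message)."""
    by_release = sorted(messages, key=lambda m: m["release"])
    ready, schedule, i = [], [], 0
    for slot in range(num_slots):
        while i < len(by_release) and by_release[i]["release"] <= slot:
            m = by_release[i]
            # urgent traffic gets a lower sort key than periodic beacons
            heapq.heappush(ready, (not m["urgent"], m["deadline"], m["id"]))
            i += 1
        if ready:
            _, deadline, msg_id = heapq.heappop(ready)
            status = "ok" if slot <= deadline else "late"
            schedule.append((slot, msg_id, status))
    return schedule

msgs = [
    {"id": "beacon-A", "release": 0, "deadline": 4, "urgent": False},
    {"id": "beacon-B", "release": 0, "deadline": 5, "urgent": False},
    {"id": "hazard-1", "release": 1, "deadline": 2, "urgent": True},
]
print(schedule_slots(msgs, num_slots=4))   # hazard-1 preempts the second beacon
```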

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    Fifth-generation mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated, and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthaul/backhaul links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenue is maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users’ mobility, and instantaneous wireless channel statistics. Indeed, a mobile network is a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users’ traffic demand with social activity and the economy. Legacy models that provide dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required in the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, is the key problem underlying the network slicing paradigm that is addressed in this thesis.
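    To make the admission-control objective in point i) concrete, the following sketch shows a simple greedy admission rule under assumed per-domain capacities (RAN resource blocks, transport capacity, edge CPU): requests are ordered by revenue per unit of their most constrained resource. It illustrates the problem statement only and is not the machine learning-based orchestration developed in the thesis.

```python
# Greedy slice admission by "revenue density": revenue divided by the relative
# demand on the most constrained domain. Capacities and requests are invented.
def admit_slices(requests, capacity):
    """requests: list of (slice_id, revenue, demand_dict); capacity: dict domain -> units."""
    def density(req):
        _, revenue, demand = req
        bottleneck = max(demand[d] / capacity[d] for d in demand)
        return revenue / bottleneck if bottleneck > 0 else float("inf")

    admitted, residual = [], dict(capacity)
    for slice_id, revenue, demand in sorted(requests, key=density, reverse=True):
        if all(demand[d] <= residual[d] for d in demand):
            admitted.append(slice_id)
            for d in demand:
                residual[d] -= demand[d]
    return admitted, residual

capacity = {"ran_prb": 100, "transport_gbps": 40, "edge_cpu": 64}
requests = [
    ("eMBB-1",  8.0, {"ran_prb": 50, "transport_gbps": 20, "edge_cpu": 16}),
    ("URLLC-1", 6.0, {"ran_prb": 20, "transport_gbps": 5,  "edge_cpu": 24}),
    ("mMTC-1",  3.0, {"ran_prb": 40, "transport_gbps": 2,  "edge_cpu": 8}),
]
print(admit_slices(requests, capacity))   # mMTC-1 is rejected for lack of RAN resources
```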

    A Time-Efficient Strategy For Relay Selection and Link Scheduling In Wireless Communication Networks

    Despite the unprecedented success and proliferation of wireless communication, sustainable reliability and stability among wireless users remain important issues for the underlying link protocols. Existing link-layer protocols, such as ARQ [44] or HARQ [57,67], are designed to achieve this goal by discarding a corrupted packet at the receiver and performing one or more retransmissions until the packet is successfully decoded or a maximum number of retransmission attempts is reached. These strategies suffer from throughput degradation and overall system instability, since packets need to be en/decoded at every hop, placing a high burden on relay nodes, especially when the traffic load is high. On the other hand, due to the broadcast nature of wireless communication, when a relay transmits a packet to a specific receiver, it may interfere with other receivers. Thus, rather than activating all relays simultaneously, we can schedule only a subset of relays in each time slot such that the interference among the links does not cause transmissions to fail. Accordingly, in this dissertation we mainly address the following two problems: 1) Relay selection: given a route (i.e., a sequence of relays), how should the relays that en/decode packets be selected to minimize the latency to reach the destination? 2) Link scheduling: how should relays be scheduled such that the interference among them does not cause transmission failures and the throughput is maximized? Relay Selection Problem. To solve the relay selection problem, we propose a Code Embedded Distributed Adaptive and Reliable (CEDAR) link-layer framework that targets low latency. CEDAR is the first theoretical framework for selecting en/decoding relays to minimize packet latency in wireless communication networks. It employs a theoretically sound framework for embedding channel codes in each packet and performs the error-correcting process at selected intermediate nodes on the packet's route. To identify the intermediate relay nodes for en/decoding that minimize average packet latency, we mathematically analyze the average packet delay, using a finite-state Markovian channel model and a priority queuing model, and then formalize the problem as a non-linear integer program. To solve this problem, we design a scalable and distributed scheme with very low complexity. The experimental results demonstrate that CEDAR is superior to schemes using hop-by-hop decoding and destination-only decoding in terms of both packet delay and throughput. In addition, the simulation results show that CEDAR achieves the optimal performance in most cases. Link Scheduling Problem. As for the link scheduling problem, we formulate a new problem called the Fading-Resistant Link Scheduling (Fadin-R-LS) problem, which aims to maximize the throughput (the sum data rate) over all links in a single time slot. The problem differs from existing link scheduling problems in that it incorporates the Rayleigh-fading model to describe interference; this model extends the deterministic interference model based on the Signal-to-Interference Ratio (SIR) with stochastic propagation to capture fading effects in wireless networks. Based on the geometric structure of Fadin-R-LS, we then propose three centralized schemes for Fadin-R-LS, with O(g(L)), O(g(L)), and O(1) performance guarantees for packet latency, where g(L) is the number of length magnitudes of the link set L. Furthermore, we propose a completely distributed approach based on game theory, which has an O(g(L)^2 α) performance guarantee. We also incorporate a cooperative communication (CC) technique, e.g., maximum ratio combining (MRC), into our system to further improve the throughput, in which receivers are allowed to combine messages from different senders to combat transmission errors. In particular, we formulate two problems, named the cooperative link scheduling problem (CLS) and the one-shot cooperative link scheduling problem (OCLS). The first aims to find a schedule of links that uses the minimum number of time slots to inform all the receivers; the second aims to find a set of links that informs the maximum number of receivers in one time slot. We prove both problems to be NP-hard. As a solution, we propose an algorithm for both CLS and OCLS with a g(K) approximation ratio, where g(K) is the so-called diversity of key links. In addition, we propose a greedy algorithm with an O(1) approximation ratio for OCLS when the number of links for each receiver is upper bounded by a constant. We also consider a special case of the link scheduling problem, in which a group of vehicles forms a platoon and each vehicle in the platoon needs to communicate with the leader vehicle to obtain the leader's velocity and location. By leveraging a typical feature of a platoon, we devise a link scheduling algorithm, called the Fast and Lightweight Autonomous link scheduling algorithm (FLA), in which each vehicle determines its own time slot simply based on its distance to the leader vehicle. Finally, we conduct simulations in MATLAB to evaluate the performance of our proposed methods. The experimental results demonstrate the superior performance of our link scheduling methods over previous methods.
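    The idea behind FLA, each vehicle deriving its own slot from its distance to the leader, can be sketched in a few lines. The distance-band width and slot count below are invented parameters; the dissertation's exact mapping may differ.

```python
# Each platoon member maps its distance to the leader onto a TDMA slot locally,
# so no coordination messages are required.
def fla_slot(distance_to_leader_m, band_width_m=10.0, num_slots=16):
    """Map a vehicle's distance to the leader onto one of num_slots slots."""
    band = int(distance_to_leader_m // band_width_m)
    return band % num_slots

# With this spacing the first platoon members land in distinct slots.
for position, distance in enumerate([8.0, 16.0, 24.0, 32.0], start=1):
    print(f"vehicle {position}: slot {fla_slot(distance)}")
```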

    Quality-Aware Scheduling Algorithms in Renewable Sensor Networks

    Wireless sensor networks have emerged as a key technology for various applications such as environmental sensing, structural health monitoring, and area surveillance. Energy is by far one of the most critical design hurdles hindering the deployment of wireless sensor networks: the lifetime of traditional battery-powered sensor networks is limited by the capacities of their batteries. Even though many energy conservation schemes have been proposed to address this constraint, the network lifetime remains inherently limited, as the consumed energy cannot easily be replenished. Fully addressing this issue requires energy to be replenished frequently, leading to renewable sensor networks: one viable solution to energy shortages is to enable each sensor to harvest renewable energy, such as solar or wind energy, from its surroundings. In comparison with their conventional counterparts, network lifetime is no longer the main issue in renewable sensor networks, since sensors can be recharged repeatedly. This shifts the research focus from network lifetime maximization in traditional sensor networks to network performance optimization (e.g., monitoring quality). This thesis focuses on these issues and tackles important problems in renewable sensor networks as follows. We first study target coverage optimization in renewable sensor networks via sensor duty-cycle scheduling, where a renewable sensor network consisting of a set of heterogeneous sensors and a stationary base station must be scheduled to monitor a set of targets in a monitoring area (e.g., some critical facilities) for a specified period, with the sensors transmitting their sensing data to the base station through multi-hop relays in real time. We formulate a coverage maximization problem in a renewable sensor network, which is to schedule sensor activities such that the monitoring quality is maximized, subject to the constraint that the communication network induced by the activated sensors and the base station is connected at every time moment. We approach the problem for a given monitoring period by adopting a general strategy: we divide the entire monitoring period into equal-length time slots and perform sensor activation or deactivation scheduling at the beginning of each time slot. As the problem is NP-hard, we devise efficient offline centralized and distributed algorithms for it, provided that the amount of energy harvested by each sensor over the monitoring period can be predicted accurately. Otherwise, we propose an online adaptive framework that handles energy prediction fluctuations over the monitoring period. We conduct extensive experiments, and the experimental results show that the proposed solutions are very promising. We then investigate data collection optimization in renewable sensor networks by exploiting sink mobility, where a mobile sink travels around the sensing field to collect data from sensors through one-hop transmission. With one-hop transmission, each sensor can send data directly to the mobile sink without any relay, so no energy is consumed on forwarding packets for others, which is more energy efficient than multi-hop relaying. Moreover, one-hop transmission is particularly useful for a disconnected network, which may arise from the error-prone nature of wireless communication or from physical limits (e.g., some sensors being physically isolated), in which case multi-hop transmission is not applicable. In particular, we investigate two different kinds of mobile sinks and formulate optimization problems under different scenarios, for which both centralized and distributed solutions are proposed. We study the performance of the proposed solutions and validate their effectiveness in improving data quality. Since the harvested energy often varies over time, we also consider renewable sensor networks that use wireless energy transfer technology, where a mobile charging vehicle periodically travels inside the sensing field and charges sensors without any plugs or wires. Specifically, we propose a novel charging paradigm and formulate an optimization problem with the objective of maximizing the number of sensors charged per tour. We devise an offline approximation algorithm that runs in quasi-polynomial time and develop efficient online sensor charging algorithms, taking into account the dynamic behavior of the sensors' various sensing and transmission activities. To study the efficiency of the proposed algorithms, we conduct extensive experiments, and the experimental results demonstrate that the proposed algorithms are very efficient. We finally conclude our work and discuss potential research topics that derive from the studies of this thesis.
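    A minimal sketch of the per-slot scheduling strategy described above, under assumed data structures and with the connectivity constraint omitted for brevity: at the start of a slot, greedily activate the sensor with enough residual harvested energy that covers the most still-uncovered targets.

```python
# Illustrative per-slot greedy activation (not the thesis' algorithms).
def schedule_slot(sensors, energy, targets, activation_cost=1.0):
    """sensors: dict sensor_id -> set of covered target ids;
    energy: dict sensor_id -> remaining harvested energy."""
    uncovered, active = set(targets), []
    candidates = {s for s in sensors if energy[s] >= activation_cost}
    while uncovered and candidates:
        best = max(candidates, key=lambda s: len(sensors[s] & uncovered))
        gain = sensors[best] & uncovered
        if not gain:
            break                        # remaining sensors add no coverage
        active.append(best)
        uncovered -= gain
        energy[best] -= activation_cost
        candidates.remove(best)
    return active, uncovered

sensors = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"}, "s3": {"t3"}}
energy = {"s1": 2.0, "s2": 0.5, "s3": 1.5}   # s2 lacks energy for this slot
print(schedule_slot(sensors, energy, targets={"t1", "t2", "t3"}))
```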

    A distributed approach for robust, scalable, and flexible dynamic ridesharing

    This dissertation provides a solution to the dynamic ridesharing problem, an NP-hard optimization problem in which a fleet of vehicles moves on a road network and ridesharing requests arrive continuously. The goal is to optimally assign vehicles to requests, with the objective of minimizing the total travel distance of the vehicles while satisfying constraints such as vehicle capacity and the time windows of pick-up and drop-off locations. The dominant approach to the dynamic ridesharing problem is a centralized one, which becomes intractable as the size of the problem grows and is therefore not scalable. To address scalability, a novel agent-based representation of the problem is proposed, along with a set of algorithms to solve it. Besides being scalable, the proposed approach is flexible and, compared to the centralized approach, more robust: vehicle agents can handle changes in the network dynamically (e.g., in case of a vehicle breakdown) without needing to restart the operation, and the failure of an individual vehicle does not affect the decision-making process. In the decentralized approach, the underlying combinatorial optimization is formulated as a distributed optimization problem and decomposed into multiple subproblems using spectral graph theory. Each subproblem is formulated as a DCOP (Distributed Constraint Optimization Problem) based on a factor graph representation, comprising a group of cooperative agents that work together to take an optimal (or near-optimal) joint action. A min-sum algorithm is then run on the factor graph to solve the DCOP. A simulator is implemented to empirically evaluate the proposed approach and benchmark it against two alternatives: solutions obtained by ILP (Integer Linear Programming) and a greedy heuristic algorithm. The results show that the decentralized approach scales well with different numbers of vehicle agents, vehicle capacities, and numbers of requests, and that it outperforms (a) the greedy heuristic algorithm in terms of solution quality and (b) the ILP in terms of execution time.
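    A toy sketch of the decomposition step, under assumed details not taken from the dissertation: build a similarity graph over the entities to be assigned, compute the Fiedler vector of its Laplacian, and split the nodes into two groups that can then be handled by separate DCOP agent groups. The min-sum step on each factor graph is not shown.

```python
# Spectral bipartition via the Fiedler vector of the graph Laplacian.
import numpy as np

def spectral_bipartition(weights):
    """weights: symmetric (n x n) similarity matrix; returns two index sets."""
    degrees = weights.sum(axis=1)
    laplacian = np.diag(degrees) - weights
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, np.argsort(eigvals)[1]]   # eigenvector of 2nd-smallest eigenvalue
    group_a = np.where(fiedler >= 0)[0]
    group_b = np.where(fiedler < 0)[0]
    return group_a, group_b

# Two loosely connected clusters of four nodes each.
w = np.zeros((8, 8))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5), (5, 6), (6, 7), (4, 7), (3, 4)]:
    w[i, j] = w[j, i] = 1.0
print(spectral_bipartition(w))
```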

    Flow Assignment and Processing on a Distributed Edge Computing Platform

    The evolution of telecommunication networks toward the fifth generation of mobile services (5G), together with the increasing presence of cloud-native applications and the development of the Cloud and Mobile Edge Computing (MEC) paradigms, has opened up new opportunities for the monitoring and management of logistics and transportation. We address the case of distributed streaming platforms with multiple message brokers and develop an optimization model for the real-time assignment and load balancing of event-streaming data traffic among Edge Computing facilities. The performance indicator function to be optimised is derived by suitably combining queuing models of different granularity (packet- and flow-level). A specific use case concerning a logistics application is considered, and numerical results are provided to show the effectiveness of the optimisation procedure, also in comparison with a “static” assignment proportional to the processing speed of the brokers.
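    A much-simplified illustration of the flow-level side of that comparison, assuming independent M/M/1 brokers and using the total mean number of flows in the system as the metric: the optimised split of the total arrival rate is compared with the static split proportional to broker speed. The rates and the closed-form split rule below are illustrative assumptions, not the paper's combined packet- and flow-level model.

```python
# Compare a speed-proportional traffic split with the split minimizing
# sum_i lambda_i / (mu_i - lambda_i) over independent M/M/1 brokers.
import math

def proportional_split(total_rate, mus):
    return [total_rate * mu / sum(mus) for mu in mus]

def optimal_split(total_rate, mus):
    # Closed-form optimum (square-root rule), assuming every broker stays active.
    slack = sum(mus) - total_rate
    sqrt_sum = sum(math.sqrt(mu) for mu in mus)
    return [mu - math.sqrt(mu) * slack / sqrt_sum for mu in mus]

def mean_flows(lams, mus):
    return sum(l / (m - l) for l, m in zip(lams, mus))

mus, total = [10.0, 4.0, 2.0], 12.0          # broker service rates, offered load
for name, split in [("proportional", proportional_split(total, mus)),
                    ("optimised", optimal_split(total, mus))]:
    print(name, [round(x, 2) for x in split], round(mean_flows(split, mus), 2))
```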