Two-stage wireless network emulation
Testing and deploying mobile wireless networks and applications are very challenging tasks, due to network size and administration as well as node mobility management. Well-known simulation tools provide a more flexible environment, but they do not run in real time and they rely on models of the developed system rather than on the system itself. Emulation is a hybrid approach that allows real applications and traffic to run over a simulated network, at the expense of accuracy when the number of nodes grows too large. In this paper, emulation is split into two stages: first, the simulation of network conditions is precomputed, so that it is not subject to the real-time constraints that would decrease its accuracy; second, real applications and traffic are run on an emulation platform where the precomputed events are scheduled in soft real time. This allows the use of accurate models for node mobility, radio signal propagation and communication stacks. An example shows that a simple situation can be easily tested with real applications and traffic while relying on accurate models. The consistency between the simulation results and the emulated conditions is also illustrated.
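The two-stage idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's platform: the event format, model outputs and `apply_fn` hook are all hypothetical, standing in for the accurate mobility/propagation/stack models and the emulation platform of the paper.

```python
import time

# Stage 1 (offline): precompute network-condition events with no real-time
# pressure. Each event is (timestamp_s, link, params); in the paper these
# would come from accurate mobility, propagation and stack models.
def precompute_events(duration_s, step_s=0.5):
    events = []
    for i in range(int(duration_s / step_s)):
        t = i * step_s
        # Hypothetical model output: loss rate and delay for one link.
        events.append((t, "A->B", {"loss": 0.01 * (i % 5), "delay_ms": 20 + i}))
    return events

# Stage 2 (online): replay the precomputed schedule in soft real time,
# applying each event to the emulation platform as its timestamp comes due.
def replay(events, apply_fn, speedup=1.0):
    start = time.monotonic()
    for t, link, params in sorted(events):
        elapsed = time.monotonic() - start
        wait = t / speedup - elapsed
        if wait > 0:          # soft real time: sleep until the event is due;
            time.sleep(wait)  # the heavy computation already happened offline
        apply_fn(link, params)

applied = []
replay(precompute_events(2.0), lambda link, p: applied.append((link, p)),
       speedup=100.0)
print(len(applied))  # one applied event per precomputed step
```

The point of the split is visible in `replay`: stage 2 only sleeps and applies precomputed values, so its timing accuracy does not depend on how expensive the stage-1 models were.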
Code offloading in opportunistic computing
With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is enhanced performance and, consequently, a better user experience. This activity is commonly referred to as computational offloading, and it has been strenuously investigated in the past years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. This dissertation confirms the results of Cuervo et al. and illustrates further use cases where the cloud may not be the right choice. It addresses the following question: is it possible to build a novel approach to offloading computation that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that is able to use local resources when the cloud is not usable, removing the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing I remove the boundaries that tie the solution to an infrastructure by adding locally available devices, which increases the chances of offloading successfully. To achieve the goals of the dissertation it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, together with their interactions, and assess their impact on the system. The outcome of the analysis is a mapping of the problem to a combinatorial optimization problem that is notoriously NP-hard.
There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used because they require a global view that can only be maintained by a centralized infrastructure; local solutions are therefore needed. To tackle the anyrun computing paradigm empirically, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework, developed within the European project SCAMPI, and employs this information to profile a given device; in other words, it decides whether offloading is advantageous compared to local execution, i.e. whether it can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption). To evaluate ARC empirically, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I use the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (3G, 4G, and high-speed WAN). The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances in which the cloud performs poorly. Moreover, I show empirically the limitations of cloud-based approaches: problems with high transmission costs tend to perform poorly unless they also have high computational needs. The second part of the evaluation is carried out in opportunistic/cloudlet scenarios, where I use a custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading.
To this end, I perform two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, and improves on it by 50% to 60% in the opportunistic domain.
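The core decision the abstract attributes to the inference model, whether offloading reduces the local footprint in the dimensions of interest, can be sketched as a weighted cost comparison. This is a hypothetical illustration of the idea, not ARC's actual model: the `Estimate` fields match the dimensions named in the abstract, but the weights and the aggregation are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical profile of one execution target; in the dissertation the ARC
# inference model builds such profiles from SCAMPI context information.
@dataclass
class Estimate:
    cpu: float        # normalized CPU load
    ram_mb: float     # peak RAM usage
    time_s: float     # execution time
    energy_j: float   # energy consumption

def should_offload(local, remote, weights=None):
    """Offload only if the weighted remote footprint (transfer overhead
    assumed to be folded into the remote estimate) beats local execution."""
    w = weights or {"cpu": 1.0, "ram_mb": 0.01, "time_s": 1.0, "energy_j": 0.5}
    cost = lambda e: sum(w[k] * getattr(e, k) for k in w)
    return cost(remote) < cost(local)

local = Estimate(cpu=0.9, ram_mb=400, time_s=12.0, energy_j=30.0)
nearby = Estimate(cpu=0.1, ram_mb=50, time_s=4.0, energy_j=8.0)
print(should_offload(local, nearby))  # True: the remote footprint is lower
```

A multi-dimensional comparison like this also captures the abstract's cloud finding: a task with high transmission cost inflates `time_s` and `energy_j` in the remote estimate, so offloading only wins when the computational savings outweigh it.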
Enforcing Service Availability in Mobile Ad-Hoc WANs
In this paper, we address the problem of service availability in mobile ad-hoc WANs. We present a secure mechanism to stimulate end users to keep their devices turned on, to refrain from overloading the network, and to thwart tampering aimed at converting the device into a "selfish" one. Our solution is based on the use of a tamper-resistant security module in each device and on cryptographic protection of messages.
Observation-based Cooperation Enforcement in Ad Hoc Networks
Ad hoc networks rely on the cooperation of the nodes participating in the
network to forward packets for each other. A node may decide not to cooperate
to save its resources while still using the network to relay its traffic. If
too many nodes exhibit this behavior, network performance degrades and
cooperating nodes may find themselves unfairly loaded. Most previous efforts to
counter this behavior have relied on further cooperation between nodes to
exchange reputation information about other nodes. If a node observes another
node not participating correctly, it reports this observation to other nodes
who then take action to avoid being affected and potentially punish the bad
node by refusing to forward its traffic. Unfortunately, such second-hand
reputation information is subject to false accusations and requires maintaining
trust relationships with other nodes. The objective of OCEAN is to avoid this
trust-management machinery and see how far we can get simply by using direct
first-hand observations of other nodes' behavior. We find that, in many
scenarios, OCEAN can do as well as, or even better than, schemes requiring
second-hand reputation exchanges. This encouraging result could possibly help
obviate solutions requiring trust management in some contexts. Comment: 10 pages, 7 figures
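The first-hand-only rating that OCEAN relies on can be sketched as follows. This is a minimal illustration of the principle, not the authors' implementation: the class name, threshold and sample counts are invented, but the key property matches the abstract, since only direct observations are consulted, second-hand false accusations are impossible by construction.

```python
# Minimal sketch of first-hand-only cooperation monitoring: each node keeps
# its own per-neighbor counters and refuses to forward for neighbors whose
# directly observed forwarding ratio is too low. No reputation exchange, so
# no trust-management machinery is needed.
class FirstHandMonitor:
    def __init__(self, threshold=0.7, min_samples=10):
        self.threshold = threshold      # minimum acceptable forwarding ratio
        self.min_samples = min_samples  # evidence needed before judging
        self.obs = {}                   # neighbor -> [forwarded, dropped]

    def record(self, neighbor, forwarded):
        counts = self.obs.setdefault(neighbor, [0, 0])
        counts[0 if forwarded else 1] += 1

    def will_serve(self, neighbor):
        """Forward for a neighbor unless our own observations condemn it."""
        f, d = self.obs.get(neighbor, (0, 0))
        if f + d < self.min_samples:
            return True  # not enough first-hand evidence yet
        return f / (f + d) >= self.threshold

m = FirstHandMonitor()
for _ in range(8):
    m.record("n1", forwarded=False)  # n1 drops most packets we overhear
for _ in range(2):
    m.record("n1", forwarded=True)
print(m.will_serve("n1"))  # False: observed ratio 0.2 is below 0.7
```

The trade-off the abstract evaluates is visible here: the node punishes misbehavior only as fast as it can observe it directly, in exchange for immunity to fabricated reports.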
A connection-level call admission control using genetic algorithm for multi-class multimedia services in wireless networks
Call admission control in a wireless cell in a personal communication system (PCS) can be modeled as an M/M/C/C queuing system with m classes of users. A semi-Markov decision process (SMDP) can be used to optimize channel utilization with upper bounds on handoff blocking probabilities as quality-of-service constraints. However, this method is too time-consuming and therefore fails when the state space and action space are large. In this paper, we apply a genetic algorithm to address the situations where the SMDP approach fails. We code call admission control decisions as binary strings, where a value of "1" in position i (i=1,…,m) of a decision string stands for the decision to accept a call of class i, and a value of "0" in position i stands for the decision to reject a call of class i. The coded binary strings are fed into the genetic algorithm, and the resulting binary strings are found to be near-optimal call admission control decisions. Simulation results from the genetic algorithm are compared with the optimal solutions obtained from linear programming for the SMDP approach. The results reveal that the genetic algorithm approximates the optimal approach very well with less complexity.
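The binary encoding described above lends itself to a standard genetic algorithm. The sketch below is a toy version under stated assumptions: the fitness function (fixed per-class rewards and QoS penalties) is a hypothetical stand-in for evaluating a decision string against the paper's M/M/C/C model, and the GA operators are the textbook ones, not necessarily those used in the paper.

```python
import random

random.seed(0)

M = 4  # number of user classes (the paper's m; value chosen for the example)

# Hypothetical fitness: reward accepting a class (utilization gain) but
# penalize classes whose acceptance would violate a handoff-blocking budget.
# The real paper scores decision strings against the M/M/C/C model instead.
REWARD = [5, 4, 3, 2]
PENALTY = [0, 0, 6, 6]

def fitness(decision):
    return sum(REWARD[i] - PENALTY[i] for i in range(M) if decision[i])

def evolve(pop_size=20, generations=40, p_mut=0.1):
    # Population of binary decision strings: bit i = accept/reject class i+1.
    pop = [[random.randint(0, 1) for _ in range(M)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, M)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(M):               # bit-flip mutation
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # keeps only classes whose reward exceeds penalty
```

Because the search space has only 2^m points, the value of the GA in the paper is not this toy case but the large-m regime, where enumerating or solving the SMDP exactly becomes intractable while the GA's cost grows only with population size and generations.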