38,446 research outputs found

    Keep Your Nice Friends Close, but Your Rich Friends Closer -- Computation Offloading Using NFC

    The increasing complexity of smartphone applications and services necessitates high battery consumption, but the growth of smartphones' battery capacity is not keeping pace with these increasing power demands. To overcome this problem, researchers established the Mobile Cloud Computing (MCC) research area. In this paper we advance on previous ideas by proposing and implementing the first known Near Field Communication (NFC)-based computation offloading framework. This research is motivated by the advantages of NFC: short-distance communication, better security, and low battery consumption. We design a new NFC communication protocol that overcomes the limitations of the default protocol, removing the need for constant user interaction, the one-way communication restriction, and the limit on small data transfers. We present experimental results on the energy consumption and execution time of two computationally intensive representative applications: (i) RSA key generation and encryption, and (ii) gaming/puzzles. We show that when the helper device is more powerful than the device offloading the computations, the execution time of the tasks is reduced. Finally, we show that devices that offload application parts considerably reduce their energy consumption due to the low-power NFC interface and the benefits of offloading. Comment: 9 pages, 4 tables, 13 figures
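    The core trade-off described above can be sketched as a simple energy comparison. The following Python sketch is illustrative only: the function name, the NFC data rate, and all power figures are assumptions for the example, not measurements or logic taken from the paper's framework.

```python
# Hypothetical offload-or-compute-locally decision, assuming illustrative
# energy/throughput figures (not the paper's measured values).

def should_offload(task_cycles: float,
                   local_cpu_hz: float,
                   local_power_w: float,
                   payload_bytes: int,
                   nfc_rate_bps: float = 106_000,   # assumed NFC throughput (106 kbit/s mode)
                   nfc_power_w: float = 0.05) -> bool:  # assumed NFC radio power draw
    """Return True if offloading over NFC is expected to save energy."""
    # Energy to execute the task locally.
    local_energy = (task_cycles / local_cpu_hz) * local_power_w
    # Energy to ship the task's input/output over the low-power NFC link;
    # the helper's own cost is ignored, since it is the "rich friend".
    transfer_time = payload_bytes * 8 / nfc_rate_bps
    offload_energy = transfer_time * nfc_power_w
    return offload_energy < local_energy
```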

    Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context

    This deliverable presents several research proposals for the FEDERICA network on different subjects, such as monitoring, routing, signalling, resource discovery, and isolation. For each topic, one or more possible solutions are elaborated, explaining the background, the functioning, and the implications of the proposed solutions. This deliverable goes further into the research aspects of FEDERICA. First of all, the architecture of the control plane for the FEDERICA infrastructure is defined; several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is on the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is the creation and implementation of the prototype/tool for the FEDERICA slice-oriented control system using the appropriate framework. The deliverable goes deeply into the definition of the containers exchanged between entities and their syntax, preparing this tool for the future implementation of any control-plane algorithm, whether to apply UPB policies or to configure the system by hand. We opt for an open solution despite the real-time limitations we may face (for instance, opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element of the control plane, and additional features must be added to it. From a functional point of view, this control plane is composed of several procedures that provide a reliable application and that include mechanisms or algorithms able to discover and assign resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure. The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation, and monitoring. All of these topics must be researched in order to find a good solution for the FEDERICA network, and some of the corresponding algorithms have started to be analyzed and will be expanded in the next deliverable. Current standardization efforts and existing solutions have been investigated as well. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements; furthermore, no standardization exists, so knowledge must be obtained from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure, but should also be applicable to other virtualized networks. Postprint (published version)
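    To make the resource-discovery and allocation roles of such a control plane concrete, here is a minimal sketch of a slice resource request being matched against an inventory of substrate nodes. The field names, data structures, and first-fit filter are invented for illustration and do not correspond to the actual FEDERICA container syntax defined in the deliverable.

```python
# Illustrative sketch only: not the FEDERICA control-plane container format.
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    slice_id: str
    cpu_cores: int
    memory_mb: int
    bandwidth_mbps: int

@dataclass
class SubstrateNode:
    node_id: str
    free_cores: int
    free_memory_mb: int
    free_bandwidth_mbps: int

def discover_candidates(request: ResourceRequest,
                        inventory: list[SubstrateNode]) -> list[str]:
    """Return nodes that could host the requested virtual resource (simple filter)."""
    return [n.node_id for n in inventory
            if n.free_cores >= request.cpu_cores
            and n.free_memory_mb >= request.memory_mb
            and n.free_bandwidth_mbps >= request.bandwidth_mbps]
```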

    A Protocol for the Atomic Capture of Multiple Molecules at Large Scale

    With the rise of service-oriented computing, applications are more and more based on the coordination of autonomous services. Envisioned over largely distributed and highly dynamic platforms, expressing this coordination calls for alternative programming models. The chemical programming paradigm, which models applications as chemical solutions where molecules, representing the digital entities involved in the computation, react together to produce a result, has recently been shown to provide the abstractions needed for the autonomic coordination of services. However, the execution of such programs over large-scale platforms raises several problems that hinder this paradigm from actually being leveraged. Among them, the atomic capture of molecules participating in concurrent reactions is one of the most significant. In this paper, we propose a protocol for the atomic capture of these molecules, distributed and evolving over a large-scale platform. As the density of possible reactions is crucial for the liveness and efficiency of such a capture, the proposed protocol is made up of two sub-protocols, each of them aimed at addressing a different level of density of potential reactions in the solution. While the decision to choose one or the other is local to each node participating in a program's execution, a globally coherent behaviour is obtained. A proof of liveness, as well as intensive simulation results showing the efficiency and limited overhead of the protocol, are given. Comment: 13th International Conference on Distributed Computing and Networking (2012)
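    For intuition about what "atomic capture" means here, the sketch below shows a pessimistic, lock-ordering capture of all reactants of one reaction: either every molecule is grabbed or none is. It is a generic single-process illustration, not the paper's distributed two-sub-protocol design, and all names are invented.

```python
# Minimal sketch of atomically capturing several molecules for one reaction.
import threading

class Molecule:
    def __init__(self, ident: int, value):
        self.ident = ident
        self.value = value
        self.lock = threading.Lock()
        self.consumed = False

def atomic_capture(molecules) -> bool:
    """Try to grab every reactant of a reaction atomically; True on success."""
    # Locking in a global (identifier) order prevents deadlock between
    # competitors whose reactions share reactants.
    ordered = sorted(molecules, key=lambda m: m.ident)
    acquired = []
    try:
        for m in ordered:
            m.lock.acquire()
            acquired.append(m)
            if m.consumed:          # another reaction already won this molecule
                return False
        for m in ordered:           # all reactants secured: consume them together
            m.consumed = True
        return True
    finally:
        for m in acquired:
            m.lock.release()
```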

    Panda: Neighbor Discovery on a Power Harvesting Budget

    Object tracking applications are gaining popularity and will soon utilize Energy Harvesting (EH) low-power nodes that will consume power mostly for Neighbor Discovery (ND), i.e., identifying nodes within communication range. Although ND protocols have been developed for sensor networks, the challenges posed by emerging EH low-power transceivers have not been addressed. Therefore, we design an ND protocol tailored to the characteristics of a representative EH prototype: the TI eZ430-RF2500-SEH. We present a generalized model of ND accounting for the prototype's unique characteristics (i.e., energy costs for transmission/reception, and transceiver state switching times/costs). Then, we present the Power Aware Neighbor Discovery Asynchronously (Panda) protocol, in which nodes transition between the sleep, receive, and transmit states. We analyze Panda and select its parameters to maximize the ND rate subject to a homogeneous power budget. We also present Panda-D, designed for non-homogeneous EH nodes. We perform extensive testbed evaluations using the prototypes and study various design tradeoffs. We demonstrate a small difference (less than 2%) between experimental and analytical results, thereby confirming the modeling assumptions. Moreover, we show that Panda improves the ND rate by up to 3x compared to related protocols. Finally, we show that Panda-D operates well under non-homogeneous power harvesting.
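    The power-budget constraint behind this kind of parameter selection can be illustrated with a back-of-the-envelope calculation: the node's long-run average power draw across sleep and awake (receive/transmit) states must not exceed the harvested power. The numbers below are placeholders, not values measured on the eZ430-RF2500-SEH, and the simple two-state duty-cycle model is an assumption for illustration.

```python
# Illustrative duty-cycle bound under an energy-harvesting power budget.

def max_duty_cycle(harvest_mw: float,
                   sleep_mw: float = 0.01,    # assumed sleep-state power
                   awake_mw: float = 60.0) -> float:  # assumed receive/transmit power
    """Largest fraction of time the node may spend awake without exceeding the budget."""
    if harvest_mw <= sleep_mw:
        return 0.0
    return min(1.0, (harvest_mw - sleep_mw) / (awake_mw - sleep_mw))

# Example: a 1 mW harvesting budget allows roughly a 1.7% awake duty cycle,
# which is what makes discovery-rate-maximizing parameter selection important.
print(f"{max_duty_cycle(1.0):.3f}")
```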

    SDDV: scalable data dissemination in vehicular ad hoc networks

    An important challenge in the domain of vehicular ad hoc networks (VANETs) is the scalability of data dissemination. Under dense traffic conditions, the large number of communicating vehicles can easily result in a congested wireless channel. In that situation, delays and packet losses increase to a level at which the VANET can no longer be used for road safety applications. This paper introduces scalable data dissemination in vehicular ad hoc networks (SDDV), a holistic solution to this problem composed of several techniques spread across the different layers of the protocol stack. Simulation results are presented that illustrate the severity of the scalability problem when applying common state-of-the-art techniques and parameters. Starting from such a baseline solution, optimization techniques are gradually added to SDDV until the scalability problem is entirely solved. Besides the simulation-based performance evaluation, the paper ends with an evaluation of the final SDDV configuration on real hardware. Experiments involving 110 nodes are performed on the iMinds w-iLab.t wireless testbed. The results of these experiments confirm the results obtained in the corresponding simulations.
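    One representative example of the kind of cross-layer congestion-control technique such a solution can layer in is rate adaptation driven by measured channel load. The sketch below shows a generic version of that idea; the thresholds, rates, and linear back-off rule are illustrative assumptions, not SDDV's actual parameters or mechanism.

```python
# Generic beacon-rate adaptation based on the channel busy ratio (CBR).

def adapt_beacon_rate(channel_busy_ratio: float,
                      min_hz: float = 1.0,     # assumed floor for safety beacons
                      max_hz: float = 10.0,    # assumed nominal beacon rate
                      target_cbr: float = 0.6) -> float:  # assumed target load
    """Return the beaconing frequency (Hz) to use for the next interval."""
    if channel_busy_ratio <= target_cbr:
        return max_hz                       # channel has headroom: full rate
    # Back off linearly between the target load and a fully saturated channel.
    overload = (channel_busy_ratio - target_cbr) / (1.0 - target_cbr)
    return max(min_hz, max_hz * (1.0 - overload))
```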

    Evaluation study of IEEE 1609.4 performance for safety and non-safety messages dissemination

    IEEE 1609.4 was developed to support multi-channel operation and a channel switching procedure in order to serve both safety and non-safety vehicular applications. However, the protocol has drawbacks: it does not make efficient use of channel bandwidth for single-radio WAVE devices, and it suffers from high bounded delay and packet loss, especially in large-scale networks in terms of the number of active nodes. This paper evaluates the performance of the IEEE 1609.4 multi-channel protocol for safety and non-safety applications and compares it with the single-channel IEEE 802.11p protocol. The multi-channel and single-channel protocols are analyzed in different environments, relying on a realistic dataset and using OMNeT++ as the network simulator and SUMO as the traffic simulator, coupled through the Veins framework. The performance evaluation results show that the delay of the single-channel IEEE 802.11p protocol degrades by 36% compared with the multi-channel protocol.
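    For readers unfamiliar with the switching procedure being evaluated, the sketch below models the alternating schedule a single-radio WAVE device follows: a sync interval split between a control-channel (CCH) slot and a service-channel (SCH) slot, each preceded by a guard interval. The 100 ms/50 ms/4 ms values are the commonly cited defaults for IEEE 1609.4, used here for illustration.

```python
# Sketch of the IEEE 1609.4 alternating CCH/SCH schedule (typical default timings).
SYNC_INTERVAL_MS = 100   # one full cycle
CCH_INTERVAL_MS = 50     # first half: control channel
GUARD_MS = 4             # radio switching, channel unusable

def active_channel(t_ms: float) -> str:
    """Which channel a single-radio WAVE device is tuned to at time t (ms)."""
    offset = t_ms % SYNC_INTERVAL_MS
    if offset < CCH_INTERVAL_MS:
        return "guard" if offset < GUARD_MS else "CCH"
    offset -= CCH_INTERVAL_MS
    return "guard" if offset < GUARD_MS else "SCH"

# A safety message generated during the SCH interval must wait for the next
# CCH slot, which is one source of the bounded delay discussed above.
```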

    Service level agreement framework for differentiated survivability in GMPLS-based IP-over-optical networks

    In the next-generation optical Internet, i.e., GMPLS-based IP-over-optical networks, ISPs will be required to support a wide variety of applications, each with its own requirements. These requirements are contracted by means of a service level agreement (SLA). This paper describes a recovery framework that may be included in the SLA contract between the ISP and its customers in order to provide the required level of survivability. A key concern with such a recovery framework is how to present the different survivability alternatives, including recovery techniques, failure scenarios, and layered integration, in a manner transparent to customers. Two issues are investigated in this paper. The first is the performance of the recovery framework when a proposed mapping procedure is applied as an admission control mechanism at the edge router of a smart-edge, simple-core GMPLS-based IP/WDM network. The second pertains to the performance of pre-allocated restoration and its ability to provide protected connections under different failure scenarios.
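    The sketch below illustrates the general shape of an SLA-class-to-recovery mapping applied as admission control at an edge router. The class names, recovery techniques, and admission rule are invented for this example and are not the mapping procedure defined in the paper.

```python
# Hypothetical SLA survivability classes mapped to recovery techniques.
from typing import Optional

RECOVERY_BY_CLASS = {
    "platinum": "dedicated 1+1 protection",
    "gold": "shared backup protection",
    "silver": "pre-allocated restoration",
    "bronze": "best-effort (no recovery)",
}

def admit_request(sla_class: str, protected_capacity_free: bool) -> Optional[str]:
    """Map an incoming connection request to a recovery technique, or reject it (None)."""
    technique = RECOVERY_BY_CLASS.get(sla_class)
    if technique is None:
        return None                       # unknown class: reject
    if sla_class in ("platinum", "gold") and not protected_capacity_free:
        return None                       # cannot honour the survivability guarantee
    return technique
```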