49 research outputs found

    Improving Energy Efficiency and Security for Pervasive Computing Systems

    Get PDF
    Pervasive computing systems are composed of various personal mobile devices connected by wireless networks. They have gained soaring popularity because of the rapid proliferation of personal mobile devices, whose number has increased steeply over the years and will surpass the world population by 2016.

    However, the fast development of pervasive computing systems faces two critical issues: energy efficiency and security assurance. The power consumption of personal mobile devices keeps increasing, while battery capacity has hardly improved over the years. At the same time, a lot of private information is stored on and transmitted from personal mobile devices, which operate in very risky environments. As such, these devices have become favorite targets of malicious attacks. Without proper solutions to these two challenging problems, concerns will keep rising and slow down the advancement of pervasive computing systems.

    We select smartphones as the representative devices in our energy study because they are popular in pervasive computing systems and their energy problem concerns users the most in comparison with other devices. We start with an analysis of the power usage pattern of internal system activities, and then identify energy bugs for improving energy efficiency. We also investigate the external communication methods employed on smartphones, such as cellular networks and wireless LANs, to reduce the energy overhead of transmissions.

    As to security, we focus on implantable medical devices (IMDs), which are specialized for medical purposes. Malicious attacks on IMDs may lead to serious damage in both the cyber and physical worlds. Unlike smartphones, simply borrowing existing security solutions does not work for IMDs because of their limited resources and high accessibility requirements. Thus, we introduce an external device to serve as the security proxy for IMDs and to ensure that IMDs remain accessible to save patients' lives in certain emergency situations when security credentials are not available.
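
    The thesis summarised above profiles the power usage of internal smartphone activities. As a purely illustrative sketch (not the author's actual power model), smartphone energy is often approximated by summing per-component power draws multiplied by their active times; the component names and power figures below are hypothetical.

```python
# Hypothetical component-based energy estimate: E = sum(P_i * t_i).
# Component names and power figures are illustrative assumptions,
# not measurements from the thesis.

ACTIVE_POWER_MW = {          # assumed average power draw per component (mW)
    "cpu_busy": 600.0,
    "screen_on": 400.0,
    "wifi_tx": 700.0,
    "cellular_tx": 1200.0,
}

def energy_mj(active_seconds: dict[str, float]) -> float:
    """Energy in millijoules for the given per-component active times."""
    return sum(ACTIVE_POWER_MW[c] * t for c, t in active_seconds.items())

# Example: 10 s of busy CPU, 30 s of screen, 5 s of cellular transmission.
print(energy_mj({"cpu_busy": 10, "screen_on": 30, "cellular_tx": 5}))
```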

    Experimenting with commodity 802.11 hardware: overview and future directions

    Get PDF
    The huge adoption of 802.11 technologies has triggered a vast amount of experimentally-driven research works. These works range from performance analysis to protocol enhancements, including the proposal of novel applications and services. Due to the affordability of the technology, this experimental research is typically based on commercial off-the-shelf (COTS) devices, and, given the rate at which 802.11 releases new standards (which are adopted into new, affordable devices), the field is likely to continue to produce results. In this paper, we review and categorise the most prevalent works carried out with 802.11 COTS devices over the past 15 years, to present a timely snapshot of the areas that have attracted the most attention so far, through a taxonomy that distinguishes between performance studies, enhancements, services, and methodology. In this way, we provide a quick overview of the results achieved by the research community that enables prospective authors to identify potential areas of new research, some of which are discussed after the presentation of the survey. This work has been partly supported by the European Community through the CROWD project (FP7-ICT-318115) and by the Madrid Regional Government through the TIGRE5-CM program (S2013/ICE-2919).

    Energy efficient offloading techniques for heterogeneous networks

    Get PDF
    Mobile data offloading has been proposed as a solution to the network congestion problem, which is continuously worsening due to the increase in mobile data demand. The concept of offloading refers to the exploitation of network heterogeneity with the objective of mitigating the load of the cellular network infrastructure. In this thesis, a multicast protocol for short-range networks that exploits the characteristics of physical-layer network coding is presented. The proposed protocol, named CooPNC, provides a novel cooperative approach that allows collision resolution through an indirect inter-network cooperation scheme. Through this scheme, a reliable multicast protocol for partially overlapping short-range networks with low control overhead is obtained. It is shown that CooPNC achieves higher throughput and energy efficiency, while presenting lower delay, compared to state-of-the-art multicast protocols. A detailed description of the proposed protocol is provided, both for a simple scenario of overlapping networks and for a generalised scalable scenario. Through mathematical analysis and simulations, it is shown that CooPNC presents significant performance gains compared to other state-of-the-art multicast protocols for short-range networks.

    In order to reveal the performance bounds of physical-layer network coding, the so-called Cross Network is investigated under diverse Network Coding (NC) techniques. The impact of Medium Access Control (MAC) layer fairness on the throughput performance of the network is analysed for the cases of pure relaying, digital NC with and without overhearing, and physical-layer NC with and without overhearing. A comparison among these techniques is presented, and the throughput bounds caused by MAC-layer limitations are discussed. Furthermore, it is shown that significant coding gains are achieved with digital and physical-layer NC, and the energy efficiency of each NC case when applied to the Cross Network is presented.

    In the second part of this thesis, uplink offloading using IP Flow Mobility (IFOM) is investigated. IFOM allows an LTE User Equipment (UE) to maintain two concurrent data streams, one through LTE and the other through WiFi, an access technology whose uplink is limited by the inherent fairness design of IEEE 802.11 DCF. To overcome these limitations, a weighted proportionally fair bandwidth allocation algorithm is proposed for the data volume offloaded through WiFi, in conjunction with a pricing-based rate allocation algorithm for the remaining data volume that the UEs transmit through the LTE uplink. With the proposed approach, the energy efficiency of the UEs is improved and the offloaded data volume is increased under the concurrent use of access technologies that IFOM allows. In the weighted proportionally fair WiFi bandwidth allocation, both the different upload data needs of the UEs and their LTE spectral efficiency are considered, and an access mechanism is proposed that improves the use of WiFi access in uplink offloading. On the LTE side, a two-stage pricing-based rate allocation is proposed, under both linear and exponential pricing approaches, with the objective of satisfying all offloading UEs regarding their LTE uplink access.
    The existence of a malicious UE that aims to exploit the WiFi bandwidth against its peers, in order to upload less data through the energy-demanding LTE uplink, is also considered, and a reputation-based method is proposed to combat its selfish operation. This approach is theoretically analysed and its performance is evaluated for both the malicious and the truthful UEs in terms of energy efficiency. It is shown that, while the malicious UE presents better energy efficiency before being detected, its performance is significantly degraded with the proposed reaction method.
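
    The weighted proportionally fair WiFi allocation described above can be illustrated with a minimal sketch: maximising the weighted sum of log-rates over a single shared capacity gives each UE a share proportional to its weight. How the thesis actually defines the weights is not reproduced here; combining offload demand with LTE spectral efficiency below is an assumption for illustration only.

```python
# Weighted proportional fairness over one shared WiFi uplink:
# maximising sum_i w_i * log(x_i) subject to sum_i x_i <= C
# yields x_i = C * w_i / sum_j w_j.

def wifi_shares(capacity_mbps: float, weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    return {ue: capacity_mbps * w / total for ue, w in weights.items()}

# Hypothetical weights: offload demand scaled by the inverse of LTE spectral
# efficiency, so UEs that are costly to serve over LTE get more WiFi bandwidth.
demand_mb = {"ue1": 50, "ue2": 200, "ue3": 120}
lte_spectral_eff = {"ue1": 3.0, "ue2": 1.2, "ue3": 2.0}   # bit/s/Hz, assumed
weights = {ue: demand_mb[ue] / lte_spectral_eff[ue] for ue in demand_mb}

print(wifi_shares(30.0, weights))   # per-UE WiFi shares in Mbit/s
```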

    COIN@AAMAS2015

    Get PDF
    COIN@AAMAS2015 is the nineteenth edition of the series, and the fourteen papers included in these proceedings demonstrate the vitality of the community; they provide the grounds for a solid workshop program and what we expect will be a most enjoyable and enriching debate.

    Code offloading in opportunistic computing

    Get PDF
    With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is the enhancement of performance and, consequently, of the user experience. This activity is commonly referred to as computational offloading, and it has been extensively investigated in the past years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. The dissertation confirms the results of Cuervo et al. and illustrates more use cases where the cloud may not be the right choice. This dissertation addresses the following question: is it possible to build a novel approach for offloading computation that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that is able to use local resources when the cloud is not usable, and to remove the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing, I remove the boundaries that tie the solution to an infrastructure by adding locally available devices to increase the chances of successful offloading. To achieve the goals of the dissertation, it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, together with their interactions, and assess their impact on the system. The outcome of the analysis is the mapping of the problem to a combinatorial optimization problem that is notoriously NP-hard. There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used because they require a global view that can only be maintained by a centralized infrastructure; thus, local solutions are needed. To empirically tackle the anyrun computing paradigm, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework (developed within the European project SCAMPI) and employs this information to profile a given device; in other words, it decides whether offloading is advantageous, i.e., whether it can reduce the local footprint compared to local execution in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption). To empirically evaluate ARC, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I use the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (3G, 4G, and high-speed WAN).
    The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances where the cloud performs poorly. Moreover, I have empirically shown the limitations of cloud-based approaches: problems with high transmission costs tend to perform poorly unless they also have high computational needs. The second part of the evaluation is carried out in opportunistic/cloudlet scenarios, where I used my custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading. To this end, I have performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, but improves on it by 50% to 60% in the opportunistic domain.
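
    The offloading decision sketched below is a hedged illustration of the kind of trade-off ARC's inference model is described as making; the cost model, the assumed idle power while a peer computes, and all numeric values are illustrative assumptions, not ARC's actual logic.

```python
# Offload only if the estimated remote cost (execution on the peer plus
# transferring inputs/outputs) beats local execution in both time and energy.

from dataclasses import dataclass

@dataclass
class Estimate:
    exec_time_s: float   # estimated execution time on the local device
    energy_j: float      # estimated energy spent by the local device

def transfer_cost(payload_bytes: int, bandwidth_bps: float,
                  radio_power_w: float) -> Estimate:
    t = 8 * payload_bytes / bandwidth_bps
    return Estimate(exec_time_s=t, energy_j=radio_power_w * t)

def should_offload(local: Estimate, remote_exec_time_s: float,
                   idle_power_w: float, payload_bytes: int,
                   bandwidth_bps: float, radio_power_w: float) -> bool:
    tx = transfer_cost(payload_bytes, bandwidth_bps, radio_power_w)
    remote_time = remote_exec_time_s + tx.exec_time_s
    # While the peer computes, the local device is assumed to be near idle.
    remote_energy = tx.energy_j + idle_power_w * remote_exec_time_s
    return remote_time < local.exec_time_s and remote_energy < local.energy_j

# Example: 2 MB payload over a 5 Mbit/s opportunistic link.
local = Estimate(exec_time_s=12.0, energy_j=9.0)
print(should_offload(local, remote_exec_time_s=3.0, idle_power_w=0.1,
                     payload_bytes=2_000_000, bandwidth_bps=5e6,
                     radio_power_w=1.0))
```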

    Transport Architectures for an Evolving Internet

    Get PDF
    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks—cellular networks, datacenters—and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple FaceTime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself—how easy is it to “learn” a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
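
    As a loose, hedged illustration of the forecast-driven idea behind Sprout (not its actual probabilistic model), a sender can pace itself against a per-interval forecast of how many packets the link will deliver, releasing only what can drain within a delay budget. The forecast values and tick length below are made up.

```python
# Pace transmission against a (hypothetical) delivery forecast so that the
# queue drains within the delay budget.

def packets_to_send(forecast_deliveries: list[int], queued: int,
                    delay_budget_ticks: int) -> int:
    """Packets we may hand to the link now such that everything already
    queued plus the new packets drains within the budget."""
    drainable = sum(forecast_deliveries[:delay_budget_ticks])
    return max(0, drainable - queued)

# Example: forecast of deliveries per 20 ms tick, a 100 ms budget (5 ticks),
# and 10 packets already queued.
print(packets_to_send([4, 6, 5, 3, 8, 7], queued=10, delay_budget_ticks=5))
```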

    RNA interference and heterochromatin formation in fission yeast

    Get PDF

    Next-Generation Public Safety Systems Based on Autonomous Vehicles and Opportunistic Communications

    Get PDF
    An emergency scenario is characterized by the unpredictability of the environmental conditions and by the scarcity of available communication infrastructure. After a natural or man-made disaster, the main public and private infrastructures are partially damaged or totally destroyed. These infrastructures include roads, bridges, water supplies, electrical grids, telecommunications, and so on. In these conditions, the first rescue operations executed by the public safety organizations can be very difficult, due to the unpredictability of the disaster-area environment and the lack of working communication systems. The aim of this work is to introduce next-generation public safety systems whose main focus is the use of unmanned vehicles able to exploit the self-organizing characteristics of such autonomous systems. With the proposed public safety systems, a team of autonomous vehicles will be able to overcome the hazardous environments of a post-disaster scenario by introducing a temporary, dynamic network infrastructure that enables first responders to cooperate and to communicate with the victims involved. Furthermore, given the pervasive penetration of smart end-user devices, the emergence of spontaneous networks could constitute a promising solution for implementing emergency communication systems. With these systems, the survivors will be able to self-organize into a communication network that allows them to send alerts and information messages towards the rescue teams, even in the absence of communication infrastructure.
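
    The spontaneous networks described above rely on devices relaying alerts hop by hop when they meet. The sketch below shows a minimal store-carry-forward (epidemic) exchange, a common building block of opportunistic networks; the message fields and contact sequence are assumptions for illustration, not the authors' specific protocol.

```python
# Store-carry-forward: on each opportunistic contact, two devices swap the
# alert messages the other is missing, so alerts spread hop by hop until
# they reach a rescue-team node.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    buffer: dict[str, str] = field(default_factory=dict)   # msg_id -> payload

    def create_alert(self, msg_id: str, text: str) -> None:
        self.buffer[msg_id] = text

def contact(a: Node, b: Node) -> None:
    """Epidemic exchange on an opportunistic contact: swap missing messages."""
    for msg_id, payload in list(a.buffer.items()):
        b.buffer.setdefault(msg_id, payload)
    for msg_id, payload in list(b.buffer.items()):
        a.buffer.setdefault(msg_id, payload)

# Example: a victim's alert reaches the rescue team via an intermediate UAV.
victim, uav, rescue = Node("victim"), Node("uav"), Node("rescue")
victim.create_alert("m1", "trapped near collapsed bridge")
contact(victim, uav)     # first opportunistic contact
contact(uav, rescue)     # second contact delivers the alert
print(rescue.buffer)
```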
