    Performance comparison of baseline routing protocols in pocket switched network

    Pocket Switched Networking (PSN) is a branch of Delay Tolerant Networking (DTN) intended to operate in challenged networks, i.e., networks that lack infrastructure (such as disaster areas) and therefore offer only intermittent connectivity. PSN provides a new paradigm for distributing messages by taking advantage of nodes roaming from one place to another. In this paper, the network performance of eight PSN routing protocols is investigated: First Contact, Direct Delivery, Epidemic, PRoPHET (Probabilistic Routing Protocol using History of Encounters and Transitivity), Spray and Wait, Binary Spray and Wait, Fuzzy Spray, and Adaptive Fuzzy Spray and Wait. The performance metrics are packet delivery ratio, overhead ratio, and average latency, and the Opportunistic Network Environment (ONE) simulator is used to evaluate them. Experiments show that Epidemic achieves the best message delivery ratio but also the highest overhead ratio, that Direct Delivery has the lowest (zero) overhead ratio, and that PRoPHET has the lowest average latency.
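
    For readers unfamiliar with these metrics, they reduce to simple ratios over per-message statistics. The sketch below (Python, with hypothetical record fields rather than the ONE simulator's actual report format) shows one common way to compute them; in particular, the common definition of overhead ratio, (relayed - delivered) / delivered, explains why Direct Delivery scores zero.

    # Minimal sketch of the three metrics above, computed from per-message
    # statistics. Record fields are hypothetical, not the ONE report format.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class MessageStats:
        created_at: float               # simulation time the message was created
        delivered_at: Optional[float]   # None if never delivered
        relays: int                     # number of times any copy was forwarded

    def delivery_ratio(msgs: List[MessageStats]) -> float:
        delivered = sum(1 for m in msgs if m.delivered_at is not None)
        return delivered / len(msgs)

    def overhead_ratio(msgs: List[MessageStats]) -> float:
        # (relayed - delivered) / delivered; zero for Direct Delivery, where the
        # only transfer of each message is its delivery to the destination.
        # Assumes at least one message was delivered.
        delivered = sum(1 for m in msgs if m.delivered_at is not None)
        relayed = sum(m.relays for m in msgs)
        return (relayed - delivered) / delivered

    def average_latency(msgs: List[MessageStats]) -> float:
        latencies = [m.delivered_at - m.created_at
                     for m in msgs if m.delivered_at is not None]
        return sum(latencies) / len(latencies)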

    Systems-Level Support for Mobile Device Connectivity.

    The rise of handheld computing devices has inspired a great deal of research aimed at addressing the unique problems posed by their mobile, "always-on" nature. To help mobile devices navigate a complex world of overlapping, uneven public wireless coverage, one must be mindful of the distinction between nomadic usage and true mobility. Accordingly, systems research must move beyond simply optimizing for a set of local conditions (e.g., finding the best access point for a laptop user in a stationary location) to considering the "derivative of connectivity" when network conditions are constantly in flux. This dissertation presents a new paradigm for networking support on mobile devices, with several complementary aspects. As a device encounters network connectivity, the system both evaluates the application-level quality of WiFi access points and updates a device-centric mobility model. Together, this mobility model and AP quality database yield "connectivity forecasts," which let applications optimize not just for current network conditions but for the expected big picture to come. Results of a prototype deployment in several cities show that considering the application-level quality of APs (rather than just signal strength) significantly boosts the success rate of finding a usable access point. Furthermore, this dissertation shows how connectivity forecasts---even with minimal model training time---allow several applications commonly found on mobile devices to reap significant benefits, such as extended battery life. Mobile devices are often within range of multiple connectivity options, however, and choosing just one therefore ignores potential connectivity. This dissertation describes a virtual link layer for Linux, called Juggler, that uses one network card to simultaneously associate with many WiFi APs, ad hoc groups, or mesh networks. The results show how Juggler can boost effective bandwidth by striping data across multiple APs, enable seamless 802.11 handoff by preemptively associating with the "next" AP before the current one becomes unusable, and maintain a modest side channel to the user's personal area network or mesh network without impacting foreground bandwidth to infrastructure. Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61718/1/tonynich_1.pd
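
    As an illustration of the AP-quality half of this idea only (the names and scoring rule below are assumptions, not the dissertation's actual algorithm), ranking APs by a history of application-level outcomes rather than by instantaneous signal strength might look like this:

    # Hedged sketch: rank candidate APs by recorded application-level quality.
    # All identifiers and the scoring rule are illustrative assumptions.
    from collections import defaultdict
    from typing import Dict, List

    class APQualityDB:
        def __init__(self):
            # bssid -> past session outcomes (achieved goodput in kbps,
            # 0 for sessions where association or connectivity checks failed)
            self.history: Dict[str, List[float]] = defaultdict(list)

        def record(self, bssid: str, goodput_kbps: float) -> None:
            self.history[bssid].append(goodput_kbps)

        def expected_quality(self, bssid: str) -> float:
            samples = self.history[bssid]
            if not samples:
                return 0.0   # unknown AP: no evidence it is usable
            # mean observed goodput as a crude usability score
            return sum(samples) / len(samples)

    def pick_ap(candidates: List[str], db: APQualityDB) -> str:
        # choose the AP with the best expected application-level quality
        return max(candidates, key=db.expected_quality)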

    Mobility models, mobile code offloading, and p2p networks of smartphones on the cloud

    It was just a few years ago that I bought my first smartphone, and now (almost) all of my friends possess at least one of these powerful devices. International Data Corporation (IDC) reports that smartphone sales showed strong growth worldwide in 2011, with 491.4 million units sold, up 61.3 percent from 2010. Furthermore, IDC predicts that 686 million smartphones will be sold in 2012, 38.4 percent of all handsets shipped. Silently, we are becoming part of a big mobile smartphone network, and it is amazing how our perception of the world is changing thanks to these small devices. If years ago the birth of the Internet made it possible to be online, smartphones nowadays allow us to be online all the time. Today we use smartphones for many of the tasks we used to do on desktops, and for many new ones: we browse the Internet, watch videos, upload data to social networks, use online banking, find our way with GPS and online maps, and communicate in revolutionary ways. Along with these benefits, these fancy and exciting devices have brought many challenges to the research area of mobile and distributed systems. One of the first problems that captured our attention was the study of the network that could potentially be created by interconnecting all smartphones. Typically, these devices can communicate with each other over short distances using technologies such as Bluetooth or WiFi. The network paradigm that arises from this intermittent communication, also known as a Pocket Switched Network (PSN) or Opportunistic Network ([10, 11]), is seen as a key technology for providing innovative services to users without the need for any fixed infrastructure. In PSNs, nodes are short-range communicating devices carried by humans; wireless communication links are created and dropped over time, depending on the physical distance between the device holders. On one side, social relations among humans yield recurrent movement patterns that help researchers design and build protocols that efficiently deliver messages to destinations ([12, 13, 14] among others). On the other side, the complexity of these social relations makes it difficult to build simple mobility models that efficiently generate large synthetic mobility traces that look real. Such traces would be very valuable in protocol validation, would replace the limited experimentally gathered data available so far, and would serve as a common benchmark on which researchers worldwide could validate existing and yet-to-be-designed protocols. With this in mind, we start our study by re-designing SWIM [15], an existing mobility model shown to generate traces with properties similar to those of real ones. We make SWIM able to easily generate large (small)-scale scenarios starting from known small (large)-scale ones; to the best of our knowledge, this is the first such study. In addition, we study the social aspects of SWIM-generated traces and show how to generate a SWIM scenario in which a specific community structure of nodes is required. Finally, exploiting the scaling properties of SWIM, we present the first analysis of the scaling capabilities of several forwarding protocols, namely Epidemic [16], Delegation [13], Spray&Wait [14], and BUBBLE [12]. The first results of these works appeared in [1], and, at the time of writing, [2] has been accepted with minor revisions.
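
    As a rough illustration of the kind of mobility model being scaled here, the sketch below mimics a SWIM-style waypoint choice: destinations are weighted by a mix of closeness to the node's home point and observed popularity. The exact weighting function is an assumption for illustration, not SWIM's published form.

    # Simplified, illustrative SWIM-style destination choice: a node weighs each
    # cell by closeness to its home point and by the cell's observed popularity,
    # then picks the next waypoint at random with those weights.
    import math
    import random
    from typing import List, Tuple

    Cell = Tuple[float, float]   # cell centre coordinates in the unit square

    def swim_like_next_cell(home: Cell, cells: List[Cell],
                            seen: List[float], alpha: float = 0.75) -> Cell:
        def closeness(c: Cell) -> float:
            d = math.dist(home, c)
            return 1.0 / (1.0 + d) ** 2      # decays with distance from home

        # seen[i] is the (non-negative) popularity the node has observed for cells[i]
        weights = [alpha * closeness(c) + (1.0 - alpha) * s
                   for c, s in zip(cells, seen)]
        return random.choices(cells, weights=weights, k=1)[0]
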
    Next, we take into account the fact that full cooperation and fairness among nodes cannot be assumed in PSNs: selfish behavior has to be considered, since it is an inherent aspect of the humans who carry the devices (see [17], [18]). We design a market-based mathematical framework that enables heterogeneous mobile users in an opportunistic mobile network to compromise optimally and efficiently on their QoS demands. The goal of the framework is to satisfy each user with its achieved (lesser) QoS while at the same time maximizing the social welfare of the users in the network. We base our study on the observation that, in practice, users are generally tolerant of accepting lower QoS guarantees than they demand, with the degree of tolerance varying from user to user. In general, QoS could cover parameters such as response time, number of computations per unit time, or allocated bandwidth. This study is described in detail in Chapter 2 of this dissertation and is included in [3]. Along the way in our study of the smartphone world came the big advent of mobile cloud computing: smartphones getting help from cloud-enabled services. Many researchers started believing that the cloud could help solve a crucial problem of smartphones: improving battery life. New-generation apps (gaming, navigation, video editing, augmented reality, speech recognition, etc.) are becoming very complex and require a considerable amount of power and energy; as a result, smartphones suffer short battery lifetimes. As a consequence, mobile users have to continually upgrade their hardware to keep pace with increasing performance requirements, yet they still experience battery problems. Many recent works have focused on building frameworks that enable mobile computation offloading to software clones of smartphones on the cloud (see [19, 20] among others), as well as backup systems for the data and applications stored in our devices [21, 22, 23]. However, none of these addresses the dynamism and scalability of execution on the cloud, which matter because users may request different computational power or backup space depending on their workload and task deadlines. Considering this and advancing on previous works, we design, build, and implement the ThinkAir framework, which focuses on the elasticity and scalability of the server side and enhances the power of mobile cloud computing by parallelizing method execution using multiple Virtual Machine (VM) images. We evaluate the system using a range of benchmarks, from simple micro-benchmarks to more complex applications. First, we show that with cloud offloading, execution time and energy consumption decrease by two orders of magnitude for the N-queens puzzle and by one order of magnitude for a face-detection and a virus-scan application. We then show that a parallelizable application can invoke multiple VMs to execute in the cloud in a seamless and on-demand manner, achieving greater reductions in execution time and energy consumption. Finally, we use a memory-hungry image combiner tool to demonstrate that applications can dynamically request VMs with more computational power in order to meet their computational requirements. The details of the ThinkAir framework and its evaluation are described in Chapter 4 and are included in [6, 5].
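
    To make the offloading trade-off concrete, the sketch below shows the kind of per-method decision such frameworks face: offload when the predicted remote execution plus transfer cost beats local execution in both time and energy. The cost model and the numbers in the closing comment are illustrative assumptions, not ThinkAir's actual profiler.

    # Illustrative per-method offloading decision under a simple cost model.
    from dataclasses import dataclass

    @dataclass
    class MethodProfile:
        local_time_s: float      # measured local execution time
        local_energy_j: float    # measured local energy cost
        state_bytes: int         # data to ship to the clone VM

    def should_offload(p: MethodProfile, uplink_bps: float, remote_speedup: float,
                       radio_power_w: float = 1.0) -> bool:
        tx_time = 8 * p.state_bytes / uplink_bps
        remote_time = p.local_time_s / remote_speedup + tx_time
        remote_energy = radio_power_w * tx_time   # device mostly idles while the VM runs
        return remote_time < p.local_time_s and remote_energy < p.local_energy_j

    # e.g. a 2 s, 3 J method with 200 kB of state over a 5 Mbit/s uplink and a
    # 10x faster clone: tx_time = 0.32 s, remote_time = 0.52 s, remote energy
    # = 0.32 J, so this rule would offload it.
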
    Later on, we push the smartphone-cloud paradigm further: we develop Clone2Clone (C2C), a distributed platform for cloud clones of smartphones. Along the way toward C2C, we study the performance of device clones hosted in various virtualization environments in both private (local servers) and public (Amazon EC2) clouds. We build the first customized Amazon Machine Image (AMI) for the Android OS, a key tool for obtaining reliable performance measurements of mobile cloud systems, and show how it boosts the performance of Android images on the Amazon cloud service. We then design, build, and implement Clone2Clone, which associates a software clone on the cloud with every smartphone and interconnects the clones in a p2p fashion by exploiting the networking service within the cloud. On top of C2C we build CloneDoc, a secure real-time collaboration system for smartphone users. We measure the performance of CloneDoc on a testbed of 16 Android smartphones, with clones hosted on both private and public cloud services, and show that C2C makes it possible to implement distributed execution of advanced p2p services in a network of mobile smartphones. The design and implementation of the Clone2Clone platform are included in [7], recently submitted to an international conference. We believe that Clone2Clone not only enables the execution of p2p applications in a network of smartphones, but can also serve as a tool to solve critical security problems. In particular, we consider the problem of computing an efficient patching strategy to stop worms spreading between smartphones. We assume that a worm infects devices and spreads via Bluetooth connections, email, or any other form of communication used by the smartphones. The C2C network is used to compute the best strategy to patch the smartphones so that the number of devices to patch is low (to reduce the load on the cellular infrastructure) and the worm is stopped quickly. We consider two well-defined worms, one spreading between the devices and one attacking the cloud before moving to the real smartphones. We describe CloudShield [8], a suite of protocols running on the peer-to-peer network of clones, and show through experiments with two different datasets (Facebook and LiveJournal) that CloudShield outperforms state-of-the-art worm-containment mechanisms for mobile wireless networks. This work was done in collaboration with Marco Valerio Barbera, a PhD colleague in the same department, who contributed mainly to the implementation and testing of the malware spreading and patching strategies on the different datasets.
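
    The patching computation described above is, at heart, a graph problem on the clone network. A hedged stand-in (not CloudShield's actual protocol) is a greedy dominating set over the contact graph, so that every device is either patched or adjacent to a patched one:

    # Hedged sketch: pick a small set of devices to patch via a greedy
    # dominating set over the device contact graph. Illustration only.
    from typing import Dict, Set

    def greedy_patch_set(contact_graph: Dict[str, Set[str]]) -> Set[str]:
        uncovered = set(contact_graph)
        patched: Set[str] = set()
        while uncovered:
            # pick the node that newly covers the most still-uncovered devices
            best = max(contact_graph,
                       key=lambda n: len(({n} | contact_graph[n]) & uncovered))
            patched.add(best)
            uncovered -= {best} | contact_graph[best]
        return patched
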
    The communication between the real devices and the cloud, necessary for mobile computation offloading and smartphone data backup, certainly does not come for free. To the best of our knowledge, none of the works related to mobile cloud computing explicitly studies the actual bandwidth and energy overhead of keeping a full backup of a smartphone's data and applications on the cloud, or of keeping up-to-date clones of smartphones there for computation-offloading purposes. In the last work of my PhD, again in collaboration with Marco Valerio Barbera, we studied the feasibility of both mobile computation offloading and mobile software/data backup in real-life scenarios. This joint work resulted in a recent publication [9] but is not included in this thesis. As in C2C, we assume an architecture in which each real device is associated with a software clone on the cloud. We define two types of clones: the off-clone, whose purpose is to support computation offloading, and the back-clone, which comes into use when a restore of the user's data and apps is needed. We measure the bandwidth and energy consumption incurred by the real device as a result of synchronization with the off-clone or the back-clone. The evaluation is performed through an experiment with 11 Android smartphones and an equal number of clones running on Amazon EC2. We study the data communication overhead necessary to achieve different levels of synchronization (once every 5 min, 30 min, 1 h, etc.) between devices and clones in both the off-clone and back-clone cases, and report the energy cost incurred at each synchronization frequency together with the corresponding communication overhead. My contribution to this work focused mainly on the experimental setup, deployment, and data collection.

    Towards efficacy and efficiency in sparse delay tolerant networks

    The ubiquitous adoption of portable smart devices has enabled a new way of communication via Delay Tolerant Networks (DTNs), whereby messages are routed by the personal devices carried by ever-moving people. Although a DTN is a type of Mobile Ad Hoc Network (MANET), traditional MANET solutions are ill-equipped to accommodate message delivery in DTNs due to the dynamic and unpredictable nature of people's movements and their spatio-temporal sparsity. Moreover, such DTNs are susceptible to catastrophic congestion and are inherently chaotic and arduous to operate in. This manuscript proposes approaches to handle message delivery in notably sparse DTNs. First, the ChitChat system [69] employs the social interests of individuals participating in a DTN to accurately model multi-hop relationships and to make opportunistic routing decisions for interest-annotated messages. Second, the ChitChat system is hybridized [70] to consider both social context and geographic information for learning the social semantics of locations, so as to identify worthwhile routing opportunities to destinations and areas of interest. Network density analyses of five real-world datasets are conducted to identify sparse datasets on which to run simulations; these analyses find that datasets commonly used in past DTN research are notably dense and well connected, and suggest that two rarely used datasets are appropriate for research into sparse DTNs. Finally, the Catora system is proposed to address congestion-driven degradation of service in DTNs by accomplishing two simultaneous tasks: (i) expediting the delivery of higher-quality messages by uniquely ordering messages for transfer and delivery, and (ii) avoiding congestion through strategic buffer management and message removal. Through dataset-driven simulations, these systems are found to outperform the state of the art, with ChitChat facilitating delivery in sparse DTNs and Catora remaining unencumbered by congestive conditions.
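
    As a hedged illustration of interest-based forwarding of the kind ChitChat performs (the similarity measure and margin below are assumptions, not ChitChat's actual utility function), a relay might hand an interest-annotated message only to encountered nodes whose profiles match it better than its own:

    # Illustrative interest-similarity forwarding test (Jaccard similarity).
    from typing import Set

    def jaccard(a: Set[str], b: Set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def should_forward(msg_interests: Set[str],
                       my_interests: Set[str],
                       peer_interests: Set[str],
                       margin: float = 0.1) -> bool:
        # forward only if the peer is a clearly better match for this message
        return jaccard(peer_interests, msg_interests) > \
               jaccard(my_interests, msg_interests) + margin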

    Socio-economic aware data forwarding in mobile sensing networks and systems

    The vision for smart, sustainable cities is one in which urban sensing is core to optimising city operation, which in turn improves citizen contentment. Wireless Sensor Networks are envisioned to become a pervasive form of data collection and analysis for smart cities, but deploying millions of interconnected sensors in a city can be cost-prohibitive. Given the ubiquity and ever-increasing capabilities of sensor-rich mobile devices, Wireless Sensor Networks with Mobile Phones (WSN-MP) provide a highly flexible and ready-made wireless infrastructure for future smart cities. In a WSN-MP, mobile phones not only generate the sensing data but also relay it using cellular communication or short-range opportunistic communication. The largest challenge here is the cost-effective, efficient transmission of potentially huge volumes of sensor data over sometimes meagre or faulty communications networks. This thesis investigates distributed data forwarding schemes in three types of WSN-MP: WSN with mobile sinks (WSN-MS), WSN with mobile relays (WSN-HR), and Mobile Phone Sensing Systems (MPSS). For these dynamic WSN-MP, realistic models are established and distributed algorithms are developed for efficient network performance, including data routing and forwarding, sensing rate control, and pricing. This thesis also considers realistic urban sensing issues such as economic incentivisation, and demonstrates how social-network and mobility awareness improve data transmission. Through simulations and real testbed experiments, it is shown that the proposed algorithms perform better than state-of-the-art schemes.
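
    As a generic illustration of distributed sensing-rate control (a textbook network-utility-maximisation iteration, not the thesis's own algorithm), phones can adapt their sensing rates to a price signal raised by an overloaded relay:

    # Generic NUM-style sketch: each phone maximises log(x) - price*x, so it
    # sets x = 1/price; the relay raises the price when offered load exceeds
    # its capacity and lowers it otherwise.
    def rate_control(capacity: float, n_phones: int, rounds: int = 200,
                     step: float = 0.01) -> list:
        price = 1.0
        rates = [1.0] * n_phones
        for _ in range(rounds):
            rates = [1.0 / price] * n_phones      # each phone's best response
            load = sum(rates)
            price = max(1e-6, price + step * (load - capacity))
        return rates                              # total load settles near capacity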

    Modeling Human Mobility Entropy as a Function of Spatial and Temporal Quantizations

    The knowledge of human mobility is an integral component of several branches of research and planning, including delay tolerant network routing, cellular network planning, disease prevention, and urban planning. The uncertainty associated with a person's movement plays a central role in movement predictability studies. This uncertainty can be quantified succinctly by the entropy rate, which is based on information-theoretic entropy and is usually calculated from past mobility traces. While the uncertainty, and therefore the entropy rate, depends on human behavior, the entropy rate is not invariant to the spatial resolution and sampling interval employed to collect mobility traces. The entropy rate of a person is a manifestation of the observable features in the person's mobility traces, and, like the entropy rate, these features depend on the spatio-temporal quantization. Different mobility studies are carried out using different spatio-temporal quantizations, which can obscure the behavioral differences between study populations, yet these behavioral differences are important for population-specific planning. The goal of this dissertation is to develop a theoretical model that addresses this shortcoming of mobility studies by separating the parameters pertaining to human behavior from the spatial and temporal parameters.
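
    For concreteness, the entropy rate of a discretised location trace is commonly estimated with a Lempel-Ziv-based estimator, S roughly equal to (n / sum_i Lambda_i) * log2(n), where Lambda_i is the length of the shortest substring starting at position i that does not appear earlier in the trace. The sketch below implements that common estimator; it is shown only to make the quantity concrete, since the dissertation's contribution is modelling how it varies with spatial and temporal quantization.

    # Common Lempel-Ziv style entropy-rate estimate (bits per symbol) for a
    # quantized location trace; not the dissertation's own model.
    import math
    from typing import List, Sequence

    def lz_entropy_rate(trace: Sequence[str]) -> float:
        n = len(trace)
        lambdas: List[int] = []
        for i in range(n):
            k = 1
            # grow the substring starting at i until it no longer occurs in trace[:i]
            while i + k <= n and _occurs(trace[i:i + k], trace[:i]):
                k += 1
            lambdas.append(k)
        return (n / sum(lambdas)) * math.log2(n)

    def _occurs(pattern: Sequence[str], history: Sequence[str]) -> bool:
        m, h = len(pattern), len(history)
        return any(list(history[j:j + m]) == list(pattern) for j in range(h - m + 1))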