Improving Resilience of Communication in Information Dissemination for Time-Critical Applications
Severe weather impacts lives, and in such dire conditions people rely on communication to organize relief and stay in touch with their loved ones. In these situations, cellular network infrastructure (referred to simply as infrastructure throughout this document) might be affected by power outages, link failures, etc. This urges us to look at ad hoc modes of communication to offload traffic partially or fully from the infrastructure, depending on its status.
We take a threefold approach, ranging from the case where the infrastructure is completely unavailable to one where it has been replaced by a makeshift, low-capacity mobile cellular base station.
First, we look into communication without infrastructure and the timely dissemination of weather alerts specific to geographical areas. We focus on the specific case of floods, as they affect a significant number of people. Given the nature of the problem, we can utilize the properties of Information Centric Networking (ICN) in this context, namely: i) flexibility and high failure resistance, since any node in the network that has the information can satisfy the query; ii) robustness, since only the sensor and the car need to communicate; and iii) fine-grained, geo-location-specific information dissemination. We analyze how message forwarding using ICN on top of an ad hoc network compares to an infrastructure-based approach, which is less resilient in the case of disaster. In addition, we compare the performance of different message forwarding strategies in Vehicular Ad hoc Networks (VANETs) using ICN. Our results show that the ICN strategy outperforms the infrastructure-based approach, being 100 times faster for 63% of the total messages delivered.
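The "any node can satisfy the query" property can be sketched as follows; this is a minimal illustration with hypothetical names (`Node`, `forward_interest`), not the evaluated VANET implementation:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.content_store = {}  # name -> data (in-network cache)

    def satisfy(self, interest):
        """Return cached data if this node can answer the Interest."""
        return self.content_store.get(interest)

def forward_interest(interest, nodes):
    """Flood the Interest over an ad hoc neighborhood; the first node
    holding a copy answers, so the original producer need not be reachable."""
    for node in nodes:
        data = node.satisfy(interest)
        if data is not None:
            return node.name, data
    return None, None

# A sensor published a flood alert; a passing car cached a copy of it.
sensor, car, phone = Node("sensor"), Node("car"), Node("phone")
alert = "/alerts/flood/zone-42"
car.content_store[alert] = "water level critical"

# The cached copy on the car satisfies the query, not the sensor itself.
responder, data = forward_interest(alert, [phone, car, sensor])
```

This is exactly what makes the approach failure-resistant: the query succeeds even if the producing sensor is unreachable.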
Then we look into the case where the cellular network infrastructure is present but under pressure due to a rapid increase in the volume of network traffic (as seen during a major event), or where it has been replaced by a low-capacity mobile tower. In this case, we look at offloading as much traffic as possible from the infrastructure to device-to-device communication. However, the host-oriented model of the TCP/IP-based Internet poses challenges to this communication pattern. A scheme that uses an ICN model to fetch content from nearby peers increases the resiliency of the network in cases of outages and disasters. We collected content popularity statistics from social media to create a content request pattern and evaluated our approach through the simulation of realistic urban scenarios. Additionally, we analyze the scenario of large crowds in sports venues. Our simulation results show that we can offload traffic from the backhaul network by up to 51.7%, suggesting an advantageous path to support the surge in traffic while keeping complexity and cost for the network operator at manageable levels.
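Content popularity gathered from social media is typically heavy-tailed, so a request pattern like the one described above is often modeled with a Zipf distribution. A minimal sketch under that assumption (the function names are hypothetical, and this is not the actual trace-construction code):

```python
import random

def zipf_weights(n, s=1.0):
    """Zipf popularity weights for n contents: rank k gets weight 1/k^s."""
    return [1.0 / (k ** s) for k in range(1, n + 1)]

def sample_requests(contents, num_requests, s=1.0, rng=None):
    """Draw a request trace where a few items dominate, as on social media."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    weights = zipf_weights(len(contents), s)
    return rng.choices(contents, weights=weights, k=num_requests)

contents = [f"/video/{i}" for i in range(100)]
trace = sample_requests(contents, 10_000)
# The head of the popularity ranking attracts most of the requests,
# which is what makes peer caching effective in the scheme above.
top_share = trace.count("/video/0") / len(trace)
```

The heavier the head of the distribution, the more requests nearby peers can serve from their caches.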
Finally, we look at adaptive bit-rate (ABR) streaming, which has contributed significantly to the reduction of video playout stalling, mainly under highly variable bandwidth conditions. However, ABR clients continue to suffer from variation of bit rate qualities over the duration of a streaming session. Similar to stalling, these variations in bit rate quality have a negative impact on users' Quality of Experience (QoE). We use a trace from a large-scale CDN to show that such quality changes occur in a significant share of streaming sessions, and we investigate an ABR video segment retransmission approach to reduce the number of such quality changes. As the new HTTP/2 standard becomes increasingly popular, we also see an increase in the usage of HTTP/2 as an alternative protocol for the transmission of web traffic, including video streaming. Under various network conditions, we conduct a systematic comparison of existing transport layer approaches for HTTP/2 to determine which is best suited for ABR segment retransmissions. Since it is well known that these protocols provide a series of improvements over HTTP/1.1, we perform experiments both in controlled environments and over transcontinental Internet links, and we find that these benefits also "trickle up" into the application layer for ABR video streaming: HTTP/2 retransmissions can significantly improve the average quality bitrate while simultaneously minimizing bit rate variations over the duration of a streaming session. Taking inspiration from the first two approaches, we take into account the resiliency of multi-path communication and further investigate a multi-path, multi-stream approach to ABR streaming, demonstrating that losses on one path of a multi-path connection have very little impact on the other, which increases both the throughput and the resiliency of communication.
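The segment retransmission idea can be illustrated as follows: when spare capacity is available, already-buffered low-quality segments are re-fetched at a higher quality, reducing quality variation. This is a simplified sketch with hypothetical names and parameters, not the evaluated algorithm:

```python
def plan_retransmissions(buffer, target_quality, spare_capacity_bits,
                         segment_bits):
    """Pick buffered segments below the target quality to re-fetch at the
    target quality while spare capacity remains. Returns segment indices."""
    plan = []
    for i, quality in enumerate(buffer):
        if quality < target_quality and \
                spare_capacity_bits >= segment_bits[target_quality]:
            plan.append(i)
            spare_capacity_bits -= segment_bits[target_quality]
    return plan

# Quality levels 0..2; segments 1 and 3 were fetched at lower qualities
# during a bandwidth dip and now sit in the playout buffer.
buffer = [2, 0, 2, 1, 2]
segment_bits = {0: 1_000_000, 1: 2_500_000, 2: 5_000_000}
plan = plan_retransmissions(buffer, target_quality=2,
                            spare_capacity_bits=12_000_000,
                            segment_bits=segment_bits)
```

With HTTP/2, such retransmissions can be issued as additional streams on the existing connection, which is one reason the protocol comparison above matters.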
Information-centric communication in mobile and wireless networks
Information-centric networking (ICN) is a new communication paradigm proposed to cope with drawbacks of host-based communication protocols, namely in scalability and security. In this thesis, we base our work on Named Data Networking (NDN), a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks.
In the first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement for initiating information-centric communication is knowledge of the existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios with different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides it. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information about disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate the power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible, since broadcast transmissions, compared to unicast, result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in the overheard Data. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source. The second approach, called RC-NDN, targets the efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions.
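The Dynamic Unicast behavior described above can be sketched as a small state machine; the class and method names here are hypothetical:

```python
class DynamicUnicastRequester:
    """Sketch of the Dynamic Unicast idea: broadcast Interests only until
    a content source replies, then address that source directly."""

    def __init__(self):
        self.known_source = None  # identifier of the last responding source

    def next_interest_mode(self):
        """Broadcast while discovering, unicast once a source is known."""
        return "unicast" if self.known_source else "broadcast"

    def on_data(self, source_id):
        """A source answered; switch subsequent Interests to unicast."""
        self.known_source = source_id

    def on_timeout(self):
        """Unicast target unreachable (e.g., mobility); rediscover."""
        self.known_source = None

r = DynamicUnicastRequester()
modes = [r.next_interest_mode()]      # discovery phase
r.on_data("node-7")
modes.append(r.next_interest_mode())  # retrieval phase from known source
r.on_timeout()
modes.append(r.next_interest_mode())  # source lost, back to broadcast
```

The fallback on timeout is what keeps the scheme robust under node mobility while still avoiding broadcast most of the time.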
If requesters and content sources are not within one-hop distance of each other, requests need to be forwarded via multi-hop routing. Therefore, in the second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks where preferred forwarders may regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU to multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path, even in the case of multiple concurrent requesters. To perform quick retransmissions in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths. Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, retrieve the content, and return it to the requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations.
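The preferred-forwarder idea can be illustrated with a timer-based election: each contender schedules the relay after a jitter delay, preferred forwarders skip an extra bonus delay, and whichever node fires first suppresses the rest. A deterministic sketch with hypothetical names and delay values:

```python
def forwarding_delay(is_preferred, jitter_ms, bonus_ms=5.0):
    """Preferred forwarders skip the extra bonus delay, so they tend to
    relay first; slower contenders overhear the relay and cancel theirs."""
    return jitter_ms if is_preferred else bonus_ms + jitter_ms

def elect_forwarder(contenders):
    """contenders: {node: (is_preferred, jitter_ms)}. The node with the
    smallest scheduled delay relays the Interest; the others suppress."""
    delays = {n: forwarding_delay(p, j) for n, (p, j) in contenders.items()}
    return min(delays, key=delays.get)

# Two plain forwarders and one preferred forwarder contend for the relay.
winner = elect_forwarder({
    "a": (False, 3.0),  # plain forwarder -> 8.0 ms
    "b": (True, 4.0),   # preferred       -> 4.0 ms
    "c": (False, 1.0),  # plain forwarder -> 6.0 ms
})
```

Note that suppression only reduces duplicates; a plain forwarder with very small jitter can still win, which keeps the scheme robust if the preferred forwarder has moved away.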
To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via the original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content when free space is required. Since our persistent caching concept complements regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance), while real-time traffic can still be maintained and served from the content store.
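A minimal sketch of such a popularity-based persistent repository follows; the names are hypothetical, and the actual concept also handles publisher signatures and coordination with the short-term content store, which are omitted here:

```python
class PersistentRepository:
    """Keep popular content on stable storage; evict the least-requested
    item when space is needed for new content."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # name -> data
        self.hits = {}   # name -> request count (popularity signal)

    def get(self, name):
        """Serve a request and record it as a popularity signal."""
        if name in self.store:
            self.hits[name] += 1
            return self.store[name]
        return None

    def put(self, name, data):
        """Insert content, evicting the least popular entry if full."""
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.hits[n])
            del self.store[victim]
            del self.hits[victim]
        self.store[name] = data
        self.hits.setdefault(name, 0)

repo = PersistentRepository(capacity=2)
repo.put("/popular", "A")
repo.put("/unpopular", "B")
repo.get("/popular")
repo.get("/popular")
repo.put("/new", "C")  # evicts the least-requested entry ("/unpopular")
```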
Service Competition and Data-Centric Protocols for Internet Access
The Internet has evolved in many aspects, from the application layer down to the physical layer. However, the evolution of Internet access technologies, most visible in dense urban scenarios, is not easily noticeable in sparsely populated and rural areas.
In the United States, for example, the FCC identified that 50% of census blocks have access to at most two broadband providers; however, these providers do not necessarily compete. Additionally, due to the methodology of the study, there is evidence that the number of customers without broadband access is actually higher, since the FCC considers an entire block to have broadband if any customer in that block has it. Moreover, the average downstream connection bandwidth in the United States is 18.7 Mbps according to the Akamai State of the Internet report, which places the US 10th in the global ranking. It is worth noting that modern applications such as Ultra High Definition (UHD) video streaming require a bandwidth of at least 25 Mbps, and newer applications such as virtual reality streaming require at least 50 Mbps. Additionally, urban scenarios are dominated by monopolistic and duopolistic markets, in which network providers have little incentive to offer innovative services. In this work, we propose an open access network infrastructure along with a novel Internet architecture that allows dynamic economic relationships between users and providers through a marketplace of network services. These economic relationships have a finer granularity than today's coarse and lengthy contracts, allowing greater competition and promoting innovation in the access market. We develop an agent-based simulator to evaluate our proposed network model and its various competition scenarios. Our simulations show that competition creates the necessary incentives for providers to innovate while greatly benefiting users and applications.
The trend of sparsely populated areas lagging behind the latest innovations in access networks is also observed in wireless access networks, where investments focus on densely populated areas. Moreover, the rapidly increasing number of mobile devices, coupled with increasingly bandwidth-demanding applications, poses a significant challenge to cellular network operators, who have to increase OPEX/CAPEX and deal with higher complexity in their networks.
The advances in access technologies that brought higher speeds and lower latency also reduced the coverage area of cellular base stations. To cope with the increase in traffic, cellular network operators have been deploying more base stations. In addition, cellular providers have adopted "all-you-can-use" pricing models, which led users to ramp up their usage, further worsening congestion in the network.
To address this issue, we propose a scheme that uses Device-to-Device (D2D) communication along with Information-Centric Networking (ICN) to offload traffic from cellular base stations. We then build on this scheme and propose a cross-layer assisted forwarding strategy to enhance communication in the mobile ad hoc network (MANET). In D2D communication, users can retrieve content directly from nearby peers. However, this type of communication poses challenges to the current connection-oriented communication model: devices can move in and out of communication range at any time, constantly changing routing state, and nodes are subject to hidden- and exposed-terminal problems. ICN addresses some of these issues with inherent support for transparent caching and named content retrieval, making the network more resilient to disconnections. By requesting content from nearby peers first, our proposed scheme can offload up to 51.7% of the content from the backhaul cellular infrastructure.
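The peer-first retrieval at the core of this scheme can be sketched as follows (hypothetical names; the actual system involves ICN forwarding and node mobility, which are omitted here):

```python
def retrieve(name, peer_caches, backhaul):
    """Ask nearby peers first; fall back to the cellular backhaul only
    when no peer holds the content. Returns (data, served_locally)."""
    for cache in peer_caches:
        if name in cache:
            return cache[name], True
    return backhaul[name], False

def offload_ratio(requests, peer_caches, backhaul):
    """Fraction of requests the D2D layer keeps off the backhaul."""
    local = sum(retrieve(n, peer_caches, backhaul)[1] for n in requests)
    return local / len(requests)

# Two peers each cache one content item; "/c" exists only in the backhaul.
backhaul = {"/a": 1, "/b": 2, "/c": 3}
peers = [{"/a": 1}, {"/b": 2}]
ratio = offload_ratio(["/a", "/b", "/c", "/a"], peers, backhaul)
```

In this toy trace three of four requests are served locally; the 51.7% figure above is the analogous ratio measured in the realistic simulations.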
Finally, we combine the concepts of the marketplace, D2D communication, and ICN to propose a platform for decentralized and opportunistic communication that uses commercial off-the-shelf (COTS) radios to relay packets, extending the reach of the Internet to sparsely populated areas at low cost and without the lengthy contracts of commercial network providers. Our platform can potentially connect the remaining portion of the population that currently lacks Internet access.
Software Defined Application Delivery Networking
In this thesis, we present the architecture, design, and prototype implementation details of AppFabric, a next-generation application delivery platform for easily creating, managing, and controlling massively distributed and very dynamic application deployments that may span multiple datacenters.
Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery; 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes, and network services (both local and wide-area) composed over application-level routing policies; and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10,000 lines of Python and C code.
AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use cases, including:
Internet-of-Things/Cyber-Physical Systems: Through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations. They also have variable latency requirements across their distributed sites. Some services in an IoT/CPS application workflow, such as device controllers, may need to gather, process, and forward data under near-real-time constraints and hence need to be as close to the device as possible. Other services may need more computation to process aggregated data that drives long-term business intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified, and massively distributed application use cases.
Network Function Virtualization: Through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. This substrate allows policy-based service chaining, where the application may specify policies for routing its traffic through different services based on application-level content or context.
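Policy-based service chaining of this kind can be sketched as a first-match policy table; all names and policies below are hypothetical illustrations, not AppFabric's actual API:

```python
def build_chain(message, policies, default_chain):
    """Pick a service chain for a message based on application-level
    context: the first matching policy decides the chain."""
    for predicate, chain in policies:
        if predicate(message):
            return chain
    return default_chain

# Hypothetical policies: route video uploads through a transcoder
# middlebox; let premium users skip the cache tier.
policies = [
    (lambda m: m.get("content-type") == "video",
     ["firewall", "transcoder", "app-server"]),
    (lambda m: m.get("user-tier") == "premium",
     ["firewall", "app-server"]),
]
default = ["firewall", "cache", "app-server"]

chain = build_chain({"content-type": "video"}, policies, default)
```

The point is that the chain is chosen per message from application-level context, rather than being fixed by the network topology.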
Virtual worlds/multiplayer games: Through support for creating, managing, and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink the application's footprint across different geographical sites, on demand.
Mobile Apps: Through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also provides support for automatically managing massively distributed service deployments and controlling application traffic based on application-level policies. This allows mobile applications to provide the best Quality of Experience to their users without
This thesis is the first to tackle and provide a complete solution to such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn innovation both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, in terms of both design and implementation maturity. This thesis is not the end of the journey for AppFabric but rather just the beginning.
Energy-efficient Transitional Near-* Computing
Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of the worldwide electricity consumption.
Although data centers, core network equipment, and mobile devices are getting more energy-efficient, the amount of data that is being processed, transferred, and stored is vastly increasing.
Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself.
In this thesis, these trends are summarized under the new term near-* or near-everything computing.
Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing.
It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems.
Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, edge devices, and programmable components on a mobile System-on-a-Chip (SoC).
Finally, the novel idea of transitional near-* computing for emergency response applications is presented: it assists rescuers and affected persons during an emergency event or disaster even when connections to cloud services and social networks are disturbed by network outages and the network bandwidth and battery power of mobile devices are limited.
On information filtering in social sensing
For decades since the invention of sensor networks, people have envisioned a global sensing platform with millions of sensors deployed worldwide. This platform has finally become real with the recent advent of multiple online social network services, where humans act as sensors and the social networks act as the sensor network, a practice named social sensing. Social sensing was born from advances in high-level semantic sensing (since humans are the "sensors", with texts or photos as the sensing data) and (almost) zero-cost real-time data infrastructure, which makes this new sensing paradigm very promising for multiple real-world applications, including disaster response and global event discovery. However, its global scale results in a massive amount of data generated and collected in applications, far exceeding normal people's cognitive capacity for information consumption. We therefore desire a system that filters the massive sensing data and delivers only a human-consumable amount of information and intelligence to users.
In this thesis, I focus on designing an information filtering system for social sensing; specifically, I focus on three levels of information filtering. At the first level, we focus on removing untruthful information, also known as fact-finding, where the challenge lies in the a priori unknown reliability of each individual social sensor (i.e., human). At the second level, we focus on event-level information summarization, also known as event detection, where the challenges lie in de-multiplexing different event instances and fusing social events detected across multiple social networks, tasks on which previous approaches do not perform well. At the third level, we focus on information-maximizing data delivery to social sensing users, especially on redundancy removal by diversifying the information feed, where the challenge lies in designing an algorithm that not only works well empirically but also has a theoretical performance guarantee. We address the above challenges through algorithm design and system implementation, and evaluations on real-world data verify the efficiency of our proposed solutions.
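For the third level, a standard way to obtain such a theoretical guarantee is greedy selection under a monotone submodular objective such as topic coverage, where the greedy feed is within a (1 - 1/e) factor of the optimal one; whether the thesis uses exactly this objective is an assumption here, and the names below are hypothetical:

```python
def greedy_diverse_feed(posts, k):
    """Greedy max-coverage selection: repeatedly pick the post covering the
    most not-yet-covered topics. Coverage is monotone submodular, so the
    greedy feed is within (1 - 1/e) of the optimal k-post feed."""
    covered, feed = set(), []
    for _ in range(min(k, len(posts))):
        # Marginal gain of a post = number of new topics it covers.
        best = max(posts.items(), key=lambda kv: len(kv[1] - covered))
        if not best[1] - covered:
            break  # every remaining post is fully redundant
        feed.append(best[0])
        covered |= best[1]
    return feed, covered

# Each post is tagged with the topics it reports on.
posts = {
    "p1": {"flood", "rescue"},
    "p2": {"flood"},            # redundant given p1
    "p3": {"power", "shelter"},
    "p4": {"rescue", "power"},
}
feed, covered = greedy_diverse_feed(posts, k=2)
```

Here the greedy choice skips the redundant post entirely, delivering two posts that together cover all four topics.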
Secure, Reliable and Efficient Data Integrity Auditing (DIA) Solution for Public Cloud Storage (PCS)