
    In Vivo Evaluation of the Secure Opportunistic Schemes Middleware using a Delay Tolerant Social Network

    Over the past decade, online social networks (OSNs) such as Twitter and Facebook have thrived, growing rapidly to over 1 billion users. A natural next step is to leverage the characteristics of OSNs to evaluate, in real-world scenarios, the effectiveness of the many routing schemes developed by the research community. In this paper, we showcase the Secure Opportunistic Schemes (SOS) middleware, which allows different routing schemes to be implemented easily by relieving developers of the burden of security and connection establishment. The feasibility of creating a delay tolerant social network is demonstrated by using SOS to power AlleyOop Social, a secure delay tolerant networking research platform that serves as a real-life mobile social networking application for iOS devices. SOS and AlleyOop Social allow users to interact, publish messages, and discover others that share common interests in an intermittent network using Bluetooth, peer-to-peer WiFi, and infrastructure WiFi.
    Comment: 6 pages, 4 figures, accepted at ICDCS 2017. arXiv admin note: text overlap with arXiv:1702.0565
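
To make the plug-in idea concrete, below is a minimal sketch of what a routing scheme plugged into middleware like SOS could look like: the middleware owns encryption and connection establishment, and the scheme only decides which peers receive a copy of a message. All class and method names here are hypothetical illustrations, not the actual SOS API.

```python
# Hypothetical sketch of a pluggable routing-scheme interface in the spirit
# of the SOS middleware. Names and signatures are invented for illustration.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    msg_id: str
    payload: bytes          # already encrypted by the middleware
    destination: str
    hop_count: int = 0


class RoutingScheme(ABC):
    """A scheme only decides *which* peers get a copy of a message;
    security and connection establishment are handled by the middleware."""

    @abstractmethod
    def should_forward(self, message: Message, peer_id: str,
                       peer_interests: set[str]) -> bool:
        ...


class InterestCastRouting(RoutingScheme):
    """Toy scheme: forward only to peers sharing an interest with the message."""

    def __init__(self, topics_by_msg: dict[str, set[str]]):
        self.topics_by_msg = topics_by_msg

    def should_forward(self, message, peer_id, peer_interests):
        topics = self.topics_by_msg.get(message.msg_id, set())
        return bool(topics & peer_interests) and message.hop_count < 10


scheme = InterestCastRouting({"m1": {"hiking"}})
msg = Message("m1", b"...", "bob")
print(scheme.should_forward(msg, "peer-7", {"hiking", "music"}))  # True
```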

    Code offloading in opportunistic computing

    With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data center. The key advantage is enhanced performance and, consequently, a better user experience. This activity is commonly referred to as computational offloading, and it has been investigated intensively in recent years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. This dissertation confirms the results of Cuervo et al. and illustrates more use cases where the cloud may not be the right choice. It addresses the following question: is it possible to build a novel approach to offloading computation that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that can use local resources when the cloud is not usable, removing the strong bond with the local infrastructure? To this end, I propose a novel paradigm for computation offloading named anyrun computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application. With anyrun computing I remove the boundaries that tie the solution to an infrastructure by adding locally available devices to increase the chances of offloading successfully.

To achieve the goals of the dissertation, it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, combined with their interactions, and assess their impact on the system. The outcome of this analysis is a mapping of the problem to a combinatorial optimization problem that is known to be NP-hard. There is a set of well-known approaches to solving such problems, but in this scenario they cannot be used, because they require a global view that can only be maintained by a centralized infrastructure; thus, local solutions are needed. To tackle the anyrun computing paradigm empirically, I propose the anyrun computing framework (ARC), a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions. The core of ARC is the inference model, which receives a rich set of information about the available remote devices from the SCAMPI opportunistic computing framework, developed within the European project SCAMPI, and employs this information to profile a given device; in other words, it decides whether offloading can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption) compared to local execution.

To evaluate ARC empirically, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain, I used the state of the art in cloud solutions over a set of significant benchmark problems and with three WAN access technologies (3G, 4G, and high-speed WAN).
The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances where it performs poorly. Moreover, I have empirically shown the limitations of cloud-based approaches: problems with high transmission costs tend to perform poorly unless they also have high computational needs. The second part of the evaluation is performed in opportunistic/cloudlet scenarios, where I used a custom-made testbed to compare ARC against MAUI, the state of the art in computation offloading. To this end, I performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI (in terms of energy savings) in the cloudlet environment, but improves on it by 50% to 60% in the opportunistic domain.
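
As an illustration of the kind of decision ARC's inference model makes, here is a minimal sketch that weighs an estimated local footprint against an estimated remote one (transfer plus remote execution) over time and energy. The estimators, weights, and profile fields are invented placeholders, not ARC's actual model.

```python
# Illustrative offload-or-not decision: compare estimated local cost against
# estimated remote cost (transfer + remote execution) in the dimensions of
# interest. All estimators and weights are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class DeviceProfile:
    uplink_bps: float        # estimated uplink bandwidth to the device
    speedup: float           # estimated speedup vs. local CPU
    energy_per_byte: float   # J per byte transmitted


def should_offload(input_bytes: int, local_exec_s: float,
                   local_energy_j: float, dev: DeviceProfile,
                   w_time: float = 0.5, w_energy: float = 0.5) -> bool:
    """Return True if offloading is estimated to reduce the weighted
    time/energy footprint compared to local execution."""
    transfer_s = input_bytes * 8 / dev.uplink_bps
    remote_s = transfer_s + local_exec_s / dev.speedup
    remote_j = input_bytes * dev.energy_per_byte   # radio cost dominates

    local_cost = w_time * local_exec_s + w_energy * local_energy_j
    remote_cost = w_time * remote_s + w_energy * remote_j
    return remote_cost < local_cost


# A task with high transmission cost and modest computational need stays
# local, mirroring the finding above: prints False for a slow 3G uplink.
slow_3g = DeviceProfile(uplink_bps=1e6, speedup=4.0, energy_per_byte=5e-6)
print(should_offload(10_000_000, local_exec_s=1.0,
                     local_energy_j=2.0, dev=slow_3g))
```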

    Leveraging and Fusing Civil and Military Sensors to support Disaster Relief Operations in Smart Environments

    Natural disasters occur unpredictably and range in severity from locally manageable events to large-scale events that require external intervention. In particular, large-scale disasters can cause widespread damage and overwhelm the ability of local governments and authorities to respond. In such situations, Civil-Military Cooperation (CIMIC) is essential for a rapid and robust Humanitarian Assistance and Disaster Relief (HADR) operation. These types of operations bring to bear the Command and Control (C2) and logistics capabilities of the military to rapidly deploy assets that help with disaster relief activities. Smart Cities and Smart Environments, embedded with IoT, introduce multiple sensing modalities that typically provide wide coverage over the deployed area. Because the military does not own or control these assets, they are sometimes referred to as gray assets, which are not as trustworthy as the blue assets owned by the military. However, leveraging these gray assets can significantly improve the military's ability to quickly obtain Situational Awareness (SA) about the disaster and to optimize the planning of rescue operations and the allocation of resources for the best possible effect. Fusing the information from civilian IoT sensors with that from custom military sensors could help validate, and improve trust in, the information from gray assets. The focus of this paper is to examine this challenge of achieving civil-military cooperation for HADR operations by leveraging and fusing information from gray and blue assets.
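
As a toy illustration of the fusion idea, the sketch below combines scalar readings from blue and gray sensors with a trust weight, and raises a gray sensor's trust when it agrees with the fused estimate. The weighting rule and trust update are assumptions for illustration, not the paper's method.

```python
# Trust-weighted fusion of civilian ("gray") and military ("blue") sensor
# readings, assuming each sensor reports a scalar estimate of the same
# quantity. The weighting and trust update are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Reading:
    value: float   # e.g. flood depth in metres at a given location
    trust: float   # 0..1; blue assets start high, gray assets lower


def fuse(readings: list[Reading]) -> float:
    """Trust-weighted average of all readings."""
    total = sum(r.trust for r in readings)
    return sum(r.value * r.trust for r in readings) / total


def update_gray_trust(gray: Reading, fused: float, tol: float = 0.5) -> float:
    """Nudge a gray sensor's trust up if it agrees with the fused estimate."""
    agrees = abs(gray.value - fused) <= tol
    return min(1.0, gray.trust + 0.1) if agrees else max(0.0, gray.trust - 0.1)


blue = [Reading(1.2, 0.9), Reading(1.1, 0.9)]
gray = Reading(1.3, 0.4)
estimate = fuse(blue + [gray])
gray.trust = update_gray_trust(gray, estimate)  # agreement raises gray trust
print(round(estimate, 2), gray.trust)
```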

    Security and Routing in a Disconnected Delay Tolerant Network

    Providing Internet access in disaster-affected areas with little to no connectivity is extremely difficult. This paper proposes an architecture that utilizes existing hardware and mobile applications to enable users to access the Internet while maintaining a high level of security. The system comprises a client application, a transport application, and a server running in the cloud. The client combines data from all supported applications into a single bundle, which is encrypted using an end-to-end encryption technique and sent to the transport. The transport physically moves the bundles to a connected area and forwards them to the server. The server decrypts the bundles and forwards them to the respective application servers. The results are then returned to the original client application via the network of transports used previously. This solution provides a convenient way to establish connectivity in disconnected areas without additional hardware, and it can accommodate various kinds of application data. Furthermore, it ensures data integrity, confidentiality, and authentication by encrypting and validating the data during transmission.
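
A minimal sketch of the client-side bundling and end-to-end encryption step is shown below, assuming a symmetric key pre-shared between client and server. Fernet authenticated encryption stands in for the paper's unspecified scheme, and all field names are illustrative.

```python
# Sketch of the bundling and end-to-end encryption step: the client
# aggregates data from several applications into one bundle and encrypts it
# so the transport physically carrying it cannot read it. Fernet is a
# stand-in for the paper's unspecified end-to-end scheme.
import json
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # known to client and cloud server only
f = Fernet(key)

bundle = {
    "client_id": "client-42",
    "messages": [
        {"app": "mail", "data": "status report"},
        {"app": "chat", "data": "we need water filters"},
    ],
}

# Client side: one encrypted, integrity-protected blob per bundle.
blob = f.encrypt(json.dumps(bundle).encode())

# The transport only ever sees ciphertext while ferrying the blob to a
# connected area. Server side: decrypt, verify, demultiplex to app servers.
recovered = json.loads(f.decrypt(blob))
for msg in recovered["messages"]:
    print(msg["app"], "->", msg["data"])
```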

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article we review the thirty-year history of ML, elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
    Comment: 46 pages, 22 figures
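
As a small, concrete example of one technique the survey covers, the sketch below applies tabular Q-learning to a toy cognitive-radio channel-selection task; the two-channel environment and every parameter are invented for illustration.

```python
# Toy Q-learning for channel selection in a cognitive-radio-like setting:
# the agent learns which channel is most often idle. Single-state problem,
# so the update has no next-state term. All parameters are illustrative.
import random

N_CHANNELS = 2
q = [0.0] * N_CHANNELS           # Q-value per channel
alpha, eps = 0.1, 0.1            # learning rate, exploration rate
busy_prob = [0.8, 0.3]           # hidden truth: channel 1 is usually free

random.seed(0)
for _ in range(5000):
    # epsilon-greedy channel choice
    a = random.randrange(N_CHANNELS) if random.random() < eps \
        else max(range(N_CHANNELS), key=lambda c: q[c])
    reward = 0.0 if random.random() < busy_prob[a] else 1.0  # 1 = idle
    q[a] += alpha * (reward - q[a])

print(q)  # q[1] approaches 0.7 (its idle probability), q[0] about 0.2
```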

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system; its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting BD and large-scale data analytics on the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for streaming data to the cloud are the main contributions of this thesis.
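
As a sketch of the kind of access-control check a broker such as BDFAB might perform in front of a federated cluster, consider the following; the roles, paths, and policy table are invented for illustration.

```python
# Minimal role-based access-control check, illustrating the kind of gate an
# access broker could place in front of a federated Hadoop cluster. The
# policy table and resource paths are hypothetical.
POLICY = {
    "analyst":  {("read", "/data/traffic")},
    "operator": {("read", "/data/traffic"), ("write", "/data/traffic")},
}


def authorize(role: str, action: str, path: str) -> bool:
    """Grant a request only if (action, path) is in the role's policy."""
    return (action, path) in POLICY.get(role, set())


assert authorize("analyst", "read", "/data/traffic")
assert not authorize("analyst", "write", "/data/traffic")
```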