67 research outputs found

    Adaptive Fuzzy Spray and Wait: Efficient Routing for Opportunistic Networks

    Technological advancement in wireless networking is ultimately envisioned to reach complete and seamless ubiquity, where every point on Earth is covered by Internet access. Low-connectivity environments have emerged as a major challenge, and Opportunistic Networks arose as a promising solution. These networks do not assume the existence of an end-to-end path from source to destination; instead, they opportunistically exploit any available resource to maximize throughput. Routing protocols in such environments target increased delivery probability, shorter delays, and reduced overhead. In this work, we balance these apparently conflicting goals by introducing “Adaptive Fuzzy Spray and Wait”, an optimized routing scheme for opportunistic networks. Beyond reducing overhead, we argue that spray-based opportunistic routing techniques can attain a higher delivery probability by integrating adequate buffer prioritization and dropping policies. To that end, we employ a fuzzy decision-making scheme. We also address the limitations of previous approaches by fully adapting to varying network parameters. Extensive simulations using the ONE (Opportunistic Network Environment) simulator [1] show the robustness and effectiveness of the algorithm under challenged network conditions.
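
    The abstract describes coupling spray-based routing with buffer prioritization and dropping policies chosen by a fuzzy decision-making scheme. The Python sketch below illustrates that general idea only, not the authors' algorithm: the inputs (remaining spray copies, message size), the membership functions, and the rule weights are all illustrative assumptions.

```python
# Illustrative sketch only: membership functions, rules, and inputs are assumptions,
# not the parameters of Adaptive Fuzzy Spray and Wait.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(copies_left, size_kb, max_copies=16, max_size_kb=1024):
    """Return a buffer priority in [0, 1]; higher means 'keep longer'."""
    c = copies_left / max_copies          # normalized spray counter
    s = size_kb / max_size_kb             # normalized message size

    # Fuzzify: degree to which each input is "low" or "high".
    c_low, c_high = tri(c, -1.0, 0.0, 0.6), tri(c, 0.4, 1.0, 2.0)
    s_low, s_high = tri(s, -1.0, 0.0, 0.6), tri(s, 0.4, 1.0, 2.0)

    # Rule base (min for AND), each rule mapped to an output priority level.
    rules = [
        (min(c_low,  s_low),  0.9),   # few copies left, small message -> keep
        (min(c_low,  s_high), 0.6),
        (min(c_high, s_low),  0.4),
        (min(c_high, s_high), 0.1),   # many copies, large message -> drop first
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5    # weighted-average defuzzification

def drop_candidate(buffer):
    """Pick the lowest-priority message to drop when the buffer overflows."""
    return min(buffer, key=lambda m: fuzzy_priority(m["copies_left"], m["size_kb"]))

buffer = [{"id": "m1", "copies_left": 12, "size_kb": 800},
          {"id": "m2", "copies_left": 2,  "size_kb": 64}]
print(drop_candidate(buffer)["id"])     # -> m1 (many copies still circulating, large)
```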

    A Collaborative Service Discovery and Service Sharing Framework for Mobile Ad Hoc Networks

    Service sharing and discovery play a central role in mobile ad hoc environments. Upon joining a self-organizing network, mobile nodes should be able to explore the environment to learn about, locate, and share the available services. In this paper, we propose a distributed and scalable service discovery and sharing framework for ad hoc networks. The proposed framework defines three types of nodes: service directories, service providers, and requesting nodes. Service directory nodes act as mediators for lookup requests from requesting nodes. Joining service provider nodes register their services with the nearest service directory. A requesting node discovers the available services by submitting requests to its nearest service directory, which determines the node providing the requested service. The performance of the proposed model is evaluated and compared to the broadcast-based model that has been extensively studied in the literature.
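
    As a rough illustration of the three-role interaction described above (directories, providers, requesting nodes), the following Python sketch shows a provider registering with its nearest directory and a requesting node resolving a service through it. The class names, the one-dimensional distance model, and the toy topology are assumptions made for illustration, not the framework's actual protocol.

```python
# Illustrative sketch only: class names, the 1-D "distance", and the topology are
# assumptions for demonstration, not the framework's actual protocol.

class ServiceDirectory:
    def __init__(self, directory_id, position):
        self.directory_id = directory_id
        self.position = position
        self.registry = {}                       # service name -> set of provider ids

    def register(self, service, provider_id):
        """Called by a joining service provider to advertise a service."""
        self.registry.setdefault(service, set()).add(provider_id)

    def lookup(self, service):
        """Answers a requesting node's query with the known providers."""
        return sorted(self.registry.get(service, set()))

def nearest_directory(position, directories):
    """1-D distance stands in for hop count / proximity here."""
    return min(directories, key=lambda d: abs(d.position - position))

# Two directories on a line; a provider at position 2 registers with the nearest one.
directories = [ServiceDirectory("D1", 0), ServiceDirectory("D2", 10)]
nearest_directory(2, directories).register("printing", "P7")

# A requesting node at position 1 resolves the service through its nearest directory,
# avoiding the network-wide broadcast used by the baseline model.
print(nearest_directory(1, directories).lookup("printing"))   # -> ['P7']
```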

    ACCOP: Adaptive Cost-Constrained and Delay-Optimized Data Allocation over Parallel Opportunistic Networks

    As wireless and mobile technologies become increasingly pervasive, uninterrupted connectivity on mobile devices is becoming a necessity rather than a luxury. In challenged networking environments, this necessity is harder to achieve in the absence of end-to-end paths from servers to mobile devices. One of the main techniques employed under such conditions is the simultaneous use of the available parallel networks. In this work, we tackle the problem of allocating data to parallel networks in challenged environments, targeting minimized delay while abiding by a user-preset budget. We propose ACCOP, an Adaptive, Cost-Constrained, and delay-OPtimized data-to-channel allocation scheme that efficiently exploits the parallel channels typically accessible from mobile devices. Our technique replaces traditional, inefficient brute-force schemes by employing Lagrange multipliers to minimize delivery delay. Furthermore, we show how ACCOP dynamically adjusts to changing network conditions. Through analytical and experimental tools, we demonstrate that our system achieves faster delivery and higher performance while remaining computationally inexpensive.
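
    To make the Lagrangian idea concrete, here is a hedged Python sketch of a cost-constrained split under an assumed convex per-channel delay model d_i(x_i) = x_i^2 / r_i, with total demand D, per-unit costs c_i, and budget B; this model and the closed-form split are illustrative assumptions, not ACCOP's exact formulation. Stationarity of the Lagrangian gives x_i = r_i(λ − μ c_i)/2; the demand constraint fixes λ, and the budget multiplier μ is found by bisection (the sketch assumes the interior case where every channel receives a positive share).

```python
# Illustrative sketch only: the quadratic delay model, the closed-form split, and
# the bisection on the budget multiplier are assumptions, not ACCOP's formulation.

def allocate(rates, costs, demand, budget, iters=60):
    """Split `demand` units of data over parallel channels with given rates and
    per-unit costs, minimizing sum(x_i^2 / r_i) subject to sum(x_i) = demand and
    sum(c_i * x_i) <= budget (interior case: all shares assumed positive)."""
    R = sum(rates)
    RC = sum(r * c for r, c in zip(rates, costs))

    def split(mu):
        # Stationarity: x_i = r_i * (lam - mu * c_i) / 2; lam enforces the demand.
        lam = (2 * demand + mu * RC) / R
        x = [r * (lam - mu * c) / 2 for r, c in zip(rates, costs)]
        return x, sum(c * xi for c, xi in zip(costs, x))

    x, spent = split(0.0)                 # try with the budget constraint inactive
    if spent <= budget:
        return x

    lo, hi = 0.0, 1.0
    while split(hi)[1] > budget:          # grow a bracket for the multiplier mu
        hi *= 2
    for _ in range(iters):                # spent(mu) is non-increasing in mu
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if split(mid)[1] > budget else (lo, mid)
    return split(hi)[0]

# Two channels: fast but expensive vs. slow but cheap; 100 units to send, budget 60.
print([round(x, 1) for x in allocate([10, 4], [0.8, 0.2], demand=100, budget=60)])
# -> [66.7, 33.3]  (the budget shifts traffic from the expensive to the cheap channel)
```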

    Edge-centric queries stream management based on an ensemble model

    The Internet of Things (IoT) involves numerous devices that interact with each other or with their environment to collect and process data. The collected data streams are guided to the cloud for further processing and the production of analytics. However, any processing in the cloud, even when supported by ample computational resources, suffers from increased latency: the data must travel to the cloud infrastructure, and the resulting analytics must travel back to end users or devices. To minimize this latency, data processing can be performed at the edge of the network, i.e., at the edge nodes. The aim is to deliver analytics and build knowledge close to end users and devices, minimizing the time required to produce responses. Edge nodes are thus transformed into distributed processing points where analytics queries can be served. In this paper, we deal with the problem of allocating queries, defined for producing knowledge, to a number of edge nodes. The aim is to further reduce latency by allocating queries to nodes that exhibit low load (both current and estimated), so that they can provide the final response in the minimum time. Before the allocation, however, we must estimate the computational burden that a query will impose. The allocation is concluded with the assistance of an ensemble similarity scheme responsible for delivering the complexity class of each query. This complexity class can then be matched against the current load of every edge node. We discuss our scheme and, through a large set of simulations and the adoption of benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
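
    The following Python sketch illustrates the kind of pipeline the abstract outlines: an ensemble of similarity measures votes on the complexity class of an incoming query, and the query is then allocated to the edge node with the lowest projected utilization. The query features, similarity measures, class costs, and node parameters are all illustrative assumptions, not the paper's ensemble scheme.

```python
# Illustrative sketch only: the features, similarity measures, class costs, and node
# parameters are assumptions, not the paper's ensemble scheme.

from collections import Counter

# Historical queries: (feature vector, complexity class); features might encode the
# number of joins, predicates, aggregation depth, etc.
HISTORY = [((1, 0, 1), "low"), ((3, 2, 2), "medium"), ((6, 4, 3), "high")]
CLASS_COST = {"low": 1.0, "medium": 3.0, "high": 8.0}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def inv_manhattan(a, b):
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def complexity_class(features, measures=(cosine, inv_manhattan)):
    """Each ensemble member votes for the class of its most similar past query."""
    votes = [max(HISTORY, key=lambda h: m(features, h[0]))[1] for m in measures]
    return Counter(votes).most_common(1)[0][0]

def allocate(features, nodes):
    """Send the query to the node with the lowest projected utilization."""
    cost = CLASS_COST[complexity_class(features)]
    return min(nodes, key=lambda n: (nodes[n]["load"] + cost) / nodes[n]["capacity"])

nodes = {"edge-1": {"load": 4.0, "capacity": 10.0},
         "edge-2": {"load": 1.5, "capacity": 4.0},
         "edge-3": {"load": 6.0, "capacity": 20.0}}
print(allocate((6, 4, 2), nodes))   # -> edge-3 (a "high" query fits best on the largest node)
```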

    Wavelet methods in digital communications

    No full text

    Application of the Topological Interference Management Method in Practical Scenarios

    No full text