    Survey And New Approach In Service Discovery And Advertisement For Mobile Ad Hoc Networks.

    Service advertisement and discovery is an important component of mobile ad hoc communication and collaboration in ubiquitous computing environments. The ability to discover the services offered in a mobile ad hoc network is the major prerequisite for the effective usability of these networks. This paper aims to classify and compare existing Service Discovery (SD) protocols for MANETs, grouping them by their SD strategies and service information accumulation strategies, and to propose an efficient approach for addressing the inherent issues.

    Hybrid Hierarchical Approach For Addressing Service Discovery Issues In MANETS.

    Management of Mobile Ad-hoc Networks (MANETs) is very difficult because the movement of nodes is unpredictable and frequently changes the topology of the network. Consequently, Service Discovery (SD) in the network, a prerequisite for efficient usage of network resources, is a complex problem.

    A Cooperative Cache Management Scheme for IEEE802.15.4 based Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) based on the IEEE 802.15.4 MAC and PHY layer standards are a recent trend in the market. They have gained tremendous attention due to their low energy consumption and low data rates. However, for larger networks, minimizing energy consumption is still an issue because large overheads are disseminated throughout the network. This energy consumption can be reduced by incorporating a novel cooperative caching scheme that minimizes overheads and serves data with minimal latency. This paper explores the possibilities of enhancing energy efficiency by incorporating a cooperative caching strategy.
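    To make the idea concrete, here is a minimal sketch of a cooperative cache lookup, assuming a simple three-tier resolution order (local cache, then one-hop neighbours, then the sink); the class and method names are illustrative, not the paper's actual scheme:

        # Hypothetical cooperative cache lookup in a WSN: serve data from the
        # cheapest source first to avoid energy-hungry multi-hop traffic.
        class SensorNode:
            def __init__(self, node_id, neighbours=None):
                self.node_id = node_id
                self.cache = {}                      # data_id -> cached value
                self.neighbours = neighbours or []

            def lookup(self, data_id):
                if data_id in self.cache:            # 1. local hit: no radio traffic
                    return self.cache[data_id], "local"
                for n in self.neighbours:            # 2. one-hop cooperative hit
                    if data_id in n.cache:
                        value = n.cache[data_id]
                        self.cache[data_id] = value  # keep a copy for later requests
                        return value, "neighbour:" + n.node_id
                value = self.fetch_from_sink(data_id)  # 3. costly multi-hop fetch
                self.cache[data_id] = value
                return value, "sink"

            def fetch_from_sink(self, data_id):
                # Stand-in for an expensive multi-hop request to the sink.
                return "reading-for-" + data_id

        a, b = SensorNode("a"), SensorNode("b")
        a.neighbours = [b]
        b.cache["temp/3"] = 21.5
        print(a.lookup("temp/3"))    # (21.5, 'neighbour:b') -- sink never contacted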

    An adaptive approach to service discovery in ad hoc networks

    Service discovery allows network nodes to cooperate in activities or to share resources in client-server, multi-layer and peer-to-peer architectures. Ad hoc networks pose a great challenge to the design of efficient service discovery mechanisms: the lack of infrastructure, along with node mobility, makes it difficult to build robust, scalable and secure mechanisms for these networks. This paper proposes a scalable service discovery architecture based on directory nodes organized in an overlay network. In the proposed architecture, directory nodes are dynamically created with the aim of uniformly covering the entire network while decreasing the query latency for a service (QoS) and reducing the number of control messages for the sake of increased scalability.
    8th IFIP/IEEE International Conference on Mobile and Wireless Communication. Red de Universidades con Carreras en Informática (RedUNCI).
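    The core mechanism can be sketched as follows, under the assumption that each client simply registers with and queries its nearest directory node (all names and the distance metric are illustrative, not taken from the paper):

        # Hypothetical directory-based service discovery: clients talk to the
        # nearest directory node instead of flooding the whole network.
        class DirectoryNode:
            def __init__(self, position):
                self.position = position
                self.registry = {}                   # service type -> provider ids

            def register(self, service_type, provider):
                self.registry.setdefault(service_type, set()).add(provider)

            def query(self, service_type):
                return self.registry.get(service_type, set())

        def nearest_directory(pos, directories):
            # Assumed selection metric: plain squared Euclidean distance.
            return min(directories,
                       key=lambda d: (d.position[0] - pos[0]) ** 2 +
                                     (d.position[1] - pos[1]) ** 2)

        directories = [DirectoryNode((0, 0)), DirectoryNode((10, 10))]
        nearest_directory((1, 2), directories).register("printer", "node-17")
        print(nearest_directory((2, 1), directories).query("printer"))  # {'node-17'}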

    An Evaluation of Software Distributed Shared Memory for Next-Generation Processors and Networks

    We evaluate the effect of processor speed, network characteristics, and software overhead on the performance of release-consistent software distributed shared memory. We examine five different protocols for implementing release consistency: eager update, eager invalidate, lazy update, lazy invalidate, and a new protocol called lazy hybrid. This lazy hybrid protocol combines the benefits of both lazy update and lazy invalidate. Our simulations indicate that with the processors and networks that are becoming available, coarse-grained applications such as Jacobi and TSP perform well, more or less independently of the protocol used. Medium-grained applications, such as Water, can achieve good performance, but the choice of protocol is critical: for sixteen processors, the best protocol, lazy hybrid, performed more than three times better than the worst, eager update. Fine-grained applications such as Cholesky achieve little speedup regardless of the protocol used because of the frequency of synchronization operations and the high latency involved. While the use of relaxed memory models, lazy implementations, and multiple-writer protocols has reduced the impact of false sharing, synchronization latency remains a serious problem for software distributed shared memory systems. These results suggest that future work on software DSMs should concentrate on reducing the amount of synchronization or its effect.
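    The contrast between the eager and lazy protocol families can be illustrated with a toy model (not the paper's implementation): eager protocols pay one message per sharer at every lock release, while lazy protocols defer the work until the next lock acquire:

        # Toy illustration of eager vs. lazy invalidation under release
        # consistency; node and page names are invented for the example.
        class Node:
            def __init__(self, name, cached_pages):
                self.name = name
                self.valid_pages = set(cached_pages)  # pages this node may read

        def eager_release(writer, dirty_pages, all_nodes):
            # Eager invalidate: the releaser notifies every other node
            # immediately, costing one message per sharer at each release.
            for node in all_nodes:
                if node is not writer:
                    node.valid_pages -= dirty_pages

        def lazy_acquire(acquirer, write_notices):
            # Lazy invalidate: write notices ride along with the lock grant,
            # so invalidation is deferred until the next acquire.
            acquirer.valid_pages -= write_notices

        p0 = Node("p0", {"A", "B"})
        p1 = Node("p1", {"A", "B"})
        eager_release(p0, {"B"}, [p0, p1])   # p1 loses B right away
        lazy_acquire(p1, {"A"})              # p1 learns A is stale only at acquire
        print(p1.valid_pages)                # set() -- both pages now invalid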

    Asynchronous epidemic algorithms for consistency in large-scale systems

    Achieving and detecting a globally consistent state is essential to many services in large and extreme-scale distributed systems, especially when the desired consistent state is critical for the services' operation. Centralised and deterministic approaches to synchronisation and distributed consistency are neither scalable nor fault-tolerant. Alternatively, epidemic-based paradigms are decentralised computations based on randomised communications. They are scalable, resilient, fault-tolerant, and converge to the desired target in logarithmic time with respect to system size. Thus, many distributed services have adopted epidemic protocols to achieve consensus and a consistent state, mainly due to scalability concerns. The convergence of epidemic protocols is stochastically guaranteed; however, the detection of convergence is probabilistic and non-explicit. In a real-world environment, systems are unreliable and epidemic protocols may fail to converge to the desired state. Thus, achieving convergence by itself does not ensure a system-wide consistent state under dynamic conditions.

    The research work presented in this thesis introduces the Phase Transition Algorithm (PTA) to achieve a distributed consistent state based on the explicit detection of convergence. Each phase in PTA is a decentralised decision-making process that implements epidemic data aggregation, in which the detection of convergence implies achieving a global agreement. The phases in PTA can be cascaded to achieve higher certainty as desired. Following the PTA, two epidemic protocols, PTP and ECP, are proposed to acquire consensus, i.e. for consistency in data dissemination and data aggregation. The protocols are examined through simulations, and the experimental results validate their ability to achieve and explicitly detect consensus among system nodes.

    The research work also studies epidemic data aggregation under node churn and network failures; the analysis identifies three phases of the aggregation process. The investigations show a different impact of node churn on each phase. The phase that is critical for the aggregation process is studied further, leading to two new robust data aggregation protocols, REAP and REAP+. Each protocol uses a different decentralised replication method, and both implement distributed failure detection and instantaneous mass restoration mechanisms. Simulations validate the protocols, and the results show their ability to converge, detect convergence, and produce competitive accuracy under various levels of node churn.

    Furthermore, the research addresses distributed consistency in continuous systems. The work proposes a novel continuous epidemic protocol with an adaptive restart mechanism: the protocol restarts either upon the detection of system convergence or upon the detection of divergence. The protocol also introduces a seed selection method for peak data distribution in decentralised approaches, a challenge that previously required single-point initialisation and a leader-election step. Simulations validate the performance of the algorithm under static and dynamic conditions and show that the convergence and divergence detection accuracy can be tuned as desired.

    Finally, the research work shows that combining and integrating the proposed protocols enables extreme-scale distributed systems to achieve and detect globally consistent states even under realistic and dynamic conditions.
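    The flavour of such epidemic aggregation with explicit convergence detection can be sketched as follows. This is a simulation-style sketch: the pairwise averaging, the epsilon threshold, and the globally evaluated stopping test are assumptions for illustration; detecting convergence in a decentralised, explicit way is precisely the hard problem the thesis addresses.

        # Gossip (epidemic) averaging with an explicit convergence check.
        import random

        def gossip_average(values, epsilon=1e-6, max_rounds=1000):
            values = list(values)
            n = len(values)
            for round_no in range(max_rounds):
                for i in random.sample(range(n), n):   # every node gossips once
                    j = random.randrange(n)
                    # Pairwise averaging conserves the total mass, so the
                    # common limit of all estimates is the true mean.
                    values[i] = values[j] = (values[i] + values[j]) / 2.0
                # A simulator can inspect the global spread directly; a real
                # deployment needs a decentralised version of this test.
                if max(values) - min(values) < epsilon:
                    return values[0], round_no + 1
            return sum(values) / n, max_rounds

        estimate, rounds = gossip_average([10.0, 0.0, 4.0, 6.0])
        print(f"converged to {estimate:.6f} in {rounds} rounds")   # mean is 5.0

    The logarithmic convergence time mentioned in the abstract shows up here as the round count growing roughly with log(n) rather than n.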

    Provision of adaptive and context-aware service discovery for the Internet of Things

    The IoT concept has revolutionised the vision of the future Internet: the advent of standards such as 6LoWPAN makes it feasible to extend the Internet into previously isolated environments, e.g., WSNs. The abstraction of resources as services has opened these environments to a plethora of potential applications. Moreover, the web service paradigm can provide interoperability by offering a standard interface for interacting with these services, enabling the WoT (Web of Things) paradigm. However, these networks pose many challenges in terms of limited resources, which make existing IP-based solutions infeasible to adapt. Traditional service discovery and selection solutions demand heavy communication and use bulky formats, which are unsuitable for resource-constrained devices that incorporate sleep cycles to save energy. Even a registry-based approach generates burdensome traffic in maintaining the availability status of the devices. A feasible solution for service discovery and selection is instrumental to enabling the wide application coverage of these networks in the future.

    This research project proposes TRENDY, a new compact and adaptive registry-based service discovery protocol (SDP) with context awareness for the IoT, with emphasis on constrained networks, e.g., 6LoWPAN. It uses CoAP-based lightweight RESTful web services to provide standard interoperable interfaces, which can be easily translated from HTTP. TRENDY's service selection mechanism collects context information and uses it intelligently to select appropriate services for user applications, based on the available context of both users and services. In addition, TRENDY introduces an adaptive timer algorithm to minimise the control overhead of status maintenance, which also reduces energy consumption. Its context-aware grouping technique divides the network at the application layer by creating location-based groups. This grouping of nodes localises the control overhead and provides the basis for service composition and for localised aggregation and processing of data. Different grouping roles enable resource awareness by offering profiles with varied responsibilities, where high-capability devices can implement powerful profiles to share the load of low-capability devices, allowing productive usage of network resources. Furthermore, this research project proposes APPUB, an adaptive caching technique with two benefits: it allows service hosts to share their load with the resource directory, and it decreases the service invocation delay.

    The performance of TRENDY and its mechanisms is evaluated in an extensive set of experiments with emulated Tmote Sky nodes in the COOJA environment. The analysis of the results validates the performance gain of all the techniques. The service selection and APPUB mechanisms considerably improve the service invocation delay and, consequently, reduce the traffic in the network. The timer technique consistently achieves the lowest control overhead, which decreases the energy consumption of the nodes and prolongs the network lifetime. Moreover, the low traffic in dense networks decreases the service invocation delay and makes the solution more scalable. The grouping mechanism localises the traffic, which increases energy efficiency while improving scalability. In summary, the experiments demonstrate the benefits of TRENDY and its techniques in terms of increased energy efficiency and network lifetime, reduced control overhead, better scalability and optimised service invocation time.
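    The adaptive timer idea can be sketched as follows, assuming a simple doubling back-off between bounds; the policy and constants are illustrative, not TRENDY's exact algorithm:

        # Hypothetical adaptive status-maintenance timer: stable services
        # report less and less often, cutting control traffic and energy use.
        class AdaptiveTimer:
            def __init__(self, t_min=4.0, t_max=512.0):
                self.t_min, self.t_max = t_min, t_max    # bounds in seconds
                self.interval = t_min

            def next_interval(self, status_changed):
                if status_changed:
                    self.interval = self.t_min           # report promptly again
                else:
                    # Back off while nothing changes: fewer status updates.
                    self.interval = min(self.interval * 2, self.t_max)
                return self.interval

        timer = AdaptiveTimer()
        for changed in [False, False, False, True, False]:
            print(timer.next_interval(changed))          # 8.0, 16.0, 32.0, 4.0, 8.0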

    Deterministic Object Management in Large Distributed Systems

    Caching is a widely used technique to improve the scalability of distributed systems. A central issue with caching is maintaining object replicas consistent with their master copies. Large distributed systems, such as the Web, typically deploy heuristic-based consistency mechanisms, which increase delay and place extra load on the servers, while not providing guarantees that cached copies served to clients are up-to-date. Server-driven invalidation has been proposed as an approach to strong cache consistency, but it requires servers to keep track of which objects are cached by which clients. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. Our approach builds on a few key observations. Large and popular sites, which attract the majority of the traffic, construct their pages from distinct components with various characteristics. Components may have different content types, change characteristics, and semantics. These components are merged together to produce a monolithic page, and the information about their uniqueness is lost. In our view, pages should serve as containers holding distinct objects with heterogeneous type and change characteristics while preserving the boundaries between these objects. Servers compile object characteristics and information about the relationships between containers and embedded objects into explicit object management commands. Servers piggyback these commands onto existing request/response traffic so that client caches can use them to make object management decisions. The use of explicit content control commands is a deterministic, rather than heuristic, object management mechanism that gives content providers more control over their content. The deterministic object management with strong cache consistency offered by MONARCH allows content providers to make more of their content cacheable. Furthermore, MONARCH enables content providers to expose the internal structure of their pages to clients. We evaluated MONARCH using simulations with content collected from real Web sites. The results show that MONARCH provides strong cache consistency for all objects, even for unpredictably changing ones, and incurs smaller byte and message overhead than heuristic policies. The results also show that as the request arrival rate or the number of clients increases, the amount of server state maintained by MONARCH remains the same, while the amount of server state incurred by server invalidation mechanisms grows.
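    The piggybacking idea can be sketched as follows; the command vocabulary and field names are invented for illustration and are not MONARCH's actual wire format:

        # Hypothetical sketch: a server attaches explicit per-object
        # management commands to an ordinary response, and the client cache
        # obeys them instead of guessing with heuristics.
        def build_response(body, embedded_objects):
            commands = []
            for obj in embedded_objects:
                if obj["volatile"]:
                    commands.append({"url": obj["url"], "cmd": "validate-on-use"})
                else:
                    commands.append({"url": obj["url"], "cmd": "cache-until-evicted"})
            return {"body": body, "x-object-commands": commands}

        def apply_commands(cache_policy, response):
            # Client side: record the server's deterministic instructions.
            for c in response["x-object-commands"]:
                cache_policy[c["url"]] = c["cmd"]

        cache_policy = {}
        resp = build_response("<html>...</html>",
                              [{"url": "/stock-ticker", "volatile": True},
                               {"url": "/logo.png", "volatile": False}])
        apply_commands(cache_policy, resp)
        print(cache_policy)
        # {'/stock-ticker': 'validate-on-use', '/logo.png': 'cache-until-evicted'}

    Because the commands travel on existing traffic, the server needs no per-client state, which is the property the abstract contrasts with server-driven invalidation.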