
    Middleware Technologies for Cloud of Things - a survey

    The next wave of communication and applications relies on the new services provided by the Internet of Things (IoT), which is becoming an important aspect of the future of humans and machines. IoT services are a key solution for providing smart environments in homes, buildings, and cities. In an era of massive numbers of connected things and objects with a high growth rate, several challenges arise, such as the management, aggregation, and storage of the large volumes of data produced. To tackle some of these issues, cloud computing has been integrated with the IoT as the Cloud of Things (CoT), which provides virtually unlimited cloud services to enhance large-scale IoT platforms. Several factors must be considered in the design and implementation of a CoT platform. One of the most important and challenging problems is the heterogeneity of different objects. This problem can be addressed by deploying suitable "middleware". Middleware sits between things and applications, providing a reliable platform for communication among things with different interfaces, operating systems, and architectures. The main aim of this paper is to study middleware technologies for the CoT. Toward this end, we first present the main features and characteristics of middleware. Next, we study different architecture styles and service domains. We then present several middleware platforms suitable for CoT-based platforms, and lastly we discuss a list of current challenges and issues in the design of CoT-based middleware. Comment: http://www.sciencedirect.com/science/article/pii/S2352864817301268, Digital Communications and Networks, Elsevier (2017).
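
    The survey above concerns middleware that hides device heterogeneity behind a uniform interface. A minimal, illustrative sketch of that idea follows (not code from the paper; all class and method names are hypothetical):

        # Minimal sketch of a middleware layer that hides device heterogeneity.
        # All names (DeviceAdapter, MqttSensor, HttpSensor, Middleware) are hypothetical.
        from abc import ABC, abstractmethod

        class DeviceAdapter(ABC):
            """Uniform interface the middleware exposes to applications."""
            @abstractmethod
            def read(self):
                ...

        class MqttSensor(DeviceAdapter):
            def read(self):
                # A real adapter would subscribe to an MQTT topic here.
                return {"source": "mqtt", "value": 21.5}

        class HttpSensor(DeviceAdapter):
            def read(self):
                # A real adapter would issue an HTTP GET to the device here.
                return {"source": "http", "value": 22.1}

        class Middleware:
            """Registers heterogeneous devices and serves applications uniformly."""
            def __init__(self):
                self._devices = {}

            def register(self, name, device):
                self._devices[name] = device

            def query(self, name):
                return self._devices[name].read()

        mw = Middleware()
        mw.register("room-temp", MqttSensor())
        mw.register("hall-temp", HttpSensor())
        print(mw.query("room-temp"), mw.query("hall-temp"))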

    Flexible provisioning of Web service workflows

    Web services promise to revolutionise the way computational resources and business processes are offered and invoked in open, distributed systems, such as the Internet. These services are described using machine-readable meta-data, which enables consumer applications to automatically discover and provision suitable services for their workflows at run-time. However, current approaches have typically assumed that service descriptions are accurate and deterministic, and so have neglected to account for the fact that services in these open systems are inherently unreliable and uncertain. Specifically, network failures, software bugs and competition for services may regularly lead to execution delays or even service failures. To address this problem, the process of provisioning services needs to be performed in a more flexible manner than has so far been considered, in order to proactively deal with failures and to recover workflows that have partially failed. To this end, we devise and present a heuristic strategy that varies the provisioning of services according to their predicted performance. Using simulation, we then benchmark our algorithm and show that it leads to a 700% improvement in average utility, while successfully completing up to eight times as many workflows as approaches that do not consider service failures.
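
    The provisioning idea described above can be pictured with a small sketch (not the paper's actual heuristic; service names and probabilities are made up): provision redundant providers for a task, best predicted performers first, until the estimated chance that at least one succeeds reaches a target.

        # Illustrative redundancy-based provisioning under predicted failure rates.
        def provision(providers, target=0.95):
            """providers: list of (name, predicted_success_probability)."""
            chosen, p_fail = [], 1.0
            for name, p_ok in sorted(providers, key=lambda x: -x[1]):
                chosen.append(name)
                p_fail *= (1.0 - p_ok)
                if 1.0 - p_fail >= target:
                    break
            return chosen, 1.0 - p_fail

        names, p = provision([("svcA", 0.7), ("svcB", 0.6), ("svcC", 0.5)])
        # All three candidates are provisioned; combined success estimate is 0.94.
        print(names, round(p, 2))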

    Dynamic service composition for telecommunication services and its challenges

    As communication networks have evolved towards IP (Internet Protocol) networks, telecommunication operators have expanded their reach to Internet multimedia and web content services while operating circuit-switched networks in parallel. With the adoption of SOA (Service Oriented Architecture), which enables service capability interfaces to be published and integrated with other service capabilities into new composite services, service composition allows telecommunication providers to accelerate the provisioning of new services. From the perspective of telecommunication providers delivering integrated composite services across different providers and different network protocols, this paper aims to present current middleware-based approaches to service composition, discuss the requirements for meeting the challenges, and compare the approaches.
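
    As a toy illustration of composing published capability interfaces into a new composite service (hypothetical functions, not an actual telecom API):

        # Two capability interfaces composed into a new composite service.
        def locate(user):
            return {"user": user, "cell": "A-17"}

        def send_sms(user, text):
            return "SMS to %s: %s" % (user, text)

        def location_alert(user):
            """Composite service: look up the user's cell, then notify them."""
            where = locate(user)
            return send_sms(user, "You are in zone %s" % where["cell"])

        print(location_alert("alice"))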

    Dynamic selection of redundant web services

    In the domain of Web Services, it is not uncommon to find redundant services that provide the same functionality to clients. Services with the same functionality can be clustered into a group of redundant services. Conversely, if a service offers several functionalities, it belongs to more than one group. Having various Web Services that are able to handle a client's request suggests the need for a mechanism that selects the most appropriate Web Service at a given moment in time. This thesis presents an approach, the Virtual Web Services Layer, for dynamic service selection based on virtualization on the server side. It helps manage redundant services in a transparent manner and allows services to be added to the system at run-time. In addition, the layer assures a level of security, since consumers do not have direct access to the Web Services. Several selection techniques are applied to increase the performance of the system in terms of load balancing, dependability, or execution time. The results of the experiments show which selection techniques are appropriate when different QoS criteria of the services are known, and how the correctness of this information influences the decision-making process.
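
    One way such a selection technique might look (a hypothetical weighted-QoS scorer, not the thesis implementation) is sketched below.

        # Pick the redundant service with the best weighted QoS score.
        def select(services, w_time=0.5, w_rel=0.3, w_load=0.2):
            """services: dicts with 'name', 'resp_time' (s), 'reliability' and 'load' in [0, 1]."""
            def score(s):
                # Lower response time and load are better; higher reliability is better.
                return (w_time * (1.0 / (1.0 + s["resp_time"]))
                        + w_rel * s["reliability"]
                        + w_load * (1.0 - s["load"]))
            return max(services, key=score)["name"]

        group = [
            {"name": "ws1", "resp_time": 0.8, "reliability": 0.99, "load": 0.7},
            {"name": "ws2", "resp_time": 1.2, "reliability": 0.95, "load": 0.2},
        ]
        print(select(group))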

    QoS awareness and adaptation in service composition

    The dynamic nature of a Web service execution environment generates frequent variations in the Quality of Service offered to consumers; therefore, obtaining the expected results while running a composite service is not guaranteed. When combining this highly changing environment with the increasing emphasis on Quality of Service, management of composite services turns into a time-consuming and complicated task. Different approaches and tools have been proposed to mitigate the impact of unexpected events during the execution of composite services. Among them, self-adaptive proposals have stood out, since they aim to maintain functional and quality levels by dynamically adapting composite services to the environment conditions, reducing human intervention. The research presented in this Thesis is centred on self-adaptive properties in service composition, mainly focused on self-optimization. Three models have been proposed to target self-optimization, considering various QoS parameters and the benefit of performing adaptation, and looking at adaptation from two perspectives: reactive and proactive. They target situations where the QoS of the composition is decreasing. They also consider situations where a number of the accumulated QoS values, at a certain point of the process, are better than expected, providing the possibility of improving other QoS parameters. These approaches have been implemented in service composition frameworks and evaluated through the execution of test cases. Evaluation was performed by comparing the QoS values gathered from multiple executions of composite services, using the proposed optimization models and a non-adaptive approach. The benefit of adaptation was found to be a useful value during the decision-making process for determining whether or not adaptation was needed. Results show that using optimization mechanisms when executing composite services provides significant improvements in the global QoS values of the compositions. Nevertheless, in some cases there is a trade-off, where one of the measured parameters shows an increment in order to improve the others.
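
    The "benefit of performing adaptation" mentioned above can be illustrated with a simple decision rule (hypothetical numbers and function names; the thesis models are more elaborate): adapt only when the predicted QoS gain outweighs the cost of re-planning.

        # Decide whether adapting a running composition is worthwhile.
        def should_adapt(expected_qos, observed_qos, predicted_qos_after, adaptation_cost):
            degradation = expected_qos - observed_qos
            benefit = predicted_qos_after - observed_qos
            return degradation > 0 and benefit > adaptation_cost

        # Utility dropped from 0.9 to 0.7; replacing a service is predicted to
        # restore 0.85 at a re-planning cost equivalent to 0.05 utility.
        print(should_adapt(0.9, 0.7, 0.85, 0.05))   # True -> trigger adaptation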

    Model aware execution of composite web services

    In the Service Oriented Architecture (SOA), services are computational elements that are published, discovered, consumed and aggregated across platform and organizational borders. The most commonly used technology to achieve SOA is Web Services (WSs). This is due to the standardization process (the WSDL, SOAP and UDDI standards) and a wide range of available infrastructure and tools. A very interesting aspect of WSs is their composability. WSs can be easily aggregated into complex workflows, called Composite Web Services (CWSs). These compositions of services enable further reuse, and in this way new, even more complex, systems are built. Although there are many languages to specify or implement workflows, in service-oriented systems BPEL (Business Process Execution Language) is widely accepted. With this language, WSs are orchestrated and then executed with specialized engines (like ActiveBPEL). While being very popular, BPEL has certain limitations in monitoring and optimizing the execution of CWSs. It is very hard with this language to adapt CWSs to changes in the performance of the WSs used, and also to select the optimal way to execute a CWS. To overcome the limitations of BPEL, I present a model-aware approach to executing CWSs. To achieve model awareness, the Coloured Petri Nets (CPN) formalism is adopted as the basis of the execution of CWSs. This differs from other work on using formal methods in CWSs, which is restricted to purposes like verification or correctness checking. Here the formal and unambiguous notation of CPNs is used to model, analyze, execute and monitor CWSs. Furthermore, this approach to executing CWSs, which is based on the CPN formalism, is implemented in a model-aware middleware. It is also demonstrated how the middleware improves the performance and reliability of CWSs.
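
    A toy version of the token-based execution underlying this approach (illustrative only; the middleware uses full Coloured Petri Nets, not this plain-net sketch):

        # A transition fires when every input place holds a token,
        # consuming them and producing tokens in its output places.
        def fire(marking, transition):
            inputs, outputs = transition
            if all(marking.get(p, 0) > 0 for p in inputs):
                for p in inputs:
                    marking[p] -= 1
                for p in outputs:
                    marking[p] = marking.get(p, 0) + 1
                return True
            return False

        marking = {"start": 1}
        invoke_ws = (["start"], ["ws_called"])   # call a web service
        collect = (["ws_called"], ["done"])      # collect its response
        for t in (invoke_ws, collect):
            fire(marking, t)
        print(marking)   # {'start': 0, 'ws_called': 0, 'done': 1}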

    A model-based approach for automatic recovery from memory leaks in enterprise applications

    Large-scale distributed computing systems such as data centers are hosted on heterogeneous and networked servers that execute in a dynamic and uncertain operating environment, caused by factors such as time-varying user workload and various failures. Therefore, achieving stringent quality-of-service goals is a challenging task, requiring a comprehensive approach to performance control, fault diagnosis, and failure recovery. This work presents a model-based approach for fault management, which integrates limited lookahead control (LLC), diagnosis, and fault-tolerance concepts that (1) enable systems to adapt to environment variations, (2) maintain the availability and reliability of the system, and (3) facilitate system recovery from failures. This thesis focuses on memory leak errors. A characterization function is designed to detect memory leaks. Then, an LLC scheme is applied to enable the computing system to adapt efficiently to variations in the workload, and to enable the system to recover from memory leaks and maintain functionality.
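
    A leak "characterization function" of the kind mentioned above could, for example, flag a sustained upward trend in heap usage (a hypothetical sketch, not the function defined in the thesis):

        # Flag a suspected leak when heap samples show a sustained upward trend.
        def leak_suspected(samples, min_slope=1.0):
            """samples: heap usage in MB at regular intervals; compares least-squares slope to min_slope."""
            n = len(samples)
            mean_x, mean_y = (n - 1) / 2.0, sum(samples) / float(n)
            num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
            den = sum((x - mean_x) ** 2 for x in range(n))
            return (num / den) > min_slope

        print(leak_suspected([100, 103, 101, 106, 108, 112]))   # True: ~2.3 MB growth per interval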

    Combining Mobile Agents and Process-based Coordination to Achieve Software Adaptation

    We have developed a model and a platform for end-to-end run-time monitoring, behavior and performance analysis, and consequent dynamic adaptation of distributed applications. This paper concentrates on how we coordinate and actuate the potentially multi-part adaptation, operating externally to the target systems, that is, without requiring any a priori built-in adaptation facilities on the part of said target systems. The actual changes are performed on the fly onto the target by communities of mobile software agents, coordinated by a decentralized process engine. These changes can be coarse-grained, such as replacing entire components or rearranging the connections among components, or fine-grained, such as changing the operational parameters, internal state and functioning logic of individual components. We discuss our successful experience using our approach in the dynamic adaptation of a large-scale commercial application, which requires both coarse- and fine-grained modifications.
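
    The coarse- versus fine-grained distinction above can be pictured with a small sketch (hypothetical names; the paper actuates changes through mobile agents coordinated by a decentralized process engine):

        # Externally actuated adaptation: tweak a parameter or swap a component.
        class Target:
            def __init__(self):
                self.components = {"cache": "CacheV1"}
                self.params = {"pool_size": 8}

        def adapt(target, action):
            kind, key, value = action
            if kind == "set_param":      # fine-grained: change an operational parameter
                target.params[key] = value
            elif kind == "replace":      # coarse-grained: replace an entire component
                target.components[key] = value

        app = Target()
        adapt(app, ("set_param", "pool_size", 32))
        adapt(app, ("replace", "cache", "CacheV2"))
        print(app.params, app.components)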