97 research outputs found

    On Data Dissemination for Large-Scale Complex Critical Infrastructures

    Get PDF
    Middleware plays a key role in achieving the mission of future large-scale complex critical infrastructures, envisioned as federations of several heterogeneous systems over the Internet. However, available approaches for data dissemination are still inadequate, since they are unable to scale and to jointly assure given QoS properties. In addition, the best-effort delivery strategy of the Internet and the occurrence of node failures further hinder the correct and timely delivery of data if the middleware is not equipped with means for tolerating such failures. This paper presents a peer-to-peer approach for resilient and scalable data dissemination over large-scale complex critical infrastructures. The approach is based on epidemic dissemination algorithms between peer groups, combined with semi-active replication of group leaders to tolerate failures and assure resilient delivery of data despite the increasing scale and heterogeneity of the federated system. The effectiveness of the approach is shown by means of extensive simulation experiments based on Stochastic Activity Networks.
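    To illustrate the epidemic (gossip-style) dissemination the abstract refers to, the following Python sketch shows one push-gossip policy in which each peer forwards a newly received message to a few randomly chosen neighbours. It is a minimal illustration, not the paper's middleware: the Peer class, the fanout value, and the fully connected group are assumptions made for the example.

```python
import random

class Peer:
    """Illustrative peer that gossips each new message to a random subset of neighbours."""

    def __init__(self, name, fanout=3):
        self.name = name
        self.fanout = fanout      # how many neighbours to push each message to
        self.neighbours = []      # other Peer instances in the same group
        self.seen = set()         # message ids already delivered

    def receive(self, msg_id, payload):
        if msg_id in self.seen:
            return                # duplicate: the epidemic dies out here
        self.seen.add(msg_id)
        self.deliver(msg_id, payload)
        targets = random.sample(self.neighbours, min(self.fanout, len(self.neighbours)))
        for peer in targets:      # push the message onward
            peer.receive(msg_id, payload)

    def deliver(self, msg_id, payload):
        print(f"{self.name} delivered {msg_id}: {payload}")

# Minimal usage: a fully connected group of ten peers
peers = [Peer(f"p{i}") for i in range(10)]
for p in peers:
    p.neighbours = [q for q in peers if q is not p]
peers[0].receive("m1", "sensor reading")
```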

    CoAP Infrastructure for IoT

    Get PDF
    The Internet of Things (IoT) can be seen as a large-scale network of billions of smart devices. IoT devices often exchange data in small but numerous messages, which requires IoT services to be more scalable and reliable than ever. Traditional protocols known from the Web world do not fit well in the constrained environments these devices operate in. Therefore, many lightweight protocols specialized for the IoT have been studied, among which the Constrained Application Protocol (CoAP) stands out for its well-known REST paradigm and easy integration with the existing Web. At the same time, new paradigms such as Fog Computing have emerged, attempting to avoid the centralized bottleneck in IoT services by moving computation to the edge of the network. Since a Fog node essentially operates in a relatively constrained environment, CoAP fits in well. Among the many attempts at building scalable and reliable systems, Erlang, as a typical concurrency-oriented programming (COP) language, has been battle-tested in the telecom industry, which has requirements similar to those of the IoT. To explore the possibility of applying Erlang, and COP in general, to the IoT, this thesis presents ecoap, an Erlang-based CoAP server/client prototype with a flexible concurrency model that can scale up to an unconstrained environment like the Cloud and scale down to a constrained environment like an embedded platform. The flexibility of the presented server makes the same architecture applicable from the Fog to the Cloud. To evaluate its performance, the proposed server is compared with the mainstream CoAP implementation on an Amazon Web Services (AWS) Cloud instance and a Raspberry Pi 3, representing the unconstrained and constrained environments, respectively. The ecoap server achieves comparable throughput, lower latency, and in general better scalability than the other implementation both in the Cloud and on the Raspberry Pi. The thesis yields positive results and demonstrates the value of the Erlang philosophy in the IoT space.
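    For readers unfamiliar with CoAP, the sketch below issues a single GET request with the Python aiocoap library, used here as a stand-in for the Erlang ecoap client described above; the target URI is a placeholder and the resource path is only an example.

```python
# Minimal CoAP GET using the Python aiocoap library (a stand-in for ecoap);
# the URI below is a placeholder, not a real endpoint.
import asyncio
from aiocoap import Context, Message, GET

async def main():
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri="coap://coap.example.local/.well-known/core")
    response = await protocol.request(request).response
    print("Response code:", response.code)
    print("Payload:", response.payload.decode(errors="replace"))

if __name__ == "__main__":
    asyncio.run(main())
```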

    Optimizing the Selection of Child Broker(s) in the Publish/Subscribe Communication Model of the Data Distribution Service Protocol in a Multi-Zone Area

    Full text link
    The biggest challenge in cloud computing data centers is the rising cost of power consumption. Data center growth runs counter to power savings: the higher a data center's performance, the higher its energy consumption, because the number of servers and other hardware it requires keeps increasing. A cloud computing data center based on High Performance Computing (HPC) is built from a large collection of servers to guarantee high availability, yet some of those servers are provisioned only for peak loads that rarely or never occur. When the load is at its lowest point, those servers sit idle. Power optimization with DNS (Dynamic Shutdown), which exploits periods of low server load, can therefore be an effective way to reduce data center power consumption. However, if this optimization is performed conventionally, based only on real-time data, it is likely to affect data center performance. The optimization in this study instead uses a moving-average prediction method to determine the DNS schedule. Tests on virtual machines show that the prediction method reduces power consumption by 1.14 Watt compared to the conventional method.
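    The moving-average prediction mentioned above can be illustrated in a few lines of Python; the window size, load samples, and shutdown threshold below are made-up values for the example, not figures from the study.

```python
def moving_average_forecast(history, window=4):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(history) < window:
        window = len(history)
    return sum(history[-window:]) / window

# Hypothetical hourly server-load samples (percent utilisation)
load_history = [35, 40, 38, 22, 15, 12, 10]
predicted_load = moving_average_forecast(load_history)

# Schedule a dynamic shutdown only when the *predicted* load is low,
# instead of reacting to a single real-time sample.
SHUTDOWN_THRESHOLD = 20
if predicted_load < SHUTDOWN_THRESHOLD:
    print(f"Predicted load {predicted_load:.1f}% is below threshold: schedule shutdown")
else:
    print(f"Predicted load {predicted_load:.1f}%: keep the server running")
```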

    Quality of Service in Distributed Stream Processing for large scale Smart Pervasive Environments

    Get PDF
    The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to provisioning differentiated quality of service (QoS) while processing data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation: a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
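    To make the idea of differentiated, application-aware quality guarantees concrete, the sketch below replicates only the stream operators whose declared QoS class requires fault tolerance and leaves best-effort operators unreplicated. It is an illustration of the general principle under assumed class labels, not the Quasit or LAAR implementation.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    qos_class: str   # "guaranteed" or "best_effort" (illustrative labels)

def plan_replicas(operators, replicas_for_guaranteed=2):
    """Assign a replica count per operator from its declared QoS class.

    Illustrative policy only: a real system would also weigh cost,
    load, and the expected failure model.
    """
    return {
        op.name: replicas_for_guaranteed if op.qos_class == "guaranteed" else 1
        for op in operators
    }

pipeline = [
    Operator("ecg_alarm_detector", "guaranteed"),    # health-care flow: strict QoS
    Operator("ambient_light_stats", "best_effort"),  # entertainment flow: relaxed QoS
]
print(plan_replicas(pipeline))  # {'ecg_alarm_detector': 2, 'ambient_light_stats': 1}
```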

    Optimizing the Selection of Child Broker(s) in the Publish/Subscribe Communication Model of the Data Distribution Service Protocol in a Multi-Zone Area

    Get PDF
    Existing technology has led to a paradigm in which machines and their supporting devices should be built and connected so that every component in the system can interact with the others in a loosely coupled way and can handle the challenge of scalability. Business processes must be distributed across multiple backends so that the system is resilient to failures and changes. Publish/Subscribe is an appropriate communication paradigm for this problem domain, because it provides mechanisms for handling scalability and for decoupling data delivery. OMG (Object Management Group) DDS (Data Distribution Service) is a standard specification of a data-centric publish/subscribe middleware with many QoS parameters to meet communication requirements. Because DDS can natively be used only for communication within a single zone, several studies have sought mechanisms that allow DDS to be deployed across a multi-zone area, typically using a broker as a communication bridge between zones. However, these studies do not discuss how to keep the node used as the broker from failing due to overload.
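    The abstract does not spell out a selection mechanism, but one simple way to keep a broker node from failing due to overload is to route each new subscription to the least-loaded child broker. The sketch below is a hypothetical policy written for illustration; the ChildBroker class, its capacity figures, and the load metric are assumptions.

```python
class ChildBroker:
    """Illustrative child broker that tracks how many subscriptions it serves."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity        # maximum subscriptions this broker should carry
        self.subscriptions = 0

    def load(self):
        return self.subscriptions / self.capacity

def pick_child_broker(brokers):
    """Pick the child broker with the lowest relative load (hypothetical policy)."""
    candidates = [b for b in brokers if b.subscriptions < b.capacity]
    if not candidates:
        raise RuntimeError("all child brokers are at capacity")
    return min(candidates, key=lambda b: b.load())

# Usage: route a new subscription within a zone
zone_brokers = [ChildBroker("broker-a", 100), ChildBroker("broker-b", 80)]
chosen = pick_child_broker(zone_brokers)
chosen.subscriptions += 1
print("New subscription routed to", chosen.name)
```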

    Co-design of Security Aware Power System Distribution Architecture as Cyber Physical System

    Get PDF
    The modern smart grid involves deep integration between measurement nodes, communication systems, artificial intelligence, power electronics, and distributed resources. On one hand, this integration can dramatically improve grid performance and efficiency; on the other, it can also introduce new types of vulnerabilities to the grid. To obtain the best performance while minimizing the risk of vulnerabilities, the physical power system must be designed as a security-aware system. In this dissertation, an interoperability and communication framework for microgrid control and cyber-physical system enhancements is designed and implemented, taking into account cyber and physical security aspects. The proposed data-centric interoperability layer provides a common data bus and a resilient control network for seamless integration of distributed energy resources. In addition, a synchronized measurement network and advanced metering infrastructure were developed to provide real-time monitoring for active distribution networks. A hybrid hardware/software testbed environment was developed to represent the smart grid as a cyber-physical system through hardware- and software-in-the-loop simulation methods; it also provides a flexible interface for remote integration and experimentation with attack scenarios. The work in this dissertation uses communication technologies to enhance the performance of DC microgrids and distribution networks by extending GPS synchronization to DC networks. GPS synchronization allows distributed DC-DC converters to operate as an interleaved converter system. Along with GPS synchronization, a carrier-extraction synchronization technique was developed to improve the system's security and reliability in case of GPS signal spoofing or jamming. To improve the integration of the microgrid with the utility system, new synchronization and islanding detection algorithms were developed. The developed algorithms overcome problems of SCADA- and PMU-based islanding detection methods, such as communication failure and frequency-stability issues. In addition, a real-time energy management system with online optimization was developed to manage the energy resources within the microgrid. Security and privacy were also addressed at both the cyber and physical levels. For the physical design, two techniques were developed to address physical privacy issues by changing the current and electromagnetic signatures. At the cyber level, a security mechanism for IEC 61850 GOOSE messages was developed to address security shortcomings in the standard.
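    As a generic illustration of message-level authentication for control traffic, and not the specific IEC 61850/62351 mechanism developed in the dissertation, the sketch below appends an HMAC tag to an outgoing payload and verifies it on reception; the shared key and the payload are placeholders.

```python
import hmac
import hashlib

SHARED_KEY = b"placeholder-preconfigured-key"   # placeholder, not a real key

def sign_message(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering or spoofing."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_message(frame: bytes) -> bytes:
    """Check the trailing 32-byte tag and return the payload if it is authentic."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: message rejected")
    return payload

frame = sign_message(b"trip breaker CB-1")   # illustrative control payload
print(verify_message(frame))                 # b'trip breaker CB-1'
```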

    Optimizing the Selection of Child Broker(s) in the Publish/Subscribe Communication Model of the Data Distribution Service Protocol in a Multi-Zone Area

    Full text link