
    Evaluating XMPP Communication in IEC 61499-based Distributed Energy Applications

    The IEC 61499 reference model provides an international standard developed specifically for supporting the creation of distributed event-based automation systems. Functionality is abstracted into function blocks, which can be programmed graphically as well as textually. As one of the design goals was the ability to support distributed control applications, communication plays a central role in the IEC 61499 specification. To enable the deployment of functionality to distributed platforms, these platforms need to exchange data via a variety of protocols; IEC 61499 supports such protocols through "Service Interface Function Blocks" (SIFBs). In the context of smart grids and energy applications, IEC 61499 could play an important role, as these applications require coordinating several distributed control logics; yet support for grid-related protocols is a precondition for widespread adoption of IEC 61499. The eXtensible Messaging and Presence Protocol (XMPP), on the other hand, is a well-established messaging protocol that has recently been adopted for smart grid communication. SIFBs for XMPP thus allow distributed control applications that exchange all control-relevant data over XMPP to be realized with IEC 61499. This paper introduces the idea of integrating XMPP into SIFBs, demonstrates a prototypical implementation in an open source IEC 61499 platform, and evaluates the feasibility of the result.
    Comment: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA)
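    To make the SIFB concept concrete, the following is a minimal Python sketch of how a publish-style XMPP SIFB might map the IEC 61499 event/data interface (INIT/REQ events in, INITO/CNF events out) onto message sending. The class names and the stubbed transport are illustrative only, not the paper's implementation, which targets an existing open source IEC 61499 runtime.

```python
# Illustrative sketch of an XMPP "publish" Service Interface Function Block (SIFB).
# The XmppClient stub stands in for a real XMPP library (e.g. slixmpp);
# all names and interfaces here are hypothetical, not the paper's code.

class XmppClient:
    """Stub transport: a real SIFB would wrap an XMPP library connection."""
    def connect(self, jid: str, password: str) -> None:
        print(f"connected as {jid}")

    def send(self, to: str, body: str) -> None:
        print(f"-> {to}: {body}")


class XmppPublishSIFB:
    """Maps the IEC 61499 event/data interface onto XMPP message sending.

    Event inputs : INIT (open connection), REQ (send SD_1)
    Event outputs: INITO (connection ready), CNF (send confirmed)
    """
    def __init__(self, jid: str, password: str, peer: str):
        self._client = XmppClient()
        self._jid, self._password, self._peer = jid, password, peer
        self.SD_1 = ""          # data input: payload to transmit

    def INIT(self) -> str:
        self._client.connect(self._jid, self._password)
        return "INITO"          # event output: initialisation done

    def REQ(self) -> str:
        self._client.send(self._peer, self.SD_1)
        return "CNF"            # event output: request confirmed


# Usage: wire the block up and fire its events.
fb = XmppPublishSIFB("plc@grid.example", "secret", "scada@grid.example")
fb.INIT()
fb.SD_1 = "voltage=230.1"
fb.REQ()
```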

    Emulating opportunistic networks with KauNet Triggers

    In opportunistic networks the availability of an end-to-end path is no longer required; instead, such networks may take advantage of temporary connectivity opportunities. Opportunistic networks present a demanding environment for network emulation, as the traditional emulation setup, where application/transport endpoints only send and receive packets from the network following a black-box approach, is no longer applicable. Opportunistic networking protocols and applications additionally need to react to the dynamics of the underlying network beyond what is conveyed through the exchange of packets. In order to support IP-level emulation evaluations of applications and protocols that react to lower-layer events, we have proposed the use of emulation triggers. Emulation triggers can emulate arbitrary cross-layer feedback and can be synchronized with other emulation effects. After introducing the design and implementation of triggers in the KauNet emulator, we describe the integration of triggers with the DTN2 reference implementation and illustrate how the functionality can be used to emulate a classical DTN data-mule scenario.
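    As a rough illustration of the trigger idea (not KauNet's actual interface), the sketch below schedules time-stamped cross-layer events and delivers them to a toy DTN endpoint, so the endpoint can forward buffered bundles during a connectivity window instead of inferring link state from packet exchange alone.

```python
# Hypothetical sketch of the trigger pattern: an emulator-side scheduler
# delivers cross-layer events (e.g. "link_up") to registered callbacks,
# synchronized by timestamp. Names are illustrative, not KauNet's API.

import heapq
from typing import Callable

class TriggerScheduler:
    """Delivers time-stamped cross-layer events to registered handlers."""
    def __init__(self):
        self._events = []   # min-heap of (time, event_name)
        self._handlers: dict[str, list[Callable[[float], None]]] = {}

    def schedule(self, at: float, event: str) -> None:
        heapq.heappush(self._events, (at, event))

    def on(self, event: str, handler: Callable[[float], None]) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def run(self) -> None:
        while self._events:
            at, event = heapq.heappop(self._events)
            for handler in self._handlers.get(event, []):
                handler(at)

class DataMuleAgent:
    """Toy DTN endpoint: buffers bundles, forwards only while a contact is up."""
    def __init__(self):
        self.buffer = ["bundle-1", "bundle-2"]

    def link_up(self, t: float) -> None:
        print(f"t={t}: contact available, forwarding {self.buffer}")
        self.buffer.clear()

# A data-mule pass: the mule comes into range at t=10.
sched = TriggerScheduler()
mule = DataMuleAgent()
sched.on("link_up", mule.link_up)
sched.schedule(10.0, "link_up")
sched.run()
```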

    Interfacing IEC 61850-9-2 Process Bus Data to a Simulation Environment

    IEC 61850 – Communication networks and systems in substations – is the standard for building the communication infrastructure between the different Intelligent Electronic Devices (IEDs) in a substation automation system. It consists of several parts, including the Specific Communication Service Mapping for the transmission of sampled values (defined in part 9-2 of the standard). Sampled value communication is high-speed, time-critical, Ethernet-based communication for the transfer of measurement data over the network; the standard defines the sampling rates and time synchronization requirements of the system. The main purpose of this thesis is to extract sampled value data (four voltages, four currents) from a PCAP file captured on the network in the 'Sundom Smart Grid' environment and convert the data into the format needed for analysis in the PSCAD simulation tool. The work thus serves as an interface between the real smart grid environment and the test environment at the University of Vaasa. The thesis explains fundamental concepts related to IEC 61850, and to sampled values in particular, and describes the sampled value frame structure. A software application based on the WinPcap Application Programming Interface (API) has been developed to extract the required data points and produce the data format expected by PSCAD, which is also adaptable for use in MATLAB.
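    The extraction step can be prototyped in a few lines; the sketch below uses Python and scapy rather than the thesis's WinPcap-based C application, and the capture file name is a placeholder. Sampled values travel in Ethernet frames with EtherType 0x88BA, whose payload begins with a small header (APPID, length, two reserved words) followed by the ASN.1/BER-encoded savPdu carrying the measurement channels.

```python
# Minimal sketch: filter IEC 61850-9-2 sampled value frames out of a PCAP
# and expose the APDU for further decoding. The file name is a placeholder.

import struct
from scapy.all import rdpcap, Ether, Dot1Q

SV_ETHERTYPE = 0x88BA

for pkt in rdpcap("sundom_capture.pcap"):
    # SV frames are often VLAN-tagged; look beneath a Dot1Q header if present.
    if Dot1Q in pkt:
        layer = pkt[Dot1Q]
    elif Ether in pkt:
        layer = pkt[Ether]
    else:
        continue
    if layer.type != SV_ETHERTYPE:
        continue

    payload = bytes(layer.payload)
    # SV session header: APPID (2 bytes), Length (2), Reserved1 (2), Reserved2 (2);
    # Length counts from APPID to the end of the APDU.
    appid, length = struct.unpack(">HH", payload[:4])
    apdu = payload[8:length]
    print(f"APPID=0x{appid:04x} length={length} APDU={apdu[:16].hex()}...")
    # Decoding the savPdu into the four voltage and four current channels
    # requires a BER parser and is omitted here.
```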

    Big Data Application and System Co-optimization in Cloud and HPC Environment

    The emergence of big data requires powerful computational resources and memory subsystems that can be scaled efficiently to accommodate its demands. Cloud is a well-established computing paradigm that can offer customized computing and memory resources to meet the scalable demands of big data applications; in addition, its flexible pay-as-you-go pricing model offers opportunities for using resources at large scale with low cost and no infrastructure maintenance burden. High performance computing (HPC), on the other hand, also has powerful infrastructure with the potential to support big data applications. In this dissertation, we explore application and system co-optimization opportunities to support big data in both cloud and HPC environments. Specifically, we exploit the unique features of both application and system to uncover overlooked optimization opportunities and to tackle challenges that are difficult to address by looking at the application or system individually. Based on the characteristics of the workloads and their underlying systems, we divide the work into four categories: 1) memory-intensive applications; 2) compute-intensive applications; 3) applications that are both memory- and compute-intensive; 4) I/O-intensive applications.

    When deploying memory-intensive big data applications to public clouds, one important yet challenging problem is selecting an instance type whose memory capacity is large enough to prevent out-of-memory errors while minimizing cost without violating performance requirements. We propose two techniques for efficient deployment of big data applications with dynamic and intensive memory footprints in the cloud. The first builds a performance-cost model that accurately predicts how, and by how much, virtual memory size slows down the application and, consequently, impacts the overall monetary cost. The second employs a lightweight memory usage prediction methodology based on dynamic meta-models adjusted by the application's own traits; the key idea is to eliminate periodic checkpointing and migrate the application only when the predicted memory usage exceeds the physical allocation.

    When moving compute-intensive applications to the cloud, it is critical to make them scalable so that they can benefit from massive cloud resources. We use Kirchhoff's law, one of the most widely used physical laws in many engineering disciplines, as an example workload. The key challenge of applying Kirchhoff analysis to real-world applications at scale lies in the high, if not prohibitive, computational cost of solving a large number of nonlinear equations. We propose a high-performance deep-learning-based approach for Kirchhoff analysis, namely HDK, which employs two techniques to improve performance: (i) early pruning of unqualified input candidates, which simplifies the equations and selects a meaningful input data range; and (ii) parallelization of forward labelling, which executes steps of the problem in parallel.

    For applications that are both memory- and compute-intensive, we use a blockchain system as a benchmark. Existing blockchain frameworks present a technical barrier for many users who want to modify or test new research ideas in blockchains; worse, many advantages of blockchain systems can be demonstrated only at large scale, which is not always available to researchers. We develop an accurate and efficient emulation system that replays the execution of large-scale blockchain systems on tens of thousands of nodes in the cloud.

    For I/O-intensive applications, we observe an important yet often neglected side effect of lossy scientific data compression. Lossy compression techniques have demonstrated promising results in significantly reducing scientific data size while guaranteeing compression error bounds, but the compressed data size is often highly skewed, which degrades parallel I/O performance. We therefore argue that more attention should be paid to the unbalanced parallel I/O caused by lossy scientific data compression.
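    To illustrate the second deployment technique, the sketch below shows a prediction-driven migration check: extrapolate the application's recent memory trace and trigger migration only when the predicted footprint would exceed the instance's physical memory. The linear-trend model, safety margin, and sample trace are stand-ins for the dissertation's dynamic meta-models, not its actual method.

```python
# Schematic sketch: replace periodic checkpointing with a prediction-driven
# migration decision. All numbers and the trend model are illustrative.

import numpy as np

def predict_next(usage_gb: list[float], horizon: int = 5) -> float:
    """Linear-trend extrapolation of recent memory usage (a stand-in for
    the dissertation's dynamic meta-models)."""
    t = np.arange(len(usage_gb))
    slope, intercept = np.polyfit(t, usage_gb, 1)
    return slope * (len(usage_gb) - 1 + horizon) + intercept

def should_migrate(usage_gb: list[float], physical_gb: float,
                   safety_margin: float = 0.9) -> bool:
    """Migrate only when predicted usage crosses the allocation's margin."""
    return predict_next(usage_gb) > physical_gb * safety_margin

trace = [10.2, 11.0, 11.9, 12.8, 13.5]           # GB, recent samples
print(should_migrate(trace, physical_gb=16.0))   # True: trend crosses 14.4 GB
```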

    Modelling Indoor Air Quality Using Sensor Data and Machine Learning Methods

    Ubiquitous sensing is transforming our societies and how we interact with our surrounding environment; sensors provide large streams of data, while machine learning techniques and artificial intelligence provide the tools needed to generate insights from the data. These developments have taken place in almost every industry sector, with topics such as smart cities and smart buildings becoming key issues as societies seek more sustainable ways of living. Smart buildings are the main context of this thesis: buildings equipped with various sensors used to collect data from the surrounding environment, allowing the building to adapt itself and increase its operational efficiency. Previously, most efforts in realizing smart buildings have focused on energy management and automation, where the goal is to reduce costs associated with heating, ventilation, and air conditioning. A less studied area involves smart buildings and their indoor environments, especially relative to sub-spaces within a building. Developments in low-cost sensor technologies have created new opportunities to sense indoor environments in more granular ways, providing new possibilities to model finer attributes of spaces within a building. This thesis focuses on modeling indoor environment data obtained from a multipurpose building that serves primarily as a school. The aim is to assess the quality of the indoor environment relative to regulatory guidelines and to explore suitable predictive models for thermal comfort and indoor air quality. Additionally, design science methodology is applied in the creation of a proof-of-concept software system, which demonstrates the use of Web APIs to provide sensor data to clients that may use the data to render analytics and other insights for a building's stakeholders. Overall, the main technical contributions of this thesis are twofold: (i) a potential web-application design for indoor air quality IoT data and (ii) an exposition of modeling indoor air quality data based on a variety of sensors and multiple spaces within the same building. Results indicate that a software-based tool supporting monitoring of a building's indoor environment would be beneficial in maintaining correct levels of various indoor parameters. Further, modeling data from different spaces within the building shows a need for heterogeneous models: the parameters used to predict thermal comfort and air quality differ across spaces, especially where the spaces differ in size, indoor climate control settings, and other attributes such as occupancy control.
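    The per-space modeling finding can be illustrated with a short sketch that trains a separate regressor for each room. The feature set, space names, and synthetic data below are placeholders, not the thesis's dataset or chosen model.

```python
# Illustrative per-space modelling: fit one regressor per room, since
# heterogeneous models are needed across spaces. Data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_space_data(n: int = 500):
    # features: temperature (C), relative humidity (%), occupancy count
    X = np.column_stack([rng.normal(21, 2, n),
                         rng.uniform(20, 60, n),
                         rng.integers(0, 30, n)])
    co2 = 420 + 18 * X[:, 2] + rng.normal(0, 25, n)   # synthetic CO2 target (ppm)
    return X, co2

models = {}
for space in ["classroom_A", "gym", "office_2"]:      # hypothetical spaces
    X, y = make_space_data()
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    models[space] = model
    print(space, "R^2 =", round(model.score(X_te, y_te), 3))
```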

    The V-Network testbed for malware analysis

    This paper presents a virtualised network environment that serves as a stable and reusable platform for the analysis of malware propagation. The platform, which has been developed using VMware virtualisation technology, enables the use of either a graphical user interface or scripts to create virtual networks, clone, restart and take snapshots of virtual machines, reset experiments, clean virtual machines and manage the entire infrastructure remotely. The virtualised environment uses open source routing software to support the deployment of intrusion detection systems and other malware attack sensors, and is therefore suitable for evaluating countermeasure systems before deployment on live networks. An empirical analysis of network worm propagation has been conducted using worm outbreak experiments on Class A size networks to demonstrate the capability of the developed platform.
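    Scripted control of such a testbed can be sketched with VMware's vmrun command-line tool, as below. The paths, snapshot names, and host-type flag are placeholders, exact vmrun options vary across VMware products and versions, and this is not the paper's own tooling.

```python
# Sketch of scripted experiment control via VMware's vmrun CLI.
# Assumes vmrun is on PATH; paths and snapshot names are placeholders.

import subprocess

VMRUN = "vmrun"
HOST = ["-T", "ws"]    # VMware Workstation; adjust for other VMware products

def run_vmrun(*args: str) -> None:
    subprocess.run([VMRUN, *HOST, *args], check=True)

def reset_experiment(vmx_paths: list[str], baseline: str = "clean") -> None:
    """Revert every VM in the virtual network to its pre-infection snapshot
    and restart it headless, giving a clean slate before a worm run."""
    for vmx in vmx_paths:
        run_vmrun("revertToSnapshot", vmx, baseline)
        run_vmrun("start", vmx, "nogui")

nodes = [f"/vms/node{i:02d}/node{i:02d}.vmx" for i in range(4)]  # placeholder paths
run_vmrun("snapshot", nodes[0], "clean")   # take a baseline snapshot once
reset_experiment(nodes)                    # revert and restart all nodes
```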

    Studies on Hydrodynamic Propulsion of a Biomimetic Tuna
