444 research outputs found

    Smart Asset Management for Electric Utilities: Big Data and Future

    This paper discusses future challenges in terms of big data and new technologies. Utilities have been collecting data in large amounts, but these data are hardly utilized because of their sheer volume and the uncertainty associated with them. Condition monitoring of assets collects large amounts of data during daily operations. The question arises: "How can information be extracted from a large chunk of data?" The notion of "rich data and poor information" is being challenged by big data analytics with the advent of machine learning techniques. Along with technological advancements like the Internet of Things (IoT), big data analytics will play an important role for electric utilities. In this paper, these challenges are answered with pathways and guidelines to make current asset management practices smarter for the future. Comment: 13 pages, 3 figures, Proceedings of 12th World Congress on Engineering Asset Management (WCEAM) 201

    Design and optimization of optical grids and clouds


    Cloud computing as business perspectives for product lifecycle management systems

    In a dynamic economic environment, a company's survival may depend on its ability to focus on its core business and adapt quickly. Yesterday's profitable business model cannot be counted on to translate into future growth and profits. As the business adapts to changing government and industry regulations, evaluates new business partnerships, and anticipates competitive threats, IT needs to help the business find new ways to respond to such fast changes. At the same time, plans for change must often be made in the context of limited resources in finances, people, technology, and power

    Performance and efficiency optimization of multi-layer IoT edge architecture

    Abstract. The Internet of Things (IoT) has become a backbone technology that connects together various devices with diverse capabilities. It is a technology which enables ubiquitously available digital services for end-users. IoT applications for mission-critical scenarios have strict performance requirements in terms of latency, scalability, security and privacy. To fulfil these requirements, IoT also needs support from relevant enabling technologies, such as cloud, edge, virtualization and fifth-generation mobile communication (5G) technologies. For latency-critical applications and services, the long routes between traditional cloud servers and end-devices (sensors/actuators) make computing at those data centres infeasible, even though traditional clouds provide very high computational and storage capacity for current IoT systems. The Multi-access Edge Computing (MEC) model can be used to overcome this challenge, as it brings cloud computing (CC) capacity within, or next to, the access network base stations. However, the capacity to perform the most critical processes at the local network layer is often necessary to cope with access network issues. Therefore, this thesis compares two existing IoT models, the traditional cloud-IoT model and a MEC-based edge-cloud-IoT model, with a proposed local edge-cloud-IoT model with respect to their performance and efficiency, using the iFogSim simulator. The results consolidate our research team's previous findings that utilizing a three-tier edge-IoT architecture, capable of optimally utilizing the computational capacity of each of the three tiers, is an effective measure to reduce energy consumption, improve end-to-end latency and minimize operational costs in latency-critical IoT applications
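The trade-off the abstract describes can be illustrated with a toy latency model: computation far from the device traverses more network hops but runs on faster hardware. All figures below are hypothetical stand-ins, not iFogSim results from the thesis.

```python
# Toy end-to-end latency model for the three IoT topologies compared above.
# All delay values are illustrative assumptions, not measured results.

def end_to_end_latency(delays):
    """Sum per-hop network delays plus the processing delay at the compute tier."""
    return sum(delays)

# Assumed one-way delays in milliseconds (hypothetical values).
SENSOR_TO_GATEWAY = 2.0        # local wireless hop
GATEWAY_TO_BASESTATION = 8.0   # access network hop
BASESTATION_TO_CLOUD = 40.0    # core network / internet hop

# Faster compute tiers sit farther from the device (hypothetical values).
PROCESSING = {"cloud": 1.0, "mec": 3.0, "local": 6.0}

topologies = {
    # traditional cloud-IoT: sensor -> gateway -> base station -> cloud
    "cloud": [SENSOR_TO_GATEWAY, GATEWAY_TO_BASESTATION,
              BASESTATION_TO_CLOUD, PROCESSING["cloud"]],
    # MEC-based edge-cloud-IoT: computation at the base station
    "mec":   [SENSOR_TO_GATEWAY, GATEWAY_TO_BASESTATION, PROCESSING["mec"]],
    # proposed local edge-cloud-IoT: computation at the local gateway
    "local": [SENSOR_TO_GATEWAY, PROCESSING["local"]],
}

for name, delays in topologies.items():
    print(f"{name}: {end_to_end_latency(delays):.1f} ms")
```

Under these assumed numbers the local tier wins despite its slower processor, which mirrors the thesis's finding that a three-tier architecture helps latency-critical applications.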

    Performance Analysis of Hadoop MapReduce And Apache Spark for Big Data

    In the recent era, information has grown at an exponential rate. To obtain new insights, this information must be carefully interpreted and analyzed. There is, therefore, a need for a system that can process data efficiently at all times. Distributed cloud computing data processing platforms are important tools for data analytics on a large scale. In this area, Apache Hadoop (High-Availability Distributed Object-Oriented Platform) MapReduce has evolved as the standard. A MapReduce job reads and processes its input data and then writes the result back to the Hadoop Distributed File System (HDFS). Limitations of its programming interface have led to the development of modern data-flow-oriented frameworks such as Apache Spark, which uses Resilient Distributed Datasets (RDDs) to hold data structures in memory. Since RDDs can be stored in memory, algorithms can iterate very efficiently over their data many times. Cluster computing is a major investment for any organization that chooses to perform Big Data analysis, and MapReduce and Spark are two well-known open-source cluster-computing frameworks for it. Cluster computing hides task complexity behind simple, user-friendly programming while offering low latency. It improves throughput and provides backup uptime should the main system fail. Its features include flexibility, task scheduling, higher availability, and faster processing speed. Big Data analytics has become more compute-intensive as data management becomes a big issue for scientific computation. High-Performance Computing (HPC) is undoubtedly of great importance for big data processing, and the main application of this research work is towards the realization of HPC for Big Data analysis. This thesis investigates the processing capability and efficiency of Hadoop MapReduce and Apache Spark using Cloudera Manager (CM).
Cloudera Manager provides end-to-end cluster management for the Cloudera Distribution for Apache Hadoop (CDH). The implementation was carried out with Amazon Web Services (AWS), which was used to configure the virtual machines (VMs). Four Linux instances of the free-tier-eligible t2.micro type were launched using Amazon Elastic Compute Cloud (EC2) and configured into a four-node cluster using Secure Socket Shell (SSH). A Big Data application was generated and injected while both MapReduce and Spark jobs were run with different queries such as scan, aggregation, and two-way and three-way joins. The time taken for each task to complete was recorded, observed, and thoroughly analyzed. It was observed that Spark executes jobs faster than MapReduce
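The programming model the abstract benchmarks can be sketched in a few lines: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. This is a single-process illustration of the model only; Hadoop distributes these phases across a cluster, and the word-count example and its inputs are my own.

```python
# Minimal in-process sketch of the MapReduce programming model (word count).
# Hadoop runs the same three phases distributed over HDFS blocks and nodes.
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every input line."""
    for line in records:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped.items()

def reduce_phase(grouped):
    """Reduce: aggregate the values for each key."""
    return {key: sum(values) for key, values in grouped}

data = ["big data big cluster", "spark and mapreduce"]
counts = reduce_phase(shuffle(map_phase(data)))
print(counts["big"])  # 2
```

Spark's advantage noted in the thesis comes from keeping the intermediate collections of such pipelines in memory as RDDs, rather than materializing each stage back to HDFS.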

    LiDAR aided simulation pipeline for wireless communication in vehicular traffic scenarios

    Abstract. Integrated Sensing and Communication (ISAC) is a modern technology under development for Sixth Generation (6G) systems. This thesis focuses on creating a simulation pipeline for dynamic vehicular traffic scenarios and a novel approach to reducing wireless communication overhead with a Light Detection and Ranging (LiDAR) based system. The simulation pipeline can be used to generate data sets for numerous problems. Additionally, the developed error model for vehicle detection algorithms can be used to characterise LiDAR performance with respect to parameters such as LiDAR height, range, and laser point density. LiDAR behaviour in a traffic environment is reported as part of the results of this study. A periodic beam index map is developed by capturing the antenna azimuth and elevation angles that yield the maximum Reference Signal Received Power (RSRP) for a simulated receiver grid on the road, and by classifying areas with the Support Vector Machine (SVM) algorithm to reduce the number of Synchronization Signal Blocks (SSBs) that need to be sent in Vehicle-to-Infrastructure (V2I) communication. This approach effectively reduces the wireless communication overhead in V2I communication
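The beam index map idea can be shown with a toy model: for each receiver position on the road, record which candidate beam gives the highest RSRP. The beam angles, grid, and angular-misalignment RSRP model below are hypothetical stand-ins for the thesis's ray-traced values; in the thesis an SVM is then trained on such labelled points so whole road regions, rather than exhaustive sweeps, determine which SSBs to transmit.

```python
# Toy beam-index map: for each receiver position (expressed as the azimuth
# seen from the antenna), pick the SSB beam with the highest simulated RSRP.
# Beam directions and the RSRP model are hypothetical, not from the thesis.

BEAM_AZIMUTHS = [-30.0, 0.0, 30.0]  # candidate beam pointing angles (degrees)

def rsrp(beam_az, rx_az):
    """Simplified RSRP score: power falls off with angular misalignment."""
    return -abs(beam_az - rx_az)  # dB-like score, higher is better

def best_beam(rx_az):
    """Index of the beam with the maximum RSRP at this receiver azimuth."""
    return max(range(len(BEAM_AZIMUTHS)),
               key=lambda i: rsrp(BEAM_AZIMUTHS[i], rx_az))

# Receiver grid along the road.
grid = [-40, -20, -5, 5, 20, 40]
beam_index_map = {az: best_beam(az) for az in grid}
print(beam_index_map)  # {-40: 0, -20: 0, -5: 1, 5: 1, 20: 2, 40: 2}
```

Once such a map exists, the infrastructure only needs to sweep the beams relevant to occupied regions, which is the overhead reduction the abstract claims.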

    Performance analysis of blockchain-based smart grid with Ethereum and Hyperledger implementations

    Abstract. Smart grids lay the foundation for future communities. Smart homes, smart buildings, smart streets, and smart offices are built when intelligent devices are stacked upon intelligent devices. To reach their maximum capacity, they all must be supported by an intelligent power supply. For optimal, real-time electricity consumption, monitoring and trading, blockchain possesses a number of potential benefits in its application to electricity infrastructure. A comprehensive system architecture for a blockchain-based smart grid is proposed, and peer-to-peer (P2P) energy trading is implemented between Distribution System Operators (DSOs), local energy providers and consumers. This thesis presents a virtual smart grid equipped with smart contracts capable of virtual activities such as a market payment function, and compares the performance of the blockchain-based smart grid using Ethereum and Hyperledger Fabric-based implementations. The challenges faced during the implementation of the blockchain protocols are discussed, and their evaluation, in light of finding sustainable solutions for developing secure and reliable smart grid operations, is the major objective of the thesis
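The market payment function mentioned above boils down to atomically settling money against energy between two accounts. Below is a single-process Python sketch of that settlement logic under my own account names and prices; the thesis implements the equivalent as smart contracts on Ethereum and Hyperledger Fabric, not as local Python.

```python
# Toy sketch of the market-payment settlement a P2P energy-trading smart
# contract might perform. Accounts, prices and quantities are hypothetical.

class EnergyMarket:
    def __init__(self):
        self.balances = {}  # account -> currency units
        self.energy = {}    # account -> kWh available

    def register(self, account, funds=0.0, energy_kwh=0.0):
        self.balances[account] = funds
        self.energy[account] = energy_kwh

    def trade(self, buyer, seller, kwh, price_per_kwh):
        """Settle an energy purchase atomically, as a contract would:
        either both the payment and the energy transfer happen, or neither."""
        cost = kwh * price_per_kwh
        if self.balances[buyer] < cost or self.energy[seller] < kwh:
            raise ValueError("insufficient funds or energy")
        self.balances[buyer] -= cost
        self.balances[seller] += cost
        self.energy[seller] -= kwh
        self.energy[buyer] = self.energy.get(buyer, 0.0) + kwh

market = EnergyMarket()
market.register("consumer", funds=100.0)
market.register("local_provider", energy_kwh=50.0)
market.trade("consumer", "local_provider", kwh=10, price_per_kwh=0.2)
print(market.balances["consumer"])  # 98.0
```

On a real chain the check-then-transfer sequence runs inside one transaction, so the consistency guarantee comes from the ledger rather than from Python control flow.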

    Integration and characterisation of the performance of fifth-generation mobile technology (5G) connectivity over the University of Oulu 5G test network (5GTN) for cognitive edge node based on FRACTAL edge platform

    Abstract. In recent years, there has been growing interest in cognitive edge nodes: intelligent devices that can collect and process data at the edge of the network. These nodes are becoming increasingly important for applications such as smart cities, industrial automation, and healthcare. However, implementing cognitive edge nodes requires a reliable and efficient communication network. Therefore, this thesis assesses the performance of direct cellular (5G) and IEEE 802.11-based Wireless Local Area Network (WLAN) technology, which have the potential to offer low-latency, high-throughput and energy-efficient communication, for cognitive edge nodes. The study evaluated the network performance metrics of throughput, latency, and power consumption for three different FRACTAL-based network architectures: an IEEE 802.11-based last mile, a direct cellular (5G) backbone, and an IEEE 802.11-based last mile over a cellular (5G) backbone. This research aims to provide insights into the performance of 5G technology for cognitive edge nodes. The findings suggest that the power consumption of IEEE 802.11-enabled nodes was only slightly higher than in the reference case, indicating that they are more energy-efficient than 5G-enabled nodes. Additionally, in terms of latency, IEEE 802.11 technology may be more favourable. The throughput tests revealed that the cellular (5G) connection exhibited high throughput for communication between a test node and an upper-tier node situated either on the internet or at the network edge. In addition, it was found that the FRACTAL edge platform is flexible and scalable and supports different wireless technologies, making it a suitable platform for implementing cognitive edge nodes. Overall, this study provides insights into the potential of 5G technology and the FRACTAL edge platform for implementing cognitive edge nodes. The results can be valuable for researchers and practitioners working in the fields of wireless communication and edge computing, as they shed light on the feasibility and performance of these technologies for implementing cognitive edge nodes in various applications

    Turku Centre for Computer Science – Annual Report 2013

    Due to a major reform of the organization and responsibilities of TUCS, its role, activities, and even structures were under reconsideration in 2013. The traditional pillar of collaboration at TUCS, doctoral training, was reorganized owing to changes at both universities in line with the renewed national system for doctoral education. Computer Science and Engineering and Information Systems Science are now accompanied by Mathematics and Statistics in newly established doctoral programmes at both the University of Turku and Åbo Akademi University. Moreover, both universities granted sufficient resources to their respective programmes for doctoral training in these fields, so that joint activities at TUCS can continue. The outcome of this reorganization has the potential of proving to be a success in terms of scientific profile as well as the quality and quantity of scientific and educational results. International activities that have been characteristic of TUCS since its inception remain strong. TUCS' participation in European collaboration through the EIT ICT Labs Master's and Doctoral School is now more active than ever. The new double degree programmes at MSc and PhD level between the University of Turku and Fudan University in Shanghai, P.R. China were successfully set up and are now running for their first year. The joint students will add to the already international atmosphere of the ICT House. The four new thematic research programmes set up according to the decision by the TUCS Board have now established themselves, and a number of events and other activities saw the light in 2013. The TUCS Distinguished Lecture Series managed to gather a large audience with its several prominent speakers.
The development of these and other research centre activities continues, and new practices and structures will be initiated to support the tradition of close academic collaboration. The TUCS slogan, Where Academic Tradition Meets the Exciting Future, has proven true throughout these changes. Despite the dark clouds in the national and European economic sky, science and higher education in the field have managed to retain all the key ingredients for success. Indeed, the future of ICT and Mathematics in Turku seems exciting.