134 research outputs found

    Development of an Adaptive-QoS-Based Digital Environment to Support the Learning Process on Campus

    This research develops a model of a digital environment for the learning process that takes into account the Quality of Service (QoS) of the available Internet access. To provision adequate resources for accessing learning content, information about available bandwidth is needed to determine which services the system can offer to participants. Multimedia-based learning content consumes considerably more bandwidth than text-based content, which becomes a problem when the available bandwidth in the network fluctuates. The research follows a software engineering methodology with an object-oriented approach. The development stages comprise: (i) requirements identification; (ii) conceptual model analysis; (iii) design of an access model for the adaptive-QoS-based digital environment, yielding the research blueprint; (iv) implementation of a prototype demonstrating that the design can be realized to cope with unreliable Internet connections in collaborative learning; and (v) testing of the developed model through a series of scenarios at laboratory-simulation scale. The expected outcomes are: (a) identification of the QoS-level requirements (bandwidth needs) of the applications used in the learning process; (b) identification of the parameters for measuring participants' preferences regarding the required QoS level; (c) a design of the digital-environment model; (d) development of an adaptive-QoS-based access framework; (e) integration of the DLE framework into learning applications; and (f) publication of the research results in seminars and scientific journals.
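
As an illustrative sketch only (not from the paper): the adaptive-QoS idea of matching learning-content variants to the bandwidth currently available might look like the following, where the content variants and their bandwidth thresholds are hypothetical assumptions.

```python
# Minimal sketch of bandwidth-aware content selection, in the spirit of the
# adaptive-QoS access model above. Variants and thresholds are illustrative
# assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class ContentVariant:
    name: str
    min_bandwidth_kbps: int  # bandwidth needed to deliver this variant smoothly

# Ordered from richest (multimedia) to lightest (text).
VARIANTS = [
    ContentVariant("video_hd", 2500),
    ContentVariant("video_sd", 800),
    ContentVariant("audio_slides", 200),
    ContentVariant("text_only", 50),
]

def select_variant(available_kbps: float) -> ContentVariant:
    """Return the richest variant the currently measured bandwidth can sustain."""
    for variant in VARIANTS:
        if available_kbps >= variant.min_bandwidth_kbps:
            return variant
    return VARIANTS[-1]  # degrade gracefully to text when bandwidth is scarce

for bw in (3000, 600, 30):  # fluctuating bandwidth samples, in kbps
    print(bw, "->", select_variant(bw).name)
```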

    Designing Incentives Enabled Decentralized User Data Sharing Framework

    Data-sharing practices are needed to strike a balance between user privacy, user experience, and profit. User data are collected by many parties, such as companies offering apps and social networking sites, whose primary motive is an enhanced business model while providing optimal services to end users. However, the collection of user data is associated with serious privacy and security issues. A sharing platform also needs an effective incentive mechanism to realize transparent access to user data while distributing fair incentives. The emerging literature on the topic includes decentralized data-sharing approaches. However, there was no universal method to track who shared what, to whom, when, for what purpose, and under what condition in a verifiable manner until distributed ledger technologies emerged as the most effective means for designing decentralized peer-to-peer networks. This Ph.D. research takes an engineering approach to specifying the operations for designing incentive-enabled and user-controlled data-sharing platforms. The thesis presents a series of empirical studies and proposes a novel blockchain- and smart-contract-based DUDS (Decentralized User Data Sharing) framework conceptualizing user-controlled data-sharing practices. The DUDS framework supports immutability, authenticity, enhanced security, and trusted records, and is a promising means to share user data in various domains, including among researchers, customer data in e-commerce, tourism applications, and more. The DUDS framework is evaluated via performance analyses and user studies. The extended Technology Acceptance Model and a Trust-Privacy-Security Model are used to evaluate its usability, uncovering the role of different factors affecting user intention to adopt data-sharing platforms. The results of the evaluation point to guidelines and methods for embedding privacy, user transparency, control, and incentives into the design of a data-sharing framework from the start, providing a platform that users can trust to protect their data while allowing them to control it and share it in the ways they want.
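
As an illustration only (not the DUDS implementation): a hash-chained, append-only log shows how "who shared what, to whom, when, for what purpose, and under what condition" can be recorded verifiably; a real deployment would anchor such records on a blockchain through smart contracts, as the framework proposes.

```python
# Illustrative sketch only, not the DUDS implementation: a hash-chained,
# append-only log of sharing events. All names and fields are hypothetical.
import hashlib
import json
import time

class SharingLedger:
    def __init__(self):
        self.entries = []

    def record(self, owner, recipient, data_id, purpose, condition):
        """Append a verifiable sharing event and return its hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "owner": owner, "recipient": recipient, "data_id": data_id,
            "purpose": purpose, "condition": condition,
            "timestamp": time.time(), "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body["hash"]

    def verify(self):
        """Recompute every hash to detect tampering or reordering."""
        prev = "0" * 64
        for entry in self.entries:
            unhashed = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

ledger = SharingLedger()
ledger.record("alice", "research_lab", "dataset-42",
              purpose="medical research", condition="no re-sharing")
print(ledger.verify())  # True while the log is untampered
```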

    Service Level Agreements in Cloud Computing and Big Data

    Nowadays, most industries hold large volumes of data, ranging from terabytes to petabytes, and organizations are looking for ways to handle this growth. Enterprises are turning to cloud deployments to address big data and analytics, with particular attention to the interaction between the cloud and big data. This paper presents big data issues and research directions concerning the ongoing work on processing big data in distributed environments.

    Energy and Performance Management of Virtual Machines: Provisioning, Placement, and Consolidation

    Cloud computing is a computing paradigm that offers scalable storage and compute resources to users on demand through the Internet. Public cloud providers operate large-scale data centers around the world to handle a large number of user requests. However, data centers consume an immense amount of electrical energy, which can lead to high operating costs and carbon emissions. One of the most common and effective methods for reducing energy consumption is Dynamic Virtual Machine Consolidation (DVMC), enabled by virtualization technology. DVMC dynamically consolidates Virtual Machines (VMs) onto the minimum number of active servers and then switches the idle servers into a power-saving mode to save energy. However, maintaining the desired level of Quality of Service (QoS) between data centers and their users is critical for satisfying users' performance expectations. The main challenge is therefore to minimize data center energy consumption while maintaining the required QoS. This thesis addresses this challenge by presenting novel DVMC approaches that reduce the energy consumption of data centers and improve resource utilization under workload-independent QoS constraints. These approaches fall into three main categories: heuristic, meta-heuristic, and machine learning. Our first contribution is a heuristic algorithm for solving the DVMC problem. The algorithm uses a linear-regression-based prediction model to detect overloaded servers based on historical utilization data; it then migrates some VMs from the overloaded servers to avoid further performance degradation, and consolidates VMs onto fewer servers to save energy. The second and third contributions are two novel DVMC algorithms based on Reinforcement Learning (RL). RL is attractive for highly adaptive and autonomous management in dynamic environments, so we use it to solve two main sub-problems in VM consolidation: detecting the server power mode (sleep or active) and detecting the server status (overloaded or non-overloaded). The fourth contribution is an online-optimization meta-heuristic algorithm called Ant Colony System-based Placement Optimization (ACS-PO). ACS is a suitable approach for VM consolidation owing to its ease of parallelization, its near-optimal solutions, and its polynomial worst-case time complexity. Simulation results show that ACS-PO provides substantial improvements over other heuristic algorithms in reducing energy consumption, the number of VM migrations, and performance degradation. Our fifth contribution is a Hierarchical VM management (HiVM) architecture based on the three-tier data center topology that is in very common use; HiVM can scale across many thousands of servers with energy efficiency. Our sixth contribution is a Utilization-Prediction-aware Best Fit Decreasing (UP-BFD) algorithm, which avoids SLA violations and needless migrations by taking into consideration both current and predicted future resource requirements for the allocation, consolidation, and placement of VMs. Finally, the seventh and last contribution is a novel Self-Adaptive Resource Management System (SARMS) for data centers. To achieve scalability, SARMS uses a hierarchical architecture partially inspired by HiVM; moreover, it provides self-adaptive resource management by dynamically adjusting the utilization thresholds for each server.
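
A minimal sketch of the first contribution's idea, fitting a regression line to a server's recent utilization history to flag overload before it happens; the window size and the 0.8 threshold are our illustrative assumptions, not the thesis's parameters.

```python
# Minimal sketch: least-squares fit over recent CPU-utilization samples,
# extrapolated one step ahead; a predicted crossing of the threshold marks
# the server as overloaded, triggering VM migration planning early.
def predict_next_utilization(history):
    """One-step linear-regression extrapolation over the history window."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = (sum((x - x_mean) * (y - y_mean)
                 for x, y in zip(xs, history)) / denom) if denom else 0.0
    return y_mean + slope * (n - x_mean)  # utilization expected at step n

def is_overloaded(history, threshold=0.8):
    """Flag the server before it actually saturates."""
    return predict_next_utilization(history) > threshold

# A rising trend is flagged before the threshold is crossed:
print(is_overloaded([0.55, 0.62, 0.70, 0.76]))  # True (predicts about 0.84)
```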

    Modelling of the Western University Campus Electrical Network for Infrastructural Interdependencies in a Disaster Response Network Enabled Platform

    The interdependencies that exist between multiple infrastructures can cause unexpected system behaviour when component failures occur due to large disruptions such as an earthquake or tsunami. The complexity of these interdependencies makes it very difficult to recover infrastructure effectively. To overcome these challenges, a research program called the Disaster Response Network Enabled Platform (DR-NEP) was initiated. This thesis deals with the modelling of electrical networks in order to study critical-infrastructure interdependencies as part of the DR-NEP project. The first module of the thesis presents the concept of interdependencies. To study infrastructural interdependencies, three infrastructures on the Western campus are selected: the electrical power system, the steam system, and the water system. It is demonstrated that the electrical infrastructure is the most significant, as all other infrastructures depend on electrical input. The thesis subsequently presents the development of a detailed model of the electrical power system of the Western campus. This model is validated against actual measured data provided by Western facilities management for different loading conditions and feeder positions; such a model has been developed for the first time at Western University. It can be used not just for studying disaster scenarios but also for planning future electrical projects and expanding facilities on the Western campus. The second module of the thesis deals with different disaster scenarios, critical subsystems, and the impact of appropriate decision making on the overall operation of the Western campus, with a special focus on electrical power systems. The results from the validated electrical model are incorporated into the infrastructure interdependency simulation software (I2Sim). A total of six disaster scenarios are studied: three involving the electrical power system together with the water and steam systems, and three involving only the electrical power system. The study of interdependency during disasters is performed to support wiser decision making. The results presented in this thesis are an important addition to earlier work in the DR-NEP project, which involved only three infrastructures: steam, condensate return, and water. In this thesis, the previously missing information on electrical networks is provided through the validated electrical power model. It is demonstrated that decisions to reduce electrical power consumption on campus by evacuating campus areas are effective in stabilizing hospital operations but not in maintaining Western's business continuity; a decision to accommodate hospital activities according to power availability appears to be the better choice. These results will help to pre-plan preparedness strategies for potential future emergencies on the Western campus much more effectively.
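
As a toy illustration only (not I2Sim and not the thesis model): the core interdependency idea, that steam and water service levels are limited by electrical supply, can be sketched as follows, with hypothetical dependency fractions.

```python
# Toy illustration of infrastructure interdependency, echoing the finding
# that all campus infrastructures depend on electrical input. Dependency
# fractions are hypothetical assumptions.
DEPENDS_ON = {
    "steam": {"electrical": 1.0},  # boilers assumed to need full power
    "water": {"electrical": 0.5},  # pumps assumed to tolerate half power
}

def service_levels(electrical_supply):
    """Fraction of normal output per infrastructure, given grid supply in [0, 1]."""
    levels = {"electrical": electrical_supply}
    for infra, deps in DEPENDS_ON.items():
        # Output is limited by the most-degraded required input: it scales
        # down once an input falls below the fraction this infra requires.
        levels[infra] = min(min(1.0, levels[dep] / required)
                            for dep, required in deps.items())
    return levels

# A feeder fault drops campus power to 40% of normal:
print(service_levels(0.4))  # steam falls to 0.40, water holds at 0.80
```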

    ACUTA Journal of Telecommunications in Higher Education

    In This Issue: President's Message; From the ACUTA CEO; Snapshots: What Is Your Campus Doing in the Cloud?; Security in the Cloud; Why Is My Head in the Clouds?; Unified Communications: Challenge and Opportunity for Education; 2015 ResNet Infographic; How Light Can Change the World; A Case for Hybrid Cloud; Snapshots: What Is Your Campus Doing in the Cloud?; Cloud Hurdles Shift from Security to Contracts; 2015 Institutional Excellence Award; Snapshots: What Is Your Campus Doing in the Cloud?; 2015 Awards Honor Individual


    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices such as smartphones and tablets demands additional capacity. At the same time, the Internet-of-Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution, while addressing latency requires drastic changes in the network architecture. The fifth-generation (5G) cellular networks will embrace spectrum sharing and network-architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed-duty-cycle-based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In a second approach, a multi-armed-bandit-learning-based coexistence of LTE and WiFi has been developed, in which the duty cycle of transmission and the downlink power are adapted through exploration and exploitation. This approach improves the aggregated capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing work in the context of 5G networks into three domains: core, radio, and caching networks. Along with this, fundamental constraints on achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision-based routing protocol is derived in the context of the smart grid; this approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location-based data encryption, along with node authentication by the k-nearest-neighbor algorithm. In a second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to existing approaches.
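
A minimal sketch of learning-based duty-cycle adaptation in the spirit of the multi-armed-bandit coexistence approach above; the candidate duty cycles and the reward function are hypothetical stand-ins for measured aggregate throughput.

```python
# Minimal sketch: epsilon-greedy multi-armed bandit choosing an LTE
# transmission duty cycle for LTE/WiFi coexistence. All values illustrative.
import random

DUTY_CYCLES = [0.2, 0.4, 0.6, 0.8]  # fraction of time LTE occupies the channel

def measured_reward(duty):
    """Placeholder for noisy aggregate LTE+WiFi throughput feedback."""
    return 4 * duty * (1 - duty) + random.gauss(0, 0.05)  # peaks near 0.5

def run_bandit(rounds=2000, epsilon=0.1):
    counts = [0] * len(DUTY_CYCLES)
    values = [0.0] * len(DUTY_CYCLES)  # running mean reward per arm
    for _ in range(rounds):
        if random.random() < epsilon:  # explore a random duty cycle
            arm = random.randrange(len(DUTY_CYCLES))
        else:                          # exploit the best estimate so far
            arm = max(range(len(DUTY_CYCLES)), key=values.__getitem__)
        reward = measured_reward(DUTY_CYCLES[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return DUTY_CYCLES[max(range(len(DUTY_CYCLES)), key=values.__getitem__)]

print("selected duty cycle:", run_bandit())  # converges toward 0.4 or 0.6
```
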
    • ā€¦