132 research outputs found
Design and Development of an Energy Efficient Multimedia Cloud Data Center with Minimal SLA Violation
Multimedia computing (MC) is emerging as a computing paradigm for processing multimedia applications and providing efficient multimedia cloud services with optimal Quality of Service (QoS) to multimedia cloud users. However, the growing popularity of MC comes at an environmental cost: multimedia cloud data centers consume an enormous amount of energy to provide these services, and the resulting carbon dioxide emissions harm the environment. Virtual machine (VM) migration can effectively address this issue by reducing the energy consumption of multimedia cloud data centers. However, reducing Energy Consumption (EC) may increase Service Level Agreement Violation (SLAV). Efficient VM selection therefore plays a crucial role in maintaining the balance between EC and SLAV. This work presents a novel VM selection policy based on identifying the Maximum value among the differences of the Sum of Squares Utilization Rate (MdSSUR) to reduce the EC of multimedia cloud data centers with minimal SLAV. The proposed MdSSUR VM selection policy has been evaluated in CloudSim using real workload traces. The simulation results demonstrate improvements in EC, the number of VM migrations, and SLAV of 28.37%, 89.47%, and 79.14%, respectively.
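The abstract does not spell out the MdSSUR formula, but the general idea of a sum-of-squares-utilization-based VM selection policy can be sketched as follows; the per-VM utilization histories and the selection rule are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of a sum-of-squares-utilization-based VM
# selection policy (not the paper's exact MdSSUR definition).
# From an overloaded host, select the VM with the largest sum of
# squared CPU-utilization samples; migrating it yields the largest
# drop in the host's sum-of-squares metric.

def ssur(samples):
    """Sum of squared utilization values over a VM's recent samples."""
    return sum(u * u for u in samples)

def select_vm(histories):
    """histories: dict mapping vm_id -> list of CPU utilization samples (0..1)."""
    return max(histories, key=lambda vm: ssur(histories[vm]))
```

Under this simplification the policy favours VMs with consistently high utilization; the actual MdSSUR parameter compares differences between such sums, as defined in the paper.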
Energy-Efficient Load Balancing Algorithm for Workflow Scheduling in Cloud Data Centers Using Queuing and Thresholds
Cloud computing is a rapidly growing technology that has been adopted in recent years across fields such as business, research, industry, and computing. It provides different services over the internet, eliminating the need for personalized hardware and other resources. Cloud computing environments nevertheless face challenges in terms of resource utilization, energy efficiency, heterogeneous resources, and more. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique, and comparative results obtained on benchmark datasets are presented. The results show that the proposed algorithm outperforms the algorithms to which it was compared in terms of energy consumption, makespan, and load balancing.
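The two preparatory phases described above (queue placement for dependency-heavy, long-running tasks, then classification by resource intensity) can be sketched roughly as below; the thresholds and task fields are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the pre-processing and classification phases.
# Threshold values and task-dictionary fields are assumptions.

DEP_THRESHOLD = 5       # tasks with more dependencies go to a separate queue
TIME_THRESHOLD = 100.0  # "long" execution time, arbitrary units

def preprocess(tasks):
    """Split workflow tasks into 'critical' and 'regular' queues."""
    queues = {"critical": [], "regular": []}
    for t in tasks:
        if t["deps"] > DEP_THRESHOLD or t["exec_time"] > TIME_THRESHOLD:
            queues["critical"].append(t)
        else:
            queues["regular"].append(t)
    return queues

def classify(task):
    """Label a task by its dominant resource demand (cpu/mem/io intensive)."""
    demands = {"cpu": task["cpu"], "mem": task["mem"], "io": task["io"]}
    return max(demands, key=demands.get)
```

In the paper's pipeline, the classified queues would then feed a PSO-based search over candidate schedules; that step is omitted here.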
Cognitive-Aware Network Virtualization Hypervisor for Efficient Resource Provisioning in Software Defined Cloud Networks
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Integration of different technologies forms an integral part of modern network engineering and 5G technology deployment. Although Software Defined Networking (SDN) and Network Functions Virtualization (NFV) function well independently, integrating the two technologies offers combined advantages to service providers and service users. Cloud computing operations have been enhanced by the advent of SDN and NFV, enabling efficient solution deployment and infrastructure management in Software Defined Cloud Datacentre Networks (SDCDCN), where dynamic controllability is indispensable for elastic service provisioning. The joint provisioning of compute and network resources enabled by SDCDCN is essential to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) while reducing energy consumption and resource wastage. This thesis presents a Cognitive-Aware Network Virtualization Hypervisor, developed by merging the programmable dynamic network control attributes of SDN with the network slicing attributes of NFV, to provision joint compute and network resources in SDCDCN for QoS fulfilment and energy efficiency. It focuses on techniques for allocating Virtual Network Requests onto physical hosts and switches while considering SLA, QoS, and energy-efficiency aspects. The thesis advances the state of the art with the following key contributions: first, a modelling and simulation environment for Software Defined Cloud Datacentre Networks abstracting the functionalities and behaviours of virtual and physical network resources; second, a novel dynamic overbooking algorithm for energy efficiency and SLA enforcement through the migration of virtual machines and network flows; and finally, a performance-aware intelligent overbooking scheme that predicts network resource usage and performance for the next defined time interval, considering multiple performance indexes.
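As a rough illustration of the overbooking idea in the second contribution: a host can admit more requested capacity than it physically has, as long as predicted actual demand stays within limits. The factor values and the flat prediction step below are assumptions, not taken from the thesis.

```python
# Illustrative sketch of resource overbooking: requested capacity may
# exceed physical capacity up to an overbooking cap, provided the
# predicted (typically lower) actual demand still fits the host.
# predicted_usage_ratio and max_overbook are assumed parameters.

def can_place(host_capacity, placed_requests, new_request,
              predicted_usage_ratio=0.6, max_overbook=1.5):
    """Admit a request if predicted demand fits and the overbooking cap holds."""
    requested = sum(placed_requests) + new_request
    # Predicted actual demand is usually well below the requested amount.
    predicted = requested * predicted_usage_ratio
    return predicted <= host_capacity and requested <= host_capacity * max_overbook
```

A real dynamic-overbooking policy would replace the fixed ratio with a per-interval usage prediction and trigger VM/flow migration when the prediction is violated.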
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.
Continual federated learning for network anomaly detection in 5G Open-RAN
Abstract. This dissertation presents a federated continual learning setup for anomaly detection in the fast-growing 5G Open Radio Access Network (O-RAN) environment. Conventional AI techniques frequently fall short of meeting the security automation needs of 5G networks, owing to their stringent latency, reliability, and bandwidth demands. The thesis therefore proposes an anomaly detection system that not only uses federated learning (FL) to address inherent privacy problems and resource constraints, but also incorporates a replay buffer in the model's training phase to mitigate catastrophic forgetting. To enable the intended federated learning architecture, anomaly detectors are incorporated into the Near-Real-Time RIC, while aggregation servers are installed within the Non-Real-Time RIC. The configuration was evaluated on the 5G-NIDD dataset, revealing a considerable boost in detection accuracy, reaching close to 99% for almost all datasets after including the continual learning process. The thesis also investigates transfer learning, in which pre-trained local models are evaluated against a hybrid application-layer DDoS dataset that combines benign samples from the CICIDS 2017 dataset with attack flows generated in a proprietary SDN environment. The captured results show over 99% accuracy, confirming the proposed system's efficacy and flexibility. This study represents a significant step towards a more secure, efficient, and privacy-preserving 5G network architecture.
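The replay-buffer idea used against catastrophic forgetting can be sketched as follows: each training round mixes new samples with a small, uniformly retained subset of earlier ones. Buffer capacity, the use of reservoir sampling, and the mixing ratio are illustrative assumptions, not details from the dissertation.

```python
import random

# Illustrative replay buffer for continual learning: retains a uniform
# reservoir of previously seen samples so each new training batch can
# be mixed with old data, mitigating catastrophic forgetting.

class ReplayBuffer:
    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        # Reservoir sampling keeps a uniform subset of all samples seen so far.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def mix(self, new_batch, replay_fraction=0.3):
        """Return the new batch augmented with replayed old samples."""
        k = min(len(self.buffer), int(len(new_batch) * replay_fraction))
        return new_batch + self.rng.sample(self.buffer, k)
```

In the federated setting described above, each client would maintain such a buffer locally, so replayed data never leaves the device.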
Data Mining
The availability of big data due to computerization and automation has generated an urgent need for new techniques to analyze and convert big data into useful information and knowledge. Data mining is a promising and leading-edge technology for mining large volumes of data, looking for hidden information, and aiding knowledge discovery. It can be used for characterization, classification, discrimination, anomaly detection, association, clustering, trend or evolution prediction, and much more in fields such as science, medicine, economics, engineering, computers, and even business analytics. This book presents basic concepts, ideas, and research in data mining
Digital phenotyping through multimodal, unobtrusive sensing
The growing adoption of multimodal wearable and mobile devices, such as smartphones and wrist-worn watches, has led to the collection of physiological and behavioural data at scale. For the first time, this digital phenotyping data enables researchers to make inferences about users' physical and mental health at population scale. However, translating this data into actionable insights requires computational approaches that turn unlabelled, multimodal time-series sensor data into validated measures that can be interpreted at scale.
This thesis describes the derivation of novel computational methods that leverage digital phenotyping data from wearable devices in large-scale populations to infer physical behaviours. These methods combine insights from signal processing, data mining, and machine learning with domain knowledge in physical activity and sleep epidemiology. First, the inference of sleeping windows in free-living conditions through a heart-rate-sensing approach is explored. This algorithm is particularly valuable in the absence of ground truth or sleep diaries, given its simplicity, adaptability, and capacity for personalization. I then explore multistage sleep classification through combined movement and cardiac wearable sensing and machine learning. Further, I demonstrate that postural changes detected through wrist accelerometers can inform habitual behaviours and are valuable complements to traditional, intensity-based physical activity metrics. I then leverage the concomitant responses of heart rate to physical activity, captured through multimodal wearable sensors, in a self-supervised training task. The resulting embeddings are shown to be useful for the downstream classification of demographic factors, BMI, energy expenditure, and cardiorespiratory fitness. Finally, I describe a deep learning model for the adaptive inference of cardiorespiratory fitness (VO2max) from wearable data in free-living conditions. I demonstrate the robustness of the model in a large UK population and show the model's adaptability by evaluating its performance in a subset of the population with repeated measures ~6 years after the original recordings.
Together, this work increases the potential of multimodal wearable and mobile sensors for physical activity and behavioural inferences in population studies. In particular, this thesis showcases the potential of using wearable devices to make valuable physical activity, sleep, and fitness inferences in large cohort studies. Given the nature of the data collected, and the fact that most of this data is currently generated by commercial providers rather than research institutes, laying the foundations for responsible data governance and ethical use of these technologies will be critical to building trust and enabling the development of the field of digital phenotyping. I was funded by GlaxoSmithKline and the Engineering and Physical Sciences Research Council, and was also supported by the Alan Turing Institute through their Enrichment Scheme.
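As a toy illustration of the heart-rate-based sleep-window inference explored in the thesis, one can look for the longest contiguous run of samples below a person-specific heart-rate quantile; the quantile choice and the absence of smoothing are simplifying assumptions, not the thesis's actual algorithm.

```python
# Hypothetical sketch of sleep-window inference from heart rate:
# pick a person-specific low-HR threshold (a quantile of all samples)
# and return the longest contiguous run of samples at or below it.

def sleep_window(hr_samples, quantile=0.25):
    """Return (start, end) indices (end exclusive) of the inferred window."""
    srt = sorted(hr_samples)
    threshold = srt[int(quantile * (len(srt) - 1))]
    best = (0, 0)
    start = None
    # Sentinel value forces the final run to be closed out.
    for i, hr in enumerate(hr_samples + [float("inf")]):
        if hr <= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    return best
```

Real deployments would additionally smooth the signal, adapt the threshold per night, and fuse movement data, as the thesis describes.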
Designing Data Spaces
This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts", provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies", subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the use of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV eventually offers an overview of several "Solutions and Applications", including products and experiences from companies such as Google, SAP, Huawei, T-Systems, Innopay, and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook to future developments. In doing so, it aims to proliferate the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.