
    Dynamic Shift from Cloud Computing to Industry 4.0: Eco-Friendly Choice or Climate Change Threat

    Cloud computing utilizes thousands of Cloud Data Centres (CDC) and dynamically fulfils the demands of end-users using new technologies and paradigms such as Industry 4.0 and the Internet of Things (IoT). With the emergence of Industry 4.0, the quality of cloud services has increased; however, CDCs consume a large amount of energy and produce a huge carbon footprint, which is one of the major drivers of climate change. This chapter discusses the impacts of cloud developments on climate and quantifies the carbon footprint of cloud computing in a warming world. Further, the dynamic transition from cloud computing to Industry 4.0 is discussed from an eco-friendly/climate-change-threat perspective. Finally, open research challenges and opportunities for prospective researchers are explored.
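
    The quantification mentioned above typically reduces to a simple relationship: facility energy (IT load times PUE) multiplied by the grid's carbon intensity. A minimal sketch of that arithmetic, with every numeric value an illustrative assumption rather than a figure from the chapter:

```python
# Illustrative sketch: estimating a cloud data center's annual carbon
# footprint from IT load, PUE, and grid carbon intensity.
# All numeric values are assumptions for demonstration only.

def annual_carbon_footprint(it_load_kw: float, pue: float,
                            carbon_intensity_kg_per_kwh: float) -> float:
    """Return estimated annual CO2 emissions in metric tonnes."""
    hours_per_year = 24 * 365
    total_energy_kwh = it_load_kw * pue * hours_per_year  # facility energy
    return total_energy_kwh * carbon_intensity_kg_per_kwh / 1000.0

# Example: a 5 MW IT load, PUE of 1.6, grid at 0.45 kg CO2/kWh
print(f"{annual_carbon_footprint(5000, 1.6, 0.45):,.0f} tonnes CO2/year")
```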

    START: Straggler Prediction and Mitigation for Cloud Computing Environments using Encoder LSTM Networks

    A common performance problem in large-scale cloud systems is dealing with straggler tasks: slow-running instances that increase the overall response time. Such tasks impact the system's QoS and SLA. There is a need for automatic straggler detection and mitigation mechanisms that execute jobs without violating the SLA. Prior work typically builds reactive models that focus first on detection and then on mitigation of straggler tasks, which leads to delays. Other works use prediction-based proactive mechanisms but ignore volatile task characteristics. We propose a Straggler Prediction and Mitigation Technique (START) that is able to predict which tasks might be stragglers and dynamically adapt scheduling to achieve lower response times. START analyzes all tasks and hosts based on compute and network resource consumption, using an Encoder LSTM network to predict and mitigate expected straggler tasks. This reduces the SLA violation rate and execution time without compromising QoS. Specifically, we use the CloudSim toolkit to simulate START and compare it with IGRU-SD, SGC, Dolly, GRASS, NearestFit and Wrangler in terms of QoS parameters. Experiments show that START reduces execution time, resource contention, energy and SLA violations by 13%, 11%, 16% and 19%, respectively, compared to the state of the art.
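
    A minimal sketch of the predictive component described above (an illustration, not the authors' code): an LSTM encodes a window of per-task resource-usage metrics, and a linear head maps the final hidden state to a straggler probability. The feature count, window length and threshold are assumptions.

```python
# Hedged sketch of an Encoder-LSTM straggler predictor.
import torch
import torch.nn as nn

class StragglerPredictor(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # straggler-probability logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features) resource-usage window
        _, (h_n, _) = self.encoder(x)     # final hidden state summarizes window
        return torch.sigmoid(self.head(h_n[-1]))

model = StragglerPredictor()
window = torch.randn(8, 20, 4)            # 8 tasks, 20 time steps, 4 metrics
p_straggler = model(window)               # mitigate tasks above some threshold
```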

    iThermoFog: IoT‐Fog based Automatic Thermal Profile Creation for Cloud Data Centers using Artificial Intelligence Techniques

    Preventing failures in Cloud Data Centers (CDCs) due to high temperatures is a key challenge. Such centers have so many servers that it is very difficult to keep their temperature under control efficiently. To help address this issue, we propose an artificial intelligence (AI) based automatic scheduling method, called iThermoFog, that creates a thermal profile of CDC nodes using an integrated Internet of Things (IoT) and Fog computing environment. We use a Gaussian Mixture Model to approximate the thermal characteristics of the servers, which is then used to predict temperatures and schedule tasks so as to minimize the average CDC temperature. Through empirical evaluation on an iFogSim and ThermoSim based testbed and an IoT-based smart-home application, we show that iThermoFog outperforms the current state-of-the-art thermal-aware scheduling method. Specifically, iThermoFog reduces mean square temperature by 13.5%, while simultaneously improving energy consumption, execution time, scheduling time and bandwidth usage.
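
    A hedged sketch of the core idea (illustrative, not iThermoFog's implementation): fit a Gaussian Mixture Model to historical (utilization, temperature) samples, estimate each node's expected temperature from the mixture's conditional weights, and place the task on the coolest node. The features, component count and node names are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic history: columns = [CPU utilization (0-1), temperature (C)]
util = rng.uniform(0, 1, 500)
temp = 25 + 30 * util + rng.normal(0, 2, 500)   # hotter when busier
gmm = GaussianMixture(n_components=3, random_state=0).fit(
    np.column_stack([util, temp]))

def expected_temperature(u: float) -> float:
    """Approximate E[temperature | utilization=u] under the fitted mixture.

    Weights each component by its marginal density at u; a simplified
    conditional that ignores within-component covariance.
    """
    w = gmm.weights_ * norm.pdf(u, gmm.means_[:, 0],
                                np.sqrt(gmm.covariances_[:, 0, 0]))
    w /= w.sum()
    return float(w @ gmm.means_[:, 1])

# Place the task on the node predicted to stay coolest (hypothetical nodes)
nodes = {"node-a": 0.9, "node-b": 0.4, "node-c": 0.6}   # current utilization
target = min(nodes, key=lambda n: expected_temperature(nodes[n]))
print(target)  # node-b: lowest utilization, hence lowest expected temperature
```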

    Next Generation Technologies for Smart Healthcare: Challenges, Vision, Model, Trends and Future Directions

    Modern industry employs technologies for automation that may include the Internet of Things (IoT), Cloud and/or Fog Computing, 5G, as well as Artificial Intelligence (AI), Machine Learning (ML), or Blockchain. Currently, part of the research for the new industrial era is directed at improving healthcare services. This work sheds light on some of the major challenges in providing affordable, efficient, secure and reliable healthcare from the viewpoint of computer and medical sciences. We describe a vision of how a holistic model can fulfill the growing demands of the healthcare industry, and explain a conceptual model that can provide a complete solution for these increasing demands. In our model, we elucidate the components and their interaction at different levels, leveraging state-of-the-art technologies in IoT, Fog computing, AI, ML and Blockchain. We finally describe current trends in this field and propose future directions to explore emerging paradigms and technologies for the evolution of healthcare leveraging next-generation computing systems.

    ThermoSim: Deep Learning based Framework for Modeling and Simulation of Thermal-aware Resource Management for Cloud Computing Environments

    Current cloud computing frameworks host millions of physical servers that utilize cloud computing resources in the form of different virtual machines. Cloud Data Center (CDC) infrastructures require significant amounts of energy to deliver large-scale computational services. Moreover, computing nodes generate large volumes of heat, in turn requiring cooling units to eliminate the effect of this heat. Thus, the overall energy consumption of the CDC increases tremendously, for servers as well as for cooling units. However, current workload allocation policies do not take the effect on temperature into account, and it is challenging to simulate the thermal behavior of CDCs. There is a need for a thermal-aware framework to simulate and model the behavior of nodes and to measure the important performance parameters that can be affected by node temperature. In this paper, we propose a lightweight framework, ThermoSim, for modeling and simulation of thermal-aware resource management for cloud computing environments. This work presents a Recurrent Neural Network based deep learning temperature predictor for CDCs, which is utilized by ThermoSim for lightweight resource management in constrained cloud environments. ThermoSim extends the CloudSim toolkit, helping to analyze the performance of various key parameters, such as energy consumption, service level agreement violation rate, number of virtual machine migrations and temperature, during the management of cloud resources for the execution of workloads. Further, different energy-aware and thermal-aware resource management techniques are tested using the proposed ThermoSim framework in order to validate it against an existing framework (Thas). The experimental results demonstrate that the proposed framework is capable of modeling and simulating the thermal behavior of a CDC and that ThermoSim outperforms Thas in terms of energy consumption, cost, time, memory usage and prediction accuracy.
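
    To make the predictor concrete, here is a minimal sketch (illustrative assumptions, not ThermoSim's actual code) of a recurrent network that maps a sequence of per-node workload features to the node's next-step temperature:

```python
import torch
import torch.nn as nn

class TempPredictor(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 16):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)   # predicted temperature (C)

    def forward(self, x):
        # x: (batch, time, features), e.g. [CPU load, memory use, fan speed]
        h, _ = self.rnn(x)
        return self.out(h[:, -1])         # predict from the last hidden state

model = TempPredictor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on synthetic data
x = torch.randn(32, 10, 3)                # 32 nodes, 10 time steps, 3 features
y = 40 + 10 * torch.rand(32, 1)           # synthetic next-step temperatures
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```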

    Production of phi mesons at mid-rapidity in sqrt(s_NN) = 200 GeV Au+Au collisions at RHIC

    We present the first results of phi meson production in the K^+K^- decay channel from Au+Au collisions at sqrt(s_NN) = 200 GeV as measured at mid-rapidity by the PHENIX detector at RHIC. Precision resonance centroid and width values are extracted as a function of collision centrality. No significant variation from the PDG accepted values is observed. The transverse mass spectra are fitted with a linear exponential function, for which the derived inverse slope parameter is seen to be constant as a function of centrality. These data are also fitted by a hydrodynamic model, with the result that the freeze-out temperature and the expansion velocity values are consistent with the values previously derived from fitting single hadron inclusive data. As a function of transverse momentum, the collision-scaled peripheral-to-central yield ratio R_CP for the phi is comparable to that of pions rather than that of protons. This result lends support to theoretical models which distinguish between baryons and mesons instead of particle mass to explain the anomalous proton yield.
    Comment: 326 authors, 24 pages text, 23 figures, 6 tables, RevTeX 4. To be submitted to Physical Review C as a regular article. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
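
    The inverse-slope extraction mentioned above amounts to fitting dN/dm_T = A exp(-m_T/T) and reading off T in each centrality bin. A self-contained sketch with synthetic data (the fit form follows the abstract; all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def mt_exponential(m_t, amplitude, inv_slope):
    # dN/dm_T = A * exp(-m_T / T); T is the "inverse slope" parameter
    return amplitude * np.exp(-m_t / inv_slope)

m_t = np.linspace(1.1, 2.5, 20)                   # GeV/c^2, above the phi mass
yields = mt_exponential(m_t, 100.0, 0.35)         # synthetic spectrum, T = 0.35 GeV
yields *= np.random.default_rng(1).normal(1, 0.05, m_t.size)  # 5% noise

popt, pcov = curve_fit(mt_exponential, m_t, yields, p0=(50.0, 0.3))
print(f"inverse slope T = {popt[1]:.3f} GeV")     # recovers ~0.35 per bin
```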

    Prevalence of intestinal parasitic infections among HIV patients in Benin City, Nigeria

    This study was carried out to determine the presence of intestinal parasites and their correlation with CD4+ T-cell counts and demographics among human immunodeficiency virus (HIV)-positive patients in Benin City, Nigeria. Stool specimens from 2,000 HIV-positive patients and 500 controls (HIV-negative individuals) were examined for ova, cysts, or parasites, using standard procedures. In addition, patients' blood samples were analyzed for CD4 counts by flow cytometry. An overall prevalence rate of 15.3% was observed among HIV-positive patients, while 6.2% was noted among non-HIV subjects. HIV status was a significant (P<0.0001) risk factor for acquiring intestinal parasitic infections. Male gender, CD4 count <200 cells/µl, and diarrhea were significantly associated with an increased prevalence of intestinal parasitic infections among HIV-positive patients. The level of education, occupation, and source of water among HIV patients significantly (P<0.0001) affected the prevalence of intestinal parasitic infections. Ascaris lumbricoides was the most predominant parasite in both HIV-positive patients and controls. A CD4 count <200 cells/µl was significantly associated only with Isospora belli and Cryptosporidium infections. The presence of pathogenic intestinal parasites such as A. lumbricoides, hookworm, Giardia intestinalis, Entamoeba histolytica, Trichuris trichiura, and Taenia species among HIV-infected persons should not be neglected. Cryptosporidium species and I. belli were the opportunistic parasites observed in this study. Routine screening for intestinal parasites in HIV-positive patients is advocated.
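
    The headline comparison (15.3% of 2,000 patients versus 6.2% of 500 controls, P<0.0001) can be checked with a chi-square test on the 2x2 contingency table. A short sketch; the choice of test is an assumption here, and the counts are back-derived from the quoted rates:

```python
import numpy as np
from scipy.stats import chi2_contingency

infected_hiv = round(0.153 * 2000)        # 306 of 2,000 HIV-positive patients
infected_ctrl = round(0.062 * 500)        # 31 of 500 HIV-negative controls
table = np.array([[infected_hiv, 2000 - infected_hiv],
                  [infected_ctrl, 500 - infected_ctrl]])

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.0001, consistent with the study
```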

    J/psi production from proton-proton collisions at sqrt(s) = 200 GeV

    J/psi production has been measured in proton-proton collisions at sqrt(s) = 200 GeV over a wide rapidity and transverse momentum range by the PHENIX experiment at RHIC. Distributions of the rapidity and transverse momentum, along with measurements of the mean transverse momentum and total production cross section, are presented and compared to available theoretical calculations. The total J/psi cross section is 3.99 +/- 0.61(stat) +/- 0.58(sys) +/- 0.40(abs) microbarns. The mean transverse momentum is 1.80 +/- 0.23(stat) +/- 0.16(sys) GeV/c.
    Comment: 326 authors, 6 pages text, 4 figures, 1 table, RevTeX 4. To be submitted to PRL. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
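
    As a worked example, the three quoted uncertainties on the cross section can be combined in quadrature (treating them as independent is an assumption; analyses usually quote them separately, as done above):

```python
import math

sigma = 3.99                        # total J/psi cross section, microbarns
errors = (0.61, 0.58, 0.40)         # stat, sys, abs normalization
total = math.sqrt(sum(e * e for e in errors))
print(f"sigma = {sigma} +/- {total:.2f} microbarns (combined)")  # +/- 0.93
```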

    Measurement of Single Electron Event Anisotropy in Au+Au Collisions at sqrt(s_NN) = 200 GeV

    The transverse momentum dependence of the azimuthal anisotropy parameter v_2, the second harmonic of the azimuthal distribution, for electrons at mid-rapidity (|eta| < 0.35) has been measured with the PHENIX detector in Au+Au collisions at sqrt(s_NN) = 200 GeV. The measurement was made with respect to the reaction plane defined at high rapidities (|eta| = 3.1 -- 3.9). From the result, we have measured the v_2 of electrons from heavy flavor decay after subtraction of the v_2 of electrons from other sources, such as photon conversions and Dalitz decays of light neutral mesons. We observe a non-zero single electron v_2 with a 90% confidence level in the intermediate p_T region.
    Comment: 330 authors, 11 pages text, RevTeX4, 9 figures, 1 table. Submitted to Physical Review C. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
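
    Since v_2 is the second Fourier coefficient of dN/dphi proportional to 1 + 2 v_2 cos(2(phi - Psi)), it can be estimated as the average <cos 2(phi - Psi)> over particles. A toy sketch with synthetic angles; the reaction-plane resolution corrections essential to the real analysis are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)
psi = 0.0                                   # reaction-plane angle (radians)
v2_true = 0.08                              # illustrative input anisotropy

# Sample angles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi)) by rejection sampling
phis = []
while len(phis) < 100_000:
    phi = rng.uniform(-np.pi, np.pi)
    if rng.uniform(0, 1 + 2 * v2_true) < 1 + 2 * v2_true * np.cos(2 * (phi - psi)):
        phis.append(phi)
phis = np.array(phis)

v2_measured = np.mean(np.cos(2 * (phis - psi)))
print(f"v2 = {v2_measured:.3f}")            # ~0.08, recovering the input
```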

    Centrality Dependence of Charm Production from Single Electrons in Au+Au Collisions at sqrt(s_NN) = 200 GeV

    The PHENIX experiment has measured mid-rapidity transverse momentum spectra (0.4 < p_T < 4.0 GeV/c) of single electrons as a function of centrality in Au+Au collisions at sqrt(s_NN) = 200 GeV. Contributions to the raw spectra from photon conversions and Dalitz decays of light neutral mesons are measured by introducing a thin (1.7% X_0) converter into the PHENIX acceptance and are statistically removed. The subtracted "non-photonic" electron spectra are primarily due to the semi-leptonic decays of hadrons containing heavy quarks (charm and bottom). For all centralities, charm production is found to scale with the nuclear overlap function, T_AA. For minimum-bias collisions, the charm cross section per binary collision is N_cc^bar/T_AA = 622 +/- 57 (stat.) +/- 160 (sys.) microbarns.
    Comment: 326 authors, 4 pages text, 3 figures, 1 table, RevTeX 4. To be submitted to Physical Review Letters. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
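
    The scaling statement means the per-event charm yield divided by T_AA should be flat across centrality bins. A toy illustration with synthetic values; the T_AA numbers and noise level below are assumptions, not PHENIX data:

```python
import numpy as np

t_aa = np.array([25.4, 14.1, 6.1, 1.5])        # mb^-1, illustrative per centrality bin
sigma_cc = 0.622                                # mb (622 microbarns per binary collision)
yields = sigma_cc * t_aa                        # perfect T_AA scaling by construction
yields *= np.random.default_rng(7).normal(1, 0.08, t_aa.size)  # 8% noise

ratio = yields / t_aa
print("N_cc / T_AA per centrality bin (mb):", np.round(ratio, 3))
# A flat ratio (within errors) is what "scales with T_AA" means
```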