4 research outputs found
Sustainable Edge Computing: Challenges and Future Directions
An increasing amount of data is being injected into the network by IoT
(Internet of Things) applications. Many of these applications, developed to
improve society's quality of life, are latency-critical and generate large
volumes of data. These requirements have triggered the emergence of the Edge
computing paradigm. Currently, data centers account for between 2% and 3% of
global energy use. However, this trend is difficult to sustain, as bringing
computing infrastructures closer to the edge of the network comes with its own
set of challenges for energy efficiency. In
this paper, we propose our approach for the sustainability of future computing
infrastructures to provide (i) an energy-efficient and economically viable
deployment, (ii) a fault-tolerant automated operation, and (iii) a
collaborative resource management to improve resource efficiency. We identify
the main limitations of applying Cloud-based approaches close to the data
sources and present the research challenges to Edge sustainability arising from
these constraints. We propose two-phase immersion cooling, formal modeling,
machine learning, and energy-centric federated management as Edge-enabling
technologies. We present our early results towards the sustainability of an
Edge infrastructure to demonstrate the benefits of our approach for future
computing environments and deployments.
Comment: 26 pages, 16 figures
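The abstract proposes energy-centric federated management as an Edge-enabling technology. A minimal sketch of what such a policy could look like is a greedy placement that weighs each site's cooling overhead (PUE) against its grid carbon intensity; the node names, PUE values, and intensity figures below are illustrative assumptions, not values from the paper.

```python
# Hypothetical energy-centric placement across federated Edge sites.
# All numbers are illustrative, not taken from the paper.

def placement_cost(node, cpu_demand_kwh):
    """Estimated emissions (gCO2) of running a workload on a node:
    IT energy * PUE (facility overhead) * grid carbon intensity."""
    return cpu_demand_kwh * node["pue"] * node["gco2_per_kwh"]

def pick_node(nodes, cpu_demand_kwh):
    """Greedy energy-centric choice: the lowest estimated emissions wins."""
    return min(nodes, key=lambda n: placement_cost(n, cpu_demand_kwh))

edge_sites = [
    {"name": "edge-a", "pue": 1.60, "gco2_per_kwh": 400},  # air-cooled, carbon-heavy grid
    {"name": "edge-b", "pue": 1.05, "gco2_per_kwh": 250},  # e.g. two-phase immersion cooling
]

best = pick_node(edge_sites, cpu_demand_kwh=2.0)
print(best["name"])
```

Under these assumed numbers the immersion-cooled site wins; the same structure extends to fault-tolerance or economic terms in the cost function.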
The AI gambit — leveraging artificial intelligence to combat climate change: opportunities, challenges, and recommendations
In this article we analyse the role that artificial intelligence (AI) could play, and is playing,
to combat global climate change. We identify two crucial opportunities that AI offers in
this domain: it can help improve and expand current understanding of climate change and
it can contribute to combating the climate crisis effectively. However, the development of AI
also raises two sets of problems when considering climate change: the possible
exacerbation of social and ethical challenges already associated with AI, and the
contribution to climate change of the greenhouse gases emitted by training data- and
computation-intensive AI systems. We assess the carbon footprint of AI research, and the
factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that
the carbon footprint of AI research may be significant and highlight the need for more
evidence concerning the trade-off between the GHG emissions generated by AI research
and the energy and resource efficiency gains that AI can offer. In light of our analysis, we
argue that leveraging the opportunities offered by AI for global climate change whilst
limiting its risks is a gambit which requires responsive, evidence-based and effective
governance to become a winning strategy. We conclude by identifying the European
Union as being especially well-placed to play a leading role in this policy response and
provide 13 recommendations that are designed to identify and harness the opportunities
of AI for combating climate change, while reducing its impact on the environment.
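The factors the article identifies as driving AI's GHG emissions (energy drawn by compute, facility overhead, and grid carbon intensity) combine in a simple product. The following back-of-the-envelope sketch is an assumed illustration, not a formula from the article, and all input numbers are made up.

```python
# Back-of-the-envelope GHG estimate for one training run.
# Illustrative only: the article reports no single formula.

def training_emissions_kg(gpu_count, hours, gpu_watts, pue, gco2_per_kwh):
    """kg CO2e = IT energy (kWh) * facility overhead (PUE) * grid intensity."""
    energy_kwh = gpu_count * hours * gpu_watts / 1000.0
    return energy_kwh * pue * gco2_per_kwh / 1000.0

# e.g. 8 GPUs at 300 W for 24 h, PUE 1.5, a 400 gCO2/kWh grid
print(training_emissions_kg(8, 24, 300, 1.5, 400))
```

Each factor is a lever: the same run on a low-carbon grid or in a more efficient facility scales the estimate down linearly, which is the trade-off the article asks for more evidence on.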
Adaptive monitoring and control framework in Application Service Management environment
The economics of data centres and cloud computing services have pushed hardware and software requirements to the limits, leaving only a very small performance overhead before systems reach saturation. For Application Service Management (ASM), this carries a growing risk of impacting the execution times of various processes. In order to deliver a stable service at times of great demand for computational power, enterprise data centres and cloud providers must implement fast and robust control mechanisms that are capable of adapting to changing operating conditions while satisfying service-level agreements. In ASM practice, there are normally two methods for dealing with increased load: increasing computational power or releasing load. The first approach typically involves allocating additional machines, which must be available, waiting idle, to deal with high-demand situations. The second approach is implemented by terminating incoming actions that are less important relative to new activity demand patterns, throttling, or rescheduling jobs. Although most modern cloud platforms and operating systems do not allow adaptive or automatic termination of processes, tasks, or actions, it is common practice for administrators to manually end or stop tasks or actions at any level of the system, such as at the level of a node, function, or process, or to kill a long session executing on a database server. In this context, adaptive control of action termination remains a significantly underutilised subject of Application Service Management and deserves further consideration. For example, this approach may be eminently suitable for systems with harsh execution-time Service Level Agreements, such as real-time systems, systems running under hard pressure on power supplies, systems running under variable priority, or systems subject to constraints set by the green computing paradigm. Along this line of work, the thesis investigates the potential of dimension relevance and metrics signal decomposition as methods that would enable more efficient action termination. These methods are integrated into adaptive control emulators and actuators powered by neural networks, which are used to adjust the operation of the system towards better conditions in environments with established goals, seen from both system performance and economics perspectives. The behaviour of the proposed control framework is evaluated using complex load and service-agreement scenarios for systems compatible with the requirements of on-premises deployments, elastic compute cloud deployments, serverless computing, and microservices architectures.
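Of the two load-handling methods the abstract describes, the second (releasing load by terminating less important actions) can be sketched as a simple priority-ordered shedding loop. This is an assumed illustration of the general idea, not the thesis's actual framework; the Action type, priorities, and thresholds are hypothetical.

```python
# Minimal load-shedding-by-termination sketch (illustrative, not the
# thesis's neural-network-driven framework).

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    priority: int   # lower number = more important
    load: float     # fraction of system capacity this action consumes

def shed_load(actions, utilisation, target=0.9):
    """Terminate the least important actions until utilisation drops to target."""
    terminated = []
    # Least important first (highest priority number)
    for a in sorted(actions, key=lambda a: a.priority, reverse=True):
        if utilisation <= target:
            break
        utilisation -= a.load
        terminated.append(a.name)
    return terminated, utilisation
```

An adaptive controller of the kind the thesis studies would, in effect, learn when to invoke such a step and which actions qualify, rather than relying on a fixed priority field.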
Hiding greenhouse gas emissions in the cloud
Data centres account for 1% of total global electricity demand, but this may grow to between 15% and 30% of electricity consumption in some countries by 2030. The majority of this growth is attributed to cloud computing, particularly the largest “hyperscale” vendors. IT emissions previously accounted for under the Greenhouse Gas Protocol Scope 1 and Scope 2 move to Scope 3 when outsourced to the cloud. However, the data needed to complete those calculations is not available from cloud vendors. Further, since Scope 3 emissions tend to be reported only voluntarily, and the emissions are aggregated into the global emissions reporting by the large cloud vendors, this can result in emissions being hidden when they are moved to the cloud.
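The reporting shift the paper describes can be made concrete with a toy example: the same workload's electricity emissions count under Scope 2 while on-premises, but fall under (often voluntarily reported) Scope 3 after migration to a cloud vendor. The function and numbers below are illustrative assumptions, not figures from the paper.

```python
# Toy illustration of the GHG Protocol scope shift on cloud migration.
# Numbers are made up.

def classify_emissions(kwh, gco2_per_kwh, outsourced_to_cloud):
    """Return the same electricity emissions (kg CO2e) bucketed by scope."""
    kg = kwh * gco2_per_kwh / 1000.0
    scopes = {"scope2": 0.0, "scope3": 0.0}
    scopes["scope3" if outsourced_to_cloud else "scope2"] = kg
    return scopes

before = classify_emissions(10_000, 400, outsourced_to_cloud=False)
after = classify_emissions(10_000, 400, outsourced_to_cloud=True)
```

The total is unchanged; only the reporting bucket moves, which is why weaker Scope 3 disclosure can make the emissions effectively disappear from view.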