Transferable knowledge for Low-cost Decision Making in Cloud Environments
Users of Infrastructure as a Service (IaaS) are increasingly overwhelmed by the wide range of providers and the services each provider offers. As such, many users select services based on description alone. An emerging alternative is to use a decision support system (DSS), which typically relies on gaining insights from observational data in order to assist a customer in making decisions regarding the optimal deployment of cloud applications. The primary activity of such systems is the generation of a prediction model (e.g. using machine learning), which requires a large amount of training data. However, given the varying architectures of applications, cloud providers, and cloud offerings, this activity is not sustainable, as it incurs additional time and cost to collect data to train the models. We overcome this by developing a Transfer Learning (TL) approach in which knowledge (in the form of a prediction model and an associated data set) gained from running an application on a particular IaaS is transferred in order to substantially reduce the overhead of building new models for the performance of new applications and/or cloud infrastructures. In this paper, we present our approach and evaluate it through extensive experimentation involving three real-world applications over two major public cloud providers, namely Amazon and Google. Our evaluation shows that our novel two-mode TL scheme improves overall efficiency, yielding a 60% reduction in the time and cost of generating a new prediction model. We test this under a number of cross-application and cross-cloud scenarios.
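The abstract does not detail the two-mode TL scheme itself. As an illustration of the general idea only, the sketch below uses instance-weighted least squares (an assumption, not the paper's method): a runtime model learned from abundant source-cloud measurements is re-fit with a few heavily up-weighted samples from the target cloud, so the scarce target data steers the model while the source data still regularizes it. All data, feature names, and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(X):
    """Design matrix with an intercept and inverse-resource terms
    (runtime tends to scale with 1/vCPUs and 1/memory)."""
    return np.column_stack([np.ones(len(X)), 1.0 / X[:, 0], 1.0 / X[:, 1]])

def fit_weighted(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (f(x_i) - y_i)^2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(features(X) * sw[:, None], y * sw, rcond=None)
    return beta

def predict(X, beta):
    return features(X) @ beta

# Source domain: many cheap observations of (vCPUs, memory_GB) -> runtime
# for an application already profiled on one cloud (synthetic data here).
X_src = rng.uniform(1, 16, size=(200, 2))
y_src = 100.0 / X_src[:, 0] + 20.0 / X_src[:, 1] + rng.normal(0.0, 0.5, 200)

# Target domain: only a handful of costly measurements on a new cloud,
# whose performance curve is shifted but related.
X_tgt = rng.uniform(1, 16, size=(8, 2))
y_tgt = 120.0 / X_tgt[:, 0] + 25.0 / X_tgt[:, 1] + rng.normal(0.0, 0.5, 8)

# Instance-weighted transfer: reuse every source sample at low weight and
# up-weight the scarce target samples so they dominate the combined fit.
X_all = np.vstack([X_src, X_tgt])
y_all = np.concatenate([y_src, y_tgt])
w_all = np.concatenate([np.full(len(X_src), 0.1), np.full(len(X_tgt), 10.0)])

beta_transfer = fit_weighted(X_all, y_all, w_all)
beta_source_only = fit_weighted(X_src, y_src, np.ones(len(X_src)))
```

With only eight target measurements, the transferred fit tracks the new cloud's performance curve more closely than the source-only model, which is the kind of data-collection saving the paper quantifies.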
Towards a scalable and transferable approach to map deprived areas using Sentinel-2 images and machine learning
African cities are growing rapidly, and more than half of their populations live in deprived areas. Local stakeholders urgently need accurate, granular, and routine maps to plan, upgrade, and monitor dynamic neighborhood-level changes. Satellite imagery provides a promising solution for consistent, accurate, high-resolution maps globally. However, most studies use very high spatial resolution images, which often cover only small areas and are cost prohibitive. Additionally, model transferability to new cities remains uncertain. This study proposes a scalable and transferable approach to routinely map deprived areas using free Sentinel-2 images. The models were trained and tested on three cities: Lagos (Nigeria), Accra (Ghana), and Nairobi (Kenya). Contextual features were extracted at 10 m spatial resolution and aggregated to a 100 m grid. Four machine learning algorithms were evaluated: multi-layer perceptron (MLP), Random Forest, Logistic Regression, and Extreme Gradient Boosting (XGBoost). The scalability of model performance was examined using patches of the different deprived types identified through visual image interpretation. The study also tested the ability of models to map deprived areas of different types across cities. Results indicate that deprived areas have heterogeneous local characteristics that affect large-area mapping. The top 25 features for each city show that models are sensitive to the spatial structures of deprived area types. While models performed well on individual cities, with XGBoost and MLP achieving F1 scores of over 80%, the generalized model proves to be more beneficial for modeling multiple cities. This approach offers a promising solution for scaling routine, accurate maps of deprived areas to hundreds of cities that currently lack any such map, supporting local stakeholders to plan, implement, and monitor geotargeted interventions.
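The 10 m-to-100 m step described above is a block aggregation: each 100 m grid cell summarizes a 10 x 10 patch of Sentinel-2-derived feature pixels. A minimal sketch of that aggregation, using synthetic data in place of real contextual features:

```python
import numpy as np

# Toy contextual-feature raster at 10 m resolution (e.g. a texture band
# derived from Sentinel-2); 500 x 500 pixels covers 5 km x 5 km.
# The values are synthetic -- real features would come from the imagery.
rng = np.random.default_rng(42)
feat_10m = rng.random((500, 500))

def aggregate_to_grid(raster, factor=10):
    """Mean-aggregate a 2-D raster into coarser cells
    (10 m -> 100 m when factor=10)."""
    h, w = raster.shape
    if h % factor or w % factor:
        raise ValueError("raster dimensions must be divisible by the factor")
    # Reshape into (rows, factor, cols, factor) blocks, then average each block.
    return raster.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

feat_100m = aggregate_to_grid(feat_10m)  # shape (50, 50): one value per 100 m cell
```

Each resulting cell would then become one row of the feature table fed to the classifiers (MLP, Random Forest, Logistic Regression, XGBoost).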
Quality Assessment for E-learning: a Benchmarking Approach (Third edition)
The primary purpose of this manual is to provide a set of benchmarks, quality criteria and notes for guidance against which e-learning programmes and their support systems may be judged. The manual should therefore be seen primarily as a reference tool for the assessment or review of e-learning programmes and the systems which support them.
However, the manual should also prove useful to staff in institutions concerned with the design, development, teaching, assessment and support of e-learning programmes. It is hoped that course developers, teachers and other stakeholders will see the manual as a useful development and/or improvement tool for incorporation in their own institutional systems of monitoring, evaluation and enhancement.
Understanding collaboration in volunteer computing systems
Volunteer computing is a paradigm in which devices participating in a distributed environment share part of their resources to help others perform their activities. The effectiveness of this computing paradigm depends on the collaboration attitude adopted by the participating devices. Unfortunately for software designers, it is not clear how to contribute local resources to the shared environment without compromising resources that could later be required by the contributors. Therefore, many designers adopt a conservative position when defining the collaboration strategy to be embedded in volunteer computing applications. This position produces an underutilization of the devices' local resources and reduces the effectiveness of these solutions. This article presents a study that helps designers understand the impact of adopting a particular collaboration attitude when contributing local resources to the distributed shared environment. The study considers five collaboration strategies, which are analyzed in computing environments with both abundance and scarcity of resources. The obtained results indicate that collaboration strategies based on effort-based incentives work better than those using contribution-based incentives. These results also show that the use of effort-based incentives does not jeopardize the availability of local resources for the local needs. Peer reviewed; postprint (published version).
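The effort-based versus contribution-based distinction can be made concrete with a toy credit-allocation model. The devices, numbers, and formulas below are hypothetical illustrations, not the five strategies from the study: contribution-based credit rewards absolute donated resources (favoring powerful devices), while effort-based credit rewards the donated fraction of a device's own capacity.

```python
# Hypothetical devices with heterogeneous capacities donating CPU-hours.
capacities = {"phone": 2.0, "laptop": 10.0, "server": 50.0}    # CPU-hours available
contributions = {"phone": 1.5, "laptop": 4.0, "server": 10.0}  # CPU-hours donated

def contribution_based(contrib):
    """Credit proportional to absolute contribution: favors powerful devices."""
    total = sum(contrib.values())
    return {d: c / total for d, c in contrib.items()}

def effort_based(contrib, caps):
    """Credit proportional to contribution relative to capacity:
    rewards devices that give a large share of what they have."""
    efforts = {d: contrib[d] / caps[d] for d in contrib}
    total = sum(efforts.values())
    return {d: e / total for d, e in efforts.items()}

cb = contribution_based(contributions)
eb = effort_based(contributions, capacities)
```

In this toy setup, the phone donates 75% of its capacity and earns most of the effort-based credit, while under contribution-based credit the server dominates despite donating only 20% of its capacity; this is the incentive asymmetry the study's result speaks to.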
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm one has to consider how to combine the efficiency of resource
usage at all three layers of architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, an efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as a resource, and less extensive
towards the estimation, discovery and sharing objectives. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes requiring incentives is expected to differ in
edge architectures compared to classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
(Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.)
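A survey taxonomy like this maps naturally onto tagged records, one per surveyed work, labeled along the four perspectives. The sketch below is a hypothetical encoding (the entries and exact category names are assumptions; only the perspective names and example categories come from the abstract) showing how tallying works per perspective surfaces the kind of gaps the authors report, such as few works treating data or storage as a resource.

```python
from dataclasses import dataclass

# One record per surveyed work, tagged along the survey's four perspectives.
@dataclass(frozen=True)
class Entry:
    work: str
    resource_type: str  # e.g. computation, communication, data, storage, energy
    objective: str      # e.g. allocation, estimation, discovery, sharing
    location: str       # end device, edge device, cloud
    use: str            # functional vs non-functional concern

# Placeholder corpus standing in for the reviewed articles.
corpus = [
    Entry("work-A", "computation", "allocation", "edge device", "functional"),
    Entry("work-B", "energy", "estimation", "end device", "non-functional"),
    Entry("work-C", "communication", "sharing", "cloud", "functional"),
]

def gap_report(entries, perspective, categories):
    """Count surveyed works per category to expose under-studied areas."""
    counts = {c: 0 for c in categories}
    for e in entries:
        key = getattr(e, perspective)
        if key in counts:
            counts[key] += 1
    return counts

resource_counts = gap_report(
    corpus, "resource_type",
    ["computation", "communication", "data", "storage", "energy"],
)
```

Zero-count categories in such a report are exactly the "less prevalent" research areas the taxonomy is meant to reveal.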