FABRIC: A National-Scale Programmable Experimental Network Infrastructure
FABRIC is a unique national research infrastructure that enables cutting-edge, exploratory research at scale in networking, cybersecurity, distributed computing and storage systems, machine learning, and science applications. It is an everywhere-programmable nationwide instrument composed of novel extensible network elements equipped with large amounts of compute and storage, interconnected by high-speed, dedicated optical links. It will connect a number of specialized testbeds for cloud research (the NSF Cloud testbeds CloudLab and Chameleon) and for research beyond 5G technologies (Platforms for Advanced Wireless Research, or PAWR), as well as production high-performance computing facilities and science instruments, to create a rich fabric for a wide variety of experimental activities.
An inter-cloud architecture for future internet infrastructures
In recent years, the concept of interconnecting clouds to allow common service coordination has gained significant attention, mainly because of the increasing use of cloud resources by Internet users. Efficient common management across different clouds offers essential benefits, such as boundless elasticity and scalability. Yet issues related to differing standards have led to interoperability problems. For this reason, the Open Cloud Computing Interface defines a set of open, community-led specifications along with a flexible API for building cloud systems. Today, cloud systems such as OpenStack, OpenNebula, Amazon Web Services, and VMware vCloud expose APIs for inter-cloud communication. In this work we explore an inter-cloud model by creating a new cloud platform service that acts as a mediator among the OpenStack, FI-WARE datacenter resource management, and Amazon Web Services cloud architectures, thereby orchestrating communication across various cloud environments. The model is based on FI-WARE and will be offered as a reusable enabler with an open specification to allow interoperable service coordination.
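The mediator described above maps naturally onto an adapter pattern: each provider's API sits behind a common interface, and the mediator dispatches requests to the registered backend. The sketch below is illustrative only; the class and method names (`CloudAdapter`, `InterCloudMediator`, `launch_instance`) are hypothetical, not part of FI-WARE or any provider's API, and the adapters return placeholder strings instead of making real API calls.

```python
class CloudAdapter:
    """Common interface the mediator exposes for every cloud backend."""
    def launch_instance(self, image: str, flavor: str) -> str:
        raise NotImplementedError

class OpenStackAdapter(CloudAdapter):
    def launch_instance(self, image, flavor):
        # A real adapter would call the OpenStack compute API here.
        return f"openstack:{image}:{flavor}"

class AWSAdapter(CloudAdapter):
    def launch_instance(self, image, flavor):
        # A real adapter would call the AWS EC2 API here.
        return f"aws:{image}:{flavor}"

class InterCloudMediator:
    """Dispatches a provider-agnostic request to a registered backend."""
    def __init__(self):
        self._backends = {}

    def register(self, name, adapter):
        self._backends[name] = adapter

    def launch(self, provider, image, flavor):
        return self._backends[provider].launch_instance(image, flavor)

mediator = InterCloudMediator()
mediator.register("openstack", OpenStackAdapter())
mediator.register("aws", AWSAdapter())
print(mediator.launch("aws", "ubuntu-22.04", "m5.large"))
```

Keeping the provider-specific logic inside the adapters is what lets a single enabler orchestrate heterogeneous clouds behind one open specification.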
Multi-FedLS: a Framework for Cross-Silo Federated Learning Applications on Multi-Cloud Environments
Federated Learning (FL) is a distributed Machine Learning (ML) technique that can benefit from cloud environments while preserving data privacy. We propose Multi-FedLS, a framework that manages multi-cloud resources, reducing the execution time and financial cost of Cross-Silo Federated Learning applications by using preemptible VMs, which are cheaper than on-demand ones but can be revoked at any time. Our framework comprises four modules: Pre-Scheduling, Initial Mapping, Fault Tolerance, and Dynamic Scheduler. This paper extends our previous work \cite{brum2022sbac} by formally describing the Multi-FedLS resource manager framework and its modules. Experiments were conducted with three Cross-Silo FL applications on CloudLab, and a proof of concept confirms that Multi-FedLS can be executed on a multi-cloud composed of AWS and GCP, two commercial cloud providers. Results show that the problem of executing Cross-Silo FL applications in multi-cloud environments with preemptible VMs can be efficiently solved using a mathematical formulation, fault-tolerance techniques, and a simple heuristic to choose a new VM in case of revocation.
Comment: In review by the Journal of Parallel and Distributed Computing
Hadoop Performance Analysis Model with Deep Data Locality
Background: Hadoop has become the base framework for big data systems via the simple principle that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve system performance: network traffic among nodes is reduced by increasing the number of data-local tasks on each machine. Earlier research increased data locality in one of the MapReduce stages to improve Hadoop performance; however, there has been no mathematical performance model for data locality in Hadoop. Methods: This study builds a Hadoop performance analysis model with data locality for analyzing the entire MapReduce process. The paper explains the data-locality concept in the map and shuffle stages, and shows how to apply the model to improve the performance of the Hadoop system by creating deep data locality. Results: The research validated deep data locality through three tests: a simulation-based test, a cloud test, and a physical test. According to these tests, the authors improved the Hadoop system by over 34% by using deep data locality. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS.
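The core idea that locality determines performance can be sketched with a toy cost model: data-local map tasks read from local disk, while the rest pay a network penalty. This is a minimal illustration of why raising the local fraction helps, not the paper's actual analysis model; the function name and the per-task times are assumptions.

```python
def map_stage_time(n_tasks, local_fraction, t_local, t_remote):
    """Total map-stage work under a simple data-locality model:
    data-local tasks cost t_local each; non-local tasks must pull
    their block over the network and cost t_remote > t_local."""
    n_local = round(n_tasks * local_fraction)
    n_remote = n_tasks - n_local
    return n_local * t_local + n_remote * t_remote

# Hypothetical numbers: 100 map tasks, 10 s local read vs 25 s remote read.
baseline = map_stage_time(100, 0.60, t_local=10.0, t_remote=25.0)   # 1600.0
improved = map_stage_time(100, 0.95, t_local=10.0, t_remote=25.0)   # 1075.0
print(f"speed-up: {baseline / improved:.2f}x")  # → speed-up: 1.49x
```

Even this crude model shows a substantial gain from raising locality from 60% to 95%, consistent in spirit with the 34% improvement the abstract reports for deep data locality.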