Decentralized Machine Learning for Intelligent Health Care Systems on the Computing Continuum
The introduction of electronic personal health records (EHR) enables nationwide information exchange and curation among different health care systems. However, current EHR systems neither provide transparent means for diagnosis support and medical research, nor can they utilize the omnipresent data produced by personal medical devices. Moreover, EHR systems are centrally orchestrated, which could potentially lead to a single point of failure. Therefore, in this article, we explore novel approaches for decentralizing machine learning over distributed ledgers to create intelligent EHR systems that can utilize information from personal medical devices for improved knowledge extraction. We propose and evaluate a conceptual EHR that enables anonymous predictive analysis across multiple medical institutions. The evaluation results indicate that the decentralized EHR can be deployed over the computing continuum, reducing machine learning time by up to 60% and keeping consensus latency below 8 seconds.
A Two-Stage Multi-Objective Optimization of Erasure Coding in Overlay Networks
In recent years, overlay networks have emerged as a crucial platform for the deployment of various distributed applications. Many of these applications rely on data redundancy techniques, such as erasure coding, to achieve higher fault tolerance. However, erasure coding applied in large-scale overlay networks entails various overheads in terms of storage, latency and data rebuilding costs. These overheads are largely attributed to the selected erasure coding scheme and the encoded chunk placement in the overlay network. This paper explores a multi-objective optimization approach for identifying appropriate erasure coding schemes and encoded chunk placements in overlay networks. The uniqueness of our approach lies in the joint consideration of erasure coding objectives, such as encoding rate and redundancy factor, with overlay network performance characteristics, such as storage consumption, latency and system reliability. Our approach enables a variety of tradeoff solutions with respect to these objectives to be identified in the form of a Pareto front. To solve this problem, we propose a novel two-stage multi-objective evolutionary algorithm, where the first stage determines the optimal set of encoding schemes, while the second stage optimizes the placement of the corresponding encoded data chunks in overlay networks of varying sizes. We study the performance of our method by generating and analyzing the Pareto optimal sets of tradeoff solutions. Experimental results demonstrate that the Pareto optimal set produced by our multi-objective approach includes and even dominates the chunk placements delivered by a related state-of-the-art weighted sum method.
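The core idea of a Pareto front over erasure-coding schemes can be sketched in a few lines. The `(k, m)` parameters and the two toy objective formulas below are illustrative assumptions, not the objectives or the evolutionary algorithm from the paper; they merely show what "one solution dominating another" means in this setting.

```python
# A minimal sketch of Pareto-front identification over candidate erasure-coding
# schemes (k data chunks, m parity chunks). Objective formulas are hypothetical.

def objectives(k, m):
    """Return (storage overhead, fragility) for a (k, m) scheme.
    Both are to be minimized."""
    storage = (k + m) / k      # bytes stored per useful byte
    fragility = 1 / (m + 1)    # lower when more chunk losses are tolerated
    return storage, fragility

def dominates(a, b):
    """True if point a is no worse than b in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    points = {c: objectives(*c) for c in candidates}
    return [c for c in candidates
            if not any(dominates(points[o], points[c])
                       for o in candidates if o != c)]

schemes = [(4, 2), (6, 3), (8, 2), (10, 4)]
front = pareto_front(schemes)  # the non-dominated tradeoff solutions
```

In the paper this dominance check would be applied to full placements with storage, latency and reliability objectives; the sketch only illustrates the dominance relation itself.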
VM Image Repository and Distribution Models for Federated Clouds: State of the Art, Possible Directions and Open Issues
The emerging trend of Federated Cloud models enlists virtualization as a key concept to offer a large-scale distributed Infrastructure as a Service collaborative paradigm to end users. Virtualization leverages Virtual Machines (VMs) instantiated from user-specific templates labelled as VM Images (VMIs). To this extent, the rapid provisioning of VMs under varying user requests, while ensuring Quality of Service (QoS) across multiple cloud providers, largely depends upon the image repository architecture and distribution policies. We discuss the state of the art in VMI storage repository and distribution mechanisms for efficient VM provisioning in federated clouds. In addition, we present and compare various representative systems in this realm. Furthermore, we define a design space and identify current limitations, challenges and open trends for VMI repositories and distribution techniques within federated infrastructures.
Modular router architecture for high-performance interconnection networks
High-performance routers are the fundamental building blocks of the system-wide interconnection networks in high-performance computing systems. Through collective interaction they provide reliable communication between the computing nodes and manage the communication dataflow. The development of a specialized router architecture is highly complex and requires many factors to be considered. The architecture of high-performance routers is highly dependent on the flow control mechanism, as it dictates the way in which packets are transferred through the network. In this paper, a novel high-performance "Step-Back-On-Blocking" router architecture is proposed.
Scheduling of Distributed Applications on the Computing Continuum: A Survey
The demand for distributed applications has significantly increased over the past decade, with improvements in machine learning techniques fueling this growth. These applications predominantly utilize Cloud data centers for high-performance computing, and Fog and Edge devices for low-latency communication and for the training and inference of small machine learning models. The challenge of executing applications with different requirements on heterogeneous devices requires effective methods for solving NP-hard resource allocation and application scheduling problems. The state-of-the-art techniques primarily investigate conflicting objectives, such as the completion time, energy consumption, and economic cost of application execution on the Cloud, Fog, and Edge computing infrastructure. Therefore, in this work, we review these works considering their objectives, methods, and evaluation tools. Based on the review, we provide a discussion of scheduling methods in the Computing Continuum.
Comment: 7 pages, 3 figures, 3 tables
Resource Management Optimization in Multi-Processor Platforms
Proceedings of: Third International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2016), Sofia (Bulgaria), October 6-7, 2016.
Modern high-performance computing systems (HPCS) are composed of hundreds of thousands of computational nodes. Effective resource allocation in HPCS is the subject of many scientific research investigations, and many programming models for effective resource allocation have been proposed. The main purpose of those models is to increase the parallel performance of the HPCS. This paper investigates the efficiency of a parallel algorithm for resource management optimization based on the Artificial Bee Colony (ABC) metaheuristic while solving a package of NP-complete problems on a multi-processor platform. In order to achieve minimal parallelization overhead in each cluster node, a multi-level hybrid programming model is proposed that combines coarse-grain and fine-grain parallelism. Coarse-grain parallelism is achieved through domain decomposition with message passing among computational nodes using the Message Passing Interface (MPI), while fine-grain parallelism is obtained through loop-level parallelism inside each computational node via compiler-based thread parallelization with Intel TBB. Parallel communication profiling is performed and parallel performance parameters are evaluated on the basis of experimental results.
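The two-level structure described above can be sketched compactly. The paper's model uses MPI across nodes and Intel TBB threads within each node; the Python sketch below mimics both levels with thread pools purely to illustrate the coarse-grain/fine-grain decomposition, and the domain split and worker counts are illustrative assumptions.

```python
# Illustrative two-level hybrid decomposition. In the paper, the coarse level
# is MPI message passing across cluster nodes and the fine level is Intel TBB
# loop-level threading; here both levels are stand-ins built on thread pools.
from concurrent.futures import ThreadPoolExecutor

def fine_grain(chunk):
    """Fine-grain level: loop-level parallelism inside one 'node'
    (stand-in for TBB thread parallelization)."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return sum(pool.map(lambda x: x * x, chunk))

def coarse_grain(data, nodes=2):
    """Coarse-grain level: domain decomposition across 'nodes'
    (stand-in for MPI ranks exchanging subdomains)."""
    size = len(data) // nodes
    domains = [data[i * size:(i + 1) * size] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        return sum(pool.map(fine_grain, domains))

total = coarse_grain(list(range(8)))  # sum of squares of 0..7
```

The point of the hybrid model, as the abstract notes, is that the fine-grain level avoids message-passing overhead inside a node while the coarse-grain level scales across nodes.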
