3,578 research outputs found
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and
the visitor design pattern to obtain a highly configurable and
extensible BPEL engine. Using these two techniques, the
core of this infrastructural software can be customised to
meet new requirements and add features such as debugging,
execution monitoring, or changing to another Web Service
selection policy. Additionally, it can easily be extended to
cope with customer-specific BPEL extensions. We propose
the use of dynamic aspects not only on the engine itself
but also on the workflow in order to tackle the problems of
Web Service hot deployment and hot fixes to long running
processes. In this way, composing aWeb Service "on-the-fly"
means weaving its choreography interface into the workflow
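The visitor-based extensibility sketched above can be illustrated in miniature. This is a hedged sketch, not the paper's engine: the activity classes, the `TraceVisitor`, and its method names are all hypothetical, standing in for BPEL activity nodes and an aspect-like extension (such as execution monitoring) that plugs in without touching the engine core.

```python
# Hypothetical sketch: a visitor over BPEL-like activity nodes, so a
# cross-cutting feature (here, execution tracing) can be added without
# modifying the engine's traversal code. All names are illustrative.

class Activity:
    def accept(self, visitor):
        # Dispatch to the visitor method matching this node's class name.
        getattr(visitor, "visit_" + type(self).__name__.lower())(self)

class Invoke(Activity):
    def __init__(self, service):
        self.service = service

class Sequence(Activity):
    def __init__(self, children):
        self.children = children
    def accept(self, visitor):
        visitor.visit_sequence(self)
        for child in self.children:
            child.accept(visitor)

class TraceVisitor:
    """An aspect-like extension: records each activity it visits."""
    def __init__(self):
        self.trace = []
    def visit_sequence(self, node):
        self.trace.append("sequence")
    def visit_invoke(self, node):
        self.trace.append(f"invoke:{node.service}")

workflow = Sequence([Invoke("Quote"), Invoke("Order")])
tracer = TraceVisitor()
workflow.accept(tracer)
# tracer.trace == ["sequence", "invoke:Quote", "invoke:Order"]
```

Swapping `TraceVisitor` for a debugging or monitoring visitor changes the engine's behaviour without altering the activity classes, which is the configurability the position paper argues for.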
Reducing execution time in faaS cloud platforms
Dissertation to obtain the Master's degree in Informatics and Computers Engineering.
The increase in popularity of code processing and execution in the Cloud led to an interest in Google's Functions Framework, the main objective being to identify possible points of improvement in the platform and to adapt it to respond to the identified need, then to obtain and analyse results in order to validate the progress made. It was found that the Google Cloud Platform Functions Framework could be adapted to promote the use of cache services, making it possible to reuse previous processing of the functions to accelerate the response to future requests. To this end, 3 different caching mechanisms were implemented, In-Process, Out-of-Process and Network, each responding to different needs and bringing distinct advantages. For the extraction and analysis of results, Apache JMeter was used, an open-source application for load testing and measuring the performance of the developed system. The test involves executing a function that generates thumbnails from an image, with the function running in the framework. For this case, one of the metrics defined and analysed is the number of requests served per second until the saturation point is reached. Finally, the results showed a significant improvement in response times when using the caching mechanisms. For the case study, it was also possible to understand the differences in processing images of small, medium and large size, from the order of a few KB to a few MB.
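The simplest of the three mechanisms described above, an in-process cache, can be sketched as follows. This is an illustrative assumption, not the dissertation's implementation: `handle_request`, `make_thumbnail`, and the module-level cache are hypothetical names, and the image processing is stubbed out, but the idea matches the abstract: results of previous invocations survive in the worker's memory and are reused for identical requests.

```python
# Hedged sketch of an in-process cache for a thumbnail-generating function.
# All names are illustrative; this is not the Functions Framework API.
import hashlib

_cache = {}              # lives as long as this function instance
calls = {"misses": 0}    # counts how often real processing runs

def make_thumbnail(image_bytes, size):
    # Stand-in for real image processing (e.g. a resize via Pillow);
    # here we just tag the data so the example is self-contained.
    return b"thumb-%dx%d:" % (size, size) + image_bytes[:16]

def handle_request(image_bytes, size=128):
    # Key the cache on the image content and requested size.
    key = (hashlib.sha256(image_bytes).hexdigest(), size)
    if key not in _cache:
        calls["misses"] += 1
        _cache[key] = make_thumbnail(image_bytes, size)
    return _cache[key]

handle_request(b"same-image")
handle_request(b"same-image")   # served from cache; no second processing
```

The Out-of-Process and Network variants would move `_cache` out of the worker (e.g. into a local daemon or a remote store) so that warm entries survive instance recycling and are shared across instances, at the cost of serialization and network latency.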
A network paradigm for very high capacity mobile and fixed telecommunications ecosystem sustainable evolution
For very high capacity networks (VHC), the main objective is to improve the
quality of the end-user experience. This implies compliance with key
performance indicators (KPIs) required by applications. Key performance
indicators at the application level are throughput, download time, round trip
time, and video delay. They depend on the end-to-end connection between the
server and the end-user device. For VHC networks, Telco operators must provide
the required application quality. Moreover, they must meet the objectives of
economic sustainability. Today, Telco operators rarely achieve the above
objectives, mainly due to the push to increase the bit-rate of access networks
without considering the end-to-end KPIs of the applications. The main
contribution of this paper concerns the definition of a deployment framework to
address performance and cost issues for VHC networks. We show three actions on
which it is necessary to focus. First, limit the bit-rate through video
compression. Second, contain the packet-loss rate through artificial-intelligence
algorithms for line stabilization. Third, reduce latency (i.e.,
round-trip time) with edge-cloud computing. The concerted and gradual
application of these measures can allow a Telco to get out of the
ultra-broadband "trap" of the access network, as defined in the paper. We
propose to work on end-to-end optimization of the bandwidth utilization ratio.
This leads to a better performance experienced by the end-user. It also allows
a Telco operator to create new business models and obtain new revenue streams
at a sustainable cost. To give a clear example, we describe how to realize
mobile virtual and augmented reality, which is one of the most challenging
future services.
Comment: 42 pages, 4 tables, 6 figures. v2: Revised English
Managing Constrained Devices into the Cloud: a RESTful web service
We present a RESTful web application capable of providing high-level, easy-to-reach interfaces for interaction with CoAP sensor networks. We describe how virtual instances of physical devices are created in order to become a smart entry point for querying network objects. We explain how to exploit virtualization to lighten the workload of the physical network. We focus on the implementation of the application, taking into consideration aspects such as scalability, responsiveness and availability.
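The virtualization idea described above can be sketched in a few lines. This is a minimal, assumed model, not the paper's implementation: `VirtualDevice`, `fetch_fn`, and `max_age` are hypothetical names for a cached virtual instance that answers REST queries without waking the constrained CoAP node on every request.

```python
# Hedged sketch: a virtual instance of a physical CoAP device. REST clients
# read the cached sample; the real (expensive) CoAP GET runs only when the
# cache is stale. All names are illustrative.
import time

class VirtualDevice:
    def __init__(self, device_id, fetch_fn, max_age=30.0):
        self.device_id = device_id
        self._fetch = fetch_fn      # would perform the real CoAP GET
        self._max_age = max_age     # freshness window, in seconds
        self._value = None
        self._stamp = 0.0
        self.network_reads = 0      # how often the physical device was hit

    def read(self):
        # Only touch the physical network when the cached sample is stale.
        if self._value is None or time.time() - self._stamp > self._max_age:
            self._value = self._fetch()
            self._stamp = time.time()
            self.network_reads += 1
        return self._value

sensor = VirtualDevice("temp-01", fetch_fn=lambda: 21.5)
sensor.read(); sensor.read(); sensor.read()
# three REST-level queries, but only one request reached the device
```

This is the sense in which virtualization "lightens the workload" of the physical network: the constrained device services one request per freshness window rather than one per client query.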
Model-driven dual caching For nomadic service-oriented architecture clients
Mobile devices have evolved over the years from resource-constrained devices that supported only the most basic tasks to powerful handheld computing devices. However, the most significant step in the evolution of mobile devices was the introduction of wireless connectivity, which enabled them to host applications that require internet connectivity such as email, web browsers and, perhaps most importantly, smart/rich clients. Being able to host smart clients allows the users of mobile devices to seamlessly access the Information Technology (IT) resources of their organizations. One increasingly popular way of enabling access to IT resources is by using Web Services (WS). This trend has been aided by the rapid availability of WS packages/tools, most notably the efforts of the Apache group and Integrated Development Environment (IDE) vendors. But the widespread use of WS raises questions for users of mobile devices such as laptops or PDAs: how, and whether, they can participate in WS. Unlike their "wired" counterparts (desktop computers and servers), they rely on a wireless network that is characterized by low bandwidth and unreliable connectivity. The aim of this thesis is to enable mobile devices to host Web Services consumers. It introduces a Model-Driven Dual Caching (MDDC) approach to overcome problems arising from temporary loss of connectivity and fluctuations in bandwidth.
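The dual-caching idea can be sketched under stated assumptions: one cache holds responses (so repeated reads survive disconnection) while a second queues outgoing requests (so calls made while offline are replayed once connectivity returns). The `DualCache` class and its methods are hypothetical illustrations, not the thesis's MDDC design.

```python
# Hedged sketch of a "dual cache" for a mobile Web Service client:
# a response cache for reads plus an outbox for deferred writes.
# All names are illustrative.

class DualCache:
    def __init__(self, call_service):
        self._call = call_service   # would invoke the real Web Service
        self.responses = {}         # cache 1: last known answers
        self.outbox = []            # cache 2: requests made while offline
        self.connected = True

    def read(self, query):
        if self.connected:
            self.responses[query] = self._call(query)
        # When offline, fall back to the last known answer (possibly stale).
        return self.responses.get(query)

    def write(self, update):
        if self.connected:
            self._call(update)
        else:
            self.outbox.append(update)   # defer until connectivity returns

    def reconnect(self):
        self.connected = True
        while self.outbox:
            self._call(self.outbox.pop(0))   # replay deferred requests in order

sent = []
client = DualCache(call_service=lambda msg: sent.append(msg) or f"ok:{msg}")
client.read("inventory")          # online: fetched and cached
client.connected = False
stale = client.read("inventory")  # offline: served from the response cache
client.write("order#1")           # offline: queued in the outbox
client.reconnect()                # deferred write is replayed
```

The "model-driven" part of MDDC, which this sketch omits, would use a model of the client application to decide what to prefetch and which deferred requests remain valid after reconnection.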
De-ossifying the Internet Transport Layer : A Survey and Future Perspectives
ACKNOWLEDGMENT The authors would like to thank the anonymous reviewers for their useful suggestions and comments.
A Scalable Cluster-based Infrastructure for Edge-computing Services
In this paper we present a scalable and dynamic intermediary infrastructure,
SEcS (acronym of "Scalable Edge computing Services"), for developing and
deploying advanced Edge computing services, by using a cluster of heterogeneous
machines. Our goal is to address the challenges of next-generation Internet
services: scalability, high availability, fault-tolerance and robustness, as well as
programmability and quick prototyping. The system is written in Java and is based
on IBM's Web Based Intermediaries (WBI) [71], developed at the IBM Almaden
Research Center.