Uncertainty analysis in the Model Web
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, it first provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. The thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods by providing a Web-based user interface and backend that ease the process of building and integrating emulators. These tools, together with the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is demonstrated through the implementation of a Web-based workflow for predicting future crop yields in the UK, which also exercises the tools for emulator building and integration. Future directions for the development of the tools are discussed.
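As a hedged illustration of the emulator idea, a statistical surrogate fitted to a handful of expensive model runs, a minimal Python sketch using scikit-learn's GaussianProcessRegressor might look as follows; the expensive_model function is a hypothetical stand-in for a real simulator such as a crop-yield model, not code from the thesis.

    # Sketch: fit a Gaussian-process surrogate (emulator) to a few runs of
    # an expensive model, then query it cheaply with uncertainty estimates.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_model(x):
        # Hypothetical placeholder for a long-running simulation.
        return np.sin(3 * x) + 0.5 * x

    # Design points: a small number of simulator runs.
    X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
    y_train = expensive_model(X_train).ravel()

    emulator = GaussianProcessRegressor(
        kernel=ConstantKernel(1.0) * RBF(length_scale=0.5),
        normalize_y=True,
    )
    emulator.fit(X_train, y_train)

    # The emulator answers in microseconds; the returned standard deviation
    # quantifies its own uncertainty away from the design points.
    mean, std = emulator.predict(np.array([[0.7], [1.9]]), return_std=True)
    print(mean, std)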
Optimising data centre operation by removing the transport bottleneck
Data centres lie at the heart of almost every service on the Internet. They are used to provide search results, to power social media, to store and index email, to host “cloud” applications, for online retail, and to provide a myriad of other web services. Consequently, the more efficient they can be made, the better for all of us. The power of modern data centres comes from combining commodity off-the-shelf server hardware and network equipment to provide what Google’s Barroso and Hölzle describe as “warehouse scale” computers.
Data centres rely on TCP, a transport protocol that was originally designed for use in the Internet. Like other such protocols, TCP has been optimised to maximise throughput, usually by filling up queues at the bottleneck. However, for most applications within a data centre network latency is more critical than throughput. Consequently the choice of transport protocol becomes a bottleneck for performance. My thesis is that the solution to this is to move away from the use of one-size-fits-all transport protocols towards ones that have been designed to reduce latency across the data centre and which can dynamically respond to the needs of the applications.
This dissertation focuses on optimising the transport layer in data centre networks. In particular I address the question of whether any single transport mechanism can be flexible enough to cater to the needs of all data centre traffic. I show that one leading protocol (DCTCP) has been heavily optimised for certain network conditions. I then explore approaches that seek to minimise latency for applications that care about it while still allowing throughput-intensive applications to receive a good level of service. My key contributions to this are Silo and Trevi.
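As a concrete, if simplified, illustration of what choosing a transport can mean in practice (this is not code from the dissertation): Linux lets an application select its congestion-control module per socket, so a latency-sensitive data centre service could opt into DCTCP while bulk transfers keep the default.

    # Sketch: selecting TCP congestion control per socket on Linux.
    # Assumes the dctcp module is loaded (modprobe tcp_dctcp) and that the
    # network applies ECN marking, which DCTCP relies on; the address below
    # is a hypothetical in-data-centre endpoint.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION is Linux-specific; the value is the module name.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"dctcp")
    print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    sock.connect(("10.0.0.2", 9000))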
Trevi is a novel transport system for storage traffic that utilises fountain coding to maximise throughput and minimise latency while being agnostic to packet drops, thus allowing storage traffic to be pushed out of the way when latency-sensitive traffic is present in the network. Silo is an admission control system designed to give tenants of a multi-tenant data centre guaranteed low-latency network performance. Both were developed in collaboration with others.
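Trevi's actual coding scheme is not reproduced in the abstract; as a hedged toy sketch of why fountain coding makes a transport drop-agnostic, the following encodes blocks as random XOR combinations, so the receiver can decode from any sufficiently large subset of symbols regardless of which ones were dropped. Real designs use tuned degree distributions (e.g. LT or Raptor codes), not the uniform choice used here.

    # Toy fountain-code sketch: each encoded symbol is the XOR of a random
    # subset of source blocks; losing symbols only means waiting for more.
    import random

    def encode_symbol(blocks, rng):
        # Pick a random non-empty subset of block indices and XOR them.
        degree = rng.randint(1, len(blocks))
        idx = rng.sample(range(len(blocks)), degree)
        val = 0
        for i in idx:
            val ^= blocks[i]
        return set(idx), val

    def peel_decode(symbols, k):
        # Peeling decoder: repeatedly resolve symbols with one unknown left.
        known, pending = {}, [[set(idx), val] for idx, val in symbols]
        progress = True
        while progress and len(known) < k:
            progress = False
            for sym in pending:
                idx, val = sym
                for i in [j for j in idx if j in known]:
                    val ^= known[i]      # subtract already-known blocks
                    idx.discard(i)
                sym[1] = val
                if len(idx) == 1:        # exactly one unknown left: solved
                    known[idx.pop()] = val
                    progress = True
        return known

    rng = random.Random(42)
    blocks = [rng.getrandbits(32) for _ in range(8)]
    stream = [encode_symbol(blocks, rng) for _ in range(40)]
    received = [s for s in stream if rng.random() > 0.3]  # ~30% drops
    decoded = peel_decode(received, len(blocks))
    print(f"recovered {len(decoded)}/{len(blocks)} blocks "
          f"from {len(received)} received symbols")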
Nomadic fog storage
Mobile services increasingly demand more processing and storage. However, mobile devices are known for their constraints in terms of processing, storage, and energy. Early proposals addressed these aspects by having mobile devices access remote clouds, but such proposals suffer from long latencies and backhaul bandwidth limitations when retrieving data. To mitigate these issues, edge clouds have been proposed. In this paradigm, intermediate nodes are placed between the mobile devices and the remote cloud. These intermediate nodes should fulfill the end users' resource requests, namely data and processing capability, and reduce the energy consumption of the mobile devices' batteries.
At the same time, mobile traffic demand is increasing exponentially and the resources available on mobile devices are evolving faster than ever. This motivates using the spare capabilities of mobile nodes to meet the requirements imposed by new mobile applications. In this new scenario, mobile devices become both consumers and providers of the emerging services. The current work investigates this possibility by designing, implementing and testing a novel nomadic fog storage system that uses fog and mobile nodes to support upcoming applications. In addition, a novel resource allocation algorithm has been developed that considers the available energy on mobile devices and the network topology. The system also includes a replica management module based on data popularity. A comprehensive evaluation of the fog proposal shows that it is responsive, offloads traffic from the backhaul links, and enables fair energy depletion among mobile nodes by storing content on neighbor nodes with greater battery autonomy.
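The abstract does not give the allocation algorithm itself; as a hedged sketch of the kind of energy- and topology-aware placement it describes, the following hypothetical scoring policy prefers nearby neighbor nodes with high remaining battery and gives popular content extra replicas. All field names, weights and thresholds are illustrative assumptions.

    # Hypothetical sketch of energy- and topology-aware replica placement.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        battery: float   # remaining energy, 0.0..1.0
        hops: int        # topological distance from the requesting device

    def placement_score(node, w_energy=0.7, w_dist=0.3, max_hops=5):
        # High battery is good; being far away is penalised.
        return w_energy * node.battery + w_dist * (1 - node.hops / max_hops)

    def choose_replica_holders(neighbors, popularity, base_replicas=1):
        # Illustrative policy: more popular content gets more replicas.
        extra = 2 if popularity > 0.8 else (1 if popularity > 0.5 else 0)
        ranked = sorted(neighbors, key=placement_score, reverse=True)
        return ranked[:base_replicas + extra]

    neighbors = [Node("phone-a", 0.9, 1), Node("phone-b", 0.4, 1),
                 Node("fog-gw", 0.95, 2)]
    print([n.name for n in choose_replica_holders(neighbors, popularity=0.9)])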
Towards auto-scaling in the cloud: online resource allocation techniques
Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers when the peak load occurs. The second case leads to wasted money, since resources remain underutilized most of the time. Therefore, there is a need for more sophisticated resource provisioning techniques that can automatically scale an application's resources according to workload demand and performance constraints.
Large cloud providers such as Amazon, Microsoft, and RightScale offer auto-scaling services. However, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives.
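The thesis's own techniques are model-based and are not shown in the abstract; as a minimal sketch of the naive baseline such work improves on, a threshold rule might look like the following. Note that, exactly as the abstract warns, a rule like this oscillates without hysteresis or cool-down periods; the monitoring values are simulated, not a real provider API.

    # Naive threshold-based auto-scaling rule (baseline sketch).
    TARGET_LOW, TARGET_HIGH = 0.3, 0.7   # utilisation band to stay inside
    MIN_N, MAX_N = 1, 20                 # allowed instance counts

    def autoscale_step(avg_cpu, n):
        # Scale out when hot, scale in when cold; clamp to the allowed range.
        if avg_cpu > TARGET_HIGH and n < MAX_N:
            return n + 1
        if avg_cpu < TARGET_LOW and n > MIN_N:
            return n - 1
        return n

    n = 2
    for avg_cpu in [0.9, 0.85, 0.6, 0.2, 0.1]:   # simulated load samples
        n = autoscale_step(avg_cpu, n)
        print(f"load={avg_cpu:.2f} -> {n} instances")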
Building the Future Internet through FIRE
The Internet as we know it today is the result of continuous activity aimed at improving network communications, end-user services, computational processes and information technology infrastructures. The Internet has become a critical infrastructure for humanity, offering complex networking services and end-user applications that together have transformed many aspects, notably economic ones, of our lives. Recently, with the advent of new paradigms, progress in wireless technology, sensor networks and information systems, and the inexorable shift towards an everything-connected paradigm, first known as the Internet of Things and lately envisioned as the Internet of Everything, a data-driven society has been created. In a data-driven society, productivity, knowledge, and experience depend on increasingly open, dynamic, interdependent and complex Internet services. The challenge for the design of the Future Internet is to build robust enabling technologies, implement and deploy adaptive systems, and create business opportunities in the face of increasing uncertainty and emergent systemic behaviours, where humans and machines seamlessly cooperate.
Internet of Things: From Hype to Reality
The Internet of Things (IoT) has gained significant mindshare, not to mention attention, in academia and industry, especially over the past few years. The reason behind this interest is the set of capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, communicate inexpensively, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.
Scalable hosting of web applications
Modern Web sites have evolved from simple monolithic systems into complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we address the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers rather than the entire application. A key observation made in our research is that no single technique performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of different tiers of a Web application. While these techniques and systems improve the performance of the individual tiers (and eventually the application), an application's administrator is interested not only in the performance of its individual tiers but also in its end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time is optimized with minimal use of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance.
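The thesis's exact analytical model is not given in the abstract; a common textbook approximation, assumed here purely for illustration, treats each tier as an independent M/M/1 queue, in which case mean per-tier response times simply add up along the request path (and, for exponential per-tier times, the variances 1/(mu - lambda)^2 add as well):

    # Sketch: mean end-to-end response time of a multi-tier application,
    # approximating each tier as an independent M/M/1 queue. This is a
    # standard textbook model assumed for illustration, not necessarily
    # the exact model developed in the thesis.

    def tier_response_time(arrival_rate, service_rate):
        # M/M/1 mean response time: 1 / (mu - lambda); requires mu > lambda.
        if arrival_rate >= service_rate:
            raise ValueError("tier is saturated (lambda >= mu)")
        return 1.0 / (service_rate - arrival_rate)

    def end_to_end(arrival_rate, service_rates):
        # A request visits each tier once, so mean times add up.
        return sum(tier_response_time(arrival_rate, mu) for mu in service_rates)

    # Example: web, app and database tiers serving 100 req/s offered load.
    print(end_to_end(100.0, [400.0, 250.0, 150.0]))  # -> 0.03 seconds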