CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.
The Interplay of Reward and Energy in Real-Time Systems
This work contends that three constraints must be addressed together in power-aware real-time systems: energy, time, and task rewards/values. These issues are studied for two types of systems: first, embedded systems running applications with temporal requirements (e.g., audio and video); second, servers and server clusters whose timing constraints and Quality of Service (QoS) requirements are implied by the application being executed (e.g., signal processing, audio/video streams, web pages). Furthermore, many future real-time systems will rely on different software versions to achieve a variety of QoS-aware tradeoffs, each with different reward, time, and energy requirements.

For hard real-time systems, solutions are proposed that maximize the system reward/profit without exceeding deadlines and without depleting the energy budget (in portable systems the energy budget is determined by the battery charge, while in server farms it depends on the server architecture and heat/cooling constraints). Both continuous and discrete reward and power models are studied, and the reward/energy analysis is extended with multiple task versions, optional/mandatory tasks, and long-term reward-maximization policies.

For soft real-time systems, the reward model is relaxed into a QoS constraint, and stochastic schemes are first presented for power management of systems with unpredictable workloads. Load distribution and power management policies are then addressed in the context of servers and homogeneous server farms. Finally, the work is extended with QoS-aware local and global policies for the general case of heterogeneous systems.
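For the discrete-reward hard real-time case, the core optimization can be illustrated as a multiple-choice knapsack: each task has several versions with different (reward, energy) profiles, and exactly one version per task must be chosen without exhausting the energy budget. The sketch below is illustrative only, assuming integer energy units and deadline feasibility checked separately; it is not the thesis's actual algorithm.

```python
# Hedged sketch: dynamic programming over the energy budget to pick one
# version per task that maximizes total reward (multiple-choice knapsack).
# Deadline constraints are assumed handled elsewhere; all names are
# illustrative, not taken from the thesis.

def max_reward(tasks, energy_budget):
    """tasks: list of tasks, each a list of (reward, energy_cost) versions.
    Every task must run in exactly one of its versions; returns the best
    total reward achievable without exceeding energy_budget (or -inf if
    no feasible assignment exists)."""
    NEG = float("-inf")
    dp = [0] + [NEG] * energy_budget   # dp[e] = best reward using energy e
    for versions in tasks:
        new = [NEG] * (energy_budget + 1)
        for e, best in enumerate(dp):
            if best == NEG:
                continue               # energy level e is unreachable
            for reward, cost in versions:
                if e + cost <= energy_budget:
                    new[e + cost] = max(new[e + cost], best + reward)
        dp = new                       # task committed to one version
    return max(dp)
```

With two tasks offering a high-reward/high-energy and a low-reward/low-energy version each, the DP trades version quality across tasks to stay within the budget.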
An efficient use of virtualization in grid/cloud environments
Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to resources but does not guarantee any quality of service, nor does it provide performance isolation: one user's job can influence the performance of another user's job. A further problem is that Grid users typically belong to the scientific community, and their jobs require specific, customized software environments. Providing the ideal environment to each user is difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing, by contrast, provides full customization and control, but there is no procedure for submitting user jobs as simple as Grid's. Grid computing can provide customized resources and performance to the user through virtualization: a virtual machine can join the Grid as an execution node, or a virtual machine can itself be submitted as a job with the user's jobs inside. The first method gives quality of service and performance isolation; the second additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, providing performance isolation together with customization and administration: the same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request, under two scenarios. In the first scenario, users submit their customized virtual machine as a job, and the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured on the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios; Condor supports virtual machine jobs.
Scenario 1 is deployed using the Condor VM universe. The second scenario uses the VMware VIX API to script powering the remote virtual machines on and off. The experimental results show that, because scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live in the pool much faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer; the only pitfall of scenario 1 is the network traffic.
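Scenario 1's submission path can be pictured with a submit description file for HTCondor's VM universe. The fragment below is a minimal sketch, assuming a VMware guest; the memory size, directory path, and log name are illustrative placeholders, not values from the thesis.

```
# Hedged sketch of an HTCondor VM-universe submit description for
# scenario 1: the user's customized VMware virtual machine is itself
# the job, and joins the pool when Condor powers it on.
universe                     = vm
vm_type                      = vmware
vm_memory                    = 512
vm_networking                = true
vmware_dir                   = /home/user/myvm      # directory with .vmx/.vmdk
vmware_should_transfer_files = true                 # ship the image to the node
log                          = vm_job.log
queue
```

Because the image is transferred to the execution node, this path incurs the network traffic noted above, whereas scenario 2's preconfigured machines avoid it.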
Design and development of an SDN robotic system with intelligent openflow IOT testbeds for power assessment, prediction and fault management
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The current wind turbine and power grid industry has seen relatively little research and development on implementing novel communication networks and intelligent systems to overcome network failure and lack of monitoring. Wind turbine siting is a major concern: identifying an efficient location for a future wind turbine matters because a site with non-efficient meteorological parameters can force relocation of the turbine and cause revenue loss. Unplanned wind turbine shutdowns are considered one of the major revenue-loss factors of a modern wind farm business. Typically, an unplanned shutdown results from sensor failures, caused by harsh environmental conditions, that prevent hardware status from reaching the monitoring system. These research problems pertain to wind turbine site assessment and power prediction. In this thesis, novel programmable software-defined robotics and IoT testbeds are proposed, fusing Artificial Intelligence and optimization methods to solve specific problems in wind turbine site assessment and fault management. The site selection process is implemented using the proposed aerial and ground robotic systems, which incorporate Software-Defined Networking and OpenFlow switching capabilities. A second stage of the system is a prediction platform that runs on the aerial robot cluster using neural-network regression with optimization techniques. To overcome unplanned wind turbine network outages, an IoT micro-cloud cluster system is proposed that acts as an immediate failover platform, providing continuous health readings of the wind turbine so that the turbine in question is not shut down unnecessarily. The proposed system helps minimize the revenue loss caused by stopping a wind turbine and helps maintain the stability of generated power on the grid. Additionally, since large wind farms require agile and scalable management when selecting the most efficient turbine installation locations, a softwarized cognitive routing protocol is proposed: a group of quadcopters forms a redundant failover Software-Defined Network/OpenFlow system that can cover every waypoint of the farm land. Finally, because power consumption is essential to the continuity of the service, a Software-Defined charging system testbed is proposed that uses inductive power transfer.
Economic regulation for multi tenant infrastructures
Large-scale computing infrastructures need scalable and efficient resource allocation mechanisms to fulfil the requirements of their participants and applications while the whole system is regulated to work efficiently. Computational markets provide efficient allocation mechanisms that aggregate information from multiple sources in large, dynamic, and complex systems where no single source has complete information. They have been proven successful in matching resource demand with resource supply in the presence of selfish, multi-objective, utility-optimizing users and selfish, profit-optimizing providers. However, global infrastructure metrics that may not directly affect participants of the computational market still need to be addressed: so-called economic externalities, such as load balancing or energy efficiency.
In this thesis, we point out the need to address these economic externalities, and we design and evaluate appropriate regulation mechanisms from different perspectives on top of existing economic models, to incorporate a wider range of objective metrics not otherwise considered. Our main contributions are threefold: first, we propose a taxation mechanism that addresses the resource congestion problem, effectively improving the balance of load among resources when correlated economic preferences are present; second,
we propose a game-theoretic model with complete information from which we derive an algorithm that aids resource providers in scaling resource supply up and down so that energy-related costs can be reduced; and third, we relax our previous assumption of complete information on the resource provider's side and design an incentive-compatible mechanism that encourages users to truthfully report their resource requirements, effectively assisting providers in making energy-efficient allocations while providing a dynamic allocation mechanism to users.
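The taxation idea in the first contribution can be made concrete with a toy model: charging each resource an extra "congestion tax" that grows with its utilization, so that price-sensitive users drift toward less loaded resources. This is a minimal illustrative sketch, not the thesis's actual mechanism; the linear tax function and field names are assumptions.

```python
# Hedged sketch of a congestion tax for load balancing: the effective
# price a user sees is the market price plus a charge that grows with
# the resource's current load, internalizing the load-balancing
# externality. The linear form of the tax is an illustrative choice.

def taxed_price(base_price, utilization, tax_rate=0.5):
    """Effective price of a resource; utilization is in [0, 1] and
    tax_rate scales how strongly congestion is penalized."""
    return base_price * (1.0 + tax_rate * utilization)

def pick_resource(resources):
    """A utility-maximizing user picks the cheapest taxed resource,
    which steers demand away from congested ones."""
    return min(resources, key=lambda r: taxed_price(r["price"], r["load"]))
```

With equal base prices, a lightly loaded resource always wins the comparison, so incoming demand spreads out instead of piling onto one machine.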
Towards Optimized Traffic Provisioning and Adaptive Cache Management for Content Delivery
Content delivery networks (CDNs) deploy hundreds of thousands of servers around the world to cache and serve trillions of user requests every day for a diverse set of content such as web pages, videos, software downloads and images. In this dissertation, we propose algorithms to provision traffic across cache servers and manage the content they host to achieve performance objectives such as maximizing the cache hit rate, minimizing the bandwidth cost of the network and minimizing the energy consumption of the servers.
Traffic provisioning is the process of determining the set of content domains hosted on the servers. We propose footprint descriptors that effectively capture the popularity characteristics and caching performance of different content classes. We also propose a footprint descriptor calculus that can be used to decide how content should be mixed or partitioned to efficiently provision traffic. To automate traffic provisioning, we propose optimization models to provision traffic such that the cache miss traffic from the network is minimized without overloading the servers. We find that such optimization models produce significant reductions in the cache miss traffic when compared with traffic provisioning algorithms in use today.
Cache management is the process of deciding how content is cached on the servers of a CDN. We propose TTL-based caching algorithms that provably achieve performance targets specified by a CDN operator. We show that the proposed algorithms converge to the target hit rate and target cache size with low error. Finally, we propose cache management algorithms that make the servers energy-efficient using disk shutdown. We find that disk shutdown is well suited to CDN servers and provides energy savings without significantly impacting cache hit rates.
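The mechanics of TTL-based caching can be sketched with a toy cache in which each object is served only while its time-to-live has not expired; the operator tunes the TTL to trade hit rate against cache size. This is a simplified illustration with a single fixed TTL, not the dissertation's adaptive algorithms, and all names are assumptions.

```python
# Minimal sketch of a TTL cache: get() is a hit only while the stored
# entry's expiry time lies in the future; expired entries are evicted
# lazily on access. A larger TTL raises the hit rate at the cost of
# holding more (possibly stale) objects.

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_time)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]           # hit: TTL still valid
        self.store.pop(key, None)     # miss: evict if expired
        return None
```

The `now` parameter makes the sketch deterministic for testing; a production cache would simply use the wall clock.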