32 research outputs found

    Extending Amdahl's Law for the Cloud Computing Era

    By extending Amdahl's law, software developers can weigh the pros and cons of moving their applications to the cloud. Ministerio de Economía y Competitividad TEC2012-37868-C04-02/0
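    The classical form of Amdahl's law that this work extends can be sketched as follows; the cloud-specific extension itself is the paper's contribution and is not reproduced here.

    ```python
    # Classical Amdahl's law: the speedup of a workload when a fraction p of it
    # can be parallelized across n processors. Only this baseline formula is
    # shown; the paper's cloud extension builds on it.

    def amdahl_speedup(p: float, n: int) -> float:
        """Speedup S = 1 / ((1 - p) + p / n), with parallel fraction p in [0, 1]."""
        if not 0.0 <= p <= 1.0:
            raise ValueError("parallel fraction p must lie in [0, 1]")
        return 1.0 / ((1.0 - p) + p / n)

    # Even with unlimited processors, speedup is capped at 1 / (1 - p):
    # for p = 0.9 the ceiling is 10x regardless of n.
    print(amdahl_speedup(0.9, 8))      # 8 cores
    print(amdahl_speedup(0.9, 10**6))  # approaches the 10x ceiling
    ```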

    The dark side of network functions virtualization: A perspective on the technological sustainability

    The Network Functions Virtualization (NFV) paradigm is undoubtedly a key technological advancement in the Information and Communication Technology (ICT) community, especially for the upcoming 5G network design. While most of its promise is quite straightforward, the implied reduction of the power consumption/carbon footprint is still debatable, and not in line with the energy efficiency perspective forecasted by the ETSI NFV working group (WG). In this paper, we provide an estimate of the possible future requirements of this upcoming technology when deployed according to the virtual Evolved Packet Core (vEPC) use case specified by the ETSI NFV WG. Our estimation is based on real performance levels, certified by independent third-party laboratories, and on datasheet values provided by existing commercial products for both the legacy and NFV network architectures, under different deployment scenarios. The obtained results show that a massive deployment of the current NFV technologies in the EPC may lead to a minimum increase of 106% in the carbon footprint/energy consumption with respect to the Business As Usual (BAU) network solutions. Moreover, these values tend to increase at a very high pace when the most suitable software/hardware combination is not applied, or when packet processing latency is taken into account.
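    The headline figure, a minimum 106% increase, is a relative comparison of NFV and BAU energy consumption. The arithmetic behind such a percentage can be sketched as follows; the wattage values below are hypothetical placeholders, not the paper's measurements.

    ```python
    # Relative increase of one energy figure over a baseline, as a percentage.
    # The input values here are illustrative placeholders only.

    def relative_increase_pct(e_bau: float, e_nfv: float) -> float:
        """Percentage increase of NFV energy consumption over the BAU baseline."""
        return (e_nfv - e_bau) / e_bau * 100.0

    # A hypothetical NFV deployment drawing 2.06x the baseline power would show
    # the paper's reported minimum increase of 106%.
    print(relative_increase_pct(100.0, 206.0))  # ~106.0
    ```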

    Energy Awareness and Scheduling in Mobile Devices and High End Computing

    As energy demands rise with growing economies and populations, there is increasing emphasis on sustainable supply, conservation, and efficient use of this vital resource. Even at a smaller scale, minimizing energy consumption remains compelling in embedded, mobile, and server systems such as handheld devices, robots, spacecraft, laptops, cluster servers, and sensors. This is due to the direct impact of constrained energy sources, such as battery size and weight, as well as cooling expenses in cluster-based systems to reduce heat dissipation. Energy management therefore plays a paramount role not only in hardware design but also in user-application, middleware, and operating-system design. At a higher level, datacenters are sprouting everywhere due to the exponential growth of Big Data in every aspect of human life; the buzzword these days is Cloud computing. This dissertation focuses on techniques, specifically algorithmic ones, to scale down energy needs whenever system performance can be relaxed. We examine the significance and relevance of this research and develop a methodology to study this phenomenon. Specifically, the research studies energy-aware resource-reservation algorithms that satisfy both performance needs and energy constraints. Many energy management schemes focus on a single resource dedicated to real-time or non-real-time processing. Unfortunately, in many practical systems a combination of hard and soft real-time periodic tasks, aperiodic real-time tasks, interactive tasks, and batch tasks must be supported, and each task may also require access to multiple resources. Therefore, this research tackles the NP-hard problem of providing timely and simultaneous access to multiple resources through practical abstractions and near-optimal heuristics aided by cooperative scheduling.
We provide an elegant EAS model that works across this spectrum using a run-profile-based approach to scheduling. We apply the model to significant applications such as BLAT and gene-sequence assembly in the Bioinformatics domain. We also provide a simulation that extends the model to cloud computing, answering "what if" scenario questions for consumers and operators of cloud resources: questions about deadlines, single versus distributed cluster use, and the impact of energy index and availability on revenue and ROI.
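    The "scale down energy whenever performance can be relaxed" idea can be illustrated with a simple DVFS-style model, where dynamic energy grows roughly with the square of frequency, so the lowest frequency that still meets the deadline is the energy-optimal choice. This is a common textbook model, not the dissertation's actual EAS algorithm.

    ```python
    # Illustrative DVFS-style frequency selection: pick the lowest available
    # frequency that still meets a deadline, since lower frequency means lower
    # dynamic energy per cycle (modeled here as E ~ f^2). Hypothetical model,
    # not the dissertation's scheduler.

    def pick_frequency(cycles: float, deadline_s: float, freqs_hz: list) -> float:
        """Return the lowest frequency finishing `cycles` within `deadline_s`."""
        for f in sorted(freqs_hz):
            if cycles / f <= deadline_s:
                return f
        raise ValueError("deadline cannot be met at any available frequency")

    def relative_energy(f: float, f_max: float) -> float:
        """Dynamic energy relative to running at f_max, under the E ~ f^2 model."""
        return (f / f_max) ** 2

    freqs = [0.8e9, 1.2e9, 2.0e9]
    f = pick_frequency(cycles=1.0e9, deadline_s=1.0, freqs_hz=freqs)
    print(f, relative_energy(f, max(freqs)))  # 1.2 GHz meets the 1 s deadline
    ```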

    Estudio y evaluación de plataformas de distribución de cómputo intensivo sobre sistemas externos para sistemas empotrados (Study and evaluation of platforms for distributing intensive computation to external systems for embedded systems)

    Nowadays, the capabilities of embedded systems are constantly increasing, giving them a wide range of applications. However, there is a plethora of computationally intensive tasks that, because of hardware limitations, these systems are unable to perform, and innumerable tasks with strict deadlines to meet (e.g. in Real-Time Systems). Because of this, the use of external platforms with sufficient computing power is becoming widespread, especially thanks to the advent of Cloud Computing in recent years. Its use for knowledge sharing and information storage has been demonstrated innumerable times in the literature, but its core properties, such as dynamic scalability, energy efficiency, and seemingly unlimited resources, also make it the perfect candidate for computation offloading. In this sense, this thesis demonstrates this fact by applying Cloud Computing to the area of Robotics (Cloud Robotics). This is done by building a 3D Point Cloud Extraction Platform, where robots can offload the complex stereo-vision task of obtaining a 3D Point Cloud (3DPC) from stereo frames. The platform was then applied to a typical robotics application: a Navigation Assistant. Using this case, the core challenges of computation offloading were thoroughly analyzed: the role of communication technologies (with special focus on 802.11ac), the role of offloading models, how to overcome communication delays by using predictive time corrections, and to what extent offloading is a better choice than processing on board. Furthermore, real navigation tests were performed, showing that better navigation results are obtained when using computation offloading. This experience was the starting point for the final research of this thesis: an extension of Amdahl's Law for Cloud Computing.
This extension provides a better understanding of the factors inherent to computation offloading, with special focus on time and energy speedups, and helps to make some predictions regarding the future of Cloud Computing and computation offloading.
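    A common break-even model for computation offloading (an assumption here, not necessarily the thesis's exact formulation) compares on-board execution time against the cost of transferring the input over the network plus computing remotely.

    ```python
    # Break-even model for offloading: offloading pays off in time when local
    # execution takes longer than transfer + remote execution + link latency.
    # This is a generic model, not the thesis's extended Amdahl formulation.

    def offload_speedup(t_local: float, data_bits: float, bandwidth_bps: float,
                        t_remote: float, rtt_s: float = 0.0) -> float:
        """Time speedup of offloading: local time over (transfer + remote + latency)."""
        t_offload = data_bits / bandwidth_bps + t_remote + rtt_s
        return t_local / t_offload

    # Hypothetical example: a stereo-vision frame taking 2.0 s on-board versus
    # sending 8 Mbit over a 100 Mbit/s 802.11ac link to a server needing 0.2 s.
    s = offload_speedup(t_local=2.0, data_bits=8e6, bandwidth_bps=100e6, t_remote=0.2)
    print(s)  # a value > 1 means offloading is faster than on-board processing
    ```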

    SCMFTS: Scalable and Distributed Complexity Measures and Features for Univariate and Multivariate Time Series in Big Data Environments

    This research has been partially funded by the following grants: TIN2016-81113-R from the Spanish Ministry of Economy and Competitiveness, P12-TIC-2985 and P18-TP-5168 from the Andalusian Regional Government, Spain, and the EU Commission with FEDER funds. Francisco J. Baldan holds the FPI grant BES-2017-080137 from the Spanish Ministry of Economy and Competitiveness. D. Peralta is a Postdoctoral Fellow of the Research Foundation of Flanders (170303/12X1619N). Y. Saeys is an ISAC Marylou Ingram Scholar. Time series data are becoming increasingly important due to the interconnectedness of the world. Classical problems, which are getting bigger and bigger, require more and more resources for their processing, and Big Data technologies offer many solutions. Although the principal algorithms for traditional vector-based problems are available in Big Data environments, the lack of tools for time series processing in these environments needs to be addressed. In this work, we propose a scalable and distributed time series transformation for Big Data environments based on well-known time series features (SCMFTS), which allows practitioners to apply traditional vector-based algorithms to time series problems. The proposed transformation, along with the algorithms available in Spark, improved on the state-of-the-art results for the Wearable Stress and Affect Detection dataset, the largest publicly available multivariate time series dataset in the University of California Irvine (UCI) Machine Learning Repository. In addition, SCMFTS showed a linear relationship between its runtime and the number of processed time series, demonstrating linearly scalable behavior, which is mandatory in Big Data environments.
SCMFTS has been implemented in the Scala programming language for the Apache Spark framework, and the code is publicly available.
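    The core idea of the transformation is mapping each time series, whatever its length, onto a fixed-length vector of well-known features so that ordinary vector-based learners apply. The paper's implementation is Scala on Apache Spark; the sketch below only illustrates that idea in plain Python with a small, hypothetical feature set.

    ```python
    # Map a time series to a fixed-length feature vector (mean, standard
    # deviation, lag-1 autocorrelation). A toy illustration of the
    # feature-vector transformation idea, not the SCMFTS feature set.
    import math

    def ts_features(series: list) -> list:
        n = len(series)
        mean = sum(series) / n
        var = sum((x - mean) ** 2 for x in series) / n
        std = math.sqrt(var)
        # lag-1 autocorrelation (defined as 0.0 for a constant series)
        if var == 0.0:
            ac1 = 0.0
        else:
            ac1 = sum((series[i] - mean) * (series[i + 1] - mean)
                      for i in range(n - 1)) / (n * var)
        return [mean, std, ac1]

    # Every series, regardless of length, becomes the same fixed-length vector:
    print(ts_features([1.0, 2.0, 3.0, 4.0]))
    ```

    Once every series is reduced to such a vector, standard classifiers and clustering algorithms, including those shipped with Spark, can be applied directly.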

    GPUs as Storage System Accelerators

    Massively multicore processors, such as Graphics Processing Units (GPUs), provide, at a comparable price, an order of magnitude higher peak performance than traditional CPUs. This drop in the cost of computation, as with any order-of-magnitude drop in the cost per unit of performance for a class of system components, creates the opportunity to redesign systems and to explore new ways to engineer them to recalibrate the cost-to-performance relation. This project explores the feasibility of harnessing GPUs' computational power to improve the performance, reliability, or security of distributed storage systems. In this context, we present the design of a storage system prototype that uses GPU offloading to accelerate a number of computationally intensive primitives based on hashing, and introduce techniques to efficiently leverage the processing power of GPUs. We evaluate the performance of this prototype under two configurations: as a content addressable storage system that facilitates online similarity detection between successive versions of the same file, and as a traditional system that uses hashing to preserve data integrity. Further, we evaluate the impact of offloading to the GPU on competing applications' performance. Our results show that this technique can bring tangible performance gains without negatively impacting the performance of concurrently running applications. Comment: IEEE Transactions on Parallel and Distributed Systems, 201
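    The hashing primitives at the heart of the prototype serve both content addressing and similarity detection between successive versions of a file. A minimal CPU-only sketch of that idea follows (the paper's contribution is offloading the hashing itself to the GPU; the chunk size and helpers here are illustrative assumptions):

    ```python
    # Content-addressable chunk hashing: split data into fixed-size chunks,
    # hash each chunk, and count the chunks two file versions share. Shared
    # chunks need not be stored or transferred again. CPU-only illustration;
    # the paper offloads the hash computation to the GPU.
    import hashlib

    CHUNK = 4  # tiny chunk size for illustration; real systems use KB-sized chunks

    def chunk_hashes(data: bytes, size: int = CHUNK) -> list:
        return [hashlib.sha1(data[i:i + size]).hexdigest()
                for i in range(0, len(data), size)]

    def shared_chunks(a: bytes, b: bytes) -> int:
        """Number of chunk hashes two versions have in common (set semantics)."""
        return len(set(chunk_hashes(a)) & set(chunk_hashes(b)))

    v1 = b"abcdefgh"   # two 4-byte chunks
    v2 = b"abcdXYZW"   # first chunk unchanged, second rewritten
    print(shared_chunks(v1, v2))  # 1: only the changed chunk must be stored anew
    ```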