
    Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems

    The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the latter referring to the capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as smart scenarios, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, supported by the unstoppable evolution of the technology. As edge devices become richer in functionality and smarter, embedding capacities such as storage and processing as well as new functionalities such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, and the open research challenges it raises, making the case for the real need for coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, and conventional clouds.
    Peer reviewed. Postprint (author's final draft).
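
    The coordinated F2C management argued for above can be pictured with a minimal placement sketch (ours, not the article's architecture): a coordinator offers each task to the resource layers in order of proximity, from edge through fog to cloud, and places it on the nearest layer that meets its capacity and latency constraints. All names, fields, and numbers below are hypothetical.

```python
# Toy fog-to-cloud (F2C) placement sketch; illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str          # hypothetical tier: "edge", "fog", or "cloud"
    latency_ms: float  # round-trip latency to reach this layer
    free_cpu: float    # remaining compute capacity (arbitrary units)

@dataclass
class Task:
    cpu_demand: float
    max_latency_ms: float

def place(task: Task, layers: list[Layer]) -> Optional[Layer]:
    """Place the task on the lowest-latency layer that satisfies it."""
    for layer in sorted(layers, key=lambda l: l.latency_ms):
        if layer.free_cpu >= task.cpu_demand and layer.latency_ms <= task.max_latency_ms:
            layer.free_cpu -= task.cpu_demand  # coordinator tracks remaining capacity
            return layer
    return None  # no layer can host the task within its constraints

layers = [Layer("edge", 2, 1.0), Layer("fog", 10, 4.0), Layer("cloud", 80, 1000.0)]
print(place(Task(cpu_demand=0.5, max_latency_ms=5), layers).name)   # -> edge
print(place(Task(cpu_demand=2.0, max_latency_ms=50), layers).name)  # -> fog
```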

    Pornography on the Dean's PC: An Ethics and Computing Case Study

    A real case study in which a technician discovers pornography on an administrator's personal computer is developed for use in teaching ethics and computing. The case highlights issues of employee rights and responsibilities in using employer-owned computing resources, competing responsibilities in professional codes of ethics, claims about rights to privacy and free speech, and ethical decision-making. Analysis of the case emphasizes the need for strong critical thinking skills.

    Icebergs in the Clouds: the Other Risks of Cloud Computing

    Cloud computing is appealing from management and efficiency perspectives, but it brings risks both known and unknown. Well-known and hotly debated information security risks, due for example to software vulnerabilities, insider attacks, and side channels, may be only the "tip of the iceberg." As diverse, independently developed cloud services share ever more fluidly and aggressively multiplexed hardware resource pools, unpredictable interactions between load balancing and other reactive mechanisms could lead to dynamic instabilities or "meltdowns." Non-transparent layering structures, where alternative cloud services may appear independent but share deep, hidden resource dependencies, may create unexpected and potentially catastrophic failure correlations, reminiscent of financial industry crashes. Finally, cloud computing exacerbates already-difficult digital preservation challenges, because only the provider of a cloud-based application or service can archive a "live," functional copy of a cloud artifact and its data for long-term cultural preservation. This paper explores these largely unrecognized risks, making the case that we should study them before our socioeconomic fabric becomes inextricably dependent on a convenient but potentially unstable computing model.
    Comment: 6 pages, 3 figures.
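
    The "meltdown" risk described above, where reactive mechanisms acting on shared resources destabilise one another, can be made concrete with a toy control-loop simulation (our illustration, not a model from the paper): an autoscaler that sizes capacity from a delayed utilization metric settles when its gain is modest, but oscillates when the gain is high and the metric is stale. All parameters are hypothetical.

```python
# Toy autoscaler with delayed feedback; illustrative only.
def simulate(gain: float, delay: int, steps: int = 60) -> list[float]:
    demand, target = 100.0, 0.7          # constant load, desired utilization
    capacity = [50.0] * (delay + 1)      # capacity history, in "servers"
    for t in range(delay, delay + steps):
        stale_util = demand / capacity[t - delay]          # controller sees old data
        adjustment = gain * (stale_util - target) * capacity[t]
        capacity.append(max(1.0, capacity[t] + adjustment))
    return capacity

print(round(simulate(gain=0.2, delay=1)[-1]))  # ~143: settles near demand / target
tail = simulate(gain=1.5, delay=3)[-10:]
print(round(min(tail)), round(max(tail)))      # capacity keeps swinging between extremes
```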

    Competing in the Clouds: A Strategic Challenge for ITSP Ltd.

    By 2010, cloud computing had become established as a new model of IT provisioning for service providers. New market players and businesses emerged, threatening the business models of established market players. This teaching case explores the challenges arising through the impact of the new cloud computing technology on an established, multinational IT service provider called ITSP. Should the incumbent vendors adopt cloud computing offerings? And, if so, what form should those offerings take? The teaching case focuses on the strategic dimensions of technological developments, their threats and opportunities. It requires strategic decision making and forecasting under high uncertainty. The critical question is whether cloud computing is a disruptive technology or simply an alternative channel to supply computing resources over the Internet. The case challenges students to assess this new technology and plan ITSP’s responses.

    Algorithms to Compute the Lyndon Array

    We first describe three algorithms for computing the Lyndon array that have been suggested in the literature, but for which no structured exposition has been given. Two of these algorithms execute in quadratic time in the worst case; the third achieves linear time, but at the expense of prior computation of both the suffix array and the inverse suffix array of x. We then go on to describe two variants of a new algorithm that avoids prior computation of global data structures and executes in worst-case n log n time. Experimental evidence suggests that all but one of these five algorithms require only linear execution time in practice, with the two new algorithms faster by a small factor. We conjecture that there exists a fast and worst-case linear-time algorithm to compute the Lyndon array that is also elementary (making no use of global data structures such as the suffix array).
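
    For context, the Lyndon array lam of a string x stores at each position i the length of the longest Lyndon word (a string strictly smaller than all of its proper suffixes) starting at i. The sketch below is an elementary way to compute it that is quadratic in the worst case, obtained by running the scan used in Duval's factorization from every position; it is only an illustration, not one of the algorithms analysed in the paper.

```python
def lyndon_array(x: str) -> list[int]:
    """lam[i] = length of the longest Lyndon word starting at position i.

    Elementary O(n^2) worst-case sketch: from each position, run the scan
    Duval's factorization uses to find the first (longest) Lyndon prefix.
    """
    n = len(x)
    lam = [0] * n
    for i in range(n):
        j, k = i + 1, i
        while j < n and x[k] <= x[j]:
            k = i if x[k] < x[j] else k + 1  # restart on strict increase, else extend
            j += 1
        lam[i] = j - k
    return lam

print(lyndon_array("abaab"))  # [2, 1, 3, 2, 1]
```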

    Involutions of polynomially parametrized surfaces

    We provide an algorithm for detecting the involutions leaving a surface defined by a polynomial parametrization invariant. As a consequence, the symmetry axes, symmetry planes and symmetry center of the surface, if any, can be determined directly from the parametrization, without computing or making use of the implicit representation. The algorithm is based on the fact, proven in the paper, that any involution of the surface comes from an involution of the parameter space (the real plane, in our case); therefore, by determining the latter, the former can be found. The algorithm has been implemented in the computer algebra system Maple 17. Evidence of its efficiency for moderate degrees, examples, and a complexity analysis are also given.
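
    The central fact quoted above, that every involution of such a surface is induced by an involution of the parameter plane, can be illustrated on a toy example (ours; the paper's implementation is in Maple 17, while the check below uses Python's sympy): the paraboloid S(u, v) = (u, v, u^2 + v^2) is invariant under the reflection (x, y, z) -> (-x, y, z), and that ambient involution corresponds to the parameter-plane involution (u, v) -> (-u, v).

```python
import sympy as sp

u, v = sp.symbols("u v", real=True)

# Toy polynomial parametrization: an elliptic paraboloid.
S = sp.Matrix([u, v, u**2 + v**2])

# Candidate involution of ambient space: reflection in the plane x = 0.
R = sp.Matrix([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])

# Corresponding involution of the parameter plane: (u, v) -> (-u, v).
S_reparam = S.subs({u: -u, v: v}, simultaneous=True)

# Invariance check: reflecting the surface equals reparametrizing it by the
# parameter-plane involution, so the reflection maps the surface to itself.
print(sp.simplify(R * S - S_reparam))  # zero column vector
print(R * R == sp.eye(3))              # True: R is an involution
```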

    Solving Large-Scale Markov Decision Processes on Low-Power Heterogeneous Platforms

    Markov Decision Processes (MDPs) provide a framework for a machine to act autonomously and intelligently in environments where the effects of its actions are not deterministic. MDPs have numerous applications; we focus on practical decision-making applications, such as autonomous driving and service robotics, that have to run on mobile platforms with scarce computing and power resources. In our study, we use Value Iteration to solve MDPs, a core method of the paradigm for finding optimal sequences of actions, which is well known for its high computational cost. In order to solve these computationally complex problems efficiently on platforms with stringent power-consumption constraints, high-performance accelerator hardware and parallelised software come to the rescue. We introduce a generalisable approach to implementing practical decision-making applications, such as autonomous driving, on mobile and embedded low-power heterogeneous SoC platforms that integrate an accelerator (GPU) with a multicore CPU. We evaluate three scheduling strategies that enable concurrent execution and efficient use of resources on a variety of SoCs embedding a multicore CPU and an integrated GPU, namely Oracle, Dynamic, and LogFit. We compare these strategies for solving an MDP modelling the use case of autonomous robot navigation in indoor environments on four representative platforms for mobile decision-making applications, with power consumption ranging from 4 to 65 Watts. We provide a rigorous analysis of the results to better understand their behaviour depending on the MDP size and the computing platform. Our experimental results show that by using CPU-GPU heterogeneous strategies, the computation time and energy required are considerably reduced with respect to a multicore-only implementation, regardless of the computing platform.
    Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. This work was partially supported by the Spanish project TIN 2016-80920-R.
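
    Value Iteration itself is compact: it repeatedly applies the Bellman optimality backup V(s) <- max_a [ R(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ] until the value function stops changing. The NumPy sketch below is a generic, single-threaded version run on random data, not the authors' CPU-GPU implementation; it shows the core computation that the paper parallelises and schedules across the multicore CPU and the integrated GPU.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6, max_iters=10_000):
    """Solve an MDP by iterating the Bellman optimality backup.

    P: transition probabilities, shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R: expected immediate rewards, shape (A, S)
    Returns the optimal value function (S,) and a greedy policy (S,).
    """
    V = np.zeros(P.shape[1])
    for _ in range(max_iters):
        Q = R + gamma * (P @ V)          # Q[a, s] = backup for action a in state s
        V_new = Q.max(axis=0)            # greedy improvement over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Tiny random MDP just to exercise the function.
rng = np.random.default_rng(0)
A, S = 3, 100
P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)  # make rows stochastic
R = rng.random((A, S))
V, policy = value_iteration(P, R)
print(V[:3], policy[:3])
```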