
    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to get the best of both on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to which services are essential to make its usage easier. Moreover, the discussion on the right pricing and contractual models to fit both small and large users is relevant for the sustainability of HPC clouds. This paper presents a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-growing wave of new HPC applications coming from big data and artificial intelligence. Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR).
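
    A minimal sketch (not from the paper) of the hybrid placement policy the abstract alludes to, in which steady and sensitive workloads stay on-premise while peak demand bursts to pay-as-you-go cloud resources; the capacity figure, function names, and job model are illustrative assumptions.

        # Hybrid "burst to cloud" placement: a hedged illustration, not the paper's method.
        ON_PREM_CAPACITY = 128  # cores available in the local cluster (assumed figure)

        def place_job(cores_requested, cores_in_use, sensitive):
            """Return 'on-premise' or 'cloud' for a single job."""
            if sensitive:
                return "on-premise"   # sensitive workloads never leave the local cluster
            if cores_in_use + cores_requested <= ON_PREM_CAPACITY:
                return "on-premise"   # steady load fits on local resources
            return "cloud"            # peak demand bursts to pay-as-you-go resources

        if __name__ == "__main__":
            print(place_job(16, 100, sensitive=False))   # on-premise
            print(place_job(64, 100, sensitive=False))   # cloud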

    Visual Localisation of Mobile Devices in an Indoor Environment under Network Delay Conditions

    Recent progress in home automation and service robotics environments has highlighted the need to develop interoperability mechanisms that allow standard communication between the two systems. During the development of the DHCompliant protocol, the problem of locating mobile devices in an indoor environment has been investigated. The communication of the device with the location service has been studied to compare the time delay introduced by web services against that of raw sockets. Because real-time location systems depend on timely data, a basic interoperability tool such as web services can be ineffective in this scenario due to the delays added by service invocation. This paper focuses on introducing a web service that resolves a coordinates request without any significant delay compared with sockets.
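
    A minimal, self-contained sketch of the kind of comparison the abstract describes: answering a coordinates request over a raw TCP socket versus over an HTTP web service and timing both round trips on the loopback interface. It is not the DHCompliant implementation; the ports, payload format, and handler names are assumptions.

        import json
        import socket
        import threading
        import time
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        COORDS = json.dumps({"x": 12.4, "y": 3.7, "z": 0.0}).encode()  # assumed payload

        def socket_server(port):
            """One-shot TCP server that replies to a coordinates request."""
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("127.0.0.1", port))
            srv.listen(1)
            conn, _ = srv.accept()
            conn.recv(64)            # read the request
            conn.sendall(COORDS)     # reply with the current coordinates
            conn.close()
            srv.close()

        class CoordsHandler(BaseHTTPRequestHandler):
            """HTTP 'web service' that serves the same coordinates."""
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(COORDS)
            def log_message(self, *args):    # silence request logging
                pass

        def time_socket(port=9100):
            threading.Thread(target=socket_server, args=(port,), daemon=True).start()
            time.sleep(0.2)                              # let the server start
            start = time.perf_counter()
            cli = socket.create_connection(("127.0.0.1", port))
            cli.sendall(b"GET_COORDS")
            cli.recv(1024)
            cli.close()
            return time.perf_counter() - start

        def time_web_service(port=9101):
            httpd = HTTPServer(("127.0.0.1", port), CoordsHandler)
            threading.Thread(target=httpd.serve_forever, daemon=True).start()
            time.sleep(0.2)                              # let the server start
            start = time.perf_counter()
            urlopen(f"http://127.0.0.1:{port}/coords").read()
            delay = time.perf_counter() - start
            httpd.shutdown()
            return delay

        if __name__ == "__main__":
            print(f"raw socket round trip : {time_socket() * 1000:.2f} ms")
            print(f"web service round trip: {time_web_service() * 1000:.2f} ms")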

    Social-aware hybrid mobile offloading

    Mobile offloading is a promising technique to supplement the constrained resources of a mobile device. By offloading a computational task, a device can save energy and increase the performance of its mobile applications. Unfortunately, in existing offloading systems, the opportunistic moments to offload a task are often sporadic and short-lived. We overcome this problem by proposing a social-aware hybrid offloading system (HyMobi), which widens the spectrum of offloading opportunities. Since a mobile device is co-located with at least one source of network infrastructure throughout the day, merging cloudlet, device-to-device, and remote cloud offloading increases the availability of offloading support. Integrating these systems is not trivial: to sustain such coupling, a strong social catalyst is required to foster user participation and collaboration. We therefore equip our system with an incentive mechanism based on credit and reputation, which exploits users' social ties to create offloading communities. We evaluate our system under controlled and in-the-wild scenarios. With credit, a device can create opportunistic moments based on the user's present needs. As a result, we extend the widely used opportunistic model with a long-term perspective that significantly improves the offloading process and encourages unsupervised offloading adoption in the wild.
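
    A minimal sketch (my own illustration, not the HyMobi code) of a hybrid offloading decision with a credit-based incentive: prefer a nearby cloudlet, then a device-to-device peer, then the remote cloud, and transfer credit from the requester to the helping peer. The user names, costs, and balances are assumptions.

        CREDIT = {"alice": 5, "bob": 2}           # per-user credit balance (assumed)

        def choose_target(cloudlet_up, peers, cloud_up):
            """Return the first available offloading target in preference order."""
            if cloudlet_up:
                return "cloudlet"
            if peers:                             # device-to-device neighbours in range
                return f"peer:{peers[0]}"
            if cloud_up:
                return "remote-cloud"
            return "local"                        # no opportunity: run on the device

        def offload(user, task_cost, **availability):
            target = choose_target(**availability)
            if target.startswith("peer:"):
                peer = target.split(":", 1)[1]
                if CREDIT[user] < task_cost:
                    return "local"                # not enough credit to ask a peer
                CREDIT[user] -= task_cost         # requester pays ...
                CREDIT[peer] += task_cost         # ... and the helper earns credit
            return target

        if __name__ == "__main__":
            print(offload("alice", 2, cloudlet_up=False, peers=["bob"], cloud_up=True))
            print(CREDIT)   # alice paid bob for the device-to-device offload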

    Network-aware Evaluation Environment for Reputation Systems

    Parties of reputation systems rate each other and use the ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the most suitable model and to determine its right initial configuration. This calls for an engineering approach to describing, implementing, and evaluating reputation systems that takes into account specific aspects of both the reputation systems and the networked environment in which they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate the effectiveness of NEVER, we analyse reputation models based on the beta distribution and on maximum likelihood estimation.
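
    Since the abstract mentions reputation models based on the beta distribution, a small sketch of the standard beta reputation score may help: with r positive and s negative ratings, the score is the expected value of a Beta(r + 1, s + 1) distribution. This is the textbook model, not necessarily NEVER's exact configuration.

        def beta_reputation(positive, negative):
            """Expected value of Beta(positive + 1, negative + 1)."""
            return (positive + 1) / (positive + negative + 2)

        if __name__ == "__main__":
            print(beta_reputation(0, 0))    # 0.5   -> no evidence, neutral score
            print(beta_reputation(8, 2))    # 0.75  -> mostly positive interactions
            print(beta_reputation(1, 9))    # ~0.17 -> mostly negative interactions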

    Measuring and Managing Answer Quality for Online Data-Intensive Services

    Online data-intensive services parallelize query execution across distributed software components. Interactive response time is a priority, so online query executions return answers without waiting for slow-running components to finish. However, data from these slow components could lead to better answers. We propose Ubora, an approach to measure the effect of slow-running components on the quality of answers. Ubora randomly samples online queries and executes them twice. The first execution elides data from slow components and provides fast online answers; the second execution waits for all components to complete. Ubora uses memoization to speed up these mature executions by replaying the network messages exchanged between components. Our systems-level implementation works for a wide range of platforms, including Hadoop/Yarn, Apache Lucene, the EasyRec Recommendation Engine, and the OpenEphyra question answering system. Ubora computes answer quality much faster than competing approaches that do not use memoization. With Ubora, we show that answer quality can and should be used to guide online admission control. Our adaptive controller processed 37% more queries than a competing controller guided by the rate of timeouts. Comment: Technical Report.
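
    A minimal sketch (not the Ubora implementation) of the dual-execution idea the abstract describes: sample a query, answer it online while eliding slow components, then run a mature execution that waits for everything and replays memoized component responses, and compare the two answers. Component names, latencies, the deadline, and the quality metric are illustrative assumptions.

        import time

        DEADLINE = 0.05    # seconds the online answer is allowed to wait (assumed)
        COMPONENTS = {"index-a": 0.01, "index-b": 0.02, "slow-index": 0.2}

        def component(name, latency, query):
            """Stand-in for a distributed software component."""
            time.sleep(latency)
            return f"{name}-result-for-{query}"

        def execute(query, wait_for_all, memo):
            """Collect component results; memoize every response for later replay."""
            results = []
            for name, latency in COMPONENTS.items():
                if not wait_for_all and latency > DEADLINE:
                    continue                          # online path elides slow components
                if (name, query) not in memo:
                    memo[(name, query)] = component(name, latency, query)
                results.append(memo[(name, query)])   # replay is a dictionary lookup
            return results

        def answer_quality(online, mature):
            """Fraction of the complete answer already present in the online answer."""
            return len(set(online) & set(mature)) / len(mature)

        if __name__ == "__main__":
            memo, query = {}, "q42"
            online = execute(query, wait_for_all=False, memo=memo)   # fast, partial
            mature = execute(query, wait_for_all=True, memo=memo)    # complete, replayed
            print(f"answer quality: {answer_quality(online, mature):.2f}")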

    Vision: a Lightweight Computing Model for Fine-Grained Cloud Computing

    Cloud systems differ fundamentally in how they offer and charge for resources. While some systems provide a generic programming abstraction at coarse granularity, e.g., a virtual machine rented by the hour, others offer specialized abstractions with fine-grained, per-request accounting. In this paper, we explore Tasklets, an abstraction for instances of short-duration, generic computations that migrate from a host requiring computation to hosts willing to provide it. Tasklets allow fine-grained accounting of resource usage, enabling us to build infrastructure that supports trading computing resources according to various economic models. This computation model is especially attractive in settings where mobile devices can use cloud resources to mitigate local resource constraints.
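
    A minimal sketch (my own illustration, not the Tasklet middleware) of the abstraction's core idea: a short-lived, self-contained computation is shipped to a providing host, executed there, and metered per request so its resource usage can be priced. The pricing constant and class layout are assumptions.

        import time

        PRICE_PER_CPU_SECOND = 0.0001   # illustrative pay-per-use rate (assumed)

        class Tasklet:
            """A short, generic computation plus the inputs it needs."""
            def __init__(self, func, *args):
                self.func, self.args = func, args

        def execute_on_provider(tasklet):
            """Run a tasklet on the providing host and meter its CPU time."""
            start = time.process_time()
            result = tasklet.func(*tasklet.args)
            cpu_seconds = time.process_time() - start
            return result, cpu_seconds * PRICE_PER_CPU_SECOND

        if __name__ == "__main__":
            # A mobile device offloads a small numeric job to a provider.
            job = Tasklet(lambda n: sum(i * i for i in range(n)), 1_000_000)
            result, charge = execute_on_provider(job)
            print(f"result={result}, charged=${charge:.6f}")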