4 research outputs found

    PriorityNet App: A mobile application for establishing priorities in the context of 5G ultra-dense networks

    The devices and implementations of 5G networks are continuously improving, and people will probably use them daily in the near future. 5G networks will support ultra-dense networks. In the literature, several works apply 5G networks to smart cities and smart homes. One of the most common features of these works is the use of priorities in tasks such as managing electrical consumption in houses, waste collection in cities, or pathfinding in self-driving cars. Proper priority management ensures that urgent service requests are attended to quickly. However, to the best of our knowledge, the literature lacks appropriate mechanisms for considering users' priorities in 5G ultra-dense networks. In this context, we propose a mobile application that allows citizens to request smart city services with different priority levels. The experiments showed the high performance of the app and its scalability as the priority list size increases. The app obtained a usability score of 72.3% on the System Usability Scale and 82.9% in the ease-of-use dimension of the Usefulness, Satisfaction, and Ease of use (USE) questionnaire.
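    A minimal sketch of the kind of priority-ordered request handling the abstract describes, assuming a simple priority queue of citizen service requests. The class and method names (ServiceRequest, RequestQueue, submit, next_request) are illustrative assumptions, not the PriorityNet App's actual API.

import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class ServiceRequest:
    priority: int                        # lower value = more urgent
    seq: int                             # tie-breaker: FIFO within the same priority
    citizen_id: str = field(compare=False)
    service: str = field(compare=False)

class RequestQueue:
    """Priority-ordered queue of smart-city service requests (illustrative only)."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, citizen_id: str, service: str, priority: int) -> None:
        # Urgent requests (small priority values) surface first.
        heapq.heappush(self._heap,
                       ServiceRequest(priority, next(self._counter), citizen_id, service))

    def next_request(self) -> ServiceRequest:
        return heapq.heappop(self._heap)

queue = RequestQueue()
queue.submit("citizen-17", "waste collection", priority=3)
queue.submit("citizen-42", "medical assistance", priority=1)
print(queue.next_request().service)      # -> "medical assistance"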

    Efficient Scalable Computing through Flexible Applications and Adaptive Workloads

    In this paper we introduce a methodology for dynamic job reconfiguration driven by the programming model runtime in collaboration with the global resource manager. We improve system throughput by exploiting malleability techniques (in terms of the number of MPI ranks) through the reallocation of resources assigned to a job during its execution. In our proposal, the OmpSs runtime reconfigures the number of MPI ranks during the execution of an application in cooperation with the Slurm workload manager. In addition, we take advantage of OmpSs offload semantics to allow application developers to deal with data redistribution. By combining these elements, a job is able to expand itself in order to exploit idle nodes, or to shrink so that other queued jobs can be initiated. This novel approach adapts the system workload in order to increase throughput and make smarter use of the underlying resources. Our experiments demonstrate that this approach can reduce the total execution time of a practical workload by more than 40% while reducing the amount of resources by 30%. This work is supported by projects TIN2014-53495-R and TIN2015-65316-P from MINECO and FEDER. Antonio J. Peña is cofinanced by MINECO under Juan de la Cierva fellowship number IJCI-2015-23266. Special thanks to José I. Aliaga for the conjugate gradient code. Peer reviewed. Postprint (author's final draft).
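    A conceptual sketch of the expand/shrink decision that a malleable job could take, under the policy the abstract describes (grow into idle nodes, shrink when other jobs are queued). The ClusterState fields and function name are assumptions for illustration; the paper's actual mechanism lives inside the OmpSs runtime and Slurm and is not reproduced here.

from dataclasses import dataclass

@dataclass
class ClusterState:
    idle_nodes: int          # nodes currently unassigned
    queued_jobs: int         # jobs waiting in the scheduler queue
    job_nodes: int           # nodes currently held by this malleable job
    min_nodes: int           # smallest allocation the job can run on

def reconfiguration_decision(state: ClusterState) -> int:
    """Return the new node count for the job: grow into idle nodes,
    or shrink so that queued jobs can start."""
    if state.idle_nodes > 0 and state.queued_jobs == 0:
        # No one is waiting: expand to exploit idle resources.
        return state.job_nodes + state.idle_nodes
    if state.queued_jobs > 0 and state.job_nodes > state.min_nodes:
        # Release nodes (down to the minimum) so queued jobs can be launched.
        return max(state.min_nodes, state.job_nodes // 2)
    return state.job_nodes   # keep the current allocation

print(reconfiguration_decision(ClusterState(idle_nodes=4, queued_jobs=0, job_nodes=8, min_nodes=2)))  # 12
print(reconfiguration_decision(ClusterState(idle_nodes=0, queued_jobs=3, job_nodes=8, min_nodes=2)))  # 4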

    Combining malleability and I/O control mechanisms to enhance the execution of multiple applications

    This work presents a common framework that integrates CLARISSE, a cross-layer runtime for the I/O software stack, and FlexMPI, a runtime that provides dynamic load balancing and malleability capabilities for MPI applications. This integration is performed both at the application level, as libraries executed within the application, and at the central-controller level, as external components that manage the execution of different applications. We show that cooperation between both runtimes provides important benefits for overall system performance: first, by means of monitoring, the CPU, communication, and I/O performance of all executing applications is collected, providing a holistic view of the complete platform utilization. Second, we introduce a coordinated way of using the CLARISSE and FlexMPI control mechanisms, based on two different optimization strategies, with the aim of improving both application I/O and overall system performance. Finally, we present a detailed description of this proposal, as well as an empirical evaluation of the framework on a cluster showing significant performance improvements at both the application and platform-wide levels. We demonstrate that with this proposal the overall I/O time of an application can be reduced by up to 49% and the aggregated FLOPS of all running applications can be increased by 10% with respect to the baseline case. (C) 2018 Elsevier Inc. All rights reserved. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work has been partially supported by the Spanish "Ministerio de Economia y Competitividad" under project grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms" and by the EU under COST Program Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
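    An illustrative sketch of a central controller that combines cross-layer monitoring with two control actions, loosely mirroring the CLARISSE/FlexMPI coordination described above. The metric names, thresholds, and returned action strings are assumptions; the framework's real interfaces are not shown in the abstract.

from dataclasses import dataclass

@dataclass
class AppMetrics:
    name: str
    cpu_util: float      # fraction of time spent in computation
    comm_util: float     # fraction of time spent in MPI communication
    io_util: float       # fraction of time spent in I/O

def choose_action(app: AppMetrics) -> str:
    """Pick a control mechanism per application from its monitored profile."""
    if app.io_util > 0.4:
        # I/O-bound: hand the application to the I/O coordination mechanism.
        return f"schedule_io({app.name})"
    if app.comm_util > 0.4:
        # Communication-bound or imbalanced: rebalance/reconfigure via malleability.
        return f"reconfigure_ranks({app.name})"
    return f"no_action({app.name})"

metrics = [
    AppMetrics("app-A", cpu_util=0.35, comm_util=0.15, io_util=0.50),
    AppMetrics("app-B", cpu_util=0.40, comm_util=0.55, io_util=0.05),
]
for m in metrics:
    print(choose_action(m))   # schedule_io(app-A), then reconfigure_ranks(app-B)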