
    Undergraduate Catalog of Studies, 2023-2024


    Intra-facility equity in discrete and continuous p-facility location problems

    We consider facility location problems with a new form of equity criterion. Each demand point has a preference order over the sites where the plants can be located. The goal is to locate the facilities so as to minimize the envy felt by each demand point with respect to the other demand points allocated to the same plant. After defining this new envy criterion and the general framework based on it, we provide formulations that model the approach in both the discrete and the continuous setting. The problems are illustrated with examples, and the computational tests reported show the potential and limits of each formulation on several types of instances. Although the article focuses mainly on introducing, modeling, and formulating this new concept of envy, improvements for all the presented formulations are also developed, in some cases yielding better solution times.

    Funding: project TED2021-130875B-I00, supported by MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU/PRTR"; research project PID2022-137818OB-I00 (Ministerio de Ciencia e Innovación, Spain); Agencia Estatal de Investigación (AEI), Spain, PID2020-114594GB-C2; Regional Government of Andalusia, Spain, P18-FR-1422 and B-FQM-322-UGR20 (ERDF); IMAG-Maria de Maeztu, Spain, grant CEX2020-001105-M/AEI/10.13039/501100011033; funding for open access charge: Universidad de Granada / CBU
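The precise envy definition is given in the article itself, not in this abstract. As a rough illustration, the sketch below brute-forces a toy discrete instance under one plausible reading, in which a point envies a co-assigned point that ranks their shared plant better; the instance data and the envy function are assumptions, not the paper's formulation.

```python
from itertools import combinations

# Toy instance (assumed data): 4 demand points, 3 candidate sites, p = 2 plants.
# pref[i][s] = rank that demand point i assigns to site s (0 = most preferred).
pref = [
    [0, 1, 2],
    [2, 0, 1],
    [1, 2, 0],
    [0, 2, 1],
]

def total_envy(open_sites, pref):
    """Sum of pairwise envy between demand points served by the same plant."""
    # Each point is allocated to its best-ranked open site.
    assign = {i: min(open_sites, key=lambda s: r[s]) for i, r in enumerate(pref)}
    envy = 0
    for i, si in assign.items():
        for j, sj in assign.items():
            if i != j and si == sj:
                # i envies j when i ranks their shared plant worse than j does.
                envy += max(0, pref[i][si] - pref[j][sj])
    return envy

# Brute-force over all p-subsets of sites (only viable for tiny instances).
best = min(combinations(range(3), 2), key=lambda S: total_envy(S, pref))
print(best, total_envy(best, pref))  # opens sites 0 and 2 with total envy 1
```

The article's actual formulations are mixed-integer and continuous optimization models; this exhaustive search only conveys what is being minimized.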

    Graduate Catalog of Studies, 2023-2024


    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services of recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract server management away from users, allowing them to focus their efforts solely on application development.
The problem with FaaS is that it focuses on microservices and tends to have limitations regarding execution time and computing capabilities (e.g., lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to a broader range of applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g., scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, i.e., conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. (The abstract is also provided in Spanish and Catalan in the original record.)
Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
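The event-driven, file-triggered functions described above can be caricatured by a minimal handler; the event shape and field names below are invented for illustration and do not correspond to any particular provider's API.

```python
# Minimal FaaS-style handler sketch (assumed event shape, no provider API).
def handler(event):
    """Invoked when a file lands in storage; returns a processing record."""
    name = event["file"]["name"]
    size = event["file"]["size"]
    # A real function would fetch the object, run its processing step, and
    # upload the result; here we only summarise the triggering event.
    return {"processed": name, "size_kb": round(size / 1024, 1)}

print(handler({"file": {"name": "sample.dat", "size": 2048}}))
```

Chaining such handlers on storage events is what makes FaaS a natural fit for the file-processing workflows the thesis targets.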

    Graduate Catalog of Studies, 2023-2024


    Queuing analysis and optimization of public vehicle transport stations: A case of South West Ethiopia region vehicle stations

    Modern urban environments are dynamically growing settings in which, despite shared goals, several mutually conflicting interests frequently collide. Waiting lines and queues are common occurrences, and they have a large impact on a city's socioeconomic standing. The result is extremely long lines of vehicles and people on incongruous routes, service congestion, customer grumbling, unhappiness, complaints, and a search for other options, sometimes illegal ones. A root cause is corruption, which leads to traffic jams, stops, and vehicles packed beyond their safe carrying capacity, violating passengers' human rights and freedoms. This study focused on optimizing the time passengers wait at public vehicle stations. This applied research employed mixed methods and multiple data-gathering sources. A sample of 166 key informants at transport stations was taken using the Slovin sampling formula. The waiting times of vehicles, including their drivers and auxiliary drivers ('Weyala'), were also studied. A queuing model ('Menaharya') was then devised to maximize the service level at the vehicle stations, with performance assessed in terms of time, cost, quality, scope, and suitability for the intended purpose. The study also determined the minimal response time required for queued passengers and vehicles to reach their final destinations within the transport stations of Tepi, Mizan, and Bonga. A new bus station system was modeled and simulated with the Arena simulation software for the chosen study area; the simulated design reduced cost by 56.25% and waiting time from 4 hours to 1.5 hours (an 84% improvement overall), and quality, safety, and design-load performance calculations were carried out. Stakeholders are asked to implement the model and monitor the results obtained.
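The abstract does not specify which queuing model was used; analyses of station service levels commonly build on the M/M/c queue, whose waiting-time formulas can be sketched as follows (the arrival and service rates in the example are illustrative, not the study's data).

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arrival must wait in an M/M/c queue (Erlang C)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-server utilisation; requires rho < 1
    ssum = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1 - rho))
    return tail / (ssum + tail)

def mean_wait(c, lam, mu):
    """Mean time spent queuing, W_q, for an M/M/c system."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Illustrative numbers: 40 vehicles/hour arriving, 12 served/hour per bay.
for bays in (4, 5, 6):
    print(bays, round(mean_wait(bays, 40, 12) * 60, 2))  # minutes in queue
```

Sweeping the number of service bays this way shows the cost/waiting-time trade-off that the study's Arena simulation explores in much richer detail.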

    Coverage Performance Analysis of Reconfigurable Intelligent Surface-aided Millimeter Wave Network with Blockage Effect

    To alleviate the spectrum resource shortage and satisfy immense wireless data traffic demands, millimeter wave (mmWave) frequencies with large available bandwidth have been proposed for wireless communication in 5G and beyond. However, mmWave communications are susceptible to blockages, which limits network performance. Meanwhile, the reconfigurable intelligent surface (RIS) has been proposed to improve the propagation environment and extend network coverage. Unlike traditional wireless technologies that improve transmission quality at the transceivers, RISs enhance network performance by adjusting the propagation environment. One promising application of RISs is to provide indirect line-of-sight (LoS) paths when no direct LoS path exists between transceivers, which makes RISs particularly useful in mmWave communications. With effective RIS deployment, the performance of RIS-aided mmWave networks can be enhanced significantly. However, most existing works have analyzed RIS-aided network performance without exploiting the flexibility of RIS deployment and/or considering the blockage effect, leaving significant research gaps. To fill these gaps, this thesis develops RIS-aided mmWave network models that consider the blockage effect under a stochastic geometry framework. Three scenarios are investigated: indoor, outdoor, and outdoor-to-indoor (O2I) RIS-aided networks. First, LoS propagation is hard to guarantee in indoor environments, where blockages are densely distributed, and deploying RISs to assist mmWave transmission is a promising way to overcome this challenge. In the first paper, we propose an indoor mmWave RIS-aided network model that captures the characteristics of indoor environments. Given a base station (BS) density, we investigate whether deploying RISs or increasing the BS density is the more cost-effective way to further enhance network coverage.
We present a coverage calculation algorithm that can be adapted to different indoor layouts, and we jointly analyze network cost and coverage probability. Our results indicate that deploying RISs alongside an appropriate number of BSs is more cost-effective for achieving an adequate coverage probability than adding BSs alone. Second, for a given total number of passive elements, it had yet to be investigated whether fewer large-scale RISs or more small-scale RISs should be deployed in the presence of blockages. In the second paper, we model and analyze a 3D outdoor mmWave RIS-aided network considering both building blockages and human-body blockages. Based on the proposed model, analytical upper and lower bounds on the coverage probability are derived, along with a closed-form coverage probability for the case where the RISs are much closer to the UE than to the BS. In terms of coverage enhancement, we reveal that sparsely deployed large-scale RISs outperform densely deployed small-scale RISs in scenarios with sparse blockages and/or long transmission distances, while densely deployed small-scale RISs win in scenarios with dense blockages and/or short transmission distances. Finally, the building envelope (the exterior wall of a building) makes it difficult for an outdoor mmWave BS to communicate with an indoor UE. Transmissive RISs with passive elements have been proposed to refract the signal when the transmitter and receiver are on opposite sides of the RIS. As with reflective RISs, the passive elements of a transmissive RIS can apply phase shifts and adjust the amplitude of the incident signals. By deploying transmissive RISs on the building envelope, it is feasible to implement RIS-aided O2I mmWave networks. In the third paper, we develop a 3D RIS-aided O2I mmWave network model with random indoor blockages.
Based on the model, a closed-form approximation of the coverage probability that accounts for blockage spatial correlation is derived, and multiple-RIS deployment strategies are discussed. For a given total number of RIS passive elements, the impact of the blockage density and of the number and locations of RISs on the coverage probability is analyzed. All analytical results are validated by Monte Carlo simulation, and the observations from the analysis provide guidelines for the future deployment of RIS-aided mmWave networks.
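As a rough illustration of the kind of analysis described, the Monte Carlo sketch below uses an assumed exponential LoS-probability model (a common stochastic-geometry assumption, not the thesis's exact model) to show how candidate RIS relays raise coverage when the direct path is blocked; the geometry, blockage parameter, and RIS placement are all invented for illustration.

```python
import math
import random

random.seed(1)
BETA = 0.1  # assumed blockage parameter: P(LoS over distance d) = exp(-BETA * d)

def los(d):
    """Sample whether a link of length d is line-of-sight."""
    return random.random() < math.exp(-BETA * d)

def coverage_prob(n_ris, trials=20000):
    """Fraction of trials in which the UE is reached directly or via some RIS."""
    ue = (30.0, 0.0)           # BS at the origin, UE 30 m away (assumed layout)
    covered = 0
    for _ in range(trials):
        if los(30.0):           # direct BS-UE path
            covered += 1
            continue
        for _ in range(n_ris):  # RISs dropped uniformly at random (assumption)
            ris = (random.uniform(0.0, 30.0), random.uniform(-10.0, 10.0))
            d1 = math.hypot(ris[0], ris[1])                  # BS -> RIS
            d2 = math.hypot(ris[0] - ue[0], ris[1] - ue[1])  # RIS -> UE
            if los(d1) and los(d2):                          # indirect LoS path
                covered += 1
                break
    return covered / trials

p_direct = coverage_prob(0)   # direct path only
p_ris = coverage_prob(4)      # up to four candidate RIS relays
print(p_direct, p_ris)
```

The thesis derives such coverage probabilities analytically and uses Monte Carlo simulation of this general flavour only for validation.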

    Identifying the threshold to sustainable ridepooling

    Ridepooling combines the trips of multiple passengers in the same vehicle and may thereby provide a more sustainable option than transport by private car. The efficiency and sustainability of ridepooling are typically quantified by key performance indicators such as the average vehicle occupancy or the total distance driven by all ridepooling vehicles relative to individual transport. However, even if the average occupancy is high and rides are shared, ridepooling services may increase the total distance driven due to additional detours and deadheading. Moreover, these key performance indicators are difficult to predict without large-scale simulations or actual ridepooling operation. Here, we propose a dimensionless parameter that estimates the sustainability of ridepooling by quantifying the load on a ridepooling service, relating characteristic timescales of demand and supply. The load bounds the relative distance driven and uniquely marks the break-even point above which the total distance driven by all vehicles of a ridepooling service falls below that of motorized individual transport. Detailed event-based simulations and a comparison with empirical observations from a ridepooling pilot project in a rural area of Germany validate the theoretical prediction. Importantly, the load follows directly from a small set of aggregate parameters of the service setting and is thus predictable a priori. The load may therefore complement standard key performance indicators and simplify the planning, operation, and evaluation of ridepooling services.
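The paper's exact definition of the load is not reproduced in this abstract; one plausible aggregate form, shown purely for illustration, compares the vehicle time demanded per unit time with the vehicle time supplied by the fleet. The function name and all numbers below are assumptions, not the paper's.

```python
def ridepooling_load(request_rate, mean_direct_trip_time, fleet_size):
    """Illustrative dimensionless load: vehicle-hours demanded per hour
    (request rate x mean direct trip duration) over vehicle-hours supplied
    per hour (one per vehicle). Not necessarily the paper's definition."""
    return request_rate * mean_direct_trip_time / fleet_size

# Assumed example: 120 requests/h, 15-minute average direct trips, 10 vehicles.
print(ridepooling_load(120, 0.25, 10))  # prints 3.0
```

A load above 1 in this reading means the fleet could not serve every request individually, so trips must be pooled; the paper's threshold result concerns when pooling also reduces the total distance driven.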

    Auditable and performant Byzantine consensus for permissioned ledgers

    Permissioned ledgers allow users to execute transactions against a data store and retain proof of their execution in a replicated ledger. Each replica verifies the transactions' execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today's permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low because they do not take advantage of multi-core CPUs and hardware accelerators, which hampers real-world deployments. This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity, even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. The thesis makes the following contributions: 1. Always-auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally verifiable receipts. 2. Performant transaction execution with hardware accelerators. We describe a cloud-based ML inference service that provides strong integrity guarantees while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute machine learning (ML) inference on GPUs to optimize the throughput and latency of inference computation. 3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions in parallel on multi-core CPUs, separating execution between the primary and secondary replicas. The primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions, which the backup replicas use to execute transactions in parallel.
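The dependency-graph idea behind contribution 3 can be sketched as follows; the transaction representation and the conflict rule (a conflict whenever one transaction writes a key the other reads or writes) are illustrative assumptions, not the thesis's actual design.

```python
from collections import defaultdict

# Hypothetical transactions with read/write key sets (illustrative only).
txs = [
    {"id": "t1", "reads": {"a"}, "writes": {"b"}},
    {"id": "t2", "reads": {"b"}, "writes": {"c"}},   # depends on t1 (reads b)
    {"id": "t3", "reads": {"x"}, "writes": {"y"}},   # independent of t1, t2
]

def conflicts(t, u):
    """Two transactions conflict if either writes a key the other touches."""
    return bool(t["writes"] & (u["reads"] | u["writes"]) or
                u["writes"] & (t["reads"] | t["writes"]))

def schedule(txs):
    """Level-schedule the dependency graph: a transaction's level is one more
    than the highest level among its conflicting predecessors. Transactions
    sharing a level have no mutual conflicts, so a replica may execute each
    level's batch in parallel while preserving the primary's serial order."""
    level = {}
    for i, t in enumerate(txs):
        deps = [level[p["id"]] for p in txs[:i] if conflicts(p, t)]
        level[t["id"]] = max(deps) + 1 if deps else 0
    batches = defaultdict(list)
    for t in txs:
        batches[level[t["id"]]].append(t["id"])
    return [batches[k] for k in sorted(batches)]

print(schedule(txs))  # [['t1', 't3'], ['t2']]
```

In the thesis's setting, the primary would ship a graph of this kind alongside the block so that backup replicas can re-execute non-conflicting transactions concurrently and still reach the same state.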