
    Weighted Multi-Skill Resource Constrained Project Scheduling: A Greedy and Parallel Scheduling Approach

    This study addresses the Weighted Multi-Skill Resource Constrained Project Scheduling Problem (W-MSRCSPSP) with the aim of minimizing software project makespan. Unlike previous works, our investigation considers heterogeneous resources characterized by varying skill proficiency levels. Another major shortcoming of existing methodologies is the potential underutilization of human resources due to varying task durations. This work introduces a novel scheduling approach, the Greedy and Parallel Scheduling (GPS) algorithm, to handle these issues. GPS focuses on assigning the most suitable available resources to project activities at each scheduling point. The fundamental goal of our proposed approach is to reduce resource wastage while efficiently allocating surplus resources, if any, to project tasks, ultimately decreasing the makespan. To empirically evaluate the efficacy of the GPS algorithm, we conduct a comparative analysis against the Parallel Scheduling Scheme (PSS). The advantage of our proposed approach lies in its ability to optimize the utilization of available resources, resulting in accelerated project completion. Results from extensive simulations substantiate this claim, demonstrating that the GPS scheme outperforms the PSS approach in minimizing project duration.
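    The abstract does not give the GPS pseudocode; the Python sketch below only illustrates the general shape of a parallel schedule-generation scheme with a greedy resource choice. The task and resource structures, the proficiency model, and the duration scaling are all hypothetical, not taken from the paper:

    ```python
    # Hypothetical sketch of a parallel schedule-generation scheme with a
    # greedy resource choice; all structures here are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        skill: str            # skill required by the task
        duration: int         # baseline duration at proficiency 1.0
        preds: list = field(default_factory=list)

    @dataclass
    class Resource:
        name: str
        proficiency: dict     # skill -> level; higher level means faster work
        free_at: int = 0

    def gps_schedule(tasks, resources):
        """At each decision point, start every eligible task on the most
        proficient idle resource that has the required skill."""
        finish = {}
        t = 0
        pending = list(tasks)
        while pending:
            started = []
            for task in pending:
                if any(p not in finish or finish[p] > t for p in task.preds):
                    continue  # predecessors not yet complete at time t
                candidates = [r for r in resources
                              if r.free_at <= t and task.skill in r.proficiency]
                if not candidates:
                    continue  # no qualified idle resource right now
                best = max(candidates, key=lambda r: r.proficiency[task.skill])
                dur = max(1, round(task.duration / best.proficiency[task.skill]))
                best.free_at = t + dur
                finish[task.name] = t + dur
                started.append(task)
            for task in started:
                pending.remove(task)
            # advance time to the next completion event
            horizon = [r.free_at for r in resources if r.free_at > t]
            if not horizon:
                break  # no progress possible (e.g. no resource has the skill)
            t = min(horizon)
        return max(finish.values()) if finish else 0
    ```

    With two resources of different proficiency, the greedy step sends each eligible task to the fastest idle qualified resource, which is the kind of assignment the abstract credits with reducing makespan.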

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers away from users, allowing them to focus their efforts solely on the development of applications.
    The problem with FaaS is that it focuses mainly on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, the self-provisioning capability and high degree of parallelism of these services have been shown to suit a broader range of applications. In addition, their inherent event-driven triggering makes functions well suited to being defined as steps in file-processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded (IoT) devices, innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of processing data on devices close to the data sources to improve response times. The combination of this paradigm with Cloud computing, involving architectures with devices at different tiers depending on their proximity to the data source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved.
    Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013

    Effects of the COVID-19 Pandemic on Manufacturing Companies' Supply Chain Management in Finland

    The COVID-19 outbreak shocked the whole world in 2020. As the pandemic quickly spread across the globe, over 75 million positive cases and 1.6 million deaths were reported worldwide during its first year alone, and by November 2022 those numbers had risen to over 634 million and 6.6 million, respectively. The world’s economic system and global markets were greatly affected, and many countries tried to counter the pandemic’s spread by implementing strict lockdowns, which caused further turbulence in the markets. Countless manufacturing companies across the globe faced massive global supply chain disruptions, which were felt even in companies operating in Finland. Managers and scholars alike have worked hard over the past three years to identify the best countermeasures against the pandemic’s effects and disruptions, but a consensus answer is still missing. This thesis aims to investigate the effects of the COVID-19 pandemic on manufacturing companies operating in Finland, and to examine what ways and methods these companies adopted to counter the pandemic compared to the rest of the world. These two topics form the two main research questions of this thesis, and they are answered on the basis of a qualitative systematic literature review and a mostly qualitative semi-structured interview study in which interviewees from six different manufacturing companies took part. The literature review consists of supply chain management theory and a look into the effects of the COVID-19 pandemic on manufacturing companies operating outside of Finland. The literature review is also used to build a theoretical framework, which is then used to analyse the results of the interview study and compare them to the findings of the literature review.
    The results of this thesis offer insight into the differences between the COVID-19 pandemic’s impacts on the supply chain management of manufacturing companies operating in Finland and those operating outside of Finland, and into the different supply chain management countermeasures taken by these companies. The literature review revealed that global supply shortages, large-scale fluctuations in demand, consumption shocks, and increases in material prices and lead times were some of the most recognizable effects of the COVID-19 pandemic on manufacturing companies operating outside of Finland, often directly affecting their operations and Tier 1 suppliers. The interview results reflected similar findings, except that the companies operating in Finland mostly experienced the pandemic’s effects through the problems of their suppliers’ suppliers, which usually operated outside of Finland. The companies located in countries that went into lockdown also faced challenges of their own. Regarding countermeasures to the global supply chain disruptions, the interview study and the literature review provided similar findings: the necessity of evolving existing supply chain management from lean thinking to a more agile and resilient system became evident for those that had not already done so. Differences were found in the ways of attempting to accomplish this, but the goal was still very similar for most of the companies.

    Adjustable robust optimization with nonlinear recourses

    Over the last century, mathematical optimization has become a prominent tool for decision making. Its systematic application in practical fields such as economics, logistics, or defense has led to the development of algorithmic methods of ever increasing efficiency. Indeed, for a variety of real-world problems, finding an optimal decision among a set of (implicitly or explicitly) predefined alternatives has become conceivable in reasonable time. In recent decades, however, the research community has paid more and more attention to the role of uncertainty in the optimization process. In particular, one may question the notion of optimality, and even feasibility, when studying decision problems with unknown or imprecise input parameters. This concern is even more critical in a world becoming more and more complex (by which we mean interconnected), where each individual variation inside a system inevitably causes other variations in the system itself. In this dissertation, we study a class of optimization problems which suffer from imprecise input data and feature a two-stage decision process, i.e., where decisions are made in a sequential order (called stages) and where unknown parameters are revealed throughout the stages. Applications of such problems abound in practical fields: e.g., facility location with uncertain demands, transportation with uncertain costs, or scheduling under uncertain processing times. The uncertainty is dealt with from a robust optimization (RO) viewpoint (also known as the "worst-case perspective"), and we present original contributions to the RO literature on both the theoretical and the practical side.
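    As a minimal illustration of the two-stage robust setting described above (a toy example of ours, not a model from the dissertation), consider choosing a capacity now, observing a demand from a finite uncertainty set, and then buying more expensive recourse capacity afterwards; the robust optimum minimizes the worst-case total cost:

    ```python
    # Toy two-stage robust problem over finite sets (hypothetical numbers):
    # first stage picks capacity x, then demand u is revealed from an
    # uncertainty set, then the recourse buys any shortfall at a premium.
    def recourse_cost(x, u, overtime_price=3.0):
        # second stage: buy just enough extra capacity to cover demand u
        return overtime_price * max(0, u - x)

    def robust_value(x, uncertainty_set, unit_price=1.0):
        # worst case over the uncertainty set, with optimal recourse per scenario
        return unit_price * x + max(recourse_cost(x, u) for u in uncertainty_set)

    def solve_two_stage_robust(candidates, uncertainty_set):
        # min-max: pick the first-stage decision with the best worst case
        return min(candidates, key=lambda x: robust_value(x, uncertainty_set))
    ```

    With recourse three times as expensive as first-stage capacity, the worst-case scenario dominates and the robust solution covers the largest demand upfront; cheaper recourse would shift the optimum toward deferring capacity to the second stage.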

    Closing the gap: The role of distributed manufacturing systems for overcoming the barriers to manufacturing sustainability

    The demand for distributed manufacturing systems (DMS) in the manufacturing sector has gained considerable popularity as a suitable means of accomplishing sustainability benefits. Manufacturing companies are bound to face critical barriers in their pursuit of sustainability goals. However, the extent to which DMS attributes relate to sustainable performance and impact critical barriers to sustainability remains largely unknown. To help close this gap, this article proposes a methodology to determine the relative importance of sustainability barriers, the influence of DMS on these barriers, and the relationship between DMS attributes and sustainable performance. Drawing upon a rich data pool from the Chinese manufacturing industry, the best–worst method is used to investigate the relative importance of the sustainability barriers and determine how the DMS attributes influence these barriers and relate to sustainability. The study findings show that “organizational barriers” are the most severe barriers and indicate that “reduced carbon emissions” has the highest impact on “organizational” and “sociocultural barriers,” whereas “public approval” has the highest impact on “organizational barriers.” The results infer that “reduction of carbon emissions” is the DMS strategy most strongly linked to improved sustainable performance. Hence, the results can offer in-depth insight to decision-makers, practitioners, and regulatory bodies on the criticality of the barriers and the influence of DMS attributes on the sustainability barriers, and thus improve sustainable performance for increased global competitiveness. Moreover, our study offers a solid foundation for further studies on the link between DMS and sustainable performance.

    Creating shared value: An operations and supply chain management perspective

    Focusing solely on short-term profits has caused social, environmental, and economic problems. Creating shared value integrates profitability with social and environmental objectives, offering a holistic solution. This dissertation examines two areas where this integration is crucial. The first topic explores servicizing business models for a transition to a more circular economy, emphasizing environmental benefits and firm profitability. Initially, we focus on pricing policies, comparing pricing schemes across consumer segments to identify win-win-win strategies that meet all people, planet, and profit objectives. Our research reveals that pay-per-use schemes outperform pay-per-period schemes for cost-inefficient or small-scale providers. A win-win (profit and planet) strategy can be achieved by offering a pay-per-use policy to high usage-valuation consumers, but a win-win-win strategy is unattainable. We then investigate consumer choices in servicizing models by conducting a conjoint experiment on payment scheme, price, minimum contract duration, and entry label attributes. The payment scheme emerges as the most influential attribute, with purchasing and pay-per-use schemes being popular options. The second topic focuses on drug shortages; specifically, we examine the impact of tendering on shortages. Our findings demonstrate that tendering reduces prices but increases shortages, particularly at the beginning of contracts. However, shortages are less severe when alternative suppliers are available and the market is less concentrated. To address this issue, we propose allowing multiple winners, regionalizing tenders, increasing the time between the tender and contract initiation, and incorporating a reliability measure as a winning criterion to mitigate shortages.

    Multi-objective resource optimization in space-aerial-ground-sea integrated networks

    Space-air-ground-sea integrated (SAGSI) networks are envisioned to connect satellite, aerial, ground, and sea networks to provide connectivity everywhere and at all times in sixth-generation (6G) networks. However, the success of SAGSI networks is constrained by several challenges, including resource optimization when users have diverse requirements and applications. We present a comprehensive review of SAGSI networks from a resource optimization perspective. We discuss use case scenarios and possible applications of SAGSI networks, and the resource optimization discussion considers the challenges associated with them. In our review, we categorize resource optimization techniques based on throughput and capacity maximization, delay minimization, energy consumption, task offloading, task scheduling, resource allocation or utilization, network operation cost, outage probability, average age of information, joint optimization (data rate difference, storage or caching, CPU cycle frequency), overall network performance and performance degradation, software-defined networking, and intelligent surveillance and relay communication. We then formulate a mathematical framework for maximizing energy efficiency, resource utilization, and user association. We optimize user association while satisfying the constraints on transmit power, data rate, and user association with priority. A binary decision variable is used to associate users with system resources; since the decision variable is binary and the constraints are linear, the formulated problem is a binary linear programming problem. Based on this framework, we simulate and analyze the performance of three algorithms (the branch and bound algorithm, the interior point method, and the barrier simplex algorithm) and compare the results. Simulation results show that the branch and bound algorithm yields the best results, so we adopt it as our benchmark. However, the complexity of branch and bound increases exponentially with the number of users and stations in the SAGSI network, while the interior point method and the barrier simplex algorithm produce results comparable to the benchmark at low complexity. Finally, we discuss future research directions and challenges of resource optimization in SAGSI networks.
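    A miniature, hypothetical instance can make such a binary user-association formulation concrete: a binary variable assigns each user to at most one station, a constraint caps each station's load, and the objective sums the achievable rates. The sketch below finds the optimum by exhaustive enumeration, which is only viable at toy scale; solvers such as branch and bound (the paper's benchmark) scale the same formulation to realistic sizes:

    ```python
    # Hypothetical toy user-association problem: maximize total rate where
    # each user joins at most one station and each station has a load cap.
    # Solved by brute force; real instances need branch and bound or similar.
    from itertools import product

    def best_association(rates, capacity):
        """rates[u][s]: achievable rate of user u at station s."""
        n_users = len(rates)
        n_stations = len(rates[0])
        best_val, best_assign = 0.0, None
        # each user picks one station, or None to stay unassociated
        for assign in product([None] + list(range(n_stations)), repeat=n_users):
            load = [assign.count(s) for s in range(n_stations)]
            if any(load[s] > capacity[s] for s in range(n_stations)):
                continue  # station capacity constraint violated
            val = sum(rates[u][s] for u, s in enumerate(assign) if s is not None)
            if val > best_val:
                best_val, best_assign = val, assign
        return best_val, best_assign
    ```

    The enumeration visits (n_stations + 1)^n_users assignments, which mirrors the exponential blow-up the abstract notes for branch and bound as users and stations grow.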

    Demand Prediction and Inventory Management of Surgical Supplies

    Effective supply chain management is critical to operations in various industries, including healthcare. Demand prediction and inventory management are essential parts of healthcare supply chain management for ensuring optimal patient outcomes, controlling costs, and minimizing waste. Advances in data analytics and technology have enabled many sophisticated approaches to demand forecasting and inventory control. This study aims to leverage these advancements to accurately predict demand and manage the inventory of surgical supplies, reducing costs and providing better services to patients. To achieve this objective, a Long Short-Term Memory (LSTM) model is developed to predict the demand for commonly used surgical supplies. Moreover, since the volume of scheduled surgeries influences the demand for certain surgical supplies, another LSTM model is adopted from the literature to forecast surgical case volumes and predict the procedure-specific surgical supplies. A few new features are incorporated into the adopted model to account for the variations in surgical case volumes caused by COVID-19 in 2020. This study then develops a multi-item capacitated dynamic lot-sizing replenishment model using Mixed Integer Programming (MIP). However, forecasts are rarely exact, and demand is hardly deterministic in the real world. Therefore, a Two-Stage Stochastic Programming (TSSP) model is developed to address these issues. Experimental results demonstrate that the TSSP model provides an additional benefit of $2,328.304 over the MIP model.
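    To illustrate the setup-versus-holding trade-off that such replenishment models encode, the classic Wagner-Whitin dynamic program solves the single-item, uncapacitated special case. This is only the textbook core, not the thesis's model, which is multi-item, capacitated, and ultimately stochastic:

    ```python
    # Single-item, uncapacitated dynamic lot sizing via the Wagner-Whitin
    # dynamic program (a simplified stand-in for the thesis's MIP model).
    def wagner_whitin(demand, setup_cost, holding_cost):
        """Return the minimum total cost of meeting `demand` per period,
        paying `setup_cost` per order and `holding_cost` per unit-period."""
        n = len(demand)
        best = [0.0] * (n + 1)      # best[t] = min cost for periods 0..t-1
        for t in range(1, n + 1):
            best[t] = float("inf")
            # last order is placed in period j and covers periods j..t-1
            for j in range(t):
                holding = sum(holding_cost * (k - j) * demand[k]
                              for k in range(j, t))
                best[t] = min(best[t], best[j] + setup_cost + holding)
        return best[n]
    ```

    With a high setup cost, the program batches several periods of demand into one order and pays holding cost instead; a two-stage stochastic version would replace the known `demand` vector with scenarios and add recourse ordering decisions.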