Serverless Strategies and Tools in the Cloud Computing Continuum
Tesis por compendio [thesis by compendium]

In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users having to worry about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model.
FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract server management away from users, allowing them to focus their efforts solely on application development. The problem with FaaS is that it focuses mainly on microservices and tends to have limitations regarding execution time and computing capabilities (e.g., lack of support for acceleration hardware such as GPUs). However, the self-provisioning capability and high degree of parallelism of these services have been shown to suit a broader range of applications. In addition, their inherent event-driven triggering makes functions well suited to be defined as steps in file-processing workflows (e.g., scientific computing workflows).
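The event-driven pattern described above can be sketched as a minimal, provider-agnostic function handler (the event shape and field names here are illustrative assumptions, not an API from the thesis or any specific Cloud provider):

```python
# Hypothetical sketch of a FaaS function acting as one step in a
# file-processing workflow: it receives a file-upload event, applies a
# processing step, and returns metadata for the next stage.

def handler(event):
    """Process one uploaded file and return metadata for the next step."""
    name = event["file"]["name"]
    data = event["file"]["content"]
    processed = data.upper()  # placeholder for the real processing step
    return {"name": name, "size": len(processed), "output": processed}

# Local invocation with a fabricated event, as a platform would on upload.
result = handler({"file": {"name": "sample.txt", "content": "hello edge"}})
```

In a real platform the event would carry a reference to object storage rather than inline content, and the runtime would invoke the handler automatically on each upload.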
Furthermore, the rise of smart and embedded (IoT) devices, innovations in communication networks, and the need to reduce latency in demanding use cases have led to the concept of Edge computing: processing on devices close to the data sources to improve response times. Coupling this paradigm with Cloud computing, in architectures whose devices sit at different levels depending on their proximity to the data source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum).
Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved.

Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
Driving venture capital funding efficiencies through data driven models. Why is this important and what are its implications for the startup ecosystem?
This thesis tests whether data models can fit the venture capital funding process and, if they do, whether they can improve venture capital funding efficiency.
Based on reported results, venture capitalists see returns on only 20% of their investments. The thesis argues that it is essential to support venture capital investment, as it can drive economic growth through investments in innovation.
The thesis considers four startup scenarios and the related investment factors. The scenarios are a funded artificial intelligence startup seeking follow-on funding, a new startup seeking first funding, the survivability of a sustainability-focused startup, and the importance of patents for exit. Patents are a proxy for innovation in this thesis.
Through quantitative analysis using generalized linear models, logit regressions, and t-tests, the thesis establishes that data models can identify the relative significance of funding factors. Once factor significance is established, it can be deployed in a model; building the machine learning model itself is outside the scope of this thesis.
A mix of academic and real-world research was used for the data analysis. Accelerators and venture capitalists have also used some of the results to improve their own processes. Many of the models have shifted from prediction to factor significance.
The thesis suggests that venture capitalists could plan for a 10% efficiency improvement. From an academic perspective, the study covers the entire life of a startup, from the first funding stage to the exit, and links the startup ecosystem with economic development. Two additional contributions are a regional perspective on funding differences between Asia, Europe, and the US, and the inclusion of recent economic sentiment: the impact of the funding slowdown is measured through a focus on first funding and longitudinal validation of the data decisions made before the slowdown.
Based on the results of the thesis, data models are a credible alternative and show significant correlations between returns and factors. It is advisable for a venture capitalist to consider these
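The factor-significance approach described above can be illustrated with a minimal logit sketch. The data, coefficients, and factor names below are synthetic assumptions for illustration only, not the thesis' dataset or fitted models:

```python
import numpy as np

# Synthetic startup "funding factors" (hypothetical): patents, team size,
# prior funding. We generate outcomes from known coefficients, then recover
# their relative significance with a logit fitted by gradient ascent.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
true_beta = np.array([1.5, 0.1, 0.8])  # patents matter most, team size barely
p_true = 1 / (1 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < p_true).astype(float)  # 1 = successful exit (synthetic)

# Fit the logit: gradient ascent on the average log-likelihood.
beta = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += 0.5 * X.T @ (y - p) / n

# Rank factors by estimated effect size (relative significance).
order = np.argsort(-np.abs(beta))
```

On this synthetic data the recovered ranking matches the generating coefficients: patents first, prior funding second, team size last. The thesis' point is exactly this kind of ranking, with model building beyond factor significance left out of scope.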
Sonic heritage: listening to the past
History is so often told through objects, images, and photographs, but the potential of sound to reveal place and space is often neglected. Our research project ‘Sonic Palimpsest’ explores the potential of sound to evoke impressions and new understandings of the past: to embrace the sonic as a tool for understanding what was, in a way that complements and adds to our predominantly visual understandings. Our work includes expanding the oral history archives held at Chatham Dockyard to include women’s voices and experiences, and creating sonic works to engage the public with their heritage. Our research highlights the social and cultural value of oral history and field recordings in transmitting knowledge to both researchers and the public. Together these recordings document how buildings and spaces within the dockyard were used and experienced by those who worked there, so we can begin to understand the social and cultural roles of these buildings within the community, both past and present.
Metaverse. Old urban issues in new virtual cities
Recent years have seen the rise of early attempts to build virtual cities, utopias, or affective dystopias in an embodied Internet, which in some respects appear to be the ultimate expression of the neoliberal city paradigm (even if virtual). Although there is an extensive disciplinary literature on the relationship between planning and virtual or augmented reality, linked mainly to the gaming industry, it often avoids questions of design and value. Observing some of these early experiences (Decentraland, Minecraft, and Liberland Metaverse, to name a few) raises important questions and problems that are gradually becoming inescapable for designers and urban planners, and allows us to make some partial considerations on the risks and potential of these early virtual cities.
Demand Response in Smart Grids
The Special Issue “Demand Response in Smart Grids” includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events to the operation of power and energy systems, both at the distribution level and at the wider power-system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques for achieving optimized operation as well as on the electricity consumer.
Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group
This is the Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group (PPIG). This was the first PPIG to be held in person since 2019, following the two online-only PPIGs in 2020 and 2021, both during the Covid pandemic. It was also the first PPIG conference to be designed specifically for hybrid attendance. Reflecting the theme, it was hosted by the Music Computing Lab at the Open University in Milton Keynes.
Sustainable Value Co-Creation in Welfare Service Ecosystems : Transforming temporary collaboration projects into permanent resource integration
The aim of this paper is to discuss the unexploited forces of user orientation and shared responsibility in promoting sustainable value co-creation during service innovation projects in welfare service ecosystems. The framework is based on the theoretical field of public service logic (PSL), and our thesis is that service innovation requires a user-oriented approach, which enables resource integration based on the service user’s needs and lifeworld. In our findings, we identify the prerequisites and opportunities of collaborative service innovation projects for transforming them into sustainable resource integration once they have ended.
LASSO – an observatorium for the dynamic selection, analysis and comparison of software
Mining software repositories at the scale of 'big code' (i.e., big data) is a challenging activity. As well as finding a suitable software corpus and making it programmatically accessible through an index or database, researchers and practitioners have to establish an efficient analysis infrastructure and precisely define the metrics and data extraction approaches to be applied. Moreover, for analysis results to be generalisable, these tasks have to be applied at a large enough scale to have statistical significance, and if they are to be repeatable, the artefacts need to be carefully maintained and curated over time. Today, however, a lot of this work is still performed by human beings on a case-by-case basis, with the level of effort involved often having a significant negative impact on the generalisability and repeatability of studies, and thus on their overall scientific value.
The general purpose, 'code mining' repositories and infrastructures that have emerged in recent years represent a significant step forward because they automate many software mining tasks at an ultra-large scale and allow researchers and practitioners to focus on defining the questions they would like to explore at an abstract level. However, they are currently limited to static analysis and data extraction techniques, and thus cannot support (i.e., help automate) any studies which involve the execution of software systems. This includes experimental validations of techniques and tools that hypothesise about the behaviour (i.e., semantics) of software, or data analysis and extraction techniques that aim to measure dynamic properties of software.
In this thesis a platform called LASSO (Large-Scale Software Observatorium) is introduced that overcomes this limitation by automating the collection of dynamic (i.e., execution-based) information about software alongside static information. It features a single, ultra-large scale corpus of executable software systems, created by amalgamating existing Open Source software repositories, and a dedicated DSL for defining abstract selection and analysis pipelines. Its key innovations are integrated capabilities for searching for and selecting software systems based on their exhibited behaviour, and an 'arena' that allows their responses to software tests to be compared in a purely data-driven way. We call the platform a 'software observatorium' since it is a place where the behaviour of large numbers of software systems can be observed, analysed, and compared.
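The 'arena' idea, comparing candidate implementations purely by their observed responses to shared tests, can be sketched in miniature. This is a hypothetical toy, not LASSO's actual API or DSL:

```python
# Toy sketch of behaviour-driven comparison: run the same test inputs
# against several candidate implementations and cluster the candidates
# by identical observed responses (treating exceptions as behaviour too).

def arena(candidates, test_inputs):
    """Group candidate functions by their responses to the test inputs."""
    groups = {}
    for name, impl in candidates.items():
        responses = []
        for args in test_inputs:
            try:
                responses.append(("ok", impl(*args)))
            except Exception as e:  # a failure is an observable response
                responses.append(("err", type(e).__name__))
        groups.setdefault(tuple(responses), []).append(name)
    return list(groups.values())

# Three hypothetical 'mined' implementations of integer division.
candidates = {
    "a": lambda x, y: x // y,        # floor division
    "b": lambda x, y: int(x / y),    # truncating division
    "c": lambda x, y: x // y,        # behaviourally identical to "a"
}
clusters = arena(candidates, [(7, 2), (-7, 2), (1, 0)])
```

The negative input separates floor from truncating division, so "a" and "c" land in one behavioural cluster and "b" in another, without ever inspecting the source code: the purely data-driven comparison the abstract describes.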
Metropolitan Enchantment and Disenchantment. Metropolitan Anthropology for the Contemporary Living Map Construction
We can no longer interpret the contemporary metropolis as we did in the last century. The thought of civil economy regarding the contemporary metropolis conflicts more or less radically with the merely acquisitive dimension of the behaviour of its citizens. What is needed is therefore a new capacity for imagining the economic-productive future of the city: hybrid social enterprises, economically sustainable, structured, and capable of using technologies, could be a solution for producing value and distributing it fairly and inclusively.
Metropolitan Urbanity is another issue to establish. The metropolis needs new spaces where inclusion can occur and where a repository of the imagery can be recreated. What is the ontology behind the technique of metropolitan planning and management, its vision and its symbols? Competitiveness, speed, and meritocracy are political words, not technical ones. Metropolitan Urbanity is the characteristic of a polis that expresses itself in its public places. Today, however, public places are private ones destined for public use. The Common Good has always had a space of representation in the city: the public space. Today, the Green-Grey Infrastructure is the metropolitan city's monument, communicating a value to future generations, and must therefore be recognised and imagined; it is the production of the metropolitan symbolic imagery, the new magic of the city.
Changing Priorities. 3rd VIBRArch
In order to secure a good present and future for people around the planet, and to safeguard the planet itself, research in architecture has to release its full potential. Therefore, the aims of the 3rd Valencia International Biennial of Research in Architecture are:
- To focus on the most relevant needs of humanity and the planet and what architectural research can do for solving them.
- To assess the evolution of architectural research in traditional matters of interest and the current state of these popular and widespread topics.
- To examine in depth the current state and findings of architectural research on subjects related to post-capitalism, frequently connected to equal opportunities and the universal right to personal development and happiness.
- To showcase all kinds of research related to the new and holistic concept of sustainability and to climate emergency.
- To place in the spotlight those ongoing works or available proposals developed by architectural researchers in order to combat the effects of the COVID-19 pandemic.
- To underline the capacity of architectural research to develop resilience and the ability to adapt to changing priorities.
- To highlight architecture's multidisciplinarity as a melting pot of multiple approaches, points of view and expertise.
- To open new perspectives for architectural research by promoting the development of multidisciplinary and inter-university networks and research groups.
For all that, the 3rd Valencia International Biennial of Research in Architecture is open not only to architects, but also to any academic, practitioner, professional, or student with a determination to develop research in architecture or neighboring fields.

Cabrera Fausto, I. (2023). Changing Priorities. 3rd VIBRArch. Editorial Universitat Politècnica de València. https://doi.org/10.4995/VIBRArch2022.2022.1686