4,370 research outputs found

    A digital twin (DT) approach to narrow-band Internet of things (NB-IoT) wireless communication optimization in an industrial scenario

    Get PDF
The Digital Twin (DT), a virtual replica of a physical entity, is used in this paper to optimize the wireless communication of Narrowband Internet of Things (NB-IoT) terminals in an industrial scenario. The optimization is achieved exclusively through a DT approach. NB-IoT is a Low-Power Wide-Area Network (LPWAN) technology standardized by 3GPP that leverages Long Term Evolution (LTE) technology. The Amplify-and-Forward (AF) relaying technique is used to improve the performance of several notably poor-performing terminals in the scenario. Bit-Error-Rate (BER) tests show the terminals' overall performance before and after optimization, and a 17% improvement in BER is achieved. The signal quality of the channels is analyzed, and the Cumulative Distribution Function (CDF) is used to showcase the effective throughput performance of the NB-IoT terminals.
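As a rough illustration of the kind of before/after comparison reported above, the following sketch simulates a simple BPSK link with and without an idealised amplify-and-forward relay, and computes the BER and an empirical throughput CDF. The SNR values, relay model and throughput distribution are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's simulator): BPSK over AWGN with and without an
# idealised amplify-and-forward (AF) relay, reporting BER and an empirical CDF.
# SNR values, relay gain and the throughput samples are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                      # BPSK mapping: 0 -> -1, 1 -> +1

def awgn(x, snr_db):
    """Add real-valued white Gaussian noise for a given per-symbol SNR (dB)."""
    noise_power = 10 ** (-snr_db / 10)
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

# Poorly performing direct link vs. a two-hop AF relay with a higher per-hop SNR.
direct_rx = awgn(symbols, snr_db=4.0)
relay_rx  = awgn(awgn(symbols, snr_db=8.0), snr_db=8.0)  # unit-gain relay, noise adds on both hops

for name, rx in [("direct", direct_rx), ("AF relay", relay_rx)]:
    ber = np.mean((rx > 0).astype(int) != bits)
    print(f"{name:9s} BER = {ber:.4f}")

# Empirical CDF of per-terminal throughput (illustrative samples, in kbps).
throughput = rng.gamma(shape=4.0, scale=15.0, size=1000)
x = np.sort(throughput)
cdf = np.arange(1, x.size + 1) / x.size      # P(throughput <= x)
print("median throughput ≈", np.interp(0.5, cdf, x), "kbps")
```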

    The concepts of Smart cities, Smart Tourism Destination and Smart Tourism Cities and their interrelationship

    Get PDF
Because of dramatic urbanization processes and growing populations, cities are required to develop complex strategies and innovative plans for their future. Advancing technologies are transforming cities into smart cities, and recent tourism research points to a potential relationship between smart cities and tourism. In this article, the concepts of smartness, smart tourism destination (STD), smart city and smart tourism cities, their interdependence and their importance are studied. Furthermore, the purpose of this study is to explore what STDs provide for tourists and the opportunities that smart cities offer for local people, analysing the potential benefits of STDs for tourists, stakeholders and destinations, and their importance in urban development based on current scholarly research.

    Reliable indoor optical wireless communication in the presence of fixed and random blockers

    Get PDF
The rapid advance of smartphones has led to exponential growth in the number of internet users, which is expected to reach 71% of the global population by the end of 2027. This in turn has given rise to a demand for wireless data and internet devices that are capable of providing energy-efficient, reliable and high-speed wireless data services. Light-fidelity (LiFi), one of the optical wireless communication (OWC) technologies, is envisioned as a promising solution to accommodate these demands. However, the indoor LiFi channel is highly environment-dependent and can be influenced by several crucial factors (e.g., the presence of people and furniture, random orientation of users' devices and the limited field of view (FOV) of optical receivers) which may contribute to blockage of the line-of-sight (LOS) link. In this thesis, it is investigated whether deep learning (DL) techniques can effectively learn the distinct features of the indoor LiFi environment and thereby outperform conventional channel estimation techniques (e.g., minimum mean square error (MMSE) and least squares (LS)). This advantage appears particularly when access to real-time channel state information (CSI) is restricted, and it comes at the cost of collecting large and meaningful datasets to train the DL neural networks and of the training time, which is carried out offline. Two DL-based schemes are designed for signal detection and resource allocation, and it is shown that the proposed methods offer performance close to the optimal conventional schemes and demonstrate substantial gains in terms of bit-error ratio (BER) and throughput, especially in more realistic or complex indoor environments. Performance analysis of LiFi networks under the influence of fixed and random blockers is essential, and efficient solutions capable of diminishing the blockage effect are required. In this thesis, a CSI acquisition technique for a reconfigurable intelligent surface (RIS)-aided LiFi network is proposed to significantly reduce the dimension of the decision variables required for RIS beamforming. Furthermore, it is shown that several RIS attributes, such as shape, size, height and distribution, play important roles in increasing network performance. Finally, the performance analysis of an RIS-aided realistic indoor LiFi network is presented. The proposed RIS configuration shows outstanding performance in reducing the network outage probability under the effect of blockages, random device orientation, limited receiver FOV, furniture and user behavior. Establishing a LOS link that achieves uninterrupted wireless connectivity in a realistic indoor environment can be challenging. In this thesis, an analysis of link blockage is presented for an indoor LiFi system considering fixed and random blockers. In particular, novel analytical frameworks of the coverage probability for a single source and for multiple sources are derived. Using the proposed analytical framework, link blockages in the indoor LiFi network are carefully investigated, and it is shown that the incorporation of multiple sources and RIS can significantly reduce the LOS coverage blockage probability in indoor LiFi systems.
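The blockage behaviour described above can also be approximated numerically. The following minimal Monte Carlo sketch estimates the probability that the LOS link between a ceiling luminaire and a randomly placed receiver is blocked by randomly placed cylindrical blockers; the room geometry, blocker dimensions and densities are illustrative assumptions, and this is not the thesis's analytical framework.

```python
# Minimal Monte Carlo sketch: probability that the LED -> receiver LOS segment is
# blocked by randomly placed cylindrical blockers in a room. All dimensions and
# densities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
ROOM = np.array([5.0, 5.0])        # room footprint in metres
LED = np.array([2.5, 2.5, 3.0])    # luminaire position (x, y, z)
RX_H = 0.85                        # receiver height (desk level)
BLOCKER_R, BLOCKER_H = 0.15, 1.75  # blocker radius and height (e.g., a standing person)

def los_blocked(rx_xy, blockers_xy):
    """Approximate test: does any blocker cylinder intersect the LED -> receiver segment?"""
    rx = np.array([rx_xy[0], rx_xy[1], RX_H])
    d = rx - LED
    for b in blockers_xy:
        # Closest point of the segment's 2-D projection to the blocker centre.
        t = np.clip(np.dot(b - LED[:2], d[:2]) / (np.dot(d[:2], d[:2]) + 1e-12), 0.0, 1.0)
        p = LED + t * d                      # corresponding 3-D point on the segment
        if np.linalg.norm(p[:2] - b) < BLOCKER_R and p[2] < BLOCKER_H:
            return True
    return False

def blockage_probability(n_blockers, trials=5000):
    blocked = 0
    for _ in range(trials):
        rx_xy = rng.uniform(0, ROOM, size=2)
        blockers = rng.uniform(0, ROOM, size=(n_blockers, 2))
        blocked += los_blocked(rx_xy, blockers)
    return blocked / trials

for n in (1, 3, 5):
    print(f"{n} blockers -> P(LOS blocked) ≈ {blockage_probability(n):.3f}")
```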

    Joint multi-objective MEH selection and traffic path computation in 5G-MEC systems

    Get PDF
Multi-access Edge Computing (MEC) is an emerging technology that reduces service latency and traffic congestion and enables cloud offloading and context awareness. MEC consists of deploying computing devices, called MEC Hosts (MEHs), close to the user. Given the mobility of the user, several problems arise. The first problem is selecting an MEH to run the service requested by the user. Another problem is selecting the path to steer the traffic from the user to the selected MEH. This paper jointly addresses these two problems. First, the paper proposes a procedure to create a graph that is able to capture both network-layer and application-layer performance. Then, the proposed graph is used to apply the Multi-objective Dijkstra Algorithm (MDA), a technique used for multi-objective optimization problems, to find solutions to the addressed problems while simultaneously considering different performance metrics and constraints. To evaluate the performance of MDA, the paper implements a testbed based on AdvantEDGE and Kubernetes to migrate a VideoLAN application between two MEHs. A controller has been realized to integrate MDA with the 5G-MEC system in the testbed. The results show that MDA is able to perform the migration with a limited impact on the network performance and user experience; the lack of migration would instead lead to a severe reduction in the user experience.
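To illustrate the kind of multi-objective shortest-path computation MDA performs, the sketch below implements a generic label-setting multi-objective Dijkstra over a toy graph whose edges carry a (latency, load) cost vector. It is a simplified stand-in, not the paper's MDA implementation; the topology and weights are invented for the example.

```python
# Generic label-setting multi-objective Dijkstra over a toy 5G-MEC-like graph.
# Each edge weight is a (latency_ms, load) pair; Pareto-optimal labels are kept per node.
import heapq

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def mo_dijkstra(graph, source):
    """Return the set of Pareto-optimal cost vectors from `source` to every node."""
    pareto = {v: [] for v in graph}            # non-dominated labels per node
    heap = [((0, 0), source)]                  # (cost vector, node)
    while heap:
        cost, u = heapq.heappop(heap)
        if any(dominates(c, cost) or c == cost for c in pareto[u]):
            continue                           # label already dominated or duplicated, skip
        pareto[u] = [c for c in pareto[u] if not dominates(cost, c)] + [cost]
        for v, w in graph[u]:
            new_cost = (cost[0] + w[0], cost[1] + w[1])
            if not any(dominates(c, new_cost) for c in pareto[v]):
                heapq.heappush(heap, (new_cost, v))
    return pareto

# Toy topology: user -> gNB -> two candidate MEHs, with (latency, load) edge weights.
graph = {
    "UE":   [("gNB", (2, 1))],
    "gNB":  [("MEH1", (5, 3)), ("MEH2", (8, 1))],
    "MEH1": [],
    "MEH2": [],
}
print(mo_dijkstra(graph, "UE"))   # both MEHs are kept if neither cost vector dominates the other
```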

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Full text link
In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity led to new services to solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses on microservices and tends to have limitations regarding the execution time and the computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks and the need to reduce latency in challenging use cases have led to the concept of Edge computing. Edge computing consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm together with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined as the Cloud Computing Continuum (or Computing Continuum). Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
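As a concrete illustration of the FaaS model described in the abstract, the sketch below shows a small event-driven handler that could act as one step of a file-processing workflow. The event structure and the helper functions are hypothetical and provider-agnostic, not a specific platform's API.

```python
# Hypothetical, provider-agnostic sketch of a FaaS handler: a small function invoked in
# response to an event (here, an imagined object-storage upload notification) that runs
# one processing step and stores the result. Event fields and helpers are assumptions.
import json

def process_file(data: bytes) -> bytes:
    """Placeholder processing step, e.g. converting or analysing the uploaded file."""
    return data.upper()

def store_result(bucket: str, key: str, data: bytes) -> None:
    """Placeholder for writing the output object back to storage."""
    print(f"stored {len(data)} bytes at {bucket}/{key}")

def handler(event: dict, context=None):
    """Entry point invoked by the platform whenever a new object is uploaded."""
    bucket, key = event["bucket"], event["key"]
    data = event["body"].encode()            # a real platform would fetch the object instead
    store_result(bucket, f"processed/{key}", process_file(data))
    return {"statusCode": 200, "body": json.dumps({"processed": key})}

# Local usage example with a fake event:
print(handler({"bucket": "demo", "key": "input.txt", "body": "hello serverless"}))
```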

    Data Analytics for Dynamic Urban Operations: A Test-Based Study on Data Analytics Efficiency

    Get PDF
This paper explores the field of data analytics for dynamic urban operations and provides a systematic analysis of its importance and possible implications. Our investigation indicates significant data volumes in a data-rich urban setting: 500 GB are generated by traffic sensors, 300 GB by environmental monitors, 150 GB by mobile apps, and 75 GB by emergency calls. A variety of analytics techniques, each with a different processing time, are built upon these data sources; they include descriptive, predictive, prescriptive, and diagnostic analytics. The outcomes, which include 90% accuracy, an average processing time of 40 minutes, 80% resource utilization, and a user satisfaction rating of 4.2, highlight the benefits of data analytics. According to the comparison study, prescriptive analytics leads with an efficiency score of 8.4, while diagnostic analytics scores 7.8, indicating room for development. As urban stakeholders and academics work to improve urban systems and solve urban issues, the results give a thorough understanding of the effectiveness and application of data analytics in the context of dynamic urban operations.
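For reference, the figures quoted above can be organised and aggregated as in the short sketch below; the numbers are those reported in the abstract, while the data structures and the derived total are purely illustrative.

```python
# Illustrative aggregation of the figures reported in the abstract.
data_sources_gb = {"traffic sensors": 500, "environmental monitors": 300,
                   "mobile apps": 150, "emergency calls": 75}
efficiency_scores = {"prescriptive": 8.4, "diagnostic": 7.8}
outcomes = {"accuracy": 0.90, "avg processing time (min)": 40,
            "resource utilization": 0.80, "user satisfaction": 4.2}

total_gb = sum(data_sources_gb.values())
print(f"total data volume across sources: {total_gb} GB")              # 1025 GB
best = max(efficiency_scores, key=efficiency_scores.get)
print(f"highest efficiency score: {best} ({efficiency_scores[best]})")
```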

    Governing ageing in Chile: from neoliberal hegemony to more hopeful demographic futures?

    Get PDF
In this thesis, I explore how demographic ageing is regulated in Chile through the governing of older populations, paying particularly close attention to how the 'actually existing' neoliberal context in Chile permeates and conditions the diverse political projects and strategies implemented by central and local governments. I approach this shaping as a historical and conjunctural process realised through multiple central and local governing projects, as well as a legacy thrown into particularly sharp relief and retrospective political questioning by the unfolding of the COVID-19 pandemic and the anti-neoliberal social uprising of 2019. These intertwined conjunctural moments have unearthed the limitations of neoliberal strategies in addressing the needs of older people. To explore the governing of older populations in Chile, I undertook a hybrid on-site and online ethnography covering a wide range of national and local policies and governing projects. In investigating local governing projects, I analysed, in varying depth, the case of seven contrasting municipalities in the capital city of Santiago, Chile. With demographic ageing positioned as a risk to economic development, I suggest that the main rationale guiding Chilean policies and programs has been to avert the central state's welfare and caregiving responsibilities toward a growing number of potentially dependent populations, whether economically, physically or cognitively. I argue that governing strategies directed at older populations are deeply neoliberal, sometimes deliberately and sometimes inadvertently, in that they have pervasively been designed to shift and devolve welfare and caregiving responsibilities to different (non-central state) scales such as families and charitable institutions, local governments, communities and older people themselves. In these explorations, I also consider more closely alternative governing projects that have contested, to differing extents, the central state's neoliberal neglect. Unpacking how progressive governing projects at central and local levels have sought to imprint a different common sense on state responsibility, I also consider how these alternative projects have themselves been reshaped by neoliberal ideas and strategies. In this case, I argue that neoliberal ideas and strategies, together with the material effects of Chile's neoliberal context, are holding back the advances of progressive governing projects. Nonetheless, as hegemony is never final, I also consider how the intertwined moments of the COVID-19 pandemic and the anti-neoliberal social uprising of October 2019 shed light on how the history of neoliberal policies directed at older populations in Chile continues to be contested. Scholarly understandings of neoliberalism as a political hegemonic project are central to this thesis' argument. I draw on Gramsci's notion of hegemony as a position of 'leadership' continuously constructed through the intertwined articulation of coercion and consent (Hall 1986, p.15) to unpack how neoliberal ideas and strategies have reached a position of leadership in the governing of demographic ageing amid opposition from alternative governing ideas and projects.
Three crosscutting findings emerge from this research: 1) through a marked politics of devolution within Chilean governance, access to welfare and caregiving has been rendered deeply unequal in old age; 2) the hegemonising capacity of neoliberal ideas and strategies is revealed in the persistence of the central state's politics of scalar devolution and in the ways in which would-be progressive local governing projects end up complying with neoliberal aims; 3) although neoliberal hegemony has so far been secured in this case through multiple strategies, it continues to be subject to contestation. These findings offer insights for building more hopeful demographic ageing futures.

    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    Get PDF
The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in an exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. Processing and computing this massive amount of unstructured information requires substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization; it involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain in order to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using Natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative words) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping.
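As a simple illustration of the document-retrieval functionality described above, the sketch below ranks a toy corpus against a query using TF-IDF and cosine similarity; it is a generic, lexical stand-in for the semantic search feature and not the system developed in the thesis, and the example documents and query are made up.

```python
# TF-IDF retrieval sketch over a toy corpus (scikit-learn based); a lexical stand-in
# for the semantic-search feature, not the thesis's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Operational procedure for track-side signal maintenance and safety checks.",
    "Technical guideline on rolling stock braking system inspection.",
    "Framework for enterprise architecture requirements in railway operations.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)        # documents -> TF-IDF vectors

def search(query: str, top_k: int = 2):
    """Rank documents by cosine similarity to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

for doc, score in search("railway safety maintenance procedure"):
    print(f"{score:.2f}  {doc}")
```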