
    CN2F: A Cloud-Native Cellular Network Framework

    Upcoming 5G and Beyond 5G (B5G) cellular networks aim to improve the efficiency and flexibility of mobile networks by incorporating various technologies, such as Software Defined Networking (SDN), Network Function Virtualization (NFV), and Network Slicing (NS). In this paper, we share our findings, accompanied by a comprehensive online codebase, on best practices for combining different open-source projects to realize a flexible testbed for academic and industrial Research and Development (R&D) activities on future generations of cellular networks. In particular, we present a Cloud-Native Cellular Network Framework (CN2F), which uses OpenAirInterface's codebase to generate cellular Virtual Network Functions (VNFs) and deploys Kubernetes to distribute and manage them across worker nodes. Moreover, CN2F leverages ONOS and Mininet to emulate the effect of the IP transport networks in the fronthaul and backhaul of real cellular networks. We also showcase two use cases of CN2F to demonstrate the importance of Edge Computing (EC) and the capability of Radio Access Network (RAN) slicing.
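    As a flavour of the transport-network emulation layer, the sketch below builds a minimal Mininet topology attached to an external SDN controller, in the spirit of the ONOS/Mininet setup described above. The topology, host roles, and link parameters are illustrative assumptions and are not taken from the CN2F codebase; ONOS is assumed to be reachable on the default OpenFlow port 6653.

```python
# Minimal sketch: emulating a fronthaul/backhaul IP transport segment with
# Mininet and an external SDN controller (e.g. ONOS). Topology, host roles,
# and link parameters are hypothetical, not CN2F's actual configuration.
# Requires root privileges and a controller listening on 127.0.0.1:6653.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.node import RemoteController
from mininet.link import TCLink

class TransportTopo(Topo):
    def build(self):
        fh = self.addSwitch('s1')    # fronthaul switch
        bh = self.addSwitch('s2')    # backhaul switch
        ru = self.addHost('h1')      # stands in for a radio-unit VNF
        cu = self.addHost('h2')      # stands in for a central-unit VNF
        core = self.addHost('h3')    # stands in for the core network
        self.addLink(ru, fh, delay='1ms', bw=100)      # tight fronthaul budget
        self.addLink(fh, bh, delay='5ms', bw=1000)
        self.addLink(cu, bh, delay='1ms', bw=1000)
        self.addLink(core, bh, delay='10ms', bw=1000)  # backhaul latency

if __name__ == '__main__':
    net = Mininet(topo=TransportTopo(), link=TCLink,
                  controller=lambda name: RemoteController(
                      name, ip='127.0.0.1', port=6653))
    net.start()
    net.pingAll()   # sanity-check connectivity through the emulated transport
    net.stop()
```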

    Doing Research. Wissenschaftspraktiken zwischen Positionierung und Suchanfrage

    Research is increasingly thought of in terms of its results, not least because of the upheavals in the academic system. This volume, however, turns the focus to the processes that make research results possible in the first place and give scholarship its contours. The title Doing Research signals that research practice is shaped by specific positionings, partial perspectives, and movements of searching. All contributors accordingly engage reflexively with their own research practices. Their starting points are abbreviations, the supposedly smallest units of scholarly negotiation and mutual understanding. Anchored in education, the social sciences, media studies, and the arts, the volume draws a multidimensional picture of contemporary research, with transdisciplinary points of connection between digitality and education. (DIPF/Orig.)

    Drones, Signals, and the Techno-Colonisation of Landscape

    This research project is a cross-disciplinary, creative practice-led investigation that interrogates increasing military interest in the electromagnetic spectrum (EMS). The project’s central argument is that painted visualisations of normally invisible aspects of contemporary EMS-enabled warfare can reveal useful, novel, and speculative but informed perspectives that contribute to debates about war and technology. It pays particular attention to how visualising normally invisible signals reveals an insidious techno-colonisation of our extended environment, from Earth to orbiting satellites.

    Machine learning for the sustainable energy transition: a data-driven perspective along the value chain from manufacturing to energy conversion

    According to the IPCC special report Global Warming of 1.5 °C, climate action is not only necessary but more urgent than ever. The world is witnessing rising sea levels, heat waves, flooding, droughts, and desertification, resulting in the loss of lives and damage to livelihoods, especially in countries of the Global South. To mitigate climate change and meet the commitments of the Paris Agreement, it is of the utmost importance to reduce greenhouse gas emissions from the most emitting sector, namely the energy sector. To this end, large-scale penetration of renewable energy systems into the energy market is crucial for the energy transition toward a sustainable future, replacing fossil fuels and improving access to energy with socio-economic benefits. With the advent of Industry 4.0, Internet of Things technologies have been increasingly applied to the energy sector, introducing the concept of the smart grid or, more generally, the Internet of Energy. These paradigms are steering the energy sector towards more efficient, reliable, flexible, resilient, safe, and sustainable solutions with huge potential environmental and social benefits. To realize these concepts, new information technologies are required, and among the most promising possibilities are Artificial Intelligence and Machine Learning, which in many countries have already revolutionized the energy industry. This thesis presents different Machine Learning algorithms and methods for implementing new strategies to make renewable energy systems more efficient and reliable. It presents various learning algorithms, highlighting their advantages and limits, and evaluating their application to different tasks in the energy context. In addition, different techniques are presented for the preprocessing and cleaning of time series, nowadays collected by sensor networks mounted on every renewable energy system. With the possibility of installing large numbers of sensors that collect vast amounts of time series, it is vital to detect and remove irrelevant, redundant, or noisy features and to alleviate the curse of dimensionality, thus improving the interpretability of predictive models, speeding up their learning process, and enhancing their generalization properties. This thesis therefore discusses the importance of dimensionality reduction in sensor networks mounted on renewable energy systems and, to this end, presents two novel unsupervised algorithms. The first approach maps time series into the network domain through visibility graphs and uses a community detection algorithm to identify clusters of similar time series and select representative parameters. This method can group both homogeneous and heterogeneous physical parameters, even when related to different functional areas of a system. The second approach proposes the Combined Predictive Power Score, a feature selection method with a multivariate formulation that explores multiple expanding subsets of variables and identifies the combination of features with the highest predictive power over specified target variables. This method proposes a selection algorithm for the optimal combination of variables that converges to the smallest set of predictors with the highest predictive power. Once the combination of variables is identified, the most relevant parameters in a sensor network can be selected to perform dimensionality reduction.
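    A minimal sketch of the building blocks of the first approach, under stated assumptions: the natural visibility graph construction below is the standard one, while the similarity measure between sensors (correlation of visibility-graph degree profiles) and the Louvain community detection step are off-the-shelf stand-ins, not necessarily the exact choices made in the thesis.

```python
# Sketch: map time series to visibility graphs, then cluster similar sensors
# with community detection. Requires numpy and networkx >= 2.8 (Louvain).
import numpy as np
import networkx as nx

def natural_visibility_graph(y):
    """Nodes are time samples; i and j are linked if every sample between
    them lies strictly below the straight line joining (i, y[i]), (j, y[j])."""
    g = nx.Graph()
    g.add_nodes_from(range(len(y)))
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            line = y[i] + (y[j] - y[i]) * (np.arange(i + 1, j) - i) / (j - i)
            if np.all(y[i + 1:j] < line):
                g.add_edge(i, j)
    return g

# Toy data: three "sensors", two of them driven by the same process.
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 150)
sensors = {
    "temp_a": np.sin(t) + 0.1 * rng.standard_normal(t.size),
    "temp_b": np.sin(t) + 0.1 * rng.standard_normal(t.size),
    "vibration": rng.standard_normal(t.size),
}

# Characterise each series by its visibility-graph degree profile over time,
# link sensors with similar profiles, then detect communities of sensors.
profiles = {name: np.array([d for _, d in natural_visibility_graph(y).degree()])
            for name, y in sensors.items()}
sim = nx.Graph()
sim.add_nodes_from(sensors)
names = list(sensors)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if np.corrcoef(profiles[a], profiles[b])[0, 1] > 0.6:
            sim.add_edge(a, b)
clusters = nx.community.louvain_communities(sim, seed=0)
print([sorted(c) for c in clusters])  # expect temp_a and temp_b grouped
```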
    Data-driven methods open the possibility of supporting strategic decision-making, resulting in a reduction of Operation & Maintenance costs, machine faults, repair stops, and spare parts inventory size. This thesis therefore presents two approaches, based on anomaly detection algorithms, to improve the lifetime and efficiency of equipment in the context of predictive maintenance. The first approach proposes an anomaly detection model based on Principal Component Analysis that is robust to false alarms, can isolate anomalous conditions, and can anticipate equipment failures. The second approach has at its core a neural architecture, namely a Graph Convolutional Autoencoder, which models the sensor network as a dynamical functional graph by simultaneously considering the information content of individual sensor measurements (graph node features) and the nonlinear correlations existing between all pairs of sensors (graph edges). The proposed neural architecture can capture hidden anomalies even when the turbine continues to deliver the power requested by the grid, and can anticipate equipment failures. Since the model is unsupervised and completely data-driven, this approach can be applied to any wind turbine equipped with a SCADA system. When it comes to renewable energies, the unschedulable uncertainty due to their intermittent nature represents an obstacle to the reliability and stability of energy grids, especially for large-scale integration. These challenges can nevertheless be alleviated if the natural sources or the power output of renewable energy systems can be forecasted accurately, allowing power system operators to plan optimal power management strategies that balance the dispatch between intermittent power generation and the load demand. To this end, this thesis proposes a multi-modal spatio-temporal neural network for multi-horizon wind power forecasting. In particular, the model combines high-resolution Numerical Weather Prediction forecast maps with turbine-level SCADA data and explores how meteorological variables on different spatial scales, together with the turbines' internal operating conditions, impact wind power forecasts. The world is undergoing a third energy transition whose main goal is to tackle global climate change through decarbonization of energy supply and consumption patterns. This transition is made possible by global cooperation and agreements between parties, advances in power generation systems, and Internet of Things and Artificial Intelligence technologies, and it is necessary to prevent the severe and irreversible consequences of climate change that threaten life on the planet as we know it. This thesis is intended as a reference for researchers who want to contribute to the sustainable energy transition and are approaching the field of Artificial Intelligence in the context of renewable energy systems.
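    A minimal sketch of the first predictive-maintenance idea, under stated assumptions: fit PCA on data from healthy operation, score new samples by their reconstruction error (the squared prediction error, or Q statistic), and raise an alarm above a percentile threshold calibrated on healthy data. The synthetic data, component count, and 99th-percentile threshold are illustrative choices, not the thesis's tuned configuration.

```python
# Sketch: PCA reconstruction-error anomaly detection for correlated sensors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical SCADA-like training data from healthy operation:
# 500 samples of 8 sensor channels driven by 2 latent factors.
latent = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 8))
X_train = latent @ mixing + 0.05 * rng.standard_normal((500, 8))

scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=2).fit(scaler.transform(X_train))

def spe(X):
    """Squared prediction error: variance left outside the retained
    principal subspace, per sample."""
    Z = scaler.transform(X)
    recon = pca.inverse_transform(pca.transform(Z))
    return ((Z - recon) ** 2).sum(axis=1)

# Calibrate the alarm threshold on healthy data; the 99th percentile keeps
# the false-alarm rate near 1% by construction.
threshold = np.percentile(spe(X_train), 99)

# A faulty sample breaks the learned sensor correlations and scores high.
x_fault = X_train[:1].copy()
x_fault[0, 3] += 5.0
print(spe(x_fault)[0] > threshold)  # True: flagged as anomalous
```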

    Bicycle maps. 2D representations for routing and decision making optimization at local scale

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Urban cycling maps play a crucial role in promoting cycling and improving the cycling experience. These maps provide essential information for bicycle mobility, including geographic information, points of interest, and mobility-oriented elements. However, the lack of shared knowledge on how to create these maps limits their practicality and use, resulting in shortcomings in both content and style. To address this issue, this study begins by analyzing the needs of the cycling community to identify the components that must appear on maps of this kind. The study also examines best practices among existing maps, in terms of both content and design. The research highlights the need to establish standards that unify the criteria used in preparing these maps, with particular emphasis on the inclusion of essential items, their representation, and the depiction of notable elevation gains. This work presents a set of standardised criteria, which were verified through a questionnaire. Finally, the study presents the final standards, accompanied by a map that illustrates them.

    Towards the cross-identification of radio galaxies with machine learning and the effect of radio-loud AGN on galaxy evolution

    It is now well established that active galactic nuclei (AGN) play a fundamental role in galaxy evolution. On cosmic scales, the evolution over cosmic time of the star-formation rate density and the black hole accretion rate appear to be closely related, and on galactic scales, the mass of the stellar bulge is tightly correlated with the mass of the black hole. In particular, radio-loud AGN, which are characterised by powerful jets extending hundreds of kiloparsecs from the galaxy, make a significant contribution to the evolution of the most massive galaxies. There exists a correlation between the prevalence of radio-loud AGN and the stellar and black hole masses, with the stellar mass being the stronger driver of AGN activity. Furthermore, essentially all of the most massive galaxies host a radio-loud AGN. AGN feedback is the strongest candidate for driving the quenching of star-formation activity, in particular in galaxies at the highest masses, as it is capable of maintaining these galaxies as "red and dead". However, the precise mechanisms by which AGN influence galaxy evolution remain poorly understood. Anticipation of the Square Kilometre Array (SKA) has brought radio astronomy into a revolutionary new era. New-generation radio telescopes have been built to develop and test new technologies while addressing different scientific questions, and they have already detected a large number of sources and many previously unknown galaxies. One of these telescopes is the Low Frequency Array (LOFAR), which has been conducting an extensive survey across the entire northern sky called the LOFAR Two-Metre Sky Survey (LoTSS). The source density in LoTSS is higher than in any existing large-area radio survey, and with less than a third of the survey complete, LoTSS has already detected more than 4 million radio sources. The large size of the LoTSS samples already allows AGN to be separated independently into bins of stellar mass, environment, black hole mass, star formation rate, and morphology, thus enabling degeneracies between the different parameters to be broken. Radio emission, long used to identify and study AGN, is a powerful tool when radio sources are matched to their optically identified host galaxies. This "cross-matching" process typically depends on a combination of statistical approaches and visual inspection. For compact sources, cross-matching is traditionally achieved using statistical methods. The task becomes significantly more difficult when the radio emission is extended, split into multiple radio components, or when the host galaxy is not detected in the optical. In these cases, sources need to be inspected, radio components may need to be associated into physical sources, and the radio sources then need to be cross-matched with their optical and/or infrared counterparts. With recent radio continuum surveys growing massively in size, it is now extremely laborious to visually cross-match more than a small fraction of the total sources. The new high-sensitivity radio telescopes are also better at detecting complex radio structures, increasing the number of radio sources whose radio emission is separated into different radio components. In addition, due to a higher density of objects, more compact sources can be randomly positioned close enough to resemble extended sources. Consequently, the cross-matching of radio galaxies with their optical counterparts is becoming increasingly difficult.
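    The statistical method conventionally used for compact-source cross-matching, and referred to below, is likelihood-ratio matching. The abstract does not define it, so the following standard formulation (Sutherland & Saunders 1992) is given only as background, and is an assumption about which variant is meant:

```latex
% Likelihood ratio that a candidate counterpart of magnitude m, at radial
% offset r from the radio position, is the true host rather than a chance
% alignment (standard Sutherland & Saunders 1992 form):
\[
  \mathrm{LR} = \frac{q(m)\, f(r)}{n(m)}
\]
% q(m): expected magnitude distribution of true counterparts;
% f(r): probability distribution of positional offsets;
% n(m): surface density of background objects of magnitude m.
% Candidates with LR above a calibrated threshold are accepted as matches.
```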
    It is crucial to minimise the extent of unnecessary inspection, and the present cross-matching systems demand improvement. In this thesis, I use Machine Learning (ML) to investigate solutions for improving the cross-matching process. ML is a rapidly evolving technique that has recently benefited from a vast increase in data availability, increased computing power, and significantly improved algorithms. ML is gaining popularity in astronomy, and it is undoubtedly the most promising technique for managing large radio astronomy datasets, which at the same time provide the amount of data required to train ML algorithms. Part of the work in this thesis therefore focused on creating a dataset based on visual inspections of the first data release of the LoTSS survey (LoTSS DR1) in order to train and cross-validate the ML models and apply the results to the second data release (LoTSS DR2). I trained tree-based ML models on this dataset to determine whether a statistical match is reliable. In particular, I implemented a classifier that identifies the sources for which a statistical match to optical and infrared catalogues by likelihood ratio is not reliable, in order to select radio sources for visual inspection. I used the properties of the radio sources, the Gaussians that compose a source, the neighbouring radio sources, and the optical counterparts. The best model, a gradient boosting classifier, achieves an accuracy of 95% on a balanced dataset and 96% on real unbalanced data after optimising the classification threshold; this threshold optimisation is sketched below. The results were incorporated into the cross-matching of LoTSS DR2. I further present a deep learning classifier for identifying sources that require radio component association. To improve the spatial and local information about the radio sources, I created a multi-modal model that uses different types of input data: a convolutional component receives radio images as input, while a feed-forward component uses parameters measured from the radio source and its near neighbours. The model helps to recover 94% of the sources with multiple components on a balanced dataset and has an accuracy of 97% on real unbalanced data. The method has already been applied successfully to identify sources that require component association in order to obtain correct radio fluxes for AGN population studies. The ML techniques used in this work can be adapted to other radio surveys. Furthermore, ML will be crucial for dealing with the next radio surveys, in particular for source detection, identification, and cross-matching: only with reliable source identification is it possible to combine radio data with data at other wavelengths and maximally exploit the scientific potential of the radio observations. The use of deep learning, in particular testing ways of combining different data types, can bring further advantages, as it may help with the comprehension of data with different origins. This is particularly important for any upcoming data integration within the SKA. Finally, I used the results of cross-matching the LoTSS DR2 data to understand the interaction between radio-loud AGN, the host galaxy, and the surrounding environment. Specifically, the investigation focused on the properties of the hosts of radio-loud AGN, such as stellar mass, bulge mass, and black hole mass, as well as morphology and environmental factors.
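    As a sketch of the threshold optimisation step mentioned above: train a gradient boosting classifier, then choose the decision threshold that maximises a validation metric instead of the default 0.5. The features and labels below are synthetic placeholders; the thesis's real features describe LoTSS radio sources, their Gaussian components, neighbours, and candidate optical counterparts.

```python
# Sketch: gradient boosting with validation-based threshold optimisation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Placeholder features standing in for e.g. source size, flux, number of
# Gaussian components, nearest-neighbour distance, LR match probability.
X = rng.standard_normal((5000, 5))
# Synthetic label: "needs visual inspection" (1) vs "LR match reliable" (0).
y = ((X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(5000)) > 1).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Scan candidate thresholds on validation probabilities; the default 0.5 is
# rarely optimal on unbalanced data.
proba = clf.predict_proba(X_val)[:, 1]
thresholds = np.linspace(0.05, 0.95, 19)
scores = [balanced_accuracy_score(y_val, (proba >= t).astype(int))
          for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(f"best threshold {best:.2f}, balanced accuracy {max(scores):.3f}")
```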
    The results consistently support the significant influence of stellar mass on radio-AGN activity. AGN activity was found to depend only weakly on galaxy morphology (i.e. ellipticals vs. spirals) except at the highest masses, and morphology itself correlates with stellar mass as well as with environment. After controlling for stellar mass, the most relevant factor for radio-AGN prevalence emerged as residing in higher-density environments, particularly on a global scale. These outcomes provide valuable insights into the triggering and fuelling mechanisms of radio-loud AGN, aligning with cooling flow models and improving our understanding of the phenomenon.

    Procedural Constraint-based Generation for Game Development


    Value Creation with Extended Reality Technologies - A Methodological Approach for Holistic Deployments

    With the increasing computing capacity and transmission performance of information technologies, the number of possible application scenarios for Extended Reality (XR) technologies in companies is growing. XR technologies are hardware systems, software tools, and content-creation methods for producing Virtual Reality, Augmented Reality, and Mixed Reality. By conveying content to users in an immersive, interactive, and intelligent way, XR technologies can increase productivity in companies and open up growth opportunities. Although XR applications in industry have been the subject of scientific research for more than 25 years, they are still considered immature. The main reasons for this are the underlying complexity, the focus of research on specific application scenarios, the insufficient economic viability of deployment scenarios, and the lack of suitable implementation models for XR technologies. Fundamentally, the added value of technologies is unlocked by integrating them into the value creation architecture of business models. This thesis therefore presents a methodology for deploying XR technologies in value creation. The main goal of the methodology is to enable the identification of suitable deployment scenarios and to master the complexity of implementation through a structured procedure. To enable holistic applicability, the methodology is based on a value creation reference model that is independent of industry and business process. It furthermore draws on a holistic morphology of XR technologies and follows an iterative deployment sequence. The value creation model is represented by an existing potential, a value chain, a value creation network, physical and digital resources, and the added value realized by deploying XR technologies. XR technologies are represented by a morphological structure comprising application characteristics and required technological resources. Implementation follows an iterative sequence that describes agile software development methods applicable to the underlying context and takes relevant stakeholders into account. The methodology focuses on a systematic approach that is universally applicable and considers the end user and the ecosystem of the value creation under consideration. To validate the methodology, XR technologies are deployed in two industrial use cases under real economic conditions. The use cases come from different industries, with different XR technology characteristics and different forms of value chains, in order to demonstrate the universal applicability of the methodology and to highlight relevant challenges in carrying out an XR technology deployment. With the help of the presented methodology, companies can deploy XR technologies in their value creation in a targeted manner. It enables detailed implementation planning, a well-founded selection of application scenarios, the assessment of possible challenges and obstacles, and the targeted involvement of relevant stakeholders. As a result, value creation is optimized with economic added value through XR technologies.

    Towards a fully unstructured ocean model for ice shelf cavity environments: Model development and verification using the Firedrake finite element framework

    Numerical studies of ice flow have consistently identified the grounding zone of outlet glaciers and ice streams (the region where ice starts to float) as crucial for predicting the rate of grounded ice loss to the ocean. Owing to the extreme environments and the difficulty of accessing ocean cavities beneath ice shelves, field observations are rare. Melt-rate estimates derived from satellites are also difficult to make with confidence near grounding zones. Numerical ocean models are therefore important tools for investigating these critical and remote regions. The relative inflexibility of structured grid models means, however, that they can struggle to resolve these processes in irregular cavity geometries near grounding zones. To help solve this issue, we present a new nonhydrostatic unstructured mesh model for flow under ice shelves built using the Firedrake finite element framework. We demonstrate our ability to simulate full ice shelf cavity domains using the community standard ISOMIP+ Ocean0 test case and compare our results against those obtained with the popular MITgcm model. Good agreement is found between the two models, despite their use of different discretisation schemes and the sensitivity of the melt rate parameterisation to grid resolution. Verification tests based on the Method of Manufactured Solutions (MMS) show that the new model discretisation is sound and second-order accurate. A main driver behind using Firedrake is the availability of an automatically generated adjoint model. Our first adjoint calculations, of sensitivities of the melt rate with respect to different inputs in an idealised grounding zone domain, are promising and point to the ability to address a number of important questions on the ocean's influence on ice shelf vulnerability in the future.
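    To give a flavour of MMS verification in Firedrake, the sketch below manufactures an exact solution for a simple Poisson problem, solves it at two mesh resolutions, and checks that the L2 error converges at second order. This is a deliberately minimal stand-in for the paper's nonhydrostatic ocean equations; the equation, manufactured solution, and element choice are assumptions for illustration only.

```python
# Sketch: Method of Manufactured Solutions convergence check in Firedrake.
import math
from firedrake import *

def l2_error(n):
    """Solve -div(grad(u)) = f on the unit square with a manufactured
    solution and return the L2 error of the discrete solution."""
    mesh = UnitSquareMesh(n, n)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)
    x, y = SpatialCoordinate(mesh)
    u_exact = sin(pi * x) * sin(pi * y)    # chosen (manufactured) solution
    f = 2 * pi**2 * u_exact                # forcing that makes it exact
    a = inner(grad(u), grad(v)) * dx
    L = f * v * dx
    uh = Function(V)
    solve(a == L, uh, bcs=DirichletBC(V, 0.0, "on_boundary"))
    return math.sqrt(assemble((uh - u_exact) ** 2 * dx))

# Halving the mesh spacing should divide the L2 error by roughly 4 for a
# second-order-accurate discretisation.
e_coarse, e_fine = l2_error(16), l2_error(32)
print(f"observed order: {math.log2(e_coarse / e_fine):.2f}")  # expect ~2
```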