403 research outputs found

    Asynchronous federated and reinforcement learning for mobility-aware edge caching in IoVs

    Edge caching is a promising technology to reduce backhaul strain and content access delay in the Internet of Vehicles (IoV). It pre-caches frequently used contents close to vehicles through intermediate roadside units. Previous edge caching works often assume that content popularity is known in advance or obeys simplified models. However, such assumptions are unrealistic, as content popularity varies with uncertain spatial-temporal traffic demands in the IoV. Federated learning (FL) enables vehicles to predict popular content with distributed training. It keeps the training data local, thereby addressing privacy concerns and communication resource shortages. This paper investigates a mobility-aware edge caching strategy by exploiting asynchronous FL and Deep Reinforcement Learning (DRL). We first implement a novel asynchronous FL framework for local updates and global aggregation of Stacked AutoEncoder (SAE) models. Then, utilizing the latent features extracted by the trained SAE model, we adopt a hybrid filtering model for predicting and recommending popular content. Furthermore, we explore intelligent caching decisions after content prediction. Based on the formulated Markov Decision Process (MDP) problem, we propose a DRL-based solution and adopt neural-network-based parameter approximation to cope with the curse of dimensionality in RL. Extensive simulations are conducted based on real-world trajectory data. In particular, our proposed method outperforms FedAvg, LRU, and NoDRL, improving the edge hit rate by roughly 6%, 21%, and 15%, respectively, when the cache capacity reaches 350 MB.
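    As a rough illustration of the asynchronous aggregation step described above, the sketch below blends a single vehicle's locally trained SAE weights into the global model with a mixing coefficient that decays with update staleness. The weighting rule, function name, and layer shapes are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def async_aggregate(global_weights, local_weights, staleness, base_mix=0.5):
    """Blend one vehicle's local SAE update into the global model.

    The mixing coefficient decays with staleness (global rounds elapsed
    since the vehicle fetched the model), so outdated updates move the
    global model less. The 1/(1 + staleness) decay is an assumption.
    """
    alpha = base_mix / (1.0 + staleness)
    return [(1.0 - alpha) * g + alpha * l
            for g, l in zip(global_weights, local_weights)]

# Toy usage with two SAE layers: a fresh update pulls harder than a stale one.
global_w = [np.zeros((4, 4)), np.zeros(4)]
local_w = [np.ones((4, 4)), np.ones(4)]
global_w = async_aggregate(global_w, local_w, staleness=0)
global_w = async_aggregate(global_w, local_w, staleness=3)
```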

    Supporting Safety Analysis of Deep Neural Networks with Automated Debugging and Repair


    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combination of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Online Machine Learning for Inference from Multivariate Time-series

    Inference and data analysis over networks have become significant areas of research due to the increasing prevalence of interconnected systems and the growing volume of data they produce. Many of these systems generate data in the form of multivariate time series, which are collections of time series observed simultaneously across multiple variables. For example, EEG measurements of the brain produce multivariate time series data that record the electrical activity of different brain regions over time. Cyber-physical systems generate multivariate time series that capture the behavior of physical systems in response to cybernetic inputs. Similarly, financial time series reflect the dynamics of multiple financial instruments or market indices over time. Through the analysis of these time series, one can uncover important details about the behavior of the system, detect patterns, and make predictions. Therefore, designing effective methods for data analysis and inference over networks of multivariate time series is a crucial area of research with numerous applications across various fields. In this Ph.D. thesis, our focus is on identifying the directed relationships between time series and leveraging this information to design algorithms for data prediction as well as missing data imputation. The thesis is organized as a compendium of papers and consists of seven chapters and appendices. The first chapter is dedicated to motivation and a literature survey, whereas in the second chapter we present the fundamental concepts that readers should understand to grasp the material presented in the dissertation with ease. In the third chapter, we present three online nonlinear topology identification algorithms, namely NL-TISO, RFNL-TISO, and RFNL-TIRSO. In this chapter, we assume the data are generated from a sparse nonlinear vector autoregressive (VAR) model and propose online data-driven solutions for identifying the nonlinear VAR topology. We also provide convergence guarantees in terms of dynamic regret for the proposed algorithm RFNL-TIRSO. Chapters four and five of the dissertation delve into the issue of missing data and explore how the learned topology can be leveraged to address this challenge. Chapter five is distinct from the other chapters in its exclusive focus on edge flow data and introduces an online imputation strategy based on a simplicial complex framework that leverages the known network structure in addition to the learned topology. Chapter six of the dissertation takes a different approach, assuming that the data are generated from nonlinear structural equation models. In this chapter, we propose an online topology identification algorithm using a time-structured approach, incorporating information from both the data and the model evolution. The algorithm is shown to have convergence guarantees achieved by bounding the dynamic regret. Finally, chapter seven of the dissertation provides concluding remarks and outlines potential future research directions.
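    To make the topology-identification setting concrete, here is a minimal online estimator for a linear sparse VAR model, updated one sample at a time with a proximal-gradient (ISTA-style) step and an ℓ1 penalty; the support of the coefficient matrix encodes the estimated directed topology. The thesis's NL-TISO, RFNL-TISO, and RFNL-TIRSO algorithms additionally handle nonlinearities via random features and use different update rules, so this is only the linear skeleton, with step size and penalty chosen arbitrarily.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def online_var_topology(Y, p=2, step=0.01, lam=0.05):
    """Online sparse VAR(p) coefficient estimation.

    Y: (T, N) multivariate time series. Returns A of shape (N, N*p);
    the nonzero pattern of A is the estimated directed topology.
    Each time step takes one proximal-gradient step on the instantaneous
    squared prediction error plus an l1 penalty.
    """
    T, N = Y.shape
    A = np.zeros((N, N * p))
    for t in range(p, T):
        x = Y[t - p:t][::-1].reshape(-1)       # stacked lags [y_{t-1}, ..., y_{t-p}]
        err = A @ x - Y[t]                     # one-step prediction error
        A = soft_threshold(A - step * np.outer(err, x), step * lam)
    return A

# Toy usage: two series where series 0 drives series 1.
rng = np.random.default_rng(0)
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.1 * rng.standard_normal()
    y[t, 1] = 0.8 * y[t - 1, 0] + 0.1 * rng.standard_normal()
print(online_var_topology(y, p=1).round(2))
```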

    Teaching Unknown Objects by Leveraging Human Gaze and Augmented Reality in Human-Robot Interaction

    Robots are becoming increasingly popular in a wide range of environments due to their exceptional work capacity, precision, efficiency, and scalability. This development has been further encouraged by advances in Artificial Intelligence (AI), particularly Machine Learning (ML). By employing sophisticated neural networks, robots are given the ability to detect and interact with objects in their vicinity. However, a significant drawback arises from the underlying dependency on extensive datasets and the availability of substantial amounts of training data for these object detection models. This issue becomes particularly problematic when the specific deployment location of the robot and the surrounding environment, including the objects within it, are not known in advance. The vast and ever-expanding array of objects makes it virtually impossible to comprehensively cover the entire spectrum of existing objects using preexisting datasets alone. The goal of this dissertation was to teach a robot unknown objects in the context of Human-Robot Interaction (HRI) in order to liberate it from its data dependency, unleashing it from predefined scenarios. In this context, the combination of eye tracking and Augmented Reality (AR) created a powerful synergy that empowered the human teacher to seamlessly communicate with the robot and effortlessly point out objects by means of human gaze. This holistic approach led to the development of a multimodal HRI system that enabled the robot to identify and visually segment the Objects of Interest (OOIs) in three-dimensional space, even though they were initially unknown to it, and then examine them autonomously from different angles. Through the class information provided by the human, the robot was able to learn the objects and redetect them at a later stage. Due to the knowledge gained from this HRI-based teaching process, the robot's object detection capabilities exhibited performance comparable to state-of-the-art object detectors trained on extensive datasets, without being restricted to predefined classes, showcasing its versatility and adaptability. The research conducted within the scope of this dissertation made significant contributions at the intersection of ML, AR, eye tracking, and robotics. These findings not only enhance the understanding of these fields, but also pave the way for further interdisciplinary research. The scientific articles included in this dissertation have been published at high-impact conferences in the fields of robotics, eye tracking, and HRI.

    Proceedings of the 8th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2023)

    This volume gathers the papers presented at the Detection and Classification of Acoustic Scenes and Events 2023 Workshop (DCASE 2023), held in Tampere, Finland, on 21–22 September 2023.

    SuperCDMS HVeV Run 2 Low-Mass Dark Matter Search, Highly Multiplexed Phonon-Mediated Particle Detector with Kinetic Inductance Detector, and the Blackbody Radiation in Cryogenic Experiments

    There is ample evidence of dark matter (DM), a phenomenon responsible for ≈ 85% of the matter content of the Universe that cannot be explained by the Standard Model (SM). One of the most compelling hypotheses is that DM consists of beyond-SM particle(s) that are nonluminous and nonbaryonic. So far, numerous efforts have been made to search for particle DM, yet none has yielded an unambiguous observation of DM particles. We present in Chapter 2 the SuperCDMS HVeV Run 2 experiment, where we search for DM in the mass ranges of 0.5–10⁴ MeV/c² for electron-recoil DM and 1.2–50 eV/c² for the dark photon and the axion-like particle (ALP). SuperCDMS utilizes cryogenic crystals as detectors to search for DM interactions with the crystal atoms. The interaction is detected in the form of recoil energy mediated by phonons. In the HVeV project, we look for electron recoil, where we enhance the signal by the Neganov-Trofimov-Luke effect under high-voltage biases. This technique enabled us to detect quantized e⁻h⁺ pair creation at a 3% ionization energy resolution. Our work is the first DM search analysis considering charge trapping and impact ionization effects for solid-state detectors. We report our results as upper limits for the assumed particle models as functions of DM mass. Our results exclude the DM-electron scattering cross section, the dark photon kinetic mixing parameter, and the ALP axioelectric coupling above 8.4 × 10⁻³⁴ cm², 3.3 × 10⁻¹⁴, and 1.0 × 10⁻⁹, respectively. Currently, every SuperCDMS detector is equipped with a few phonon sensors based on transition-edge sensor (TES) technology. In order to improve the background rejection performance of phonon-mediated particle detectors, we are developing highly multiplexed detectors utilizing kinetic inductance detectors (KIDs) as phonon sensors. This work is detailed in Chapters 3 and 4. We have improved our previous KID and readout line designs, which enabled us to produce our first ø3" detector with 80 phonon sensors. The detector yielded a frequency placement accuracy of 0.07%, indicating our capability to implement hundreds of phonon sensors in a typical SuperCDMS-style detector. We detail our fabrication technique for simultaneously employing Al and Nb for the KID circuit. We explain our signal model, which includes extracting the RF signal, calibrating the RF signal into pair-breaking energy, and then detecting pulses. We summarize our noise conditions and develop models for the different noise sources. We combine the signal and noise models into an energy resolution model for KID-based phonon-mediated detectors. From this model, we propose strategies to further improve future detectors' energy resolution and introduce our ongoing implementations. Blackbody (BB) radiation is one of the plausible background sources responsible for the low-energy background currently preventing low-threshold DM experiments from searching lower DM mass ranges. In Chapter 5, we present our study of this background for cryogenic experiments. We have developed physical models and, based on these models, simulation tools for BB radiation propagation as photons or waves. We have also developed a theoretical model for BB photons' interaction with semiconductor impurities, which is one of the possible channels for generating the leakage-current background in SuperCDMS-style detectors. We have planned an experiment to calibrate our simulation and leakage-current generation model. For this experiment, we have developed a specialized "mesh TES" photon detector inspired by cosmic microwave background experiments. We present its sensitivity model, the radiation source developed for the calibration, and the general plan of the experiment.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deals with Old Saxon or Old English. This project aims to be a first attempt at creating a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance yielded 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
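    As a sketch of what such a supervised sequence labeller can look like, the PyTorch snippet below tags each token of a verse with a metrical label using a BiLSTM. The vocabulary size, label set, embedding and hidden dimensions, and the dummy training step are all illustrative assumptions rather than the authors' actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class BiLSTMScanner(nn.Module):
    """Tag each syllable or character of a verse with a metrical label
    (e.g., lift vs. dip). Sizes and layers are illustrative assumptions."""

    def __init__(self, vocab_size, n_labels, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens):             # tokens: (batch, seq_len) int ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                 # (batch, seq_len, n_labels) logits

# One training step on dummy data (real labels would come from the
# manually annotated Heliand corpus).
model = BiLSTMScanner(vocab_size=100, n_labels=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
tokens = torch.randint(1, 100, (8, 20))
labels = torch.randint(0, 5, (8, 20))
loss = loss_fn(model(tokens).reshape(-1, 5), labels.reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()
```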

    Enabling Precision Fertilisers Application Using Digital Soil Mapping in Australian Sugarcane Areas

    Sugar is Australia's second largest export crop after wheat, generating a total annual revenue of almost $2 billion. It is produced from sugarcane, with approximately 95% grown in Queensland. While highly productive and contributing to the area's economic sustainability, the soils in these areas have low fertility. The soils typically contain sand content > 60%, low organic carbon (SOC 6%). Hence, sugarcane farmers need to apply fertilisers and ameliorants to maintain soil quality and productivity. Unfortunately, the high-intensity rainfall in the region causes run-off of sediments, nutrients, and ameliorants from these farms, resulting in environmental degradation and threats to marine ecology in the adjacent World Heritage Listed Great Barrier Reef. To mitigate these issues, the Australian sugarcane industry introduced the Six-Easy-Steps Nutrient Management Guidelines. To apply these guidelines, labour-intensive, high-density soil sampling is typically required at the field level, followed by expensive laboratory analysis spanning the myriad of biological, physical, and chemical soil properties that need to be determined. To assist in sampling site selection, remote (e.g., Landsat-8, Sentinel-2, and DEM-based terrain attributes) and/or proximal sensing (e.g., electromagnetic [EM] induction and gamma-ray [γ-ray] spectrometry) digital data are increasingly being used. Moreover, the soil and digital data can be modelled using geostatistical (e.g., ordinary kriging [OK]), linear (e.g., linear mixed model [LMM]), machine learning (e.g., random forest [RF], quantile regression forest [QRF], support vector machine [SVM], and Cubist), and hybrid (e.g., RFRK, SVMRK, and CubistRK) approaches to enable the prediction of soil properties from this rich source of digital data. However, many questions need to be answered to determine appropriate recommendations, including but not limited to: i) which modelling approach is optimal; ii) which source of digital data is optimal, and does fusion of various sources of digital data improve prediction accuracy; iii) which methods can be used to combine these digital data; iv) what is the minimum number of samples needed to establish a suitable calibration; v) which soil sampling designs could be used; and vi) what approaches are available to enable the prediction of soil properties at various depths simultaneously? In this thesis, Chapter 1 introduces the research questions, defines the problems facing the Australian sugarcane industry in terms of the application of the Six-Easy-Steps Nutrient Management Guidelines, and outlines the research aims and thesis structure. Chapter 2 is a systematic literature review on various facets of digital soil mapping (DSM), including digital and soil data, models and outputs, and their application across various spatial scales and properties. In Chapter 3, the prediction of topsoil (0-0.3 m) SOC is examined in the context of comparing predictive models (i.e., geostatistical, linear, machine learning [ML], and hybrid) using various digital data (i.e., remote [Landsat-8] and proximal sensors [EM and γ-ray]), either individually or in combination, and determining the minimum number of calibration samples. In Chapter 4, to predict top- (0-0.3 m) and subsoil (0.6-0.9 m) Ca and Mg, various sampling designs (simple random [SRS], spatial coverage [SCS], feature space coverage [FSCS], and conditioned Latin hypercube sampling [cLHS]) were assessed, with different modelling approaches (i.e., OK, LMM, QRF, SVM, and CubistRK) and the effect of calibration sample size evaluated, using a combination of proximal data (EM and γ-ray) and terrain attributes (e.g., elevation, slope, and aspect). Chapter 5 shows that, to enable the three-dimensional mapping of CEC and pH for the topsoil (0-0.3 m), subsurface (0.3-0.6 m), shallow subsoil (0.6-0.9 m), and deep subsoil (0.9-1.2 m), an equal-area spline depth function can be used, with remote (Sentinel-2) and proximal (EM and γ-ray) data used alone or fused together, and various fusion methods (i.e., concatenation, simple averaging [SA], Bates-Granger averaging [BGA], Granger-Ramanathan averaging [GRA], and bias-corrected eigenvector averaging [BC-EA]) investigated. Chapter 6 explored the synergistic use of proximal (EM and γ-ray) and time-series remote data (Landsat-8 and Sentinel-2) to map top- (0-0.15 m) and subsoil (0.30-0.45 m) ESP. The results show that, across these case studies, hybrid and ML models generally achieved higher prediction accuracy. The fusion of remote and proximal data produced better predictions than any single source of sensor data. Granger-Ramanathan averaging (GRA) and concatenation were the most effective methods for combining digital data. A minimum sampling density of less than 1 sample ha⁻¹ would be required to calibrate a good predictive model. There were differences in prediction accuracy amongst the sampling designs. The application of depth-function splines enables the simultaneous mapping of soil properties at various depths. The resulting DSM products can be used to inform farmers of the spatial variability of their soils and enable them to precisely apply fertilisers and/or ameliorants based on the Six-Easy-Steps Nutrient Management Guidelines.
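    For readers unfamiliar with Granger-Ramanathan averaging (GRA), the sketch below fits least-squares combination weights on a calibration set and applies them to fuse predictions from separate sensor-based models. This is the unconstrained, no-intercept GRA variant run on synthetic data; the variable names and the variant choice are assumptions, and the thesis evaluates several other fusion methods alongside it.

```python
import numpy as np

def gra_weights(preds, y):
    """Granger-Ramanathan averaging: least-squares weights for combining
    predictions from different sensor-based models.

    preds: (n_samples, n_models) predictions on a calibration set
           (e.g., EM-only, gamma-ray-only, and Sentinel-2-only models).
    y:     (n_samples,) laboratory-measured soil property (e.g., CEC).
    Unconstrained, no-intercept variant.
    """
    w, *_ = np.linalg.lstsq(preds, y, rcond=None)
    return w

def gra_combine(preds, w):
    return preds @ w

# Toy usage with synthetic calibration data: noisier models get lower weight.
rng = np.random.default_rng(1)
truth = rng.normal(10, 2, size=200)
preds = np.column_stack([truth + rng.normal(0, s, 200) for s in (0.5, 1.0, 2.0)])
w = gra_weights(preds, truth)
print(w.round(2), np.mean((gra_combine(preds, w) - truth) ** 2).round(3))
```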