
    Context-Aware Self-Healing for Small Cell Networks

    Recent years have seen a continuous increase in the use of mobile communications. To cope with the growing traffic, recently deployed technologies have deepened the adoption of small cells (low-powered base stations) to serve areas with high demand or coverage issues, where macrocells can be either unsuccessful or inefficient. In addition, new cellular and non-cellular technologies (e.g. WiFi) coexist with legacy ones, as well as multiple deployment schemes (macrocells, small cells), in what are known as heterogeneous networks (HetNets). Due to the huge complexity of HetNets, their operation, administration and management (OAM) has become increasingly difficult. To overcome this, the NGMN Alliance and the 3GPP defined the Self-Organizing Network (SON) paradigm, which aims to automate OAM procedures to reduce their cost and increase the resulting performance. One key focus of SON is the self-healing of the network, covering the automatic detection of problems, the diagnosis of their causes, their compensation and their recovery. Until recently, SON mechanisms have been based solely on the analysis of alarms and performance indicators. However, on the one hand, this approach has become very limited given the complexity of the scenarios, particularly in indoor cellular environments, where the deployment of small cells, their coexistence with multiple telecommunications systems and the nature of those environments (in terms of propagation, coverage overlapping, fast demand changes and users' mobility) introduce many challenges for classic SON. On the other hand, modern user equipment (e.g. smartphones), equipped with powerful processors, sensors and applications, generates a huge amount of context information. Context refers to those variables not directly associated with the telecommunication service, but with the terminals and their environment: the user's position, applications, social data, etc. These can be an invaluable source of information for the management of the network, in an approach we have termed context-aware SON, which is the approach proposed in this thesis. To develop this concept, the thesis follows a top-down approach. Firstly, the characteristics of cellular deployments are assessed, especially for indoor small cell networks; in those scenarios, the need for context-aware SON is evaluated and found to be indispensable. Secondly, a new cellular architecture is defined to integrate both context information and SON mechanisms into the management plane of the mobile network, specifying how context becomes an integral part of cellular OAM/SON; a real-world implementation of the architecture is also proposed. Thirdly, building on the general SON architecture, a logical self-healing framework is defined to support the context-aware healing mechanisms to be developed. Fourthly, different self-healing algorithms are defined depending on the failures to be managed and the conditions of the considered scenario. These mechanisms are based on probabilistic analysis, using both context and network data for the detection and diagnosis of cellular issues. The conditions for implementing these methods are assessed, and their applicability is evaluated by means of simulators and testbed trials. The results show important improvements in performance and capabilities over previous methods, demonstrating the relevance of the proposed approach.
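    The abstract above describes probabilistic detection and diagnosis that combines network and context data. A minimal sketch of that idea, using a naive Bayes classifier over symptoms; all feature names, fault causes, and probabilities below are illustrative assumptions, not values from the thesis:

```python
# Hypothetical sketch: naive Bayes diagnosis of a small-cell fault cause
# from network indicators plus user context. Priors and likelihoods are
# invented for illustration only.
from math import log

# P(cause) priors and P(observation | cause) likelihoods (assumed numbers).
priors = {"cell_outage": 0.2, "interference": 0.3, "normal": 0.5}
likelihoods = {
    # observation: {cause: probability of seeing it under that cause}
    "low_rsrp":       {"cell_outage": 0.9, "interference": 0.4, "normal": 0.1},
    "users_indoors":  {"cell_outage": 0.6, "interference": 0.5, "normal": 0.5},
    "high_drop_rate": {"cell_outage": 0.8, "interference": 0.7, "normal": 0.05},
}

def diagnose(observations):
    """Return the most probable cause given a list of observed symptoms."""
    scores = {}
    for cause, prior in priors.items():
        score = log(prior)  # work in log-space for numerical stability
        for obs in observations:
            score += log(likelihoods[obs][cause])
        scores[cause] = score
    return max(scores, key=scores.get)

print(diagnose(["low_rsrp", "high_drop_rate"]))  # prints "cell_outage"
```

    Context variables (here, whether users are indoors) enter the model on the same footing as network indicators, which is the essence of making context part of OAM/SON.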

    Machine Learning and Deep Learning for the Built Heritage Analysis: Laser Scanning and UAV-Based Surveying Applications on a Complex Spatial Grid Structure

    The reconstruction of 3D geometries from reality-based data is challenging and time-consuming due to the difficulties involved in modeling existing structures and the complex nature of built heritage. This paper presents a methodological approach for the automated segmentation and classification of surveying outputs to improve interpretation and building information modeling from laser scanning and photogrammetric data. The research focused on the surveying of reticular, space grid structures of the late 19th–21st centuries, part of our architectural heritage, which may require monitoring and maintenance activities, and relied on artificial intelligence (machine learning and deep learning) for: (i) the classification of 3D architectural components at multiple levels of detail and (ii) automated masking in standard photogrammetric processing. Focusing on the case study of the steel grid structure named La Vela in Bologna, the work raises many critical issues in space grid structures in terms of data accuracy, geometric and spatial complexity, semantic classification, and component recognition.

    Multi-modal and multi-dimensional biomedical image data analysis using deep learning

    There is a growing need for computational methods and tools for the automated, objective, and quantitative analysis of biomedical signal and image data to facilitate disease and treatment monitoring, early diagnosis, and scientific discovery. Recent advances in artificial intelligence and machine learning, particularly in deep learning, have revolutionized computer vision and image analysis for many application areas. While processing of non-biomedical signal, image, and video data using deep learning methods has been very successful, high-stakes biomedical applications present unique challenges, such as different image modalities, limited training data, and the need for explainability and interpretability, that must be addressed. In this dissertation, we developed novel, explainable, attention-based deep learning frameworks for the objective, automated, and quantitative analysis of biomedical signal, image, and video data. The proposed solutions involve multi-scale signal analysis for oral diadochokinesis studies; an ensemble of deep learning cascades using global soft attention mechanisms for segmentation of meningeal vascular networks in confocal microscopy; spatial attention and spatio-temporal data fusion for detection of rare and short-term video events in laryngeal endoscopy videos; and a novel discrete Fourier transform driven class activation map for explainable AI and weakly supervised object localization and segmentation for detailed vocal fold motion analysis using laryngeal endoscopy videos. Experiments on the proposed methods showed robust and promising results towards automated, objective, and quantitative analysis of biomedical data, which is of great value for early diagnosis and effective monitoring of disease progression and treatment.

    Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism

    Current models in Explainable Artificial Intelligence (XAI) have shown an evident and quantified lack of reliability for measuring feature relevance when statistically entangled features are used to train deep classifiers. Deep Learning is increasingly applied in clinical trials to predict early diagnosis of neuro-developmental disorders such as Autism Spectrum Disorder (ASD). However, the use of reliable saliency maps to obtain trustworthy and interpretable metrics from neural activity features is still insufficiently mature for practical applications in diagnostics or clinical trials. Moreover, in ASD research the use of deep classifiers that decode viewed facial emotions from neural measures is relatively unexplored. Therefore, in this study we propose the evaluation of a Convolutional Neural Network (CNN) for electroencephalography (EEG)-based facial emotion recognition, complemented with a novel RemOve-And-Retrain (ROAR) methodology to recover the features most relevant to the classifier. Specifically, we compare well-known relevance maps such as Layer-Wise Relevance Propagation (LRP), PatternNet, Pattern-Attribution, and SmoothGrad Squared. This study is the first to consolidate a more transparent feature-relevance calculation for successful EEG-based facial emotion recognition using a within-subject-trained CNN in typically developing and ASD individuals.
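    The core of the RemOve-And-Retrain protocol mentioned above is simple: degrade the features an attribution map marks as most relevant, retrain from scratch, and check how far accuracy falls; a faithful map causes a sharp drop. A toy sketch of that loop, where the data, the nearest-centroid "model", and the attribution maps are stand-ins invented for illustration, not the CNN/EEG pipeline of the study:

```python
# Toy ROAR sketch: remove the top-k most "relevant" features (replace with
# their mean, i.e. make them uninformative), retrain, and re-score.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

def train_and_score(Xtr, ytr):
    """Nearest-centroid classifier; returns training accuracy."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = np.linalg.norm(Xtr - c1, axis=1) < np.linalg.norm(Xtr - c0, axis=1)
    return (pred.astype(int) == ytr).mean()

def roar(X, y, relevance, frac):
    """Degrade the top-`frac` most relevant features, then retrain."""
    k = int(frac * X.shape[1])
    top = np.argsort(relevance)[::-1][:k]
    Xd = X.copy()
    Xd[:, top] = Xd[:, top].mean(0)        # uninformative replacement
    return train_and_score(Xd, y)

faithful = np.zeros(d)
faithful[[0, 1]] = 1.0                     # points at the true features
random_map = rng.random(d)                 # a baseline, uninformed map
print(roar(X, y, faithful, 0.1), roar(X, y, random_map, 0.1))
```

    Removing the features flagged by the faithful map destroys the signal and accuracy collapses toward chance, while a random map usually leaves it intact; that gap is ROAR's measure of attribution quality.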

    Deep probabilistic methods for improved radar sensor modelling and pose estimation

    Radar’s ability to sense under adverse conditions and at far range makes it a valuable alternative to vision and lidar for mobile robotic applications. However, its complex, scene-dependent sensing process and significant noise artefacts make working with radar challenging. Moving past the classical rule-based approaches that have dominated the literature to date, this thesis investigates deep, data-driven solutions across a range of tasks in robotics. Firstly, a deep approach is developed for mapping raw sensor measurements to a grid-map of occupancy probabilities, outperforming classical filtering approaches by a significant margin. A distribution over the occupancy state is captured, additionally allowing uncertainty in predictions to be identified and managed. The approach is trained entirely on partial labels generated automatically from lidar, without requiring manual labelling. Next, a deep model is proposed for generating stochastic radar measurements from simulated elevation maps. The model is trained by learning the forward and backward processes side by side, using adversarial and cyclic-consistency constraints together with a partial alignment loss, with labels again generated from lidar. By faithfully replicating the radar sensing process, new models can be trained for downstream tasks using labels that are readily available in simulation. In this case, segmentation models trained on simulated radar measurements, when deployed in the real world, are shown to approach the performance of a model trained entirely on real-world measurements. Finally, the potential of deep approaches applied to the radar odometry task is explored. A learnt feature space is combined with a classical correlative scan-matching procedure and optimised for pose prediction, allowing the proposed method to outperform the previous state of the art by a significant margin. Through a probabilistic formulation, the uncertainty in the pose is also successfully characterised. Building upon this success, properties of the Fourier transform are then used to separate the search for translation from the search for rotation. This decoupled search yields a significant boost in run-time performance, allowing the approach to run in real time on CPUs and embedded devices while remaining competitive with other radar odometry methods in the literature.
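    The Fourier property alluded to above is the shift theorem: a translation between two scans appears only in the phase of their spectra, so the translation search collapses into a single inverse FFT (phase correlation). A minimal sketch on a toy 2-D grid; the real system operates on learnt feature maps and additionally decouples rotation (e.g. via a polar representation), which this sketch omits:

```python
# Phase correlation: recover a (circular) 2-D translation in one inverse FFT.
import numpy as np

rng = np.random.default_rng(1)
scan = rng.random((64, 64))                        # toy "radar" grid
shift = (5, 12)                                    # ground-truth translation
moved = np.roll(scan, shift, axis=(0, 1))          # circularly shifted copy

F1, F2 = np.fft.fft2(scan), np.fft.fft2(moved)
cross_power = np.conj(F1) * F2                     # phase encodes the shift
cross_power /= np.abs(cross_power) + 1e-12         # keep phase, drop magnitude
corr = np.fft.ifft2(cross_power).real              # sharp peak at the shift

dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dy, dx)                                      # prints "5 12"
```

    Because the whole translation search costs one FFT round-trip rather than an exhaustive sweep over offsets, this is what makes real-time operation on CPUs and embedded devices plausible.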

    Advanced Sensing and Image Processing Techniques for Healthcare Applications

    This Special Issue aims to attract the latest research and findings in the design, development, and experimentation of healthcare-related technologies. This includes, but is not limited to, the use of novel sensing, imaging, data processing, machine learning, and artificial intelligence devices and algorithms to assist and monitor the elderly, patients, and people with disabilities.