
    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently produce abundant monitoring data, accessing and effectively measuring that data is another matter. The challenges span several aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given the growing size of networks, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system based on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution).
Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system built on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for Open RAN (Radio Access Network) systems. The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.

    RLS-LCD: an efficient Loop Closure Detection for Rotary-LiDAR Scans


    Meta-learning algorithms and applications

    Meta-learning in the broader context concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown vast benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself - the architecture. Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning could be more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches, EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the values of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameter, but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning.
Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained with existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize few-shot learning across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms as well as novel applications enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were previously too challenging. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
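For context, a minimal synchronous successive-halving loop, the family of low-cost search methods that PASHA builds on, can be sketched as follows. This is a simplified illustration, not PASHA itself (which allocates resources progressively and asynchronously), and the function names are our own:

```python
# Minimal synchronous successive halving: evaluate all surviving
# configurations on the current budget, keep the best 1/eta fraction,
# and multiply the budget by eta for the next round.
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """`evaluate(config, budget)` returns a validation loss (lower is better)."""
    budget = min_budget
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = ranked[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy search: the "meta-parameter" is an integer and the loss is its
# distance from 3, so the search should settle on 3.
best = successive_halving(list(range(8)), lambda c, b: abs(c - 3))
```

The appeal of this family is that poor configurations are discarded after only a small training budget, so most of the compute is spent on promising candidates.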

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models on many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalises to real-world transformations. This begins to explain the differing empirical performance achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation.
Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with the downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
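Contrastive learners of the kind studied here typically optimise an InfoNCE-style objective over pairs of augmented views. A minimal NumPy sketch of that generic loss (our own illustration, not a specific method from the thesis) is:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Generic InfoNCE/NT-Xent contrastive loss.
    z1, z2: embeddings of two augmented views of the same batch, shape (N, d);
    row i of z1 and row i of z2 form the positive pair."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                            # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # positives lie on the diagonal

# Matched views give a much lower loss than mismatched ones.
z = np.eye(4)
loss_matched = info_nce(z, z)
loss_shuffled = info_nce(z, z[[1, 2, 3, 0]])
```

The augmentations that produce the two views are exactly where the spatial/appearance invariance trade-off discussed above enters: whatever the augmentation discards, the representation learns to ignore.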

    Workflow to detect ship encounters at sea with GIS support

    Dissertation presented as the partial requirement for obtaining a Master's degree in Geographic Information Systems and Science. According to the United Nations, more than 80% of global trade is currently transported by sea. The Portuguese EEZ is a very extensive area with high maritime traffic, among which illicit activities may occur. This work aims to contribute to the official control of illegal transshipment actions by studying and proposing a new way of detecting encounters between ships. Ships with specific characteristics use an on-board Automatic Identification System (AIS) which transmits a signal via radio frequencies, allowing shore stations to receive static and dynamic data from the ship. This increases maritime situational awareness and, consequently, the safety of navigation. The methodology of this dissertation employs monthly and daily AIS data in the study area, which is located in southern mainland Portugal. A bibliometric and content analysis was performed in order to assess the state of the art concerning geospatial analysis models of maritime traffic based on AIS data, with a focus on anomalous behaviour detection. Maritime traffic density maps were created with the support of a GIS (QGIS software), which made it possible to characterise the maritime traffic in the study area and, subsequently, to identify patterns in the locations where ship encounters occur. The algorithm to detect ship-to-ship meetings at sea was developed using a rule-based methodology. After analysis and discussion of the results, it was found that the areas where the possibility of ship encounters at sea is greatest lie away from the main shipping lanes, but close to areas with fishing vessels.
The study findings and workflow are useful for decision-making by the competent authorities responsible for patrolling the maritime areas, focusing on the detection of illegal transshipment actions.
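A rule-based encounter check of the kind the dissertation describes can be sketched as follows. The thresholds, data layout, and function names here are hypothetical illustrations, not the rules of the study itself:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two AIS positions, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def ship_encounters(tracks, dist_km=0.5, speed_kn=2.0, min_minutes=30):
    """Flag a pair of ships when, for at least `min_minutes` time steps, both
    report speeds below `speed_kn` while lying within `dist_km` of each other.
    `tracks` maps a ship id to a list of (minute, lat, lon, speed_knots)."""
    flagged = set()
    ids = sorted(tracks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            pos_b = {t: (la, lo, s) for t, la, lo, s in tracks[b]}
            close = sum(
                1 for t, la, lo, s in tracks[a]
                if t in pos_b and s < speed_kn and pos_b[t][2] < speed_kn
                and haversine_km(la, lo, pos_b[t][0], pos_b[t][1]) <= dist_km)
            if close >= min_minutes:
                flagged.add((a, b))
    return flagged
```

The "slow and close for a sustained period" pattern is what distinguishes a possible transshipment meeting from ships merely crossing paths in a shipping lane.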

    Combined Nutrition and Exercise Interventions in Community Groups

    Diet and physical activity are two key modifiable lifestyle factors that influence health across the lifespan (prevention and management of chronic diseases and reduction of the risk of premature death through several biological mechanisms). Community-based interventions contribute to public health, as they have the potential to achieve high population-level impact by focusing on groups that share a common culture or identity in their natural living environment. While the health benefits of a balanced diet and regular physical activity are commonly studied separately, interventions that combine these two lifestyle factors have the potential to induce greater benefits in community groups than strategies focusing on only one or the other. Thus, this Special Issue entitled “Combined Nutrition and Exercise Interventions in Community Groups” comprises manuscripts that highlight this combined approach (balanced diet and regular physical activity) in community settings. The contributors to this Special Issue are well-recognized professionals in complementary fields such as education, public health, nutrition, and exercise. This Special Issue highlights the latest research regarding combined nutrition and exercise interventions among different community groups and includes research articles conducted across five continents (Africa, Asia, America, Europe and Oceania), as well as reviews and systematic reviews.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Medical Image Analysis using Deep Relational Learning

    In the past ten years, with the help of deep learning, especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images remains a very challenging problem, and it has not been fully studied. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, and our hierarchical homography estimation network outperforms the other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
    Comment: arXiv admin note: substantial text overlap with arXiv:2007.0778
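For context, the standard mosaicing step that consumes such pairwise homography estimates chains them into a common reference frame. A minimal NumPy sketch of that composition (our own illustration, not the thesis's network) is:

```python
import numpy as np

def compose_to_first_frame(pairwise_h):
    """Given homographies H_k mapping frame k+1 into frame k, chain them so
    that every frame maps into the first frame's coordinate system, where
    the mosaic canvas lives."""
    to_first = [np.eye(3)]
    for H in pairwise_h:
        G = to_first[-1] @ H            # frame k+1 -> frame 0
        to_first.append(G / G[2, 2])    # fix the homogeneous scale
    return to_first

# Two pure 1-pixel translations compose to a 2-pixel translation.
shift = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
chained = compose_to_first_frame([shift, shift])
```

Because errors in the pairwise estimates accumulate through this chain, more accurate per-pair homographies translate directly into less drift across the mosaic.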

    Unbounded Differentially Private Quantile and Maximum Estimation

    In this work we consider the problem of differentially private computation of quantiles for the data, especially the highest quantiles such as the maximum, but with an unbounded range for the dataset. We show that this can be done efficiently through a simple invocation of AboveThreshold, a subroutine that is iteratively called in the fundamental Sparse Vector Technique, even when there is no upper bound on the data. In particular, we show that this procedure can give more accurate and robust estimates of the highest quantiles, with applications to the clipping that is essential for differentially private sum and mean estimation. In addition, we show how two invocations can handle the fully unbounded data setting. Within our study, we show that an improved analysis of AboveThreshold can improve the privacy guarantees for the widely used Sparse Vector Technique, which is of independent interest. We give a more general characterization of the privacy loss for AboveThreshold, which we immediately apply to our method for improved privacy guarantees. Our algorithm only requires one O(n) pass through the data, which can be unsorted, and each subsequent query takes O(1) time. We empirically compare our unbounded algorithm with the state-of-the-art algorithms in the bounded setting. For inner quantiles, we find that our method often performs better on non-synthetic datasets. For the maximal quantiles, which we apply to differentially private sum computation, we find that our method performs significantly better.
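The flavour of the idea can be sketched with the textbook AboveThreshold subroutine applied to an unbounded grid of candidate maxima. This is a generic illustration of the technique, not the paper's algorithm or its noise calibration; the grid step and stopping rule are our own assumptions:

```python
import numpy as np

def above_threshold(queries, threshold, eps):
    """Textbook AboveThreshold (the core of the Sparse Vector Technique):
    return the index of the first query whose noisy value exceeds the noisy
    threshold. The privacy cost is eps no matter how many queries are
    inspected, assuming each query has sensitivity 1."""
    rho = np.random.laplace(scale=2.0 / eps)            # threshold noise
    for i, q in enumerate(queries):
        if q + np.random.laplace(scale=4.0 / eps) >= threshold + rho:
            return i
    return None

def dp_max_estimate(data, eps, step=1.0):
    """Query k is minus the count of points above k*step, so AboveThreshold
    fires once (almost) no points remain above that level; the firing level
    is the private estimate of the maximum. The grid need not be bounded
    in advance, mirroring the unbounded-range setting."""
    data = np.asarray(data)
    levels = (float(-np.sum(data > k * step)) for k in range(10**6))
    i = above_threshold(levels, -0.5, eps)
    return None if i is None else i * step
```

Each count query changes by at most 1 when one data point is added or removed, which is what makes a single AboveThreshold call (and hence a single pass) sufficient.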

    A Benchmark Comparison of Visual Place Recognition Techniques for Resource-Constrained Embedded Platforms

    Autonomous navigation has become a widely researched area over the past few years, gaining a massive following due to its necessity in creating a fully autonomous robotic system. Autonomous navigation is an exceedingly difficult task to accomplish in and of itself. Successful navigation relies heavily on the ability to self-localise within a given environment. Without this awareness of one’s own location, it is impossible to navigate successfully in an autonomous manner. Since its inception, Simultaneous Localization and Mapping (SLAM) has been one of the most widely researched areas of autonomous navigation. SLAM focuses on self-localization within a mapped or un-mapped environment, and on constructing or updating the map of one’s surroundings. Visual Place Recognition (VPR) is an essential part of any SLAM system. VPR relies on visual cues to determine one’s location within a mapped environment. This thesis presents two main topics within the field of VPR. First, it presents a benchmark analysis of several popular embedded platforms when performing VPR. The presented benchmark analyses six different VPR techniques across three different datasets, and investigates accuracy, CPU usage, memory usage, processing time and power consumption. The benchmark demonstrated a clear relationship between platform architecture and the metrics measured, with platforms of the same architecture achieving comparable accuracy and algorithm efficiency. Additionally, the Raspberry Pi platform was noted as a standout in terms of algorithm efficiency and power consumption. Secondly, this thesis proposes an evaluation framework intended to provide information about a VPR technique’s usability within a real-time application. The approach makes use of the incoming frame rate of an image stream and the VPR frame rate, the rate at which the technique can perform VPR, to determine how efficient VPR techniques would be in a real-time environment.
    This evaluation framework determined that CoHOG would be the most effective algorithm to deploy in a real-time environment, as it had the best ratio between computation time and accuracy.
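The frame-rate comparison at the heart of that framework can be sketched in a few lines; the function and field names below are illustrative, not the thesis's actual implementation:

```python
def realtime_feasibility(incoming_fps, vpr_fps):
    """Compare the camera's incoming frame rate with the rate at which a VPR
    technique can process frames: a ratio of at least 1.0 means the technique
    keeps up with the stream; below that, frames must be skipped."""
    ratio = vpr_fps / incoming_fps
    return {"ratio": ratio,
            "realtime": ratio >= 1.0,
            "frames_dropped_per_sec": max(0.0, incoming_fps - vpr_fps)}

# A technique running at 15 FPS against a 30 FPS stream handles half the frames.
report = realtime_feasibility(30.0, 15.0)
```

Framing usability as a ratio lets the same measurement rank techniques across platforms, since the VPR frame rate already folds in the platform-dependent processing time measured in the benchmark.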