4,906 research outputs found

    Analysis of LTE-A Heterogeneous Networks with SIR-based Cell Association and Stochastic Geometry

    This paper provides an analytical framework to characterize the performance of Heterogeneous Networks (HetNets), where the positions of base stations and users are modeled by spatial Poisson Point Processes (stochastic geometry). We formally derive the outage probability, rate coverage probability, and mean user bit-rate when a frequency reuse factor K and a novel prioritized SIR-based cell association scheme are applied. A simulation approach has been adopted to validate the analytical model; theoretical results are in good agreement with simulation ones. The results obtained highlight that the adopted cell association technique achieves very low outage probability and meets given bit-rate requirements through an adequate selection of the reuse factor and micro-cell density. This analytical model can be adopted by network operators to gain insight into cell planning. Finally, the performance of our SIR-based cell association scheme has been validated through comparisons with other schemes in the literature.
    Comment: Paper accepted to appear in the Journal of Communication Networks (accepted on November 28, 2017); 15 pages
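    The paper's results are analytical, but the basic setup can be illustrated with a small Monte Carlo sketch. The snippet below is a deliberately simplified assumption (single tier, no fading, no frequency reuse, illustrative density and path-loss values), not the authors' model: base stations are drawn from a homogeneous Poisson Point Process, a user at the origin associates with the base station offering the best SIR, and outage is declared when even that SIR falls below a threshold.

```python
# Minimal Monte Carlo sketch (not the paper's analytical framework): PPP-deployed
# base stations, max-SIR cell association, outage = best SIR below threshold.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(bs_density=1e-5, area_side=2000.0, alpha=3.5,
                       sir_threshold_db=0.0, trials=2000):
    """Estimate P(max-SIR < threshold) for a typical user at the origin."""
    threshold = 10 ** (sir_threshold_db / 10)
    outages = 0
    for _ in range(trials):
        # Number of BSs ~ Poisson(density * area); positions uniform in the square.
        n_bs = rng.poisson(bs_density * area_side ** 2)
        if n_bs == 0:
            outages += 1
            continue
        xy = rng.uniform(-area_side / 2, area_side / 2, size=(n_bs, 2))
        d = np.linalg.norm(xy, axis=1)
        p_rx = d ** (-alpha)                      # unit Tx power, fading omitted
        sir = p_rx / (p_rx.sum() - p_rx + 1e-30)  # interference = all other BSs
        if sir.max() < threshold:                 # SIR-based association: pick best BS
            outages += 1
    return outages / trials

print(f"Estimated outage probability: {outage_probability():.3f}")
```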

    Hybridizing metric learning and case-based reasoning for adaptable clickbait detection.

    The term clickbait usually refers to web content specifically designed to maximize advertisement monetization, often at the expense of quality and accuracy. The rapid proliferation of this type of content has motivated researchers to develop automatic detection methods that can effectively block clickbait in different application domains. In this paper, we introduce a novel clickbait detection method. Our approach leverages state-of-the-art techniques from the fields of deep learning and metric learning, integrating them into the Case-Based Reasoning methodology. This provides the model with the ability to learn over time, adapting to different users' criteria. Our experimental results also show that the proposed approach outperforms previous clickbait detection methods by a large margin.
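    The abstract does not give implementation details, but the retrieve/reuse/retain cycle that Case-Based Reasoning contributes can be sketched as below. This is only an assumption-laden toy: the `embed()` stand-in replaces the paper's learned deep metric embedding, and the headlines and labels are made up for illustration.

```python
# Hedged sketch of a CBR loop for clickbait detection: embed a headline, retrieve
# the nearest stored cases, reuse their labels, and retain new solved cases so
# the case base adapts over time. embed() is a placeholder, not the paper's model.
import numpy as np

def embed(headline: str) -> np.ndarray:
    """Stand-in for a learned metric embedding (e.g. a fine-tuned text encoder)."""
    vec = np.zeros(64)
    for i, ch in enumerate(headline.lower()):
        vec[i % 64] += ord(ch)
    n = np.linalg.norm(vec)
    return vec / n if n else vec

class ClickbaitCBR:
    def __init__(self, k: int = 3):
        self.k = k
        self.cases: list[tuple[np.ndarray, int]] = []  # (embedding, label)

    def retain(self, headline: str, label: int) -> None:
        """Store a solved case so the detector adapts to a user's criteria."""
        self.cases.append((embed(headline), label))

    def predict(self, headline: str) -> int:
        """Retrieve the k nearest cases in the metric space and reuse their labels."""
        q = embed(headline)
        nearest = sorted(self.cases, key=lambda c: np.linalg.norm(c[0] - q))
        votes = [label for _, label in nearest[: self.k]]
        return int(sum(votes) > len(votes) / 2)

cbr = ClickbaitCBR()
cbr.retain("You won't BELIEVE what happened next", 1)
cbr.retain("Central bank raises interest rates by 0.25%", 0)
cbr.retain("10 tricks doctors don't want you to know", 1)
print(cbr.predict("This one weird trick will change your life"))
```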

    Unsupervised Learning of Edges

    Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion: the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients) and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained with our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5%). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.
    Comment: Camera-ready version for CVPR 2016
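    The core idea, alternating between motion-derived edge labels and an edge scorer that starts from plain image gradients, can be caricatured in a few lines. The sketch below is a toy under strong assumptions: it uses a synthetic frame pair and a thresholded frame difference in place of the paper's semi-dense matches, a threshold search in place of a real edge detector, and it omits the feedback from better edges to better motion estimates.

```python
# Toy sketch of the alternate/refine loop: start from image gradients, generate
# noisy motion-boundary labels, fit a simple edge scorer to them, refine, repeat.
# Synthetic data and the thresholding "detector" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def image_gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def noisy_motion_boundaries(frame_a, frame_b, noise=0.2):
    """Proxy for motion edges: pixels where the two frames disagree, plus label noise."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    labels = (diff > diff.mean()).astype(float)
    flip = rng.random(labels.shape) < noise
    return np.where(flip, 1 - labels, labels)

def train_edge_scorer(features, labels):
    """'Training' here is just picking the threshold that best agrees with the labels."""
    thresholds = np.linspace(features.min(), features.max(), 50)
    agreement = [((features > t).astype(float) == labels).mean() for t in thresholds]
    return thresholds[int(np.argmax(agreement))]

# Synthetic frame pair: a bright square that shifts by two pixels between frames.
frame_a = np.zeros((64, 64)); frame_a[20:40, 20:40] = 1.0
frame_b = np.zeros((64, 64)); frame_b[22:42, 22:42] = 1.0

edges = image_gradient_magnitude(frame_a)           # rudimentary initial edge map
for _ in range(3):                                   # alternate: motion labels -> edges -> ...
    labels = noisy_motion_boundaries(frame_a, frame_b)
    t = train_edge_scorer(edges, labels)
    edges = 0.5 * edges + 0.5 * (edges > t)           # refine edge map with the new scorer
print("final threshold:", t)
```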

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions about the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in the last few years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.