19 research outputs found

    Triplet Losses-based Matrix Factorization for Robust Recommendations

    Much like other learning-based models, recommender systems can be affected by biases in the training data. While typical evaluation metrics (e.g. hit rate) are not concerned with them, some categories of end users are heavily affected by these biases. In this work, we propose using multiple triplet loss terms to extract meaningful and robust representations of users and items. We empirically evaluate the soundness of such representations through several “bias-aware” evaluation metrics, as well as in terms of stability to changes in the training set and agreement between the variance of the predictions and that of each user.
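
    As a rough illustration of how a triplet loss term can be applied to user and item embeddings, the sketch below trains a toy matrix-factorization model with a hinge-style triplet loss in PyTorch. The margin, embedding size, and negative-sampling scheme are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: matrix factorization trained with a triplet (hinge) loss.
# Margin, embedding size and negative sampling are illustrative assumptions.
import torch
import torch.nn as nn

n_users, n_items, dim = 100, 500, 32
users = nn.Embedding(n_users, dim)
items = nn.Embedding(n_items, dim)
opt = torch.optim.Adam(list(users.parameters()) + list(items.parameters()), lr=1e-2)
margin = 0.5

def triplet_step(u, pos, neg):
    """One update on (user, positive item, negative item) index tensors."""
    eu, ep, en = users(u), items(pos), items(neg)
    d_pos = ((eu - ep) ** 2).sum(dim=1)   # squared distance to the liked item
    d_neg = ((eu - en) ** 2).sum(dim=1)   # squared distance to the sampled negative
    loss = torch.relu(d_pos - d_neg + margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of random triplets
u = torch.randint(0, n_users, (64,))
pos = torch.randint(0, n_items, (64,))
neg = torch.randint(0, n_items, (64,))
print(triplet_step(u, pos, neg))
```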

    Machine learning with limited label availability: algorithms and applications

    The abstract is in the attachment.

    RECLAIM: Reverse Engineering Classification Metrics

    Being able to compare machine learning models in terms of performance is a fundamental part of improving the state of the art in a field. However, there is a risk of getting locked into a few -- possibly not ideal -- performance metrics purely for comparability with earlier works. In this work, we explore the possibility of reconstructing new classification metrics starting from what little information may be available in existing works. We propose three approaches to reconstruct confusion matrices and, as a consequence, other classification metrics. We empirically verify the quality of the reconstructions, drawing conclusions on the usefulness that various classification metrics have for the reconstruction task.
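
    One way to picture the reconstruction task is to invert the closed-form definitions of a few reported metrics. The sketch below recovers binary confusion-matrix counts from accuracy, precision, recall and the test-set size; it is only an illustrative inversion, not necessarily one of the three approaches proposed in the paper.

```python
# Minimal sketch: recover binary confusion-matrix counts from a few reported
# metrics (accuracy, precision, recall) and the test-set size N.

def reconstruct_confusion(n, accuracy, precision, recall):
    """Solve TP, FP, FN, TN from the metric definitions.

    acc = (TP + TN) / N,  prec = TP / (TP + FP),  rec = TP / (TP + FN)
    Assumes precision and recall are strictly below 1 so the system is solvable.
    """
    tp = n * (1 - accuracy) / (1 / precision + 1 / recall - 2)
    fp = tp / precision - tp
    fn = tp / recall - tp
    tn = accuracy * n - tp
    return tuple(round(v) for v in (tp, fp, fn, tn))

# Example: metrics reported for a hypothetical classifier on 1000 samples
print(reconstruct_confusion(1000, accuracy=0.825, precision=0.8, recall=0.75))
# -> (300, 75, 100, 525)
```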

    Time-of-Flight Cameras in Space: Pose Estimation with Deep Learning Methodologies

    Recently introduced 3D Time-of-Flight (ToF) cameras have shown huge potential for mobile robotic applications, offering a smart and fast technology that outputs 3D point clouds, while still lacking in measurement precision and robustness. With the development of this low-cost sensing hardware, 3D perception is gathering more and more importance in robotics as well as in many other fields, and object registration continues to gain momentum. Registration is a transformation estimation problem between a source and a target point cloud, seeking the transformation that best aligns them. This work aims at building a full pipeline, from data acquisition to transformation identification, to robustly detect known objects observed by a ToF camera within a short range, estimating their 6-degrees-of-freedom pose. We focus this work on demonstrating the capability of detecting a part of a satellite floating in space, to support in-orbit servicing missions (e.g. for space debris removal). Experiments reveal that deep learning techniques can obtain higher accuracy and robustness w.r.t. classical methods, handling a significant amount of noise while still keeping real-time performance and low model complexity.
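
    The deep-learning pipeline itself is not reproduced here, but the classical registration baseline it is compared against can be sketched as a least-squares rigid alignment (Kabsch / orthogonal Procrustes) between corresponding points. The toy data and the assumption of known correspondences are illustrative.

```python
# Minimal sketch of the classical baseline: least-squares rigid alignment
# between two point clouds with known correspondences (Kabsch algorithm).
import numpy as np

def rigid_align(source, target):
    """Return rotation R and translation t that best map `source` onto `target`."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)           # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))               # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = tgt_c - r @ src_c
    return r, t

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3                                               # known rotation about z
true_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
true_t = np.array([0.5, -1.0, 2.0])
tgt = src @ true_r.T + true_t

r, t = rigid_align(src, tgt)
print(np.allclose(r, true_r), np.allclose(t, true_t))     # True True
```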

    Cross-Lingual Propagation of Sentiment Information Based on Bilingual Vector Space Alignment

    Deep learning methods have been shown to be particularly effective in inferring the sentiment polarity of a text snippet. However, in cross-domain and cross-lingual scenarios there is often a lack of training data. To tackle this issue, propagation algorithms can be used to yield sentiment information for various languages and domains by transferring knowledge from a source language (usually English). To propagate polarity scores to the target language, these algorithms take as input an initial vocabulary and a bilingual lexicon. In this paper we propose to enrich lexicon information for cross-lingual propagation by inferring the bilingual semantic relationships from an aligned bilingual vector space. This allows us to exploit the underlying text similarities that are not made explicit by the lexicon. The experiments show that our approach outperforms the state-of-the-art propagation method on multilingual datasets.
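
    A rough sketch of the propagation idea: each target-language word inherits a similarity-weighted average of the polarities of its nearest source-language seeds in the shared (aligned) embedding space. The toy vectors, seed lexicon and number of neighbours are assumptions, not the paper's exact algorithm.

```python
# Rough sketch: propagate sentiment polarity through an aligned bilingual
# embedding space via nearest-neighbour seeds.  Toy data only.
import numpy as np

# Source-language (e.g. English) seed lexicon: word -> polarity in [-1, 1]
seed_polarity = {"good": 1.0, "bad": -1.0, "great": 0.9, "awful": -0.9}

# Pretend these embeddings already live in a shared bilingual vector space
rng = np.random.default_rng(0)
source_vecs = {w: rng.normal(size=8) for w in seed_polarity}
target_vecs = {w: rng.normal(size=8) for w in ["buono", "cattivo", "ottimo"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def propagate(target_vec, k=2):
    """Polarity of a target word = similarity-weighted mean of its k nearest seeds."""
    sims = sorted(((cosine(target_vec, v), w) for w, v in source_vecs.items()),
                  reverse=True)[:k]
    weights = np.array([max(s, 0.0) for s, _ in sims])
    scores = np.array([seed_polarity[w] for _, w in sims])
    return float(weights @ scores / weights.sum()) if weights.sum() > 0 else 0.0

for word, vec in target_vecs.items():
    print(word, round(propagate(vec), 3))
```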

    Semantic Image Collection Summarization with Frequent Subgraph Mining

    Applications such as providing a preview of personal albums (e.g., Google Photos) or suggesting thematic collections based on user interests (e.g., Pinterest) require a semantically-enriched image representation, which should be more informative than simple low-level visual features and image tags. To this aim, we propose an image collection summarization technique based on frequent subgraph mining. We represent images with a novel type of scene graphs including fine-grained relationship types between objects. These scene graphs are automatically derived by our method. The resulting summary consists of a set of frequent subgraphs describing the underlying patterns of the image dataset. Our results are interpretable and provide more powerful semantic information than previous techniques, in which the summary is a subset of the collection in terms of images or image patches. The experimental evaluation shows that the proposed technique yields non-redundant summaries, with a high diversity of the discovered patterns.
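
    A heavily simplified sketch of the underlying idea: if each image's scene graph is treated as a set of (subject, relation, object) edges, a summary can be approximated by the edges whose support across the collection exceeds a threshold. Real frequent subgraph mining considers larger connected patterns; the toy graphs and threshold below are illustrative.

```python
# Much-simplified sketch: keep scene-graph edges that are frequent across the
# image collection.  Toy graphs and support threshold are illustrative.
from collections import Counter

scene_graphs = [
    {("person", "riding", "bike"), ("person", "wearing", "helmet")},
    {("person", "riding", "bike"), ("bike", "on", "road")},
    {("person", "wearing", "helmet"), ("person", "riding", "bike")},
]

min_support = 2  # an edge must appear in at least this many images

support = Counter(edge for graph in scene_graphs for edge in graph)
summary = [edge for edge, count in support.items() if count >= min_support]
print(summary)   # frequent relationships summarizing the collection
```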

    Dissecting a Data-driven Prognostic Pipeline: A Powertrain use case

    Nowadays, cars are instrumented with thousands of sensors that continuously collect data about their components. Thanks to the concept of connected cars, this data can now be transferred to the cloud for advanced analytics functionalities, such as prognostics or predictive maintenance. In this paper, we dissect a data-driven prognostic pipeline and apply it to the automotive scenario. Our pipeline is composed of three main steps: (i) selection of the most important signals and features describing the scenario for the target problem, (ii) creation of machine learning models based on different classification algorithms, and (iii) selection of the model that works best for the deployment scenario. For the development of the pipeline, we exploit an extensive experimental campaign in which an actual engine runs on a controlled test bench under different working conditions. We aim to predict failures of the High-Pressure Fuel System, a key part of the diesel engine responsible for delivering high-pressure fuel to the cylinders for combustion. Our results show the advantage of data-driven solutions in automatically discovering the most important signals for predicting failures of the High-Pressure Fuel System. We also highlight how an accurate model selection step is fundamental to identify a robust model suitable for deployment.
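
    The three steps can be pictured on synthetic data as follows: (i) select the most informative features, (ii) train several candidate classifiers, and (iii) keep the one with the best cross-validated score. The feature selector, candidate models and scoring choice below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of the three pipeline steps on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for the test-bench signals/features and failure labels
X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {}
for name, model in candidates.items():
    pipe = make_pipeline(SelectKBest(f_classif, k=10), model)   # steps (i) + (ii)
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()     # step (iii)

best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```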

    Silicon sensors with resistive read-out: Machine Learning techniques for ultimate spatial resolution

    Resistive AC-coupled Silicon Detectors (RSDs) are based on the Low Gain Avalanche Diode (LGAD) technology, characterized by a continuous gain layer and by the innovative introduction of resistive read-out. Thanks to a novel electrode design aimed at maximizing signal sharing, RSD2, the second RSD production by Fondazione Bruno Kessler (FBK), achieves a position resolution of about 8 μm over the whole pixel surface for a 200-μm pitch. RSD2 arrays have been tested using a Transient Current Technique setup equipped with a 16-channel digitizer, and results on spatial resolution have been obtained with machine learning algorithms.

    First experimental results of the spatial resolution of RSD pad arrays read out with a 16-ch board

    Resistive Silicon Detectors (RSD, also known as AC-LGAD) are innovative silicon sensors based on the LGAD technology, characterized by a continuous gain layer that spreads across the whole sensor active area. RSDs are very promising tracking detectors, thanks to the combination of built-in signal sharing with internal charge multiplication, which allows large signals to be seen over multiple read-out channels. This work presents the first experimental results obtained from a 3×4 array with 200-μm pitch, coming from the RSD2 production manufactured by FBK, read out with a 16-channel digitizer. A machine learning model has been trained with experimental data taken with a precise TCT laser setup, and then used to predict the laser shot positions, finding a spatial resolution of about 5.5 μm.
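
    The reconstruction idea can be sketched as a regression from the amplitudes seen on neighbouring pads to the (x, y) hit position. The toy charge-sharing model, the 2x2 pad geometry and the regressor below are illustrative assumptions, not the detector data or the model used in the paper; the resulting number applies to the synthetic data only.

```python
# Minimal sketch: learn a regression from per-pad signal amplitudes to the
# (x, y) hit position.  Toy charge-sharing model and geometry, not real data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

pitch = 200.0                                    # um
pads = np.array([[0, 0], [pitch, 0], [0, pitch], [pitch, pitch]])  # pad centres

rng = np.random.default_rng(0)
hits = rng.uniform(0, pitch, size=(2000, 2))     # true laser-shot positions

# Toy sharing model: amplitude falls off with distance to each pad, plus noise
dist = np.linalg.norm(hits[:, None, :] - pads[None, :, :], axis=2)
amplitudes = 1.0 / (1.0 + dist / 50.0) + rng.normal(0, 0.01, size=dist.shape)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(amplitudes[:1500], hits[:1500])
pred = model.predict(amplitudes[1500:])
rms = np.sqrt(np.mean((pred - hits[1500:]) ** 2))
print(f"toy spatial resolution: {rms:.1f} um")   # on synthetic data only
```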

    Fast Self-Organizing Maps Training

    Self-organizing maps are an unsupervised machine learning technique that offers interpretable results by identifying topological properties of high-dimensional datasets and projecting them onto a 2-dimensional grid. An important problem of self-organizing maps is the computational cost of their training phase. In this paper, we propose a fast approach to train self-organizing maps. The approach consists of two steps. First, a small map identifies the most relevant areas of the entire high-dimensional input space. Then a larger map (initialized from the small one) is fine-tuned to further explore the local areas identified in the first step. The resulting map has performance (measured in terms of accuracy and quantization error) on par with self-organizing maps trained with the standard approach, but with a significantly reduced training time.
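
    A minimal sketch of the coarse-to-fine idea: train a small SOM, upscale its codebook to initialize a larger map, then fine-tune the larger map briefly. Grid sizes, learning rate and neighbourhood schedule are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of coarse-to-fine SOM training: small map -> upscale -> fine-tune.
import numpy as np

def train_som(data, weights, epochs, lr0=0.5, sigma0=None, rng=None):
    """Plain online SOM training on a (rows, cols, dim) weight grid."""
    rows, cols, _ = weights.shape
    sigma0 = sigma0 or max(rows, cols) / 2.0
    rng = rng or np.random.default_rng(0)
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        frac = step / n_steps
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # best-matching unit
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood pull toward the sample
        g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))

# Step 1: a small map explores the whole input space
small = train_som(data, rng.normal(size=(5, 5, 3)), epochs=5, rng=rng)

# Step 2: upscale the small codebook (nearest-neighbour) to seed a larger map
large_init = np.repeat(np.repeat(small, 4, axis=0), 4, axis=1)  # 20x20 grid
large = train_som(data, large_init, epochs=2, lr0=0.1, sigma0=2.0, rng=rng)
print(large.shape)
```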