13 research outputs found

    Threshold Choice Methods: the Missing Link

    Full text link
    Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relation among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we obtain the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC and the refinement loss, respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. To assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
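    As a minimal, hedged illustration of one of these correspondences (a numerical sketch under assumed conventions, not code from the paper): take labels y in {0,1}, scores p = estimated P(y=1), a false-negative cost of c and a false-positive cost of 1-c with costs scaled to sum to 2, and the score-driven threshold choice method, which predicts class 1 whenever p >= 1-c. Averaging the resulting loss over a uniform range of cost proportions recovers the Brier score (mean squared error).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    p = rng.uniform(size=n)                      # model scores, taken as P(y=1)
    y = (rng.uniform(size=n) < p).astype(int)    # labels drawn consistently with the scores

    def expected_loss_score_driven(p, y, grid=4001):
        """Average the cost-weighted loss over a uniform grid of cost proportions c,
        using the score-driven threshold: predict 1 iff p >= 1 - c."""
        cs = (np.arange(grid) + 0.5) / grid
        total = 0.0
        for c in cs:
            pred = (p >= 1.0 - c)
            fn = (y == 1) & ~pred
            fp = (y == 0) & pred
            total += 2.0 * np.mean(c * fn + (1.0 - c) * fp)
        return total / grid

    print(expected_loss_score_driven(p, y))      # matches the Brier score below
    print(np.mean((p - y) ** 2))                 # Brier score (mean squared error)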

    Bayesian inference of mesoscale mechanical properties of mortar using experimental data from a double shear test

    Get PDF
    In this work, we propose Bayesian parameter estimation of a nonlinear, mechanics-based model describing the behaviour of mortar subjected to a double shear test with externally bonded carbon fibre reinforced polymer (CFRP) plates. With the Bayesian approach, it is possible to identify the mechanical material parameters of the different phases of the mortar mesostructure, i.e. hardened cement paste, aggregates and the interface transition zone (ITZ). Due to the nonlinearity of the problem, we use a novel sequential approach for the parameter inference, which does not require coupling between the finite element solver and the software for the stochastic analysis. The model geometry and material mesostructure are learned from micro-computed tomography (μCT) scans of the real specimen, whereas the boundary conditions are assumed to be uncertain and are also identified from the experimental data. Mortar is modelled through a discrete lattice model consisting of spatial Timoshenko beams with embedded discontinuities. The latter allows the description of the distinct stages of material degradation, from the appearance of microscopic material damage to its evolution into macroscopic cracks leading to localised failure.
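    For illustration only, a generic sketch of Bayesian parameter inference for a nonlinear model from noisy measurements (a toy forward model and a plain random-walk Metropolis sampler; the sequential scheme, lattice model and μCT-informed geometry used in the work above are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(1)

    def forward(theta, x):
        # Toy nonlinear response with a stiffness-like and a softening-like parameter.
        k, a = theta
        return k * x * np.exp(-a * x)

    x_obs = np.linspace(0.1, 1.0, 20)
    theta_true = np.array([10.0, 1.5])
    sigma = 0.05                                   # assumed measurement noise level
    y_obs = forward(theta_true, x_obs) + rng.normal(0.0, sigma, size=x_obs.size)

    def log_post(theta):
        if np.any(theta <= 0.0):                   # positivity acts as a crude prior
            return -np.inf
        resid = y_obs - forward(theta, x_obs)
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    theta = np.array([5.0, 1.0])                   # initial guess
    lp = log_post(theta)
    samples = []
    for _ in range(20000):                         # random-walk Metropolis
        prop = theta + rng.normal(0.0, [0.2, 0.05])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])             # discard burn-in
    print(samples.mean(axis=0), samples.std(axis=0))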

    A framework for the analytical and visual interpretation of complex spatiotemporal dynamics in soccer

    Get PDF
    Pla de Doctorat Industrial de la Generalitat de Catalunya (Industrial Doctorate Plan of the Generalitat de Catalunya). Sports analytics is an emerging field focused on the application of advanced data analysis for assessing the performance of professional athletes and teams. In soccer, the integration of data analysis is in its initial steps, primarily due to the difficulty of making sense of soccer's complex spatiotemporal relationships and of effectively translating findings to practitioners. Recently, the availability of spatiotemporal data has given rise to statistical approaches to problems such as estimating passing and scoring probabilities or evaluating players' mental pressure. However, most of these approaches focus on isolated aspects of the sport, while coaches tend to focus on the broader interplay of all 22 players on the pitch. To address the non-stop flow of questions that coaching staffs deal with daily, we identify the need for a flexible analysis framework that allows us to answer these questions quickly, accurately, and in a visually interpretable way, while capturing the complex spatial and contextual factors that rule the game. We propose developing such a comprehensive framework through the concept of expected possession value (EPV). First introduced in basketball, EPV constitutes an instantaneous estimate of the expected points to be scored at the end of a possession. However, aside from a shared high-level goal, our focus on soccer necessitates a drastically different approach to account for the sport's nuances, such as looser notions of possession, the ability of passes to happen at any location, and space-time dependent turnover evaluation. Following this, we propose modeling EPV in soccer by addressing the question, "can we estimate the expectation of a team scoring or conceding the next goal at any time in the game?" From here, we address a series of derived questions, such as: how should the EPV expression be structured so coaches can more easily interpret it? Can we produce calibrated and interpretable estimates for each of its components? Can we develop representative and soccer-specific features with the aid of coaches? Is it possible to learn complex features from raw spatiotemporal data? Finally, and most importantly, can we produce compelling practical applications? These questions are successfully addressed in this thesis, where we present a series of contributions to both the machine learning and soccer analytics fields related to the modeling and practical interpretation of complex spatiotemporal dynamics. We propose a decomposed modeling approach in which a series of foundational soccer components can be estimated separately and then merged to provide a single EPV estimate, providing flexibility to this integrated model (see the sketch after this abstract). From a practical standpoint, we leverage several function approximation approaches to exploit complex relationships in spatiotemporal tracking data. An essential contribution of this work is SoccerMap, a flexible deep learning architecture capable of producing accurate and visually interpretable probability surfaces in a broad range of problems. Based on a large set of spatial and contextual features developed in this work, we model and provide accurate estimates for each of the EPV components.
The flexibility and interpretation capabilities of the proposed model allow us to produce a broad set of practical applications related to on-ball performance, off-ball performance, and match analysis in soccer, and open the door to its future adaptation to other sports. This thesis was developed under an Industrial Ph.D. program and carried out entirely at Fútbol Club Barcelona, which promoted a close collaboration with professional coaches. As a result, many of the ideas developed in this thesis are now part of the club's daily player and team performance analysis pipeline.
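    An illustrative sketch of the decomposed EPV idea described above, combining separately estimated action likelihoods and action values into a single expectation (the component names and the numbers below are placeholders, not the thesis's actual estimators or outputs):

    from dataclasses import dataclass

    @dataclass
    class EPVComponents:
        p_pass: float      # probability the possession continues with a pass
        p_shot: float      # probability of a shot attempt
        p_keep: float      # probability of keeping/carrying the ball
        value_pass: float  # expected value if a pass is made
        value_shot: float  # expected value of a shot (expected-goals style estimate)
        value_keep: float  # expected value of retaining possession

    def epv(c: EPVComponents) -> float:
        """Expected possession value as a probability-weighted sum of component values."""
        return c.p_pass * c.value_pass + c.p_shot * c.value_shot + c.p_keep * c.value_keep

    print(epv(EPVComponents(0.7, 0.05, 0.25, 0.012, 0.09, 0.008)))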

    A piRNA regulation landscape in C. elegans and a computational model to predict gene functions

    Get PDF
    Investigating the mechanisms that regulate genes and the functions of those genes is essential to understanding a biological system. This dissertation consists of two research projects serving these aims: understanding the piRNA regulation mechanism and predicting gene functions computationally. The first project presents a piRNA regulation landscape in C. elegans. piRNAs (Piwi-interacting small RNAs) form a complex with Piwi Argonautes to maintain fertility and silence transposons in animal germlines. In C. elegans, previous studies have suggested that piRNAs tolerate mismatched pairing and in principle could target all transcripts. In this project, by computationally analyzing the chimeric reads directly captured by cross-linking piRNAs and their targets in vivo, piRNAs are found to target all germline mRNAs with microRNA-like pairing rules. The number of targeting chimeric reads correlates better with binding energy than with piRNA abundance, suggesting that piRNA concentration does not limit targeting. Furthermore, in mRNAs silenced by piRNAs, secondary small RNAs are found to accumulate at the center and ends of piRNA binding sites, whereas in germline-expressed mRNAs, reduced piRNA binding density and suppression of piRNA-associated secondary small RNA targeting correlate with the presence of the CSR-1 Argonaute. These findings reveal physiologically important and nuanced regulation of piRNA targets and provide evidence for a comprehensive post-transcriptional regulatory step in germline gene expression. The second project elaborates a computational model to predict gene function. Predicting the genes involved in a biological function facilitates many kinds of research, such as prioritizing candidates in a screening project. Following the “Guilt By Association” principle, multiple datasets are treated as biological networks and integrated under a multi-label learning framework for predicting gene functions. Specifically, the functional labels are propagated and smoothed on the networks using a label propagation method and then integrated using an error-correcting output code multi-label learning framework, where a “codeword” encodes all the labels annotated to a specific gene. The model is then trained by finding the optimal projections between the code matrix and the biological datasets using canonical correlation analysis. Its performance is benchmarked against a state-of-the-art algorithm and against the results of a large-scale screen for piRNA pathway genes in D. melanogaster. Finally, the roles of piRNA targeting in epigenetics and physiology and its cross-talk with the CSR-1 pathway are discussed, together with a survey of additional biological datasets and a discussion of benchmarking methods for gene function prediction.
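    A minimal sketch of “guilt by association” label propagation on a gene network, in the spirit of the framework described above (this is the generic propagation F <- alpha*S*F + (1-alpha)*Y, not the full ECOC/CCA model of the dissertation; the toy network and annotations are assumptions):

    import numpy as np

    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)   # toy gene-gene association network

    Y = np.zeros((5, 2))                            # 5 genes, 2 functional labels
    Y[0, 0] = 1.0                                   # gene 0 annotated with function 0
    Y[4, 1] = 1.0                                   # gene 4 annotated with function 1

    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))                 # symmetrically normalized adjacency

    alpha, F = 0.8, Y.copy()
    for _ in range(100):                            # iterate to (approximate) convergence
        F = alpha * S @ F + (1 - alpha) * Y

    print(np.round(F, 3))                           # smoothed label scores per gene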

    A unified view of performance metrics: translating threshold choice into expected classification loss

    Full text link
    Many performance metrics have been introduced in the literature for the evaluation of classification performance, each of them with different origins and areas of application. These metrics include accuracy, unweighted accuracy, the area under the ROC curve or the ROC convex hull, the mean absolute error and the Brier score or mean squared error (with its decomposition into refinement and calibration). One way of understanding the relations among these metrics is by means of variable operating conditions (in the form of misclassification costs and/or class distributions). Thus, a metric may correspond to some expected loss over different operating conditions. One dimension for the analysis has been the distribution for this range of operating conditions, leading to some important connections in the area of proper scoring rules. We demonstrate in this paper that there is an equally important dimension which has so far received much less attention in the analysis of performance metrics. This dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the expected loss obtained with these threshold choice methods for a uniform range of operating conditions we give clear interpretations of the 0-1 loss, the absolute error, the Brier score, the AUC and the refinement loss, respectively. Our analysis provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation which can be summarised as follows: given a model, apply the threshold choice methods that correspond with the available information about the operating condition, and compare their expected losses. In order to assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
    Hernández-Orallo, J.; Flach, P.; Ferri Ramírez, C. (2012). A unified view of performance metrics: translating threshold choice into expected classification loss. Journal of Machine Learning Research, 13:2813-2869. http://hdl.handle.net/10251/47702
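    Complementing the sketch given for the first abstract above, and under the same assumed conventions, a fixed threshold of 0.5 makes the prediction independent of the operating condition, so the expected loss over a uniform range of cost proportions reduces to the ordinary 0-1 error rate:

    import numpy as np

    rng = np.random.default_rng(0)
    p = rng.uniform(size=5000)                     # model scores, taken as P(y=1)
    y = (rng.uniform(size=5000) < p).astype(int)

    pred = (p >= 0.5)                              # fixed threshold choice method, t = 0.5
    cs = (np.arange(4001) + 0.5) / 4001            # uniform grid of cost proportions
    loss = np.mean([2.0 * np.mean(c * ((y == 1) & ~pred) + (1 - c) * ((y == 0) & pred))
                    for c in cs])

    print(loss)                                    # equals the 0-1 error rate below
    print(np.mean(pred.astype(int) != y))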

    A Hybrid Continual Machine Learning Model for Efficient Hierarchical Classification of Domain-Specific Text in The Presence of Class Overlap (Case Study: IT Support Tickets)

    Get PDF
    In today’s world, support ticketing systems are employed by a wide range of businesses. A ticketing system facilitates the interaction between customers and support teams when a customer faces an issue with a product or a service. For large-scale IT companies with many clients and a great volume of communications, automating the classification of incoming tickets is key to retaining long-term clients and ensuring business growth. Although the problem of text classification has been widely studied in the literature, the majority of the proposed approaches revolve around state-of-the-art deep learning models. This thesis addresses the following research questions: What are the reasons behind employing black-box models (i.e., deep learning models) for text classification tasks? What is the level of polysemy (i.e., the coexistence of many possible meanings for a word or phrase) in technical (i.e., specialized) text? How do static word embeddings like Word2vec fare against traditional TF-IDF vectorization? How do dynamic word embeddings (e.g., PLMs) compare against a linear classifier such as a Support Vector Machine (SVM) for classifying domain-specific text? This integrated-article thesis investigates these questions through five empirical studies conducted over the past four years. The observations from these studies converge on an emerging theory that explains why traditional ML models offer a more efficient solution to domain-specific text classification than state-of-the-art DL language models (i.e., PLMs). Based on extensive experiments on a real-world dataset, we propose a novel Hybrid Online Offline Model (HOOM) that can efficiently classify IT support tickets in a real-time (i.e., dynamic) environment. Our classification model is anticipated to build trust and confidence when deployed into production, as the model is interpretable, efficient, and can detect concept drift in the data.
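    A minimal sketch of the kind of traditional baseline discussed above: TF-IDF features feeding a linear SVM for ticket classification (the tiny in-line dataset and category names are placeholders, and this is not the proposed HOOM model):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    tickets = [
        "Cannot connect to the VPN from the home office",
        "Outlook keeps crashing when opening attachments",
        "Request to reset my account password",
        "Laptop battery drains within an hour",
    ]
    labels = ["network", "software", "access", "hardware"]

    # Bag-of-words TF-IDF features feeding a linear classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    model.fit(tickets, labels)

    print(model.predict(["VPN connection drops every few minutes"]))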

    OBTAINING ACCURATE PROBABILITIES USING CLASSIFIER CALIBRATION

    Get PDF
    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are often referred to as calibration methods in the machine learning literature. This thesis describes a suite of parametric and non-parametric methods for calibrating the output of classification and prediction models. In order to evaluate the calibration performance of a classifier, we introduce two new calibration measures that are intuitive statistics of the calibration curves. We present extensive experimental results on both simulated and real datasets to evaluate the performance of the proposed methods compared with commonly used calibration methods in the literature. In particular, in terms of binary classifier calibration, our experimental results show that the proposed methods are able to improve the calibration power of classifiers while retaining their discrimination performance. Our theoretical findings show that by using a simple non-parametric calibration method, it is possible to improve the calibration performance of a classifier without sacrificing discrimination capability. The methods are also computationally tractable for large-scale datasets, as they run in O(N log N) time, where N is the number of samples. In this thesis we also introduce a novel framework to derive calibrated probabilities of causal relationships from observational data. The framework consists of three main components: (1) an approximate method for generating initial probability estimates of the edge types for each pair of variables, (2) the availability of a relatively small number of causal relationships in the network for which the truth status is known, which we call a calibration training set, and (3) a calibration method that uses the approximate probability estimates and the calibration training set to generate calibrated probabilities for the many remaining pairs of variables. Our experiments on a range of simulated data show that the proposed approach improves the calibration of edge predictions and often improves the precision and recall of those predictions.
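    A brief sketch of post-processing calibration in the sense described above, using isotonic regression as a stand-in non-parametric calibrator (the thesis proposes its own parametric and non-parametric methods, which are not reproduced here; isotonic regression is likewise dominated by an O(N log N) sort):

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)
    n = 5000
    p_true = rng.uniform(size=n)                     # true event probabilities
    y = (rng.uniform(size=n) < p_true).astype(int)   # observed outcomes
    raw = p_true ** 2                                # a deliberately miscalibrated score

    iso = IsotonicRegression(out_of_bounds="clip")   # monotone map from score to P(y=1)
    calibrated = iso.fit_transform(raw, y)

    def brier(scores, y):
        return float(np.mean((scores - y) ** 2))

    print(brier(raw, y), brier(calibrated, y))       # calibration lowers the Brier score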