
    Submodular relaxation for inference in Markov random fields

    In this paper we address the problem of finding the most probable state of a discrete Markov random field (MRF), also known as the MRF energy minimization problem. The task is NP-hard in general, and its practical importance motivates numerous approximate algorithms. We propose a submodular relaxation approach (SMR) based on a Lagrangian relaxation of the initial problem. Unlike the dual decomposition approach of Komodakis et al., 2011, SMR does not decompose the graph structure of the initial problem but constructs a submodular energy that is minimized within the Lagrangian relaxation. Our approach is applicable to both pairwise and high-order MRFs and can take into account global potentials of certain types. We study the theoretical properties of the proposed approach and evaluate it experimentally. Comment: This paper is accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence.
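    For readers unfamiliar with the setup, the MRF energy referred to above can be sketched on a toy problem. The chain graph, potentials, and exhaustive solver below are illustrative assumptions, not the SMR algorithm from the paper:

```python
import itertools

def mrf_energy(labels, unary, pairwise, edges):
    """Energy of a labeling: sum of unary terms plus pairwise terms on edges."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return e

def minimize_brute_force(n_nodes, n_labels, unary, pairwise, edges):
    """Exact MAP labeling by exhaustive search -- only feasible on tiny problems."""
    best = min(itertools.product(range(n_labels), repeat=n_nodes),
               key=lambda lab: mrf_energy(lab, unary, pairwise, edges))
    return best, mrf_energy(best, unary, pairwise, edges)

# 3-node chain, binary labels, Potts pairwise term (penalty 1 for disagreement)
unary = [[0.0, 2.0], [1.5, 0.0], [0.0, 2.0]]
pairwise = [[0.0, 1.0], [1.0, 0.0]]
edges = [(0, 1), (1, 2)]
labels, energy = minimize_brute_force(3, 2, unary, pairwise, edges)
```

    On this instance the smoothing term overrides node 1's unary preference, which is exactly the trade-off the energy encodes; practical solvers replace the exhaustive search with relaxations such as the one proposed above.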

    A systematic literature review on the use of artificial intelligence in energy self-management in smart buildings

    Buildings are among the main consumers of energy in cities, which is why a great deal of research has been devoted to this problem. In particular, building energy management systems must improve in the coming years, and artificial intelligence techniques are playing, and will continue to play, a fundamental role in these improvements. This work presents a systematic review of the literature on research carried out in recent years to improve energy management systems for smart buildings using artificial intelligence techniques. An original aspect of the work is that the studies are grouped according to the concept of "Autonomous Cycles of Data Analysis Tasks", which holds that an autonomous management system requires specialized tasks, such as monitoring, analysis, and decision-making, to reach objectives in its environment, such as improving energy efficiency. This organization allows us to establish not only the positioning of each study, but also the current challenges and opportunities in each domain. We identified that much of the research lies in the decision-making domain (a large majority on optimization and control tasks), and we define potential projects related to the development of autonomous cycles of data analysis tasks, feature engineering, or multi-agent systems, among others. European Commission
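    As a rough illustration of such an autonomous cycle, the sketch below wires monitoring, analysis, and decision-making tasks into one loop. The thresholds, readings, and action names are hypothetical, not taken from any surveyed system:

```python
def monitor(readings):
    """Monitoring task: return the latest sensor reading (e.g. kWh)."""
    return readings[-1]

def analyze(current, history, threshold=1.2):
    """Analysis task: flag consumption well above the historical mean."""
    mean = sum(history) / len(history)
    return current > threshold * mean

def decide(anomalous):
    """Decision-making task: choose a control action for the building."""
    return "reduce_hvac_load" if anomalous else "no_action"

# one pass through the cycle on hypothetical hourly consumption data
history = [10.0, 11.0, 9.5, 10.5]
current = monitor(history + [14.0])
action = decide(analyze(current, history))
```

    Real systems surveyed in the review would replace the threshold rule with learned models, but the monitor-analyze-decide structure of the cycle is the same.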

    Non-Convex and Geometric Methods for Tomography and Label Learning

    Data labeling is a fundamental problem of mathematical data analysis in which each data point is assigned exactly one label (prototype) from a finite predefined set. In this thesis we study two challenging extensions, where either the input data cannot be observed directly or prototypes are not available beforehand. The main application of the first setting is discrete tomography. We propose several non-convex variational as well as smooth geometric approaches to joint image label assignment and reconstruction from indirect measurements with known prototypes. In particular, we consider spatial regularization of assignments, based on the KL-divergence, which takes into account the smooth geometry of discrete probability distributions endowed with the Fisher-Rao (information) metric, i.e. the assignment manifold. Finally, the geometric point of view leads to a smooth flow evolving on a Riemannian submanifold that incorporates the tomographic projection constraints directly into the geometry of assignments. Furthermore, we investigate corresponding implicit numerical schemes, which amount to solving a sequence of convex problems. Likewise, for the second setting, when prototypes are absent, we introduce and study a smooth dynamical system for unsupervised data labeling which evolves by geometric integration on the assignment manifold. Rigorously abstracting from "data-label" to "data-data" decisions leads to interpretable low-rank data representations, which are themselves parameterized by label assignments. The resulting self-assignment flow simultaneously learns latent prototypes in the very same framework in which they are used for inference. Moreover, a single parameter, the scale of regularization in terms of spatial context, drives the entire process.
    By smooth geodesic interpolation between different normalizations of self-assignment matrices on the positive definite matrix manifold, a one-parameter family of self-assignment flows is defined. Accordingly, the proposed approach can be characterized from different viewpoints, such as discrete optimal transport, normalized spectral cuts, and combinatorial optimization by completely positive factorizations, each with additional built-in spatial regularization.
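    A minimal sketch of the kind of dynamics that evolve on the assignment manifold is the replicator equation on the probability simplex, integrated here with an explicit Euler step. The fitness values and step size are illustrative assumptions, and this omits the spatial regularization and Fisher-Rao geometry that are central to the thesis:

```python
def replicator_step(w, fitness, dt=0.1):
    """One explicit Euler step of the replicator equation on the simplex:
    dw_j/dt = w_j * (f_j - <w, f>)."""
    avg = sum(wj * fj for wj, fj in zip(w, fitness))
    w = [wj + dt * wj * (fj - avg) for wj, fj in zip(w, fitness)]
    s = sum(w)
    return [wj / s for wj in w]  # renormalize against numerical drift

# fitness favors label 2; start at the barycenter (uninformative assignment)
w = [1 / 3, 1 / 3, 1 / 3]
fitness = [0.0, 0.5, 2.0]
for _ in range(200):
    w = replicator_step(w, fitness)
label = max(range(3), key=lambda j: w[j])
```

    The assignment drifts from the barycenter toward a vertex of the simplex, i.e. an unambiguous label decision, which is the qualitative behavior the assignment flow exploits.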

    W Boson Polarization Studies for Vector Boson Scattering at LHC: from Classical Approaches to Quantum Computing

    The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) has, in recent years, delivered unprecedented high-energy proton-proton collisions that have been collected and studied by two multi-purpose experiments, ATLAS and CMS. In this thesis, we focus on one physics process in particular, Vector Boson Scattering (VBS), which is one of the keys to probing the electroweak sector of the Standard Model in the TeV regime and to shedding light on the mechanism of electroweak symmetry breaking. The VBS measurement is extremely challenging because of its low signal yields, complex final states, and large backgrounds. Its understanding requires a coordinated effort of theorists and experimentalists to explore all available information about inclusive observables, kinematics, and background isolation. The present work aims to contribute to Vector Boson Scattering studies by exploring the possibility of disentangling W boson polarizations when analyzing a pure VBS sample. This work is organized as follows. In Chapter 1, we review the main concepts of the Standard Model of particle physics. We introduce the VBS process from a theoretical perspective in Chapter 2, underlining its role with respect to the known mechanism of electroweak symmetry breaking. We emphasize the importance of regularizing the VBS amplitude by canceling the divergences that arise from longitudinally polarized vector bosons at high energy. In the same chapter, we discuss strategies for identifying the contribution of longitudinally polarized W bosons in the VBS process. We investigate the possibility of reconstructing the event kinematics and thereby developing a technique that efficiently discriminates between the longitudinal contribution and the rest of the participating processes in VBS.
    In Chapter 3, we perform a Monte Carlo generator comparison at different orders in perturbation theory, to explore the state of the art of VBS Monte Carlo programs and to provide the experimental community with suggestions and limits. In the last part of the same chapter, we provide an estimate of the PDF uncertainty contribution to VBS observables. Chapter 4 introduces the phenomenological study of this work. We perform an extensive study of polarization fraction extraction and of the reconstruction of the W boson reference frame, first using traditional kinematic approaches and then moving to a deep learning strategy. Finally, in Chapter 5, we test a new technological paradigm, the quantum computer, to evaluate its potential for our case study and for the HEP sector overall. This work has been carried out in the framework of a PhD Executive project, in partnership between the University of Pavia and IBM Italia, and has therefore received support from both institutions. It has been funded by the European Community via the COST Action VBSCan, created with the purpose of connecting the main players involved in Vector Boson Scattering studies at hadron colliders, gathering a solid and multidisciplinary community, and aiming to provide the worldwide phenomenological reference on this fundamental process.

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines that learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the structures returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC both in goodness of fit to new data and in quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and graphically identify the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC compares favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusion that local structure learning with H2PC, in the form of local neighborhood induction, is a theoretically well-motivated and empirically effective learning framework well suited to multi-label learning. The source code (in R) of H2PC and all data sets used for the empirical tests are publicly available. Comment: arXiv admin note: text overlap with arXiv:1101.5184 by another author.
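    The label powerset decomposition mentioned above can be sketched directly: each distinct combination of labels observed in the data becomes one class of a multi-class problem. The toy label matrix below is an illustrative assumption:

```python
def label_powerset(Y):
    """Encode each multi-label row as a single class: the tuple of labels
    it carries. This turns multi-label learning into multi-class learning,
    one class per observed label combination (powerset element)."""
    classes = {}
    encoded = []
    for row in Y:
        key = tuple(row)
        classes.setdefault(key, len(classes))  # assign ids in order of appearance
        encoded.append(classes[key])
    return encoded, classes

# 4 instances, 3 labels each (1 = label present)
Y = [(1, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0)]
encoded, classes = label_powerset(Y)
```

    Any multi-class classifier can then be trained on `encoded`; the theoretical contribution above concerns how to split the labels into *minimal* powersets rather than encoding all labels jointly as done here.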

    Data Mining

    The availability of big data due to computerization and automation has generated an urgent need for new techniques to analyze and convert big data into useful information and knowledge. Data mining is a promising and leading-edge technology for mining large volumes of data, looking for hidden information, and aiding knowledge discovery. It can be used for characterization, classification, discrimination, anomaly detection, association, clustering, trend or evolution prediction, and much more, in fields such as science, medicine, economics, engineering, computing, and even business analytics. This book presents basic concepts, ideas, and research in data mining.

    Advances in Graph-Cut Optimization: Multi-Surface Models, Label Costs, and Hierarchical Costs

    Computer vision is full of problems that are elegantly expressed in terms of mathematical optimization, or energy minimization. This is particularly true of low-level inference problems such as cleaning up noisy signals, clustering and classifying data, or estimating 3D points from images. Energies let us state each problem as a clear, precise objective function. Minimizing the correct energy would, hypothetically, yield a good solution to the corresponding problem. Unfortunately, even for low-level problems we are confronted by energies that are computationally hard (often NP-hard) to minimize. As a consequence, a rather large portion of computer vision research is dedicated to proposing better energies and better algorithms for minimizing them. This dissertation presents work along the same lines, specifically new energies and algorithms based on graph cuts. We present three distinct contributions. First, we consider biomedical segmentation, where the object of interest comprises multiple distinct regions of uncertain shape (e.g. blood vessels, airways, bone tissue). We show that this common yet difficult scenario can be modeled as an energy over multiple interacting surfaces and can be globally optimized by a single graph cut. Second, we introduce multi-label energies with label costs and provide algorithms to minimize them. We show how label costs are useful for clustering and robust estimation problems in vision. Third, we characterize a class of energies with hierarchical costs and propose a novel hierarchical fusion algorithm with improved approximation guarantees. Hierarchical costs are natural for modeling an array of difficult problems, e.g. segmentation with hierarchical context, simultaneous estimation of motions and homographies, or detecting hierarchies of patterns.
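    The effect of label costs can be sketched on a toy labeling problem: each label that is actually used incurs a fixed cost, so the minimizer trades per-point data costs against model complexity. The costs below are illustrative assumptions, and real instances are solved with graph cuts rather than exhaustive search:

```python
import itertools

def energy_with_label_costs(labels, data_cost, label_cost):
    """Energy = per-point data costs + a fixed cost for every label in use.
    The label-cost term penalizes the number of distinct labels (models)."""
    e = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    e += sum(label_cost[l] for l in set(labels))
    return e

def minimize(data_cost, label_cost):
    """Exhaustive minimization -- only feasible on tiny toy instances."""
    n, k = len(data_cost), len(label_cost)
    return min(itertools.product(range(k), repeat=n),
               key=lambda lab: energy_with_label_costs(lab, data_cost, label_cost))

# 3 points, 2 candidate labels; without label costs, point 1 would pick label 1
data_cost = [[0.0, 0.4], [0.5, 0.0], [0.0, 0.4]]
label_cost = [0.1, 2.0]  # opening label 1 at all costs 2.0
best = minimize(data_cost, label_cost)
```

    Because opening label 1 costs more than the data-cost savings it brings, the optimum uses a single label, which is the sparsifying behavior label costs are designed to produce in clustering and robust model estimation.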

    Multi-objective optimization in machine learning

    Advisor: Fernando José Von Zuben. Doctoral thesis (doutorado), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: Regularized multinomial logistic regression, multi-label classification, and multi-task learning are examples of machine learning problems in which conflicting objectives, such as loss functions and regularization penalties, should be simultaneously minimized. Therefore, the narrow perspective of looking for the learning model with the best performance should be replaced by the proposition and further exploration of multiple efficient learning models, each characterized by a distinct trade-off among the conflicting objectives. Committee machines and a posteriori preferences of the decision-maker may be employed to properly explore this diverse set of efficient learning models toward performance improvement. The whole multi-objective framework for machine learning is supported by three stages: (1) the multi-objective modelling of each learning problem, explicitly highlighting the conflicting objectives involved; (2) given the multi-objective formulation of the learning problem, for instance considering loss functions and penalty terms as conflicting objective functions, efficient solutions well distributed along the Pareto front are obtained by a deterministic and exact solver named NISE (Non-Inferior Set Estimation); (3) those efficient learning models are then subject to a posteriori model selection, or to ensemble filtering and aggregation.
    Given that NISE is restricted to two objective functions, an extension for many objectives, named MONISE (Many-Objective NISE), is also proposed here; this additional contribution expands the applicability of the proposed framework. To properly assess the merit of our multi-objective approach, more specific investigations were conducted, restricted to regularized linear learning models: (1) What is the relative merit of the a posteriori selection of a single learning model, among the ones produced by our proposal, when compared with other single-model approaches in the literature? (2) Is the diversity level of the learning models produced by our proposal higher than that achieved by alternative approaches devoted to generating multiple learning models? (3) What about the prediction quality of ensemble filtering and aggregation of the learning models produced by our proposal on (i) multi-class classification, (ii) unbalanced classification, (iii) multi-label classification, (iv) multi-task learning, and (v) multi-view learning? The deterministic nature of NISE and MONISE, their ability to properly handle the shape of the Pareto front in each learning problem, and the guarantee of always obtaining efficient learning models are advocated here as responsible for the promising results achieved on all three of these specific fronts of investigation. Doutorado, Engenharia de Computação; Doutor em Engenharia Elétrica. Grant 2014/13533-0, FAPESP.
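    A crude stand-in for what a Pareto-front solver produces is a weighted-sum scalarization sweep over two conflicting objectives. Unlike NISE, which places weights adaptively, this uses a fixed uniform grid of weights; the objectives and candidate grid below are illustrative assumptions:

```python
def scalarized_min(w, xs, f1, f2):
    """Minimize the weighted sum w*f1 + (1-w)*f2 over a candidate grid."""
    return min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))

# two conflicting objectives: a loss minimized at x=2 vs a penalty minimized at x=0
f1 = lambda x: (x - 2.0) ** 2  # "loss" term
f2 = lambda x: abs(x)          # "regularization" term
xs = [i / 100 for i in range(0, 301)]  # candidate solutions on [0, 3]

# sweep weights 0.1..0.9 and collect the distinct efficient solutions
front = sorted({scalarized_min(w / 10, xs, f1, f2) for w in range(1, 10)})
```

    Each weight yields a different trade-off between the two objectives, tracing an approximation of the Pareto front between the two individual minimizers; a solver like NISE chooses the weights adaptively so the front is covered without redundant solves.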

    Applications of Markov Random Field Optimization and 3D Neural Network Pruning in Computer Vision

    Recent years have witnessed the rapid development of Convolutional Neural Networks (CNNs) in various computer vision applications that were traditionally addressed by Markov Random Field (MRF) optimization methods. Even though CNN-based methods achieve high accuracy in these tasks, very fine-grained results are difficult to achieve. For instance, a pairwise MRF optimization method can segment objects using auxiliary edge information through its second-order terms, which is far from certain to be achieved by a deep neural network. MRF optimization methods, moreover, are able to enhance performance with explicit theoretical and experimental support through iterative energy minimization. Secondly, such an edge detector can be learned by CNNs, so transferring the output of one CNN to another task becomes valuable. It is desirable to fuse the superpixel contours from one state-of-the-art CNN with the semantic segmentation results from another, so that the fusion aligns the object contours in the semantic segmentation with the superpixel contours. This kind of fusion is not limited to semantic segmentation but extends to other tasks through the collective effect of multiple off-the-shelf CNNs. While fusing multiple CNNs is useful for enhancing performance, each such CNN is usually specifically designed and trained with an empirical configuration of resources. With a large batch size, however, joint CNN training may run out of GPU memory. This problem commonly arises when training CNNs efficiently under limited resources, and it is more severe in 3D CNNs than in 2D CNNs due to their higher training-resource requirements.
    To solve the first problem, we propose two fast and differentiable message passing algorithms, namely Iterative Semi-Global Matching Revised (ISGMR) and Parallel Tree-Reweighted Message Passing (TRWP), for both energy minimization problems and deep learning applications. Our experiments on a stereo vision dataset and an image inpainting dataset validate the effectiveness and efficiency of our methods, achieving minimum energies comparable to the state-of-the-art algorithm TRWS while greatly improving forward- and backward-propagation speed through CUDA programming on massively parallel trees. Applying these two methods to deep-learning semantic segmentation on PASCAL VOC 2012 with Canny edges yields enhanced segmentation results as measured by mean Intersection over Union (mIoU). For the second problem, to effectively fuse and finetune multiple CNNs, we present a transparent initialization module that identically maps the output of a multi-layer module to its input at the early stage of finetuning. The pretrained model parameters then gradually diverge during training as the loss decreases. This transparent initialization has a higher initialization rate than Net2Net and a higher recovery rate than random initialization and Xavier initialization. Our experiments validate the effectiveness of the proposed transparent initialization and of the sparse encoder with sparse matrix operations. The edges of segmented objects achieve a higher performance ratio and a higher F-measure than comparable methods. For the third problem, to compress a CNN effectively, especially resource-hungry 3D CNNs, we propose a single-shot neuron pruning method with resource constraints. The pruning principle is to remove neurons with low importance, corresponding to small connection sensitivities. A reweighting strategy based on the layerwise consumption of memory or FLOPs improves pruning by avoiding infeasible pruning of entire layers.
    Our experiments on a point cloud dataset, ShapeNet, and a medical image dataset, BraTS'18, demonstrate the effectiveness of our method. Applying our method to video classification on the UCF101 dataset using MobileNetV2 and I3D further confirms its benefits.
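    The connection-sensitivity idea behind single-shot pruning can be sketched on a toy linear least-squares model with an analytic gradient. The data, the saliency score |w · dL/dw|, and the keep-ratio below are illustrative assumptions in the spirit of SNIP-style sensitivity pruning, not the thesis's 3D-CNN method:

```python
def connection_sensitivity(w, X, y):
    """Saliency |w_j * dL/dw_j| for the squared loss L = sum((Xw - y)^2)."""
    n, d = len(X), len(w)
    resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
    grad = [2 * sum(resid[i] * X[i][j] for i in range(n)) for j in range(d)]
    return [abs(w[j] * grad[j]) for j in range(d)]

def prune(w, sal, keep):
    """Keep the `keep` weights with the largest saliency, zero out the rest."""
    order = sorted(range(len(w)), key=lambda j: -sal[j])
    kept = set(order[:keep])
    return [w[j] if j in kept else 0.0 for j in range(len(w))]

# toy data where only feature 0 matters; feature 1 carries little signal
X = [[1.0, 0.1], [2.0, 0.1], [3.0, 0.1]]
y = [2.0, 4.0, 6.0]
w = [1.0, 1.0]  # untrained initialization
sal = connection_sensitivity(w, X, y)
pruned = prune(w, sal, keep=1)
```

    The uninformative weight is removed in a single shot, before any training; the resource-constrained variant above additionally reweights saliencies by each layer's memory or FLOP consumption so that no layer is pruned away entirely.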