    Norm-Induced Entropies for Decision Forests


    On the average uncertainty for systems with nonlinear coupling

    The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy, when transformed to the probability domain, is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale, i.e., at the width of the distribution. The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Renyi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling. Comment: 24 pages, including 4 figures and 1 table.
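The first identity is easy to check numerically. A minimal sketch (not from the paper) verifying that the Shannon entropy, transformed to the probability domain as exp(-H), equals the geometric mean of the probabilities weighted by the probabilities themselves, prod_i p_i^(p_i):

```python
import numpy as np

# For a discrete distribution p, the Shannon entropy is
#   H(p) = -sum_i p_i * ln(p_i),
# and exp(-H) is the weighted geometric mean of the probabilities,
# with the p_i serving as their own weights: prod_i p_i ** p_i.

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * ln 0 = 0
    return -np.sum(p * np.log(p))

def weighted_geometric_mean(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.prod(p ** p)

p = [0.5, 0.25, 0.25]
assert np.isclose(np.exp(-shannon_entropy(p)), weighted_geometric_mean(p))
```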

    Combining Data-Driven 2D and 3D Human Appearance Models

    Detailed 2D and 3D body estimation of humans has many applications in our everyday life: interaction with machines, virtual try-on of fashion, or product adjustments based on a body size estimate are just some examples. Two key components of such systems are (1) detailed pose and shape estimation and (2) generation of images. Ideally, they should use 2D images as the input signal so that they can be applied easily and to arbitrary digital images. Due to the high complexity of human appearance and the depth ambiguities in 2D space, data-driven models are the natural tool for designing such methods. In this work, we consider two aspects of such systems: in the first part, we propose general optimization and implementation techniques for machine learning models and make them available in the form of software packages. In the second part, we present, in multiple steps, how the detailed analysis and generation of human appearance from digital 2D images can be realized. We work with two machine learning methods: Decision Forests and Artificial Neural Networks. The contribution of this thesis to the theory of Decision Forests is the introduction of a generalized entropy function that is efficient to evaluate, tunable to specific tasks, and allows us to establish relations to frequently used heuristics. For both Decision Forests and Neural Networks, we present implementation methods and a software package. Existing methods for 3D body estimation from images usually estimate the 14 most important, pose-defining points in 2D and convert them to a 3D 'skeleton'. In this work we show that a carefully crafted energy function is sufficient to recover a full 3D body shape automatically from these keypoints. In this way, we devise the first fully automatic method for estimating 3D body pose and shape from a 2D image. While this method successfully recovers a coarse 3D pose and shape, recovering details such as body part rotations remains a challenge.
However, more detailed models would require annotating data with a very rich set of cues. This approach does not scale to large datasets: neither the effort per image nor the required annotation quality can be achieved, because the exact positions of keypoints on the body surface are hard to estimate. To solve this problem, we develop a method that alternates between optimizing the 2D and 3D models, improving them iteratively. The labeling effort for humans remains low. At the same time, we create 2D models that reason about many times more details than existing methods, and we extend the 3D pose and body shape estimation to rotations and body extent. To generate images of people, existing methods usually work with 3D models that are hard to adjust and to use. In contrast, we develop a method that builds on the possibilities of automatic 3D body estimation: we use it to create a dataset of 3D bodies together with 2D clothes and cloth segments. With this information, we develop a data-driven model that directly produces 2D images of people. Only the diverse interplay of 2D and 3D body and appearance models in different forms makes it possible to achieve a high level of detail in both the analysis and the generation of human appearance. The developed techniques can, in principle, also be applied to the analysis and generation of images of other creatures and objects.
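The abstract does not spell out the generalized entropy family used for Decision Forests. As one hedged illustration of how such a family can subsume frequently used split heuristics, the Tsallis entropies (induced by the q-norms of the class distribution) recover the Gini impurity at q = 2 and the Shannon entropy in the limit q -> 1:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1) for q != 1;
    the limit q -> 1 recovers the Shannon entropy.
    Illustrative only -- not necessarily the exact family from the thesis."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))       # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.7, 0.2, 0.1]
gini = 1.0 - np.sum(np.asarray(p) ** 2)     # classic Gini impurity
assert np.isclose(tsallis_entropy(p, 2.0), gini)           # q = 2 -> Gini
assert np.isclose(tsallis_entropy(p, 1.0 + 1e-9),          # q -> 1 -> Shannon
                  tsallis_entropy(p, 1.0), atol=1e-6)
```

A single parameter q thus interpolates between the two standard decision-tree split criteria, which is the kind of tunability the abstract describes.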

    Genetic Adversarial Training of Decision Trees

    We put forward a novel learning methodology for ensembles of decision trees based on a genetic algorithm that is able to train a decision tree to maximize both its accuracy and its robustness to adversarial perturbations. This learning algorithm internally leverages a complete formal verification technique for robustness properties of decision trees based on abstract interpretation, a well-known static program analysis technique. We implemented this genetic adversarial training algorithm in a tool called Meta-Silvae (MS) and experimentally evaluated it on reference datasets used in adversarial training. The experimental results show that MS is able to train robust models that compete with, and often improve on, the current state of the art in adversarial training of decision trees, while producing much more compact, and therefore more interpretable and efficient, tree models.
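Meta-Silvae's verifier is a complete abstract-interpretation analysis; the sketch below is a much-simplified stand-in (all names are illustrative, not from the tool) showing the shape of a fitness function that couples accuracy with robustness. For a single decision stump, robustness at radius eps can be decided exactly with an interval check:

```python
import numpy as np

def stump_predict(x, threshold):
    # A one-feature decision stump: label 1 iff x > threshold.
    return int(x > threshold)

def stump_robust(x, threshold, eps):
    # The stump's label is constant on [x - eps, x + eps] iff the
    # threshold does not fall inside that interval.
    return not (x - eps <= threshold <= x + eps)

def fitness(threshold, X, y, eps, alpha=0.5):
    """Weighted sum of accuracy and robustness -- the kind of objective
    a genetic algorithm would maximize per candidate tree."""
    acc = np.mean([stump_predict(x, threshold) == yi for x, yi in zip(X, y)])
    rob = np.mean([stump_robust(x, threshold, eps) for x in X])
    return alpha * acc + (1 - alpha) * rob
```

A genetic algorithm would then mutate and recombine candidate thresholds (and, for real trees, whole tree structures), keeping the fittest individuals.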

    Entropy-based feature extraction for electromagnetic discharges classification in high-voltage power generation

    This work exploits four entropy measures, known as Sample, Permutation, Weighted Permutation, and Dispersion Entropy, to extract relevant information from Electromagnetic Interference (EMI) discharge signals that is useful in fault diagnosis of High-Voltage (HV) equipment. Multi-class classification algorithms are used to distinguish between various discharge sources such as Partial Discharges (PD), Exciter, Arcing, micro Sparking, and Random Noise. The signals were measured and recorded at different sites, followed by data analysis by EMI experts to identify and label the discharge source type contained within each signal. The classification was performed both within each site and across all sites. The system performs well in both cases, with extremely high classification accuracy within site. This work demonstrates the ability to extract relevant entropy-based features from time-resolved EMI discharge signals with minimal computation, making the system ideal for potential application to online condition monitoring based on EMI.
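Of the four measures, permutation entropy is the simplest to sketch. A minimal implementation following Bandt and Pompe's definition (the `order` and `delay` parameters are the usual ordinal-pattern settings, not values taken from this paper):

```python
import math
from collections import Counter

def permutation_entropy(signal, order=3, delay=1):
    """Shannon entropy of the distribution of ordinal patterns of
    length `order` in the signal, normalized by log(order!) so the
    result lies in [0, 1]."""
    patterns = Counter()
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = [signal[i + j * delay] for j in range(order)]
        # The ordinal pattern is the argsort of the window.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))
```

A strictly increasing signal yields a single ordinal pattern and therefore entropy 0, while a signal visiting all patterns equally often approaches 1 -- cheap to compute, which matches the paper's point about minimal computation for online monitoring.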

    Graph entropy and related topics


    Learning with Multiple Similarities

    The notion of similarity between data points is central to many classification and clustering algorithms. We often encounter situations in which there is more than one set of pairwise similarity graphs between objects, arising from different measures of similarity between objects, from a single similarity measure defined on multiple data representations, or from a combination of these. Such examples can be found in various applications in computer vision, natural language processing, and computational biology. Combining information from these multiple sources is often beneficial in learning meaningful concepts from data. This dissertation proposes novel methods to effectively fuse information from multiple similarity graphs, targeted at two fundamental tasks in machine learning: classification and clustering. In particular, I propose two models for learning spectral embeddings from multiple similarity graphs using ideas from co-training and co-regularization. Further, I propose a novel approach to the problem of multiple kernel learning (MKL), converting it to the more familiar problem of binary classification in a transformed space. The proposed MKL approach learns a "good" linear combination of base kernels by optimizing a quality criterion that is justified both empirically and theoretically. The ideas of the proposed MKL method are also extended to learning nonlinear combinations of kernels, in particular polynomial kernel combinations and more general nonlinear kernel combinations using random forests.
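The dissertation's MKL method optimizes its own quality criterion via binary classification in a transformed space; as a simpler, hedged stand-in for the general idea of weighting base kernels by quality, the sketch below picks simplex weights by (uncentered) kernel-target alignment -- kernels that agree more with the label kernel y y^T get more weight (function names are illustrative):

```python
import numpy as np

def alignment(K, y):
    """Frobenius alignment <K, y y^T> / (||K||_F * ||y y^T||_F)."""
    yy = np.outer(y, y)
    return np.sum(K * yy) / (np.linalg.norm(K) * np.linalg.norm(yy))

def combine_kernels(kernels, y):
    """Convex combination K = sum_m mu_m * K_m with alignment-based
    weights on the simplex. A stand-in, not the dissertation's method."""
    scores = np.array([max(alignment(K, y), 0.0) for K in kernels])
    mu = scores / scores.sum()
    return sum(m * K for m, K in zip(mu, kernels)), mu
```

For example, with labels y = [1, -1, 1, -1], the ideal kernel K1 = y y^T has alignment 1 and receives twice the weight of the identity kernel (alignment 0.5), so mu = [2/3, 1/3].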