8 research outputs found

    Multiple Resolution Nonparametric Classifiers

    Bayesian discriminant functions provide optimal classification decision boundaries in the sense of minimizing the average error rate. An operational assumption is that the probability density functions for the individual classes are either known a priori or can be estimated from the data. The use of Parzen windows is a popular and theoretically sound choice for such estimation. However, while the minimal average error rate can be achieved by combining Bayes' rule with Parzen-window density estimation, the latter is computationally costly to the point where it may lead to unacceptable run-time performance. We present the Multiple Resolution Nonparametric (MRN) classifier, a new approach that significantly reduces the computational cost of using Parzen-window density estimates without sacrificing the virtues of Bayesian discriminant functions. Performance is evaluated against a standard Parzen-window classifier on several common datasets.
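
    A minimal sketch of the plug-in baseline the abstract compares against may help: Parzen-window class-conditional densities combined with the Bayes decision rule. The function names, Gaussian kernel, and bandwidth below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def parzen_density(x, samples, h):
    """Gaussian Parzen-window density estimate at a query point x.

    x: (d,) query vector; samples: (n, d) training points of one class;
    h: window width (bandwidth)."""
    n, d = samples.shape
    diff = (samples - x) / h
    kernel = np.exp(-0.5 * (diff ** 2).sum(axis=1))
    return kernel.sum() / (n * (2 * np.pi) ** (d / 2) * h ** d)

def bayes_classify(x, class_samples, priors, h=0.5):
    """Assign x to the class maximizing prior * Parzen likelihood,
    i.e. the Bayes decision rule with plug-in density estimates."""
    scores = [p * parzen_density(x, s, h)
              for s, p in zip(class_samples, priors)]
    return int(np.argmax(scores))
```

    Note that parzen_density touches every stored training sample on every query; this per-query scan is the run-time cost the MRN classifier is designed to reduce.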

    Neural network for ordinal classification of imbalanced data by minimizing a Bayesian cost

    Ordinal classification of imbalanced data is a challenging problem that appears in many real-world applications. The challenge is to simultaneously take into account the order of the classes and the class imbalance, which can notably improve the performance metrics. The Bayesian formulation makes it possible to deal with these two characteristics jointly: it takes into account the prior probability of each class and the decision costs, which can be used to encode the imbalance and the ordinal information, respectively. We propose to use the Bayesian formulation to train neural networks, which have shown excellent results in many classification tasks. A loss function is proposed for training networks with a single neuron in the output layer and a threshold-based decision rule. The loss is an estimate of the Bayesian classification cost, based on the Parzen-window estimator and adapted to the thresholded decision. Experiments with several real datasets show that the proposed method provides competitive results in different scenarios, owing to its flexibility in specifying the relative importance of errors among the different classes, taking the class order into account independently of each class's probability. This work was partially supported by the Spanish Ministry of Science and Innovation through the Thematic Network "MAPAS" (TIN2017-90567-REDT) and by the BBVA Foundation through the "2-BARBAS" research grant. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
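
    To make the single-output threshold rule and the smoothed cost concrete, here is a minimal sketch under assumed details: the sigmoid smoothing, the cost-matrix shape, and all names are illustrative, and the sketch only approximates in spirit the paper's Parzen-window estimate of the Bayesian cost.

```python
import numpy as np

def threshold_decision(z, thresholds):
    """Map the single output neuron's value z to an ordinal class:
    class k iff thresholds[k-1] < z <= thresholds[k]."""
    return int(np.searchsorted(np.asarray(thresholds), z))

def smoothed_bayes_cost(z, y, thresholds, cost, h=0.1):
    """Differentiable surrogate for the expected Bayesian cost of one sample.

    The hard interval indicators of the decision rule are softened with
    sigmoids of width h, so that sum_k cost[y, k] * P(decide k | z) has a
    usable gradient for training. cost is a (K, K) matrix where cost[i, j]
    is the cost of deciding class j when the true class is i."""
    s = 1.0 / (1.0 + np.exp(-(z - np.asarray(thresholds)) / h))
    p = np.concatenate(([1.0], s)) - np.concatenate((s, [0.0]))
    return float(np.dot(cost[y], p))
```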

    P-gram: positional N-gram for the clustering of machine-generated messages

    An IT system generates messages for other systems or users to consume, through direct interaction or as system logs. Automatically identifying the types of these machine-generated messages has many applications, such as intrusion detection and system behavior discovery. Among the various heuristic methods for automatically identifying message types, clustering methods based on keyword extraction have been quite effective. However, these methods still suffer from keyword misidentification: some keyword occurrences are wrongly identified as payload, and some strings in the payload are wrongly identified as keyword occurrences, leading to misidentification of the message types. In this paper, we propose a new machine language processing (MLP) approach, called P-gram, specifically designed for identifying keywords in, and subsequently clustering, machine-generated messages. First, we introduce a novel concept and technique, the positional n-gram, for message keyword extraction. By associating a position as metadata with each n-gram, we can more accurately discern which n-grams are keywords of a message and which are parts of the payload. Then, the positional keywords are used as features to cluster the messages, with an entropy-based positional weighting method devised to measure the importance, or weight, of the positional keywords for each message. Finally, a general centroid clustering method, K-Medoids, is used to leverage the importance of the keywords and cluster the messages into groups reflecting their types. We evaluate our method on a range of machine-generated (text and binary) messages from real-world systems and show that it achieves higher accuracy than current state-of-the-art tools.
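
    The core positional n-gram idea can be sketched in a few lines. The character-level n-grams, support threshold, and names below are illustrative assumptions; the paper's entropy-based positional weighting and K-Medoids clustering are omitted.

```python
from collections import Counter

def positional_ngrams(message, n=3):
    """Every n-gram of a message paired with its character offset."""
    return [(i, message[i:i + n]) for i in range(len(message) - n + 1)]

def keyword_candidates(messages, n=3, min_support=0.8):
    """(position, n-gram) pairs recurring in a large fraction of the
    messages are likely template keywords; rare pairs are likely payload."""
    counts = Counter(pg for m in messages for pg in positional_ngrams(m, n))
    cutoff = min_support * len(messages)
    return {pg for pg, c in counts.items() if c >= cutoff}
```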

    Neighborhood analysis methods in acoustic modeling for automatic speech recognition

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 121-134). This thesis investigates the problem of using nearest-neighbor based non-parametric methods for multi-class class-conditional probability estimation, applied to acoustic modeling for speech recognition. Neighborhood components analysis (NCA) (Goldberger et al. [2005]) serves as the departure point for this study. NCA is a non-parametric method that provides two things: (1) low-dimensional linear projections of the feature space that allow nearest-neighbor algorithms to perform well, and (2) nearest-neighbor based class-conditional probability estimates. First, NCA is used to perform dimensionality reduction on acoustic vectors, a commonly addressed problem in speech recognition. NCA is shown to perform competitively with heteroscedastic linear discriminant analysis (HLDA) (Kumar [1997]), another dimensionality reduction technique commonly employed in speech. Second, a nearest-neighbor based model related to NCA is created to provide a class-conditional estimate that is sensitive to the possible underlying relationships between the acoustic-phonetic labels. An embedding of the labels is learned that can be used to estimate the similarity or confusability between labels. This embedding is related to the concept of error-correcting output codes (ECOC), and the proposed model is therefore referred to as NCA-ECOC. The estimates provided by this method, along with nearest-neighbor information, are shown to improve speech recognition performance (2.5% relative reduction in word error rate). Third, a model for calculating class-conditional probability estimates is proposed that generalizes GMM, NCA, and kernel density approaches. This model, called locally-adaptive neighborhood components analysis (LA-NCA), learns different low-dimensional projections for different parts of the space, exploiting the fact that in different parts of the space different directions may be important for discrimination between the classes. Because this model is computationally intensive and prone to over-fitting, methods for sub-selecting the neighbors used to provide the class-conditional estimates are explored. The estimates provided by LA-NCA are shown to give significant gains in speech recognition performance (7-8% relative reduction in word error rate) as well as in phonetic classification. By Natasha Singh-Miller.
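
    For reference, the NCA objective that serves as the thesis's departure point can be stated compactly; the sketch below follows the published formulation (soft nearest-neighbor assignments under a linear projection), with names and the row-wise stabilization chosen here for illustration.

```python
import numpy as np

def nca_objective(A, X, y):
    """Expected leave-one-out accuracy under stochastic neighbor
    assignment -- the NCA objective -- for a linear projection A.

    A: (r, d) projection matrix; X: (n, d) features; y: (n,) labels."""
    Z = X @ A.T                                    # project to r dimensions
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                   # a point never picks itself
    d2 -= d2.min(axis=1, keepdims=True)            # stabilize the softmax
    p = np.exp(-d2)
    p /= p.sum(axis=1, keepdims=True)              # soft neighbor probabilities
    same = y[:, None] == y[None, :]
    return (p * same).sum(axis=1).mean()           # mean P(neighbor shares my label)
```

    scikit-learn offers a gradient-based optimizer for this objective as sklearn.neighbors.NeighborhoodComponentsAnalysis.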

    Techniques for data pattern selection and abstraction

    This thesis concerns the problem of prototype reduction in instance-based learning. In order to deal with problems such as storage requirements, sensitivity to noise, and computational complexity, various algorithms have been presented that condense the number of stored prototypes while maintaining competent classification accuracy. Instance selection, which recovers a smaller subset of the original training set, is the most widely used technique for instance reduction, but prototype abstraction, which generates new prototypes to replace the initial ones, has also gained considerable interest recently. The major contribution of this work is the proposal of four novel frameworks for prototype reduction: the Class Boundary Preserving algorithm (CBP), a hybrid method that uses both selection and generation of prototypes; Instance Seriation for Prototype Abstraction (ISPA), an abstraction algorithm; and two selective techniques, Spectral Instance Reduction (SIR) and Direct Weight Optimization (DWO). CBP is a multi-stage method based on a simple heuristic that is very effective in identifying samples close to class borders. A noise filter removes harmful instances, while the heuristic determines the geometrical distribution of patterns around every instance; together with the concepts of nearest-enemy pairs and mean-shift clustering, the algorithm decides on the final set of retained prototypes. DWO is a selection model whose output set of prototypes is determined by a set of binary weights, computed according to an objective function based on the ratio between the nearest friend and nearest enemy of every sample; to obtain good-quality results, DWO is optimized with a genetic algorithm. ISPA is an abstraction technique that employs the concept of data seriation to organize instances in an arrangement that favours merging between them, creating a new set of prototypes. The SIR algorithm introduces a set of border-discriminating features (BDFs) that depict the local distribution of friends and enemies of every sample; these are used along with spectral graph theory to partition the training set into border and internal instances. Results show that CBP, SIR, and DWO, the three major algorithms presented in this thesis, are competent and efficient in terms of at least one of the two basic objectives, classification accuracy and condensation ratio, and the comparison against other successful condensation algorithms illustrates the competitiveness of the proposed models.
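
    The nearest-friend/nearest-enemy ratio underlying DWO's objective can be sketched directly; the function name and the border-instance reading below are illustrative, not taken from the thesis.

```python
import numpy as np

def friend_enemy_ratios(X, y):
    """Distance to the nearest same-class neighbour (nearest friend)
    over distance to the nearest other-class neighbour (nearest enemy).
    Small ratios mark instances deep inside their own class region;
    ratios near or above 1 mark border or noisy instances."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distances
    same = y[:, None] == y[None, :]
    friend = np.where(same, d, np.inf).min(axis=1)
    enemy = np.where(~same, d, np.inf).min(axis=1)
    return friend / enemy
```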

    Hierarchical models for visual recognition and learning of objects, scenes, and activities

    In many computer vision applications, objects have to be learned and recognized in images or image sequences, and most of these objects have a hierarchical structure. For example, 3D objects can be decomposed into object parts, and object parts, in turn, into geometric primitives. Scenes, likewise, are composed of objects, and activities or behaviors can be divided hierarchically into actions and these into individual movements. Hierarchical models are therefore ideally suited for representing a wide range of objects in applications such as object recognition, human pose estimation, or activity recognition. In this work, new probabilistic hierarchical models are presented that allow an efficient representation of multiple objects across different categories, scales, rotations, and views. The idea is to exploit similarities between objects, object parts, or actions and movements in order to share calculations and avoid redundant information. We introduce online and offline learning methods that make it possible to build efficient hierarchies from small or large training datasets in which poses or articulated structures are given by instances. Furthermore, we present inference approaches for fast and robust detection. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. They are used in a unified hierarchical framework, spatially for object recognition and spatiotemporally for activity recognition. The unified generic hierarchical framework allows the proposed models to be applied in different projects. Besides classical object recognition, it is used to detect human poses in a project on gait analysis. The activity detection is used in a project on designing environments for ageing, to identify activities and behavior patterns in smart homes. In a project on parking spot detection with an intelligent vehicle, the proposed approaches are used to model the vehicle's environment hierarchically for efficient and robust scene interpretation in real time.
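
    The part-sharing idea behind these hierarchies can be shown with a toy structure; everything below (names, the flat dataclass layout) is a hypothetical sketch rather than the thesis's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A node in a compositional hierarchy: objects decompose into
    parts, and parts into geometric primitives."""
    name: str
    children: list = field(default_factory=list)

# Shared parts are single instances referenced by several parents, so any
# evidence computed for them (e.g. part detections) can be reused rather
# than recomputed for every object that contains them.
wheel = Part("wheel")
cabin = Part("cabin")
car = Part("car", [cabin, wheel])
truck = Part("truck", [cabin, wheel])   # same Part objects, shared, not copied
```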