
    Fast algorithms for the optimum-path forest-based classifier

    Pattern Recognition applications handle datasets that grow constantly in both size and complexity. In this work, we propose and analyze algorithms for the supervised classifier based on the Optimum-Path Forest (OPF). This classifier has proven capable of providing results comparable to better-known pattern recognition techniques, but with a much faster training phase. Nevertheless, there is still room for improvement. The contribution of this work is the introduction of spatial indexing and parallel algorithms into the training and classification phases of the supervised OPF classifier. First, we propose a simple parallelization approach for the training phase. Following the traditional sequential OPF training, it maintains a priority queue to compute the best samples at each iteration. We then replace the priority queue with an array and a linear search, aiming at a data structure better suited for parallelism. We show that this approach yields higher temporal and spatial locality than the previous one, providing better execution times. Additionally, we show how the use of vectorization in distance computations affects execution time and provide guidelines for its proper use. For the classification phase, we first seek to reduce the number of distance computations against the classifier samples and then introduce a parallelization scheme. To this end, we develop a new theory for indexing the OPF classifier in a metric space. We then use it to build an efficient data structure that allows us to reduce the number of distance computations against classifier samples. Finally, we propose its parallelization, obtaining very fast classification of new samples.
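The priority-queue-to-array replacement described above can be sketched as follows. This is an illustrative sequential version only, assuming the classic f_max path-cost function with Euclidean arc weights; function and variable names, and the fact that prototypes are given in advance, are assumptions, not the authors' implementation.

```python
import numpy as np

def opf_train(X, y, prototypes):
    """Sketch of OPF supervised training using an array plus linear
    search in place of a priority queue (illustrative, not the paper's
    actual code). Labels are conquered from the prototype samples."""
    n = len(X)
    cost = np.full(n, np.inf)
    label = np.array(y, copy=True)      # prototype labels propagate outward
    done = np.zeros(n, dtype=bool)
    cost[prototypes] = 0.0
    for _ in range(n):
        # linear search over the cost array for the cheapest
        # unprocessed sample -- the structure that parallelizes well
        s = min((i for i in range(n) if not done[i]), key=lambda i: cost[i])
        done[s] = True
        for t in range(n):
            if done[t]:
                continue
            # f_max path cost: the maximum arc weight along the path
            c = max(cost[s], np.linalg.norm(X[s] - X[t]))
            if c < cost[t]:
                cost[t] = c
                label[t] = label[s]
    return cost, label
```

In the parallel variant the inner update loop over `t` is the natural candidate for data-parallel execution, since each candidate sample is updated independently.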

    Automated recognition of lung diseases in CT images based on the optimum-path forest classifier

    The World Health Organization estimated that around 300 million people have asthma, and 210 million people are affected by Chronic Obstructive Pulmonary Disease (COPD). It is also estimated that the number of deaths from COPD increased by 30% in 2015 and that COPD will become the third major cause of death worldwide by 2030. These statistics about lung diseases get worse when one considers fibrosis, calcifications and other diseases. For the public health system, the early and accurate diagnosis of any pulmonary disease is mandatory for effective treatment and the prevention of further deaths. In this sense, this work consists of using information from lung images to identify and classify lung diseases. Two steps are required to achieve these goals: the automatic extraction of representative image features of the lungs and the recognition of the possible disease using a computational classifier. For the first step, this work proposes an approach that combines the Spatial Interdependence Matrix (SIM) and Visual Information Fidelity (VIF). For the second step, we propose to employ a Gaussian-based distance together with the optimum-path forest (OPF) classifier to classify the lungs under study as normal, with fibrosis, or affected by COPD. Moreover, to confirm the robustness of OPF in this classification problem, we also considered Support Vector Machines and a Multilayer Perceptron Neural Network for comparison purposes. Overall, the results confirmed the good performance of the OPF configured with the Gaussian distance when applied to SIM- and VIF-based features. The performance scores achieved by the OPF classifier were as follows: an average accuracy of 98.2%, a total processing time of 117 microseconds on a common personal laptop, and an F-score of 95.2% over the three classification classes. These results showed that OPF is a very competitive classifier, suitable for lung disease classification.

    Classification of induced magnetic field signals for the microstructural characterization of sigma phase in duplex stainless steels

    Duplex stainless steels present excellent mechanical and corrosion resistance properties. However, when heat treated at temperatures above 600 ºC, the undesirable tertiary sigma phase is formed. This phase presents high hardness, around 900 HV, and is rich in chromium, compromising the material's toughness when the amount of this phase reaches 4% or more. This work aimed to develop a solution for the detection of this phase in duplex stainless steels through the computational classification of induced magnetic field signals. The proposed solution is based on an Optimum-Path Forest classifier, which was revealed to be more robust and effective than Bayes, Artificial Neural Network and Support Vector Machine based classifiers. The induced magnetic field was produced by the interaction between an applied external field and the microstructure. Samples of the 2205 duplex stainless steel were thermally aged in order to obtain different amounts of sigma phase (up to 18% in content). The obtained classification results were compared against the ones obtained by the Charpy impact energy test, the amount of sigma phase, and analysis of the fracture surface by scanning electron microscopy and X-ray diffraction. The proposed solution achieved a classification accuracy superior to 95% and was revealed to be robust to signal noise, being therefore a valid testing tool to be used in this domain.

    Active learning with applications to the diagnosis of parasites

    Advisors: Alexandre Xavier Falcão, Pedro Jussieu de Rezende. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Image datasets have grown large with the fast advances and varieties of the imaging technologies, demanding urgent solutions for information processing, organization, and retrieval. Processing here aims to annotate the image by assigning to it a label that represents its semantic content. Annotation is crucial for the effective organization and retrieval of the information related to the images. However, manual annotation is unfeasible in large datasets, and successful automatic annotation by a pattern classifier strongly depends on the quality of a much smaller training set. Active learning techniques have been proposed to select those representative training samples from the large dataset with a label suggestion, which can be either confirmed or corrected by the expert. Nevertheless, these techniques very often ignore the need for interactive response times during the active learning process. Therefore, this PhD thesis presents active learning methods that can reduce and/or organize the large dataset such that sample selection does not require reprocessing it entirely at every learning iteration. Moreover, the selection can be interrupted as soon as a desired number of samples from the reduced and organized dataset is identified. These methods show increasing progress, first with data reduction only, and then with subsequent organization of the reduced dataset. The thesis also addresses a real problem --- the diagnosis of parasites --- in which the existence of a diverse class (i.e., the impurity class), with a much larger size and samples that are similar to some types of parasites, makes data reduction considerably less effective. The problem is finally circumvented with a different type of data organization, which still allows interactive response times and yields a better and more robust active learning approach for the diagnosis of parasites. The methods have been extensively assessed with different types of unsupervised and supervised classifiers using datasets from distinct applications and baseline approaches that rely on random sample selection and/or reprocess the entire dataset at each learning iteration. Finally, the thesis demonstrates that further improvements are obtained with semi-supervised learning.
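The reduce-then-select idea can be sketched as below. This is a stand-in, not the thesis's method: the reduction step here is a trivial group-representative pick, and the selection criterion is farthest-first novelty; the actual thesis uses more elaborate reduction/organization schemes, and `oracle` stands for the human expert confirming or correcting labels.

```python
import numpy as np

def reduce_dataset(X, k, seed=0):
    """Data-reduction stand-in: keep one representative per random
    group (a placeholder for the clustering-based organization in the
    thesis). Returns indices into X."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return np.array([g[0] for g in np.array_split(idx, k)])

def active_learning(X, oracle, rounds, k=10):
    """Hypothetical active-learning loop: at each round, pick from the
    reduced pool the sample farthest from all labeled ones (a simple
    'most novel' criterion), ask the expert for its label, and grow the
    training set. The full dataset is never reprocessed per iteration."""
    pool = list(reduce_dataset(X, k))
    labeled, labels = [], []
    for _ in range(rounds):
        if not pool:
            break
        if labeled:
            dists = [min(np.linalg.norm(X[i] - X[j]) for j in labeled)
                     for i in pool]
            pick = pool[int(np.argmax(dists))]
        else:
            pick = pool[0]
        pool.remove(pick)
        labeled.append(pick)
        labels.append(oracle(pick))       # expert confirms or corrects
    return labeled, labels
```

The key property illustrated is that each iteration touches only the reduced pool, which is what keeps response times interactive.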

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management. We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations. 
    This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
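The anomaly-diagnosis idea of training models on collected resource-usage data can be illustrated in miniature as follows. Everything here is a stand-in: the summary-statistic features and the nearest-centroid classifier are simplifications chosen for brevity, not the feature set or models used in the thesis.

```python
import numpy as np

def telemetry_features(series):
    """Toy feature extraction from one node's resource-usage time
    series: simple summary statistics (mean, spread, range) standing in
    for the richer features used in the thesis."""
    s = np.asarray(series, dtype=float)
    return np.array([s.mean(), s.std(), s.max() - s.min()])

class NearestCentroid:
    """Minimal stand-in classifier: assign each sample to the class
    whose feature centroid is nearest."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {
            c: np.mean([x for x, lbl in zip(X, y) if lbl == c], axis=0)
            for c in self.classes_
        }
        return self
    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

In this miniature, flat utilization traces land near the "healthy" centroid while oscillating traces land near the "anomaly" centroid, mirroring the idea of diagnosing anomalies from monitored telemetry.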

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual; thus, they are time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort for understanding and processing the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves the IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications in the future at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Learning for Optimization with Virtual Savant

    Optimization problems arising in multiple fields of study demand efficient algorithms that can exploit modern parallel computing platforms. The remarkable development of machine learning offers an opportunity to incorporate learning into optimization algorithms to efficiently solve large and complex problems. This thesis explores Virtual Savant, a paradigm that combines machine learning and parallel computing to solve optimization problems. Virtual Savant is inspired by the Savant Syndrome, a mental condition in which patients excel at a specific ability far above the average. In analogy to the Savant Syndrome, Virtual Savant extracts patterns from previously-solved instances to learn how to solve a given optimization problem in a massively-parallel fashion. In this thesis, Virtual Savant is applied to three optimization problems related to software engineering, task scheduling, and public transportation. The efficacy of Virtual Savant is evaluated on different computing platforms, and the experimental results are compared against exact and approximate solutions for both synthetic and realistic instances of the studied problems. Results show that Virtual Savant can find accurate solutions, effectively scale in the problem dimension, and take advantage of the availability of multiple computing resources.
    Funding: Fundación Carolina; Agencia Nacional de Investigación e Innovación (ANII, Uruguay); Universidad de Cádiz; Universidad de la República.
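The "extract patterns from previously-solved instances" idea can be sketched as follows. This is a hedged illustration, not the thesis's actual learning machinery: a 1-nearest-neighbor rule over solver decisions stands in for the classifier, and the per-item prediction loop is the part that would run massively in parallel.

```python
import numpy as np

def train_savant(solved_instances):
    """Virtual Savant-style sketch (hypothetical): from instances
    already solved by an exact solver, learn a per-item rule mapping
    item features to the decision the solver made for that item."""
    X = np.array([f for inst in solved_instances for f in inst["features"]])
    y = np.array([d for inst in solved_instances for d in inst["decisions"]])

    def predict(features):
        # each item is classified independently of the others, so this
        # loop is trivially parallel across items / compute resources
        out = []
        for f in np.atleast_2d(np.asarray(features, dtype=float)):
            out.append(y[int(np.argmin(np.linalg.norm(X - f, axis=1)))])
        return out

    return predict
```

For a scheduling problem, for example, `features` could describe each task and `decisions` the machine the exact solver assigned it to; the learned rule then assigns all tasks of a new instance at once.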

    Learning with Scalability and Compactness

    Artificial Intelligence has been thriving for decades since its birth. Traditional AI features heuristic search and planning, providing good strategies for tasks that are inherently search-based, such as games and GPS routing. In the meantime, machine learning, arguably the hottest subfield of AI, embraces a data-driven methodology with great success in a wide range of applications such as computer vision and speech recognition. As a new trend, the applications of both learning and search have shifted toward mobile and embedded devices, which entails not only scalability but also compactness of the models. Under this general paradigm, we propose a series of works to address the issues of scalability and compactness within machine learning and its applications to heuristic search. We first focus on the scalability issue of memory-based heuristic search, which was recently ameliorated by Maximum Variance Unfolding (MVU), a manifold learning algorithm capable of learning state embeddings that serve as effective heuristics to speed up A* search. Though achieving unprecedented online search performance under constraints on memory footprint, MVU is notoriously slow in offline training. To address this problem, we introduce Maximum Variance Correction (MVC), which finds large-scale feasible solutions to MVU by post-processing embeddings from any manifold learning algorithm. It increases the scale of MVU embeddings by several orders of magnitude and is naturally parallel. We further propose the Goal-oriented Euclidean Heuristic (GOEH), a variant of MVU embeddings which preferentially optimizes the heuristics associated with goals in the embedding while maintaining their admissibility. We demonstrate unmatched reductions in search time across several non-trivial A* benchmark search problems.
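The Euclidean-heuristic idea, where h(s) is the distance between learned embeddings of a state and the goal, can be sketched with a plain A* implementation. The learned MVU/GOEH embedding is replaced here by hand-made 2-D coordinates purely for illustration; the graph and names are assumptions.

```python
import heapq
import numpy as np

def astar(neighbors, start, goal, h):
    """Plain A*; h(s) is any admissible heuristic, e.g. the Euclidean
    distance between a learned embedding of s and of the goal."""
    frontier = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if s == goal:
            return g
        if g > best.get(s, float("inf")):
            continue  # stale queue entry
        for t, w in neighbors(s):
            ng = g + w
            if ng < best.get(t, float("inf")):
                best[t] = ng
                heapq.heappush(frontier, (ng + h(t), ng, t))
    return float("inf")

# toy 4-node path graph; its 'embedding' is just the node coordinate,
# standing in for the learned state embedding
emb = {i: np.array([float(i), 0.0]) for i in range(4)}
def nbrs(s):
    return [(t, 1.0) for t in (s - 1, s + 1) if 0 <= t <= 3]
h = lambda s: float(np.linalg.norm(emb[s] - emb[3]))
```

Admissibility holds whenever the embedding never stretches distances beyond true path costs, which is the property the MVU/GOEH constraints are designed to preserve.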
    Through these works, we bridge the gap between the manifold learning literature and heuristic search, which have been regarded as fundamentally different, leading to cross-fertilization for both fields. Deep learning has made a big splash in the machine learning community with its superior accuracy. However, it comes at the price of a huge model size that may involve billions of parameters, which poses great challenges for its use on mobile and embedded devices. To achieve compactness, we propose HashedNets, a general approach to compressing neural network models that leverages feature hashing. At its core, HashedNets randomly groups parameters using a low-cost hash function and shares parameter values within each group. According to our empirical results, a neural network can be 32x smaller with little drop in accuracy. We further introduce Frequency-Sensitive Hashed Nets (FreshNets), which extend this hashing technique to convolutional neural networks by compressing parameters in the frequency domain. Compared with many AI applications, neural networks have not gained as much popularity as they should in traditional data mining tasks. For these tasks, categorical features need to be converted to a numerical representation before neural networks can process them. We show that a naïve use of the classic one-hot encoding may result in gigantic weight matrices and therefore lead to prohibitively expensive memory costs in neural networks. Inspired by word embedding, we advocate a compellingly simple, yet effective, neural network architecture with category embedding. It is capable of directly handling both numerical and categorical features, while also providing visual insights into feature similarities. Finally, we conduct a comprehensive empirical evaluation that showcases the efficacy and practicality of our approach, and provides surprisingly good visualization and clustering of categorical features.
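The HashedNets weight-sharing trick can be sketched as follows. The multiplicative hash below is an assumption made for brevity; the original work uses a stronger low-cost hash (xxHash) of the virtual index, and never materializes the virtual matrix at all.

```python
import numpy as np

def hashed_layer(x, virtual_shape, real_weights):
    """HashedNets-style sketch: a 'virtual' weight matrix of shape
    virtual_shape is never trained or stored directly; each virtual
    entry (i, j) is mapped by a cheap deterministic hash to one of
    len(real_weights) shared parameters, so memory cost is k, not
    rows * cols. The hash below is illustrative only."""
    rows, cols = virtual_shape
    k = len(real_weights)
    i = np.arange(rows)[:, None]
    j = np.arange(cols)[None, :]
    bucket = (i * 2654435761 + j * 40503) % k   # hash of the virtual index
    W = np.asarray(real_weights)[bucket]        # materialized only for clarity
    return W @ np.asarray(x, dtype=float)
```

The compression factor is `rows * cols / k`; gradients for a shared parameter simply accumulate over all virtual entries hashed to its bucket, which is what makes the scheme trainable end to end.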