984 research outputs found

    Randomized Reference Classifier with Gaussian Distribution and Soft Confusion Matrix Applied to the Improving Weak Classifiers

    Full text link
    In this paper, the issue of building the RRC model using probability distributions other than the beta distribution is addressed. More precisely, we propose to build the RRC model using the truncated normal distribution. Heuristic procedures for the expected value and variance of the truncated normal distribution are also proposed. The proposed approach is evaluated within an SCM-based model to assess the consequences of applying the truncated normal distribution in the RRC model. The experimental evaluation is performed using four different base classifiers and seven quality measures. The results show that the proposed approach is comparable to the RRC model built using the beta distribution. Moreover, for some base classifiers, the truncated-normal-based SCM algorithm turned out to be better at discovering objects coming from minority classes. Comment: arXiv admin note: text overlap with arXiv:1901.0882
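
    The paper's own heuristics for the truncated-normal moments are not reproduced in this abstract. For orientation only, the sketch below computes the exact mean and variance of a normal distribution truncated to an interval using SciPy's truncnorm; the function name and parameter values are illustrative, not taken from the paper.

```python
# Sketch: mean and variance of a normal distribution truncated to [low, high].
# Parameter names and values are illustrative, not from the paper.
from scipy.stats import truncnorm

def truncated_normal_moments(mu, sigma, low, high):
    # SciPy expects the truncation bounds in standardized (z-score) units.
    a, b = (low - mu) / sigma, (high - mu) / sigma
    dist = truncnorm(a, b, loc=mu, scale=sigma)
    return dist.mean(), dist.var()

if __name__ == "__main__":
    m, v = truncated_normal_moments(mu=0.7, sigma=0.2, low=0.0, high=1.0)
    print(f"mean={m:.4f}, variance={v:.4f}")
```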

    Radiomic data mining and machine learning on preoperative pituitary adenoma MRI

    Get PDF
    Pituitary adenomas are among the most frequent intracranial tumors, accounting for the majority of sellar/suprasellar masses in adults. MRI is the preferred imaging modality for detecting pituitary adenomas. Radiomics refers to the conversion of digital medical images into mineable high-dimensional data. This process is motivated by the concept that biomedical images contain information that reflects underlying pathophysiology and that these relationships can be revealed via quantitative image analysis. The aim of this thesis is to apply machine learning algorithms to parameters obtained by texture analysis of MRI images in order to distinguish functional from non-functional pituitary macroadenomas, to predict their Ki-67 proliferation index class, and to predict pituitary macroadenoma surgical consistency prior to an endoscopic endonasal procedure.
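
    The thesis abstract does not name the algorithms used. Purely as a hedged illustration of the workflow it describes, the sketch below trains a classifier on a feature matrix standing in for precomputed radiomic texture parameters; the synthetic data, model choice, and labels are placeholders, not the thesis's actual pipeline.

```python
# Illustrative sketch: classify functional vs non-functional adenomas from
# precomputed radiomic texture features. Synthetic placeholder data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))      # 120 patients x 40 texture features (placeholder)
y = rng.integers(0, 2, size=120)    # 0 = non-functional, 1 = functional (placeholder)

model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=300, random_state=0))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```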

    Tackling Uncertainties and Errors in the Satellite Monitoring of Forest Cover Change

    Get PDF
    This study aims at improving the reliability of automatic forest change detection. Forest change detection is of vital importance for understanding global land cover as well as the carbon cycle. Remote sensing and machine learning have been widely adopted for such studies with increasing degrees of success. However, contemporary global studies still suffer from lower-than-satisfactory accuracies and robustness problems whose causes were largely unknown. Global geographical observations are complex, as a result of hidden, interweaving geographical processes. Is it possible that some geographical complexities were not anticipated by contemporary machine learning? Could they cause uncertainties and errors when contemporary machine learning theories are applied to remote sensing? This dissertation adopts the philosophy of error elimination. We start by explaining the mathematical origins of possible geographic uncertainties and errors in chapter two. Uncertainties are unavoidable but might be mitigated; errors are hidden but might be found and corrected. In chapter three, experiments are specifically designed to assess whether or not contemporary machine learning theories can handle these geographic uncertainties and errors. In chapter four, we identify an unreported systemic error source: the proportion distribution of classes in the training set. A Bayesian optimal solution is then designed to combine the Support Vector Machine and Maximum Likelihood classifiers. In chapter five, we demonstrate how this type of error is widespread, not just in classification algorithms but also embedded in the conceptual definition of geographic classes before classification. In chapter six, the sources of errors and uncertainties and their solutions are summarized, with theoretical implications for future studies. The most important finding is that how we design a classification largely predetermines what we eventually get out of it. This applies to many contemporary popular classifiers, including various types of neural nets, decision trees, and support vector machines, and it is a cause of the so-called overfitting problem in contemporary machine learning. Therefore, we propose that the emphasis of classification work be shifted to the planning stage before the actual classification. Geography should not just be about the analysis of collected observations, but also about the planning of observation collection. This is where geography, machine learning, and survey statistics meet.
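
    The dissertation's Bayesian formulation is not given in this abstract. As a hedged sketch of the underlying issue it names, the snippet below shows how posterior probabilities learned from a training set with distorted class proportions can be re-weighted with assumed true priors (a Bayes-rule prior swap); the numbers are illustrative, and this is not the dissertation's method.

```python
# Sketch: correcting classifier posteriors when training-set class proportions
# differ from the (assumed) real-world class priors. Illustrative numbers only.
import numpy as np

def reweight_posteriors(posteriors, train_priors, true_priors):
    # Bayes-rule prior swap: p_true(c|x) is proportional to
    # p_train(c|x) * true_prior(c) / train_prior(c), renormalized per sample.
    adjusted = posteriors * (true_priors / train_priors)
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Two classes: forest vs. non-forest. The model was trained on a balanced set,
# but suppose forest actually covers only 10% of the scene (assumed figure).
posteriors   = np.array([[0.6, 0.4], [0.3, 0.7]])  # p(class | pixel) from the balanced model
train_priors = np.array([0.5, 0.5])
true_priors  = np.array([0.1, 0.9])
print(reweight_posteriors(posteriors, train_priors, true_priors))
```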

    Signal Processing and Machine Learning Techniques Towards Various Real-World Applications

    Get PDF
    Machine learning (ML) has played an important role in several modern technological innovations and has become an important tool for researchers in various fields of interest. Besides engineering, ML techniques have started to spread across various fields of study, such as health care, medicine, diagnostics, social science, finance, and economics. These techniques require data to train the algorithms, model a complex system, and make predictions based on that model. Due to the development of sophisticated sensors, it has become easier to collect the large volumes of data used to form and test hypotheses with ML. The promising results obtained using ML have opened up new opportunities for research across various fields, and this dissertation is a manifestation of that. Here, several unique studies are presented, from which valuable inferences have been drawn for real-world complex systems. Each study has its own motivation and relevance to the real world, and an ensemble of signal processing (SP) and ML techniques is explored in each one. This dissertation provides a detailed, systematic approach and discusses the results achieved in each study. The inferences drawn from each study play a vital role in areas of science and technology and are worth further investigation. This dissertation also provides a set of useful SP and ML tools for researchers in various fields of interest. Dissertation/Thesis: Doctoral Dissertation, Electrical Engineering, 201

    Applied artificial intelligence for predicting construction projects delay

    Get PDF
    © 2021 Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). This study presents a developed ensemble-of-ensembles predictive model for delay prediction – a global phenomenon that has continued to strangle the construction sector despite considerable mitigation efforts. First, a review of the existing body of knowledge on the influencing factors of construction project delay informed an expert survey used for quantitative data collection. Second, data cleaning, feature selection and engineering, hyperparameter optimization, and algorithm evaluation were carried out on the quantitative data to train ensemble machine learning algorithms (EMLA) based on bagging, boosting, and naïve Bayes, which in turn were used to develop hyperparameter-optimized predictive models: Decision Tree, Random Forest, Bagging, Extremely Randomized Trees, Adaptive Boosting (CART), Gradient Boosting Machine, Extreme Gradient Boosting, Bernoulli Naive Bayes, Multinomial Naive Bayes, and Gaussian Naive Bayes. Finally, a multilayer, high-performing ensemble-of-ensembles (stacking) predictive model was developed to maximize the overall performance of the combined EMLA. Results from the evaluation metrics (accuracy, confusion matrix, precision, recall, F1 score, and area under the receiver operating characteristic curve, ROC AUC) showed that ensemble algorithms improve predictive power relative to a single algorithm in predicting construction project delays. Peer reviewed
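
    The study's tuned configuration and data are not given in the abstract. Purely as an illustration of the stacking idea it describes, a minimal scikit-learn sketch follows; the base learners, meta-learner, and synthetic data are placeholders, not the study's setup.

```python
# Illustrative stacking sketch: combine several ensemble learners with a meta-learner.
# Placeholder data and models; not the study's dataset or tuned configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0)),
                ("gnb", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("ROC AUC:", cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```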

    Acoustic Approaches to Gender and Accent Identification

    Get PDF
    There has been considerable research on the problems of speaker and language recognition from samples of speech. A less researched problem is that of accent recognition. Although this is a similar problem to language identification, different accents of a language exhibit more fine-grained differences between classes than languages do, which presents a tougher problem for traditional classification techniques. In this thesis, we propose and evaluate a number of techniques for gender and accent classification. These techniques are novel modifications and extensions of state-of-the-art algorithms, and they result in enhanced performance on gender and accent recognition. The first part of the thesis focuses on the problem of gender identification and presents a technique that gives improved performance in situations where training and test conditions are mismatched. The bulk of the thesis is concerned with the application of the i-Vector technique to accent identification, the most successful approach to acoustic classification to have emerged in recent years. We show that it is possible to achieve high-accuracy accent identification without reliance on transcriptions and without utilising phoneme recognition algorithms. The thesis describes various stages in the development of i-Vector-based accent classification that improve on the standard approaches usually applied for speaker or language identification, which are insufficient. We demonstrate that very good accent identification performance is possible with acoustic methods by considering different i-Vector projections, frontend parameters, i-Vector configuration parameters, and an optimised fusion of the i-Vector classifiers that can be obtained from the same data. We claim to have achieved the best accent identification performance on the test corpus for acoustic methods, with up to a 90% identification rate. This performance is even better than previously reported acoustic-phonotactic systems on the same corpus, and is very close to the performance obtained via transcription-based accent identification. Finally, we demonstrate that utilising our techniques for speech recognition purposes leads to considerably lower word error rates. Keywords: Accent Identification, Gender Identification, Speaker Identification, Gaussian Mixture Model, Support Vector Machine, i-Vector, Factor Analysis, Feature Extraction, British English, Prosody, Speech Recognition
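
    The thesis's i-Vector extraction, projections, and optimised fusion are not reproduced in the abstract. As a hedged sketch of the final classification stage it describes, the snippet below trains two classifiers on precomputed i-vectors (raw and LDA-projected) and fuses their scores by weighted averaging; the arrays, labels, and fusion weight are placeholders.

```python
# Sketch: accent classification on precomputed i-vectors plus simple score fusion.
# The i-vectors, labels, and fusion weight are placeholders, not the thesis's data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ivectors = rng.normal(size=(200, 400))   # placeholder 400-dim i-vectors
accents = rng.integers(0, 5, size=200)   # placeholder accent labels (5 classes)

# Two classifiers over different projections of the same i-vectors.
clf_raw = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
clf_lda = make_pipeline(LinearDiscriminantAnalysis(n_components=4),
                        SVC(kernel="linear", probability=True))
clf_raw.fit(ivectors, accents)
clf_lda.fit(ivectors, accents)

# Weighted score-level fusion (weight is illustrative, not an optimised value).
w = 0.6
fused = w * clf_raw.predict_proba(ivectors) + (1 - w) * clf_lda.predict_proba(ivectors)
print("fused predictions:", fused.argmax(axis=1)[:10])
```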

    Heart Diseases Diagnosis Using Artificial Neural Networks

    Get PDF
    Information technology has virtually altered every aspect of human life in the present era. The application of informatics in the health sector is rapidly gaining prominence, and the benefits of this innovative paradigm are being realized across the globe. This evolution has produced large amounts of patient data that can be exploited by computer technologies and machine learning techniques and turned into useful information and knowledge. These data can be used to develop expert systems that help diagnose life-threatening diseases such as heart disease, at lower cost, with shorter processing times and with improved diagnostic accuracy. Even though modern medicine generates huge amounts of data every day, little has been done to use these data to address the challenges faced in the successful diagnosis of heart diseases. This highlights the need for more research into the use of robust data mining techniques to help health care professionals diagnose heart diseases and other debilitating conditions. Based on the foregoing, this thesis aims to develop a health informatics system for the classification of heart diseases using data mining techniques, focusing on radial basis functions and emerging neural network approaches. The research involves three development stages: first, the development of a preliminary classification system for Coronary Artery Disease (CAD) using Radial Basis Function (RBF) neural networks. The research then deploys deep learning to detect three different types of heart disease, i.e. sleep apnea, arrhythmias and CAD, by designing two novel classification systems: the first adopts a novel deep neural network design (with rectified linear unit activation) as the second approach of this thesis, and the other implements a novel multilayer kernel machine that mimics the behaviour of deep learning as the third approach. Additionally, this thesis uses a dataset obtained from patients and employs normalization and feature extraction to explore it in a unique way that facilitates its use for training and validating different classification methods. This unique dataset is useful to researchers and practitioners working in heart disease treatment and diagnosis. The findings of the study reveal that the proposed models achieve high classification performance that is comparable to, and in some cases exceeds, existing automated and manual methods of heart disease diagnosis. Moreover, the proposed deep learning models perform better when applied to large data sets (e.g., in the case of sleep apnea), while giving reasonable performance on smaller data sets. The proposed system for the clinical diagnosis of heart diseases contributes to the accurate detection of such diseases and could serve as an important tool for clinical decision support. The outcome of this study, in the form of an implementation tool, can be used by cardiologists to help them make more consistent diagnoses of heart disease.
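
    The thesis's exact RBF architecture is not given in the abstract. As a hedged sketch of a generic RBF-network classifier (k-means centres, Gaussian hidden layer, logistic readout), assuming synthetic data in place of the patient dataset:

```python
# Sketch of a generic RBF-network classifier: k-means centres, Gaussian hidden
# layer, logistic-regression readout. Synthetic data; not the thesis's model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def rbf_features(X, centres, gamma):
    # Gaussian activation of each sample with respect to each centre.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

X, y = make_classification(n_samples=600, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_tr).cluster_centers_
readout = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr, centres, gamma=0.1), y_tr)
preds = readout.predict(rbf_features(X_te, centres, gamma=0.1))
print("test accuracy:", accuracy_score(y_te, preds))
```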

    Estudio de métodos de construcción de ensembles de clasificadores y aplicaciones

    Get PDF
    Artificial intelligence is devoted to creating computer systems with intelligent behaviour. Within this area, machine learning studies the creation of systems that learn by themselves. One type of machine learning is supervised learning, in which the system is given both the inputs and the expected output and learns from these data. A system of this type is called a classifier. It sometimes happens that, in the set of examples the system uses to learn, the number of examples of one type is much larger than the number of examples of another type; when this occurs we speak of imbalanced data sets. The combination of several classifiers is called an "ensemble", and it often yields better results than any of its individual members. One of the keys to the good performance of ensembles is diversity. This thesis focuses on the development of new ensemble construction algorithms, centred on techniques for increasing diversity and on imbalanced problems. Additionally, these techniques are applied to the solution of several industrial problems. Ministerio de Economía y Competitividad, project TIN-2011-2404
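
    The thesis's own ensemble construction algorithms are not described in the abstract. Purely as a hedged illustration of one standard way to combine ensemble diversity with class-imbalance handling, a minimal undersampling-bagging sketch on synthetic data (illustrative parameters, not the thesis's methods):

```python
# Illustrative undersampling-bagging sketch for an imbalanced two-class problem.
# Each base tree sees a balanced bootstrap; majority vote at prediction time.
# Synthetic data and parameters; not the thesis's algorithms.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
rng = np.random.default_rng(0)
minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]

trees = []
for _ in range(25):
    # Balanced bootstrap: resample the minority class, draw an equal-sized majority sample.
    idx = np.concatenate([rng.choice(minority, size=minority.size, replace=True),
                          rng.choice(majority, size=minority.size, replace=True)])
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)
print("predicted minority fraction:", (votes >= 0.5).mean())
```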

    Object detection for big data

    Get PDF
    May 2014. Dissertation supervisor: Dr. Tony X. Han. We have observed significant advances in object detection over the past few decades and have gladly seen the related research begin to contribute to the world: vehicles can automatically stop before hitting a pedestrian; face detectors have been integrated into smartphones and tablets; video surveillance systems can locate suspects and stop crimes. All these applications demonstrate the substantial research progress on object detection. However, learning a robust object detector is still quite challenging because object detection is a very unbalanced big data problem. In this dissertation, we aim at improving the object detector's performance from different aspects. For object detection, state-of-the-art performance is achieved through supervised learning, and the performance of detectors of this kind is mainly determined by two factors: the features and the underlying classification algorithms. We have done thorough research on both of these factors. Our contribution involves model adaptation, local learning, contextual boosting, template learning and feature development. Since object detection is an unbalanced problem in which positive examples are hard to collect, we propose to adapt a general object detector to a specific scenario with a few positive examples; to handle the large intra-class variation inherent in the object detection task, we propose a local adaptation method to learn a set of efficient and effective detectors for a single object category; to extract effective context from the huge amount of negative data in object detection, we introduce a novel contextual descriptor to iteratively improve the detector; to detect objects with a depth sensor, we design an effective depth descriptor; and to distinguish object categories with similar appearance, we propose a local feature embedding and template selection algorithm, which has been successfully incorporated into a real-world fine-grained object recognition application. All the proposed algorithms and features…

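    The dissertation's specific algorithms are not reproduced in the abstract above. As a hedged sketch of the imbalance issue it raises (a huge pool of negatives against few positives), the snippet below runs one round of hard-negative mining for a linear-SVM detector stage; the synthetic feature vectors stand in for window descriptors, and this is not presented as the dissertation's method.

```python
# Sketch: one round of hard-negative mining for a linear-SVM detector stage.
# Synthetic feature vectors stand in for window descriptors; illustrative only.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
pos = rng.normal(loc=1.0, size=(50, 64))          # few positive windows
neg_pool = rng.normal(loc=0.0, size=(20000, 64))  # huge pool of negative windows

# Round 0: train on positives plus a small random negative sample.
neg0 = neg_pool[rng.choice(len(neg_pool), size=500, replace=False)]
X0 = np.vstack([pos, neg0])
y0 = np.r_[np.ones(len(pos)), np.zeros(len(neg0))]
svm = LinearSVC(C=1.0, max_iter=10000).fit(X0, y0)

# Mine hard negatives: negatives the current model scores above the margin.
scores = svm.decision_function(neg_pool)
hard = neg_pool[scores > -1.0][:2000]

# Round 1: retrain with the mined hard negatives added.
X1 = np.vstack([pos, neg0, hard])
y1 = np.r_[y0, np.zeros(len(hard))]
svm = LinearSVC(C=1.0, max_iter=10000).fit(X1, y1)
print("hard negatives mined:", len(hard))
```
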
    Contributions to Ensemble Classifiers with Image Analysis Applications

    Get PDF
    This thesis has two fundamental aspects: on the one hand, the proposal of new classifier architectures and, on the other, their application to image analysis. From the point of view of proposing new classification architectures, the thesis makes two main contributions. The first is an innovative classifier ensemble based on randomised architectures, such as Extreme Learning Machines (ELM), Random Forest (RF) and Rotation Forest, called Hybrid Extreme Rotation Forest (HERF), together with its improvement, Anticipative HERF (AHERF), which performs model selection based on the prediction performance for each specific data set. In addition, we provide a formal proof both for AHERF and for the convergence of ensembles of ELM regressors, which improves the usability and reproducibility of the results. On the application side, we have worked with two types of images: hyperspectral remote sensing images, and medical images of specific blood vessel pathologies as well as images for the diagnosis of Alzheimer's disease. In all cases classifier ensembles have been the common tool, together with specific active learning strategies based on those ensembles. In the particular case of blood vessel segmentation we have addressed two problems: one concerning the thrombus of Abdominal Aortic Aneurysms in 3D computed tomography images, and the other the segmentation of blood vessels in the retina. The results in both cases, in terms of classification performance and time saved in manual segmentation, allow us to recommend these approaches for clinical practice.
    Chapter 1. Background and contributions
    Given the limited space available for this thesis summary, we have decided to include a general overview of the most important points, a short introduction that can serve as background for understanding the basic concepts of each of the topics covered, and a list of the most important contributions.
    1.1 Classifier ensembles
    The idea of classifier ensembles was proposed by Hansen and Salamon [4] in the context of learning with artificial neural networks. Their work showed that an ensemble of neural networks with a group consensus scheme could improve on the result obtained with a single neural network. Classifier ensembles seek to obtain better classification results by combining weak, diverse classifiers [8, 9]. The initial ensemble proposal contained a homogeneous collection of individual classifiers. Random Forest is a clear example of this, since it combines the output of a collection of decision trees through majority voting [2, 3], and is built using a resampling technique over the data set together with random selection of variables.
    1.2 Active learning
    Building a supervised classifier consists of learning a mapping from data to a set of classes given a labelled training set. In many real-life situations, obtaining the labels of the training set is costly, slow and error-prone. This makes the construction of the training set a cumbersome task that requires an exhaustive manual analysis of the image, normally carried out by visual inspection of the images and pixel-by-pixel labelling. As a consequence, the training set is highly redundant and makes the model training phase very slow. In addition, noisy pixels can interfere with the statistics of each class, which can lead to classification errors and/or overfitting. It is therefore desirable that a training set be built in an intelligent way, meaning that it should correctly represent the class boundaries by sampling discriminant pixels. Generalisation is the ability to correctly label previously unseen data that are therefore new to the model. Active learning tries to exploit interaction with a user, who provides the labels of the training-set samples, with the goal of obtaining the most accurate classification using the smallest possible training set.
    1.3 Alzheimer's disease
    Alzheimer's disease is one of the most important causes of disability in the elderly. Given the population ageing that is a reality in many countries, with increasing life expectancy and a growing number of elderly people, the number of patients with dementia will also grow. Because of the socioeconomic importance of the disease in Western countries, there is a strong international effort focused on Alzheimer's disease. In the early stages of the disease, brain atrophy is usually subtle and spatially distributed over different brain regions, including the entorhinal cortex, the hippocampus, the lateral and inferior temporal structures, and the anterior and posterior cingulate. Many computational algorithms have been designed in the search for imaging biomarkers that can be used for the non-invasive diagnosis of Alzheimer's disease and other neurodegenerative diseases.
    1.4 Blood vessel segmentation
    Blood vessel segmentation [1, 7, 6] is one of the essential computational tools for the clinical assessment of vascular diseases. It consists of partitioning an angiogram into two non-overlapping regions: the vascular region and the background. Based on the results of this partition, vascular surfaces can be extracted, modelled, manipulated, measured and visualised. These structures are very useful and play a very important role in endovascular treatments of vascular diseases, which are one of the main sources of morbidity and mortality worldwide.
    Abdominal Aortic Aneurysm. An Abdominal Aortic Aneurysm (AAA) is a local dilation of the aorta that occurs between the renal and iliac arteries. Weakening of the aortic wall leads to its deformation and the generation of a thrombus. Generally, an AAA is diagnosed when the minimum anteroposterior diameter of the aorta reaches 3 centimetres [5]. Most aortic aneurysms are asymptomatic and without complications; aneurysms that cause symptoms carry a higher risk of rupture. Abdominal pain and back pain are the two main clinical features suggesting either recent expansion or leakage. Complications are often a matter of life and death and can occur within a short time. The challenge is therefore to diagnose the onset of symptoms as early as possible.
    Retinal images. The assessment of fundus images is a diagnostic tool for vascular and non-vascular pathology. Such inspection can reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. The main challenges for retinal vessel segmentation are: (1) the presence of lesions that can be wrongly interpreted as blood vessels; (2) low contrast around the thinnest vessels; and (3) the multiple scales of vessel size.
    1.5 Contributions
    This thesis makes two types of contributions: computational contributions and application-oriented (practical) contributions.
    From a computational point of view, the contributions are the following:
    • A new active learning scheme using Random Forest and uncertainty computation that allows fast, accurate and interactive image segmentation.
    • Hybrid Extreme Rotation Forest.
    • Anticipative Hybrid Extreme Rotation Forest.
    • Spectral-spatial semi-supervised learning methods.
    • Non-linear unmixing and reconstruction using ensembles of ELM regressors.
    From a practical point of view:
    • Medical images:
        • Active learning combined with HERF for the segmentation of computed tomography images.
        • Improving active learning for computed tomography image segmentation with domain information.
        • Active learning with the bootstrapped dendritic classifier applied to medical image segmentation.
        • Meta-ensembles of classifiers for Alzheimer's disease detection with magnetic resonance images.
        • Random Forest combined with active learning for retinal image segmentation.
        • Automatic segmentation of subcutaneous and visceral fat using magnetic resonance imaging.
    • Hyperspectral images:
        • Non-linear unmixing and reconstruction using ensembles of ELM regressors.
        • Spectral-spatial semi-supervised learning methods with spatial correction using AHERF.
        • A semi-supervised classification method using ensembles of ELMs with spatial regularisation.
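
    The thesis's HERF/AHERF implementations are not reproduced in this summary. Purely as a hedged illustration of the active learning idea described above (uncertainty sampling with a Random Forest on a pool of unlabelled samples), a minimal sketch on synthetic data:

```python
# Sketch: pool-based active learning with Random Forest uncertainty sampling.
# Synthetic data; illustrates the general idea, not the thesis's HERF/AHERF code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labelled = list(rng.choice(len(X), size=20, replace=False))   # small initial seed set
pool = [i for i in range(len(X)) if i not in labelled]

for round_ in range(5):
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[labelled], y[labelled])
    proba = rf.predict_proba(X[pool])
    # Uncertainty = entropy of the predicted class distribution.
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    query = [pool[i] for i in np.argsort(entropy)[-10:]]       # 10 most uncertain samples
    labelled += query                                          # the "oracle" labels them (here: y)
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: labelled={len(labelled)}, pool accuracy={rf.score(X[pool], y[pool]):.3f}")
```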