
    A Simple Iterative Algorithm for Parsimonious Binary Kernel Fisher Discrimination

    By applying recent results in optimization theory, variously known as optimization transfer or majorize/minimize (MM) algorithms, an algorithm for binary kernel Fisher discriminant analysis is introduced that makes use of a non-smooth penalty on the coefficients to provide a parsimonious solution. The problem is converted into a smooth optimization that can be solved iteratively with no greater overhead than iteratively re-weighted least-squares. The resulting algorithm is simple and easily programmed, and is shown to perform, in terms of both accuracy and parsimony, as well as or better than a number of leading machine learning algorithms on two well-studied and substantial benchmarks.
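    The sketch below is a minimal illustration of the general majorize/minimize plus reweighted least-squares idea described in this abstract, assuming an L1 penalty majorized by a quadratic surrogate so that each iteration reduces to a reweighted least-squares solve. The plain squared-error loss, variable names and thresholding rule are illustrative assumptions, not the paper's exact kernel Fisher discriminant formulation.

```python
import numpy as np

def mm_l1_least_squares(K, y, lam=1.0, n_iter=50, eps=1e-8):
    """MM loop for min_a ||y - K a||^2 + lam * ||a||_1 (illustrative sketch).

    Each |a_j| is majorized at the current iterate by a_j^2 / (2|a_j_old|) + const,
    so every iteration is an ordinary re-weighted least-squares solve.
    Coefficients driven towards zero give the parsimonious (sparse) solution.
    """
    a = np.linalg.lstsq(K, y, rcond=None)[0]       # unpenalized starting point
    for _ in range(n_iter):
        w = lam / (2.0 * (np.abs(a) + eps))        # surrogate (ridge-like) weights
        H = K.T @ K + np.diag(w)                   # reweighted normal equations
        a = np.linalg.solve(H, K.T @ y)
    a[np.abs(a) < 1e-6] = 0.0                      # prune near-zero coefficients
    return a

# toy usage with a random kernel-like design matrix (hypothetical data)
rng = np.random.default_rng(0)
K = rng.standard_normal((100, 30))
y = np.sign(K[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(100))
a = mm_l1_least_squares(K, y, lam=5.0)
print("non-zero coefficients:", np.count_nonzero(a))
```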

    Machine learning for network based intrusion detection: an investigation into discrepancies in findings with the KDD cup '99 data set and multi-objective evolution of neural network classifier ensembles from imbalanced data.

    For the last decade it has become commonplace to evaluate machine learning techniques for network-based intrusion detection on the KDD Cup '99 data set. This data set has served well to demonstrate that machine learning can be useful in intrusion detection. However, it has attracted criticism in the literature and it is out of date, so some researchers question the validity of findings based on it. Furthermore, as identified in this thesis, there are also discrepancies in the findings reported in the literature; in some cases the results are contradictory. Consequently, it is difficult to analyse the current body of research and determine the value of its findings. This thesis reports on an empirical investigation into the underlying causes of the discrepancies. Several methodological factors, such as the choice of data subset, the validation method and the data preprocessing, are identified and found to affect the results significantly. These findings have also enabled a better interpretation of the current body of research. Furthermore, the criticisms in the literature are addressed and future use of the data set is discussed, which is important since researchers continue to use it for lack of better publicly available alternatives.

    Due to the nature of the intrusion detection domain, there is an extreme imbalance among the classes in the KDD Cup '99 data set, which poses a significant challenge to machine learning. In other domains, researchers have demonstrated that well-known techniques such as Artificial Neural Networks (ANNs) and Decision Trees (DTs) often fail to learn the minor class(es) due to class imbalance. However, this had not previously been recognised as an issue in intrusion detection. This thesis reports on an empirical investigation demonstrating that it is this class imbalance that causes the poor detection of some classes of intrusion reported in the literature. An alternative approach to training ANNs is proposed, using Genetic Algorithms (GAs) to evolve the weights of the ANNs, referred to as an Evolutionary Neural Network (ENN). When employing evaluation functions that calculate the fitness proportionally to the instances of each class, thereby avoiding a bias towards the major class(es) in the data set, significantly improved true positive rates are obtained whilst maintaining a low false positive rate. These findings demonstrate that the difficulties of learning from imbalanced data are not due to limitations of the ANNs but rather of the training algorithm. Moreover, the ENN is capable of detecting a class of intrusion that has been reported in the literature to be undetectable by ANNs.

    One limitation of the ENN is a lack of control over the classification trade-off that the ANNs obtain. This is identified as a general issue with current approaches to creating classifiers: striving for a single best classifier with the highest accuracy may give an unfruitful classification trade-off, as is demonstrated clearly in this thesis. Therefore, an extension of the ENN is proposed, using a Multi-Objective GA (MOGA), which treats the classification rate on each class as a separate objective. This approach produces a Pareto front of non-dominated solutions that exhibit different classification trade-offs, from which the user can select one with the desired properties. The multi-objective approach is also used to evolve classifier ensembles, which yields an improved Pareto front of solutions. Furthermore, the selection of classifier members for the ensembles is investigated, demonstrating how it affects the performance of the resultant ensembles; this is key to explaining why some classifier combinations fail to give fruitful solutions.
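    As a rough illustration of the class-proportional evaluation functions described above, the sketch below scores a candidate classifier by its mean per-class recall rather than raw accuracy, so a dominant traffic class cannot swamp the fitness. The predict callable, the toy labels and the choice of mean per-class recall as the balancing rule are illustrative assumptions, not the thesis' exact fitness function.

```python
import numpy as np

def class_balanced_fitness(predict, X, y):
    """Fitness for a GA-evolved classifier that weights every class equally.

    Instead of raw accuracy (which the majority class can dominate), the
    fitness is the mean per-class recall, so rare intrusion classes
    contribute as much to the score as the dominant class.
    """
    y_pred = predict(X)
    recalls = []
    for cls in np.unique(y):
        mask = (y == cls)
        recalls.append(np.mean(y_pred[mask] == cls))   # recall for this class
    return float(np.mean(recalls))

# toy usage: a "classifier" that always predicts the majority class
y_true = np.array([0] * 95 + [1] * 5)                  # heavily imbalanced labels
always_majority = lambda X: np.zeros(len(X), dtype=int)
print(class_balanced_fitness(always_majority, np.zeros((100, 1)), y_true))  # 0.5, not 0.95
```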

    Novel Neural Network Applications to Mode Choice in Transportation: Estimating Value of Travel Time and Modelling Psycho-Attitudinal Factors

    Whenever researchers wish to study the behaviour of individuals choosing among a set of alternatives, they usually rely on models based on random utility theory, which postulates that individuals modify their behaviour so as to maximise their utility. These models, often identified as discrete choice models (DCMs), usually require the definition of a utility for each alternative, which means first identifying the variables influencing the decisions. Traditionally, DCMs focused on observable variables and treated users as optimizing agents with predetermined needs. However, such an approach contrasts with results from the social sciences, which show that choice behaviour can be influenced by psychological factors such as attitudes and preferences. Recently there have been formulations of DCMs which include latent constructs to capture the impact of such subjective factors; these are called hybrid choice models, or integrated choice and latent variable (ICLV) models. DCMs are not free of issues, however, such as the fact that researchers have to choose which variables to include, and how they relate, in order to define the utilities. This is probably one of the reasons that has recently led to an influx of studies using machine learning (ML) methods to study mode choice, in which researchers have sought alternative ways to analyse travellers' choice behaviour.

    An ML algorithm is any generic method that uses the data itself to build a model, improving its performance the more it is allowed to learn. This means ML methods do not require a priori input or hypotheses on the structure and nature of the relationships between the variables used as inputs. ML models are usually considered black-box methods, but whenever researchers have needed interpretable ML results they have looked for alternative ways of using them, such as building models with a priori knowledge that induces specific constraints, transforming the outputs of ML algorithms so that they can be interpreted from an economic point of view, or building hybrid ML-DCM models.

    The objective of this thesis is to investigate the benefits and disadvantages of adopting either DCMs or ML methods to study mode choice in transportation. The strongest feature of DCMs is that they produce very precise and descriptive results, allowing a thorough interpretation of their outputs; ML models, on the other hand, offer the substantial benefit of being truly data-driven and thus learning most relations from the data itself. As a first contribution, we test an alternative method for calculating the value of travel time (VTT) from the results of ML algorithms. VTT is a very informative parameter, since travel time is normally an undesirable factor for individuals, who are therefore usually willing to pay to reduce it. The proposed method is independent of the mode-choice functions, so it can be applied equally to econometric models and ML methods, provided they allow the estimation of individual-level probabilities. Another contribution of this thesis is a neural network (NN) for the estimation of choice models with latent variables, as an alternative to DCMs. This work arose from the wish to include in ML models not only level-of-service variables of the alternatives and socio-economic attributes of the individuals, but also psycho-attitudinal indicators, to better describe the influence of psychological factors on choice behaviour. The models were estimated on two different datasets. Since NN results depend on the values of the hyper-parameters and on the initialization, several NNs were estimated with different hyper-parameters to find the optimal values, which were then used to verify the stability of the results under different initializations.
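    As a rough sketch of the model-agnostic VTT idea described above, the code below approximates, for each individual, the ratio of the marginal effect of travel time to the marginal effect of travel cost on the chosen mode's probability, using central finite differences on any function that returns individual-level choice probabilities. The prob_fn interface, the column layout, the step sizes and the toy logit are illustrative assumptions, not the thesis' exact procedure.

```python
import numpy as np

def value_of_travel_time(prob_fn, X, mode, time_col, cost_col, h_t=1.0, h_c=0.1):
    """Model-agnostic VTT estimate from individual-level choice probabilities.

    VTT_i is the ratio of the marginal effect of travel time to the marginal
    effect of travel cost on the probability of choosing `mode`, approximated
    by central finite differences. Any model (DCM or ML) that returns choice
    probabilities can be plugged in, mirroring the method-independence claim.
    """
    def marginal(col, h):
        X_plus, X_minus = X.copy(), X.copy()
        X_plus[:, col] += h
        X_minus[:, col] -= h
        return (prob_fn(X_plus)[:, mode] - prob_fn(X_minus)[:, mode]) / (2 * h)

    dP_dtime = marginal(time_col, h_t)
    dP_dcost = marginal(cost_col, h_c)
    return dP_dtime / dP_dcost        # money per unit of time, per individual

# toy usage with a hypothetical binary logit-style probability function
def toy_prob(X):
    u = -0.05 * X[:, 0] - 0.5 * X[:, 1]          # utility of mode 0: time, cost
    p = 1.0 / (1.0 + np.exp(-u))
    return np.column_stack([p, 1.0 - p])

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(10, 60, 5),     # travel times (minutes)
                     rng.uniform(1, 5, 5)])      # travel costs (euros)
print(value_of_travel_time(toy_prob, X, mode=0, time_col=0, cost_col=1))  # ~0.1 euro/min
```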

    Contributions to Ensemble Classifiers with Image Analysis Applications

    This thesis has two fundamental aspects: on the one hand, the proposal of new classifier architectures and, on the other, their application to image analysis.

    From the point of view of proposing new classification architectures, the thesis has two main contributions. First, the proposal of an innovative classifier ensemble based on random architectures, such as Extreme Learning Machines (ELM), Random Forest (RF) and Rotation Forest, called Hybrid Extreme Rotation Forest (HERF), together with its improvement, Anticipative HERF (AHERF), which performs model selection based on prediction performance for each specific data set. In addition, we provide a formal proof both of AHERF and of the convergence of ensembles of ELM regressors, which improves the usability and reproducibility of the results.

    On the application side we have worked with two types of images: hyperspectral remote sensing images, and medical images of both specific blood-vessel pathologies and of Alzheimer's disease diagnosis. In all cases classifier ensembles have been the common tool, together with specific active learning strategies based on those ensembles. In the particular case of blood vessel segmentation we have addressed two problems: one related to the thrombus of Abdominal Aortic Aneurysms in 3D computed tomography images, and the other the segmentation of blood vessels in the retina. The results in both cases, in terms of classification performance and of time saved in manual segmentation, allow us to recommend these approaches for clinical practice.

    Chapter 1. Background and contributions

    Given the limited space for the thesis summary, we include a general summary with the most important points, a short introduction that can serve as background to the basic concepts of each topic addressed, and a list of the most important contributions.

    1.1 Classifier ensembles

    The idea of classifier ensembles was proposed by Hansen and Salamon [4] in the context of learning with artificial neural networks. Their work showed that an ensemble of neural networks with a group consensus scheme could improve the result obtained with a single neural network. Classifier ensembles seek better classification results by combining weak, diverse classifiers [8, 9]. The initial ensemble proposal contained a homogeneous collection of individual classifiers. Random Forest is a clear example of this, since it combines the output of a collection of decision trees by majority voting [2, 3], and is built using a resampling technique over the data set together with random selection of variables.

    1.2 Active learning

    Building a supervised classifier consists of learning a mapping from data to a given set of classes from a labelled training set. In many real-life situations, obtaining the labels of the training set is costly, slow and error-prone. This makes the construction of the training set a tedious task requiring an exhaustive manual analysis of the image, normally by visual inspection and pixel-by-pixel labelling. As a consequence the training set is highly redundant, which makes the model training phase very slow. Moreover, noisy pixels can interfere with the statistics of each class, which can lead to classification errors and/or overfitting. It is therefore desirable to build the training set in an intelligent way, meaning that it should correctly represent the class boundaries by sampling discriminant pixels. Generalisation is the ability to correctly label previously unseen data that is therefore new to the model. Active learning exploits interaction with a user, who provides the labels of the training samples, with the aim of obtaining the most accurate classification from the smallest possible training set.

    1.3 Alzheimer's disease

    Alzheimer's disease is one of the most important causes of disability in the elderly. Given the population ageing that is a reality in many countries, with increasing life expectancy and a growing number of elderly people, the number of patients with dementia will also increase. Because of the socio-economic importance of the disease in western countries, there is a strong international effort focused on Alzheimer's disease. In the early stages of the disease, brain atrophy is usually subtle and spatially distributed across different brain regions, including the entorhinal cortex, the hippocampus, the lateral and inferior temporal structures, and the anterior and posterior cingulate. Many computational algorithms have been designed to find imaging biomarkers that can be used for the non-invasive diagnosis of Alzheimer's disease and other neurodegenerative diseases.

    1.4 Blood vessel segmentation

    Blood vessel segmentation [1, 7, 6] is one of the essential computational tools for the clinical assessment of vascular diseases. It consists of partitioning an angiogram into two non-overlapping regions: the vascular region and the background. From that partition, vascular surfaces can be extracted, modelled, manipulated, measured and visualised. These structures are very useful and play an important role in endovascular treatments of vascular diseases, which are one of the main sources of morbidity and mortality worldwide.

    Abdominal Aortic Aneurysm. An Abdominal Aortic Aneurysm (AAA) is a local dilation of the aorta that occurs between the renal and iliac arteries. The weakening of the aortic wall leads to its deformation and the generation of a thrombus. Generally, an AAA is diagnosed when the minimum antero-posterior diameter of the aorta reaches 3 centimetres [5]. Most aortic aneurysms are asymptomatic and without complications. Aneurysms that cause symptoms carry a higher risk of rupture. Abdominal pain or back pain are the two main clinical features suggesting either recent expansion or leakage. Complications are often a matter of life or death and can occur within a short time. The challenge is therefore to diagnose the onset of symptoms as early as possible.

    Retinal images. The evaluation of fundus images is a diagnostic tool for vascular and non-vascular pathology. Such inspection can reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. The main challenges for retinal vessel segmentation are: (1) the presence of lesions that can be erroneously interpreted as blood vessels; (2) low contrast around the thinnest vessels; (3) multiple scales of vessel size.

    1.5 Contributions

    This thesis makes two kinds of contributions: computational contributions and application-oriented or practical contributions.

    From a computational point of view the contributions are the following:
    • A new active learning scheme using Random Forest and uncertainty computation that enables fast, accurate and interactive image segmentation.
    • Hybrid Extreme Rotation Forest.
    • Anticipative Hybrid Extreme Rotation Forest (AHERF).
    • Spectral-spatial semi-supervised learning methods.
    • Non-linear unmixing and reconstruction using ensembles of ELM regressors.

    From a practical point of view:
    • Medical images
      • Active learning combined with HERF for the segmentation of computed tomography images.
      • Improving active learning for computed tomography image segmentation with domain information.
      • Active learning with the bootstrapped dendritic classifier applied to medical image segmentation.
      • Meta-ensembles of classifiers for Alzheimer's disease detection with magnetic resonance images.
      • Random Forest combined with active learning for retinal image segmentation.
      • Automatic segmentation of subcutaneous and visceral fat using magnetic resonance imaging.
    • Hyperspectral images
      • Non-linear unmixing and reconstruction using ensembles of ELM regressors.
      • Spectral-spatial semi-supervised learning methods with spatial correction using AHERF.
      • Semi-supervised classification method using ensembles of ELMs with spatial regularisation.
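    As a rough illustration of the uncertainty-driven active learning with Random Forests described in Section 1.2 and listed among the contributions, the sketch below repeatedly queries the samples for which the forest has the smallest margin between its two most probable classes. The synthetic data, the margin criterion, the batch size and the scikit-learn implementation are assumptions for illustration, not the thesis' exact interactive segmentation scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def active_learning_rf(X_pool, y_pool, n_init=20, n_rounds=10, batch=10, seed=0):
    """Uncertainty sampling with a Random Forest (illustrative sketch).

    Starts from a small random labelled set and, at each round, asks the
    'oracle' (here, the known labels y_pool) for the samples the forest is
    least certain about, i.e. with the smallest margin between the two
    most probable classes.
    """
    rng = np.random.default_rng(seed)
    labelled = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_rounds):
        rf = RandomForestClassifier(n_estimators=100, random_state=seed)
        rf.fit(X_pool[labelled], y_pool[labelled])
        proba = rf.predict_proba(X_pool)
        top2 = np.sort(proba, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]            # small margin = uncertain
        margin[labelled] = np.inf                   # never re-query labelled samples
        labelled.extend(np.argsort(margin)[:batch]) # query the most uncertain
    return rf, labelled

# toy usage on synthetic data standing in for pixel feature vectors
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model, queried = active_learning_rf(X, y)
print("samples labelled:", len(queried))
```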

    Adversarial Machine Learning for the Protection of Legitimate Software

    Obfuscation is the transformation of a given program into one that is syntactically different but semantically equivalent. The obfuscated program has its code and/or data changed so that they are hidden and difficult for attackers to understand. Obfuscation is an important security tool and is used to defend against reverse engineering. When applied to a program, different transformations can be observed to exhibit differing degrees of complexity and changes to the program. Recent work has shown that, by studying these side effects, one can associate patterns with different transformations. By taking this into account and profiling these unique side effects, it is possible to create a machine learning classifier which can analyze transformed software and identify which transformation was used to put it in its current state. This has the effect of weakening the security of obfuscating transformations used to protect legitimate software.

    In this research, we explore options to increase the robustness of obfuscation against attackers who utilize machine learning, particularly those who use it to identify the type of obfuscation being employed. To accomplish this, we segment our research into three stages. In the first stage, we implement a suite of classifiers used to identify the obfuscation applied to samples. These establish a baseline for determining the effectiveness of our proposed defenses and make use of three varied feature sets. In the second stage, we explore methods to evade detection by the classifiers. To accomplish this, attacks set up using the principles of adversarial machine learning are carried out as evasion attacks. These attacks take an obfuscated program and make subtle changes to various aspects that cause it to be mislabeled by the classifiers. The changes affect features examined by our classifiers, focusing mainly on the number and distribution of opcodes within the program, under the constraint that the program remains semantically unchanged. In addition, we explore a means of algorithmic dead code insertion to achieve comparable results against a broader range of classifiers. In the third stage, we combine our attack strategies and evaluate the effect of our changes on the strength of obfuscating transformations. We also propose a framework to implement and automate these and other measures.

    We make the following contributions:
    1. An evaluation of the effectiveness of supervised learning models at labeling obfuscating transformations. We create these models using three unique feature sets: Code Images, Opcode N-grams, and Gadgets.
    2. A demonstration of two approaches to algorithmic dummy code insertion designed to improve the stealth of obfuscating transformations against machine learning: Adversarial Obfuscation and Opcode Expansion.
    3. A unified version of our two defenses capable of achieving effectiveness against a broad range of classifiers, while also demonstrating its impact on obfuscation metrics.
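    The sketch below illustrates the general idea of algorithmic dummy-code insertion against an opcode n-gram classifier: semantically-neutral filler sequences are inserted so the program's behaviour is unchanged while the opcode-distribution features shift. The opcode names, the filler sequences and the random placement rule are illustrative assumptions, not the dissertation's Adversarial Obfuscation or Opcode Expansion procedures.

```python
import random
from collections import Counter

# Semantically-neutral filler sequences (illustrative; real instruction choices
# would need care to preserve flags and registers the program depends on).
NEUTRAL_FILLERS = [
    ["nop"],
    ["push", "pop"],          # push then pop the same register
    ["xchg", "xchg"],         # swap a register with itself, twice
]

def opcode_ngrams(opcodes, n=2):
    """Feature vector an (assumed) n-gram classifier would see: n-gram counts."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def insert_dead_code(opcodes, budget=20, seed=0):
    """Insert neutral filler sequences at random positions.

    The program's semantics are unchanged (the fillers have no net effect),
    but the opcode n-gram distribution the classifier relies on shifts,
    which is the evasion effect discussed in the text.
    """
    rng = random.Random(seed)
    out = list(opcodes)
    for _ in range(budget):
        pos = rng.randrange(len(out) + 1)
        out[pos:pos] = rng.choice(NEUTRAL_FILLERS)
    return out

# toy usage on a hypothetical opcode trace
original = ["mov", "add", "cmp", "jne", "mov", "ret"]
evasive = insert_dead_code(original)
print("original bigrams:", opcode_ngrams(original))
print("modified bigrams:", opcode_ngrams(evasive))
```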