
    Beta hebbian learning: definition and analysis of a new family of learning rules for exploratory projection pursuit

    This thesis comprises an investigation into the derivation of learning rules in artificial neural networks from probabilistic criteria. • Beta Hebbian Learning (BHL). First, a new family of learning rules is derived by maximising the likelihood of the residual of a negative feedback network when that residual is assumed to follow the Beta distribution. The resulting algorithm, Beta Hebbian Learning, outperforms current neural algorithms in Exploratory Projection Pursuit. • Beta-Scale Invariant Map (Beta-SIM). Secondly, Beta Hebbian Learning is applied to a well-known topology preserving map algorithm, the Scale Invariant Map (SIM), to design a new version called the Beta-Scale Invariant Map (Beta-SIM). It is developed to facilitate the effective and efficient clustering and visualization of the internal structure of high-dimensional complex datasets, especially those characterized by an internal radial distribution. The behaviour of Beta-SIM is thoroughly analysed by comparing its results, in terms of performance quality measures, with those of other well-known topology preserving models. • Weighted Voting Superposition Beta-Scale Invariant Map (WeVoS-Beta-SIM). Finally, an ensemble method, Weighted Voting Superposition (WeVoS), is applied to the novel Beta-SIM algorithm in order to improve its stability and to generate accurate topology maps for complex datasets. The resulting WeVoS-Beta-Scale Invariant Map (WeVoS-Beta-SIM) is presented, analysed and compared with other well-known topology preserving models. All algorithms have been successfully tested on different artificial datasets to corroborate their properties, as well as on highly complex real datasets.
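
The core BHL idea described above can be illustrated with a toy negative feedback network: project the input, feed the projection back, and adapt the weights by ascending the Beta log-likelihood of the residual, whose score is d/de log p(e) = (α−1)/e − (β−1)/(1−e). This is only a minimal sketch of that idea, not the thesis' exact rule; all parameter values are illustrative, and residuals are clipped to the Beta support (0, 1).

```python
import numpy as np

# Minimal sketch of Beta Hebbian Learning (illustrative, not the thesis'
# exact formulation): a negative feedback network whose weights follow the
# gradient of the Beta log-likelihood of the residual.
rng = np.random.default_rng(0)
alpha, beta, eta = 3.0, 4.0, 1e-4      # illustrative Beta shape and step size
W = rng.normal(scale=0.1, size=(2, 5))  # 2 outputs, 5 inputs

def bhl_step(x, W):
    y = W @ x                          # feedforward projection
    e = x - W.T @ y                    # negative feedback residual
    e = np.clip(e, 1e-6, 1 - 1e-6)     # Beta support is (0, 1)
    score = (alpha - 1) / e - (beta - 1) / (1 - e)  # d/de log Beta(e)
    return W + eta * np.outer(y, score)

X = rng.uniform(0.2, 0.8, size=(200, 5))  # toy data inside the Beta support
for x in X:
    W = bhl_step(x, W)
```

Different (α, β) choices change the assumed residual distribution and hence which projections the rule favours, which is what makes the family useful for Exploratory Projection Pursuit.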

    Intrusion Detection With Unsupervised Techniques for Network Management Protocols Over Smart Grids

    [Abstract] The present research work focuses on overcoming cybersecurity problems in the Smart Grid. Smart Grids must have a feasible data capture and communications infrastructure to be able to manage the huge amounts of data coming from sensors. To ensure the proper operation of next-generation electricity grids, the captured data must be reliable and protected against vulnerabilities and possible attacks. The contribution of this paper to the state of the art lies in the identification of cyberattacks that produce anomalous behaviour in network management protocols. A novel neural projection technique (Beta Hebbian Learning, BHL) has been employed to obtain a general visual representation of the traffic of a network, making it possible to identify any abnormal behaviours and patterns indicative of a cyberattack. This novel approach has been validated on three different datasets, demonstrating the ability of BHL to detect different types of attacks more effectively than other state-of-the-art methods.

    A novel method for anomaly detection using beta hebbian learning and principal component analysis

    In this research work, a novel two-step system for anomaly detection is presented and tested on several real datasets. In the first step, the novel Exploratory Projection Pursuit algorithm Beta Hebbian Learning is applied to each dataset, either to reduce the dimensionality of the original dataset or, for nonlinear datasets, to generate a new subspace of the original dataset with lower (or even higher) dimensionality by selecting the right activation function. In the second step, Principal Component Analysis anomaly detection is applied to the new subspace to detect the anomalies and improve the classification capabilities. This new approach has been tested on several real datasets that differ in number of variables, number of samples and number of anomalies. In almost all cases, the novel approach obtained better results in terms of area under the curve, with similar standard deviation values. In terms of computational cost, the improvement is only remarkable when the complexity of the dataset, in terms of number of variables, is high. CITIC, as a Research Center of the University System of Galicia, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund and the Secretaría Xeral de Universidades (ref. ED431G 2019/01).
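
The two-step scheme can be sketched as follows. PCA stands in here for both steps (the paper's first step uses Beta Hebbian Learning, which is not reproduced): step 1 projects the data onto a subspace learned from normal training data, and step 2 scores each sample by its reconstruction error, so points far from the learned subspace surface as anomalies. All data and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 2))
B = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.]])
X_train = Z @ B                      # training data lies in a known 2-D plane

def pca_fit(X, k):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                # mean and top-k principal axes

def anomaly_scores(X, mu, P):
    Y = (X - mu) @ P.T               # step 1: project onto the subspace
    Xhat = Y @ P + mu                # back-project to the input space
    return np.linalg.norm(X - Xhat, axis=1)  # step 2: reconstruction error

mu, P = pca_fit(X_train, k=2)
x_normal = np.array([0.5, -1.0, 0., 0., 0.])  # lies in the training plane
x_anom = np.array([0., 0., 3., 0., 0.])       # leaves the plane
scores = anomaly_scores(np.vstack([x_normal, x_anom]), mu, P)
```

The in-plane point reconstructs almost exactly, while the off-plane point keeps its full orthogonal component as its anomaly score.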

    Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit

    In this paper, we review an extension of the learning rules in a Principal Component Analysis network which has been derived to be optimal for a specific probability density function. We note that this probability density function is one of a family of pdfs, and we investigate the learning rules formed in order to be optimal for several members of this family. We show that, whereas we have previously (Lai et al., 2000; Fyfe and MacDonald, 2002) viewed the single member of the family as an extension of PCA, it is more appropriate to view the whole family of learning rules as methods of performing Exploratory Projection Pursuit. We illustrate this on both artificial and real data sets.
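
A sketch of the family of rules discussed above, assuming (as in related Maximum Likelihood Hebbian Learning work) exponential-power residual densities p(e) ∝ exp(−|e|^p): maximising that likelihood in a negative feedback network yields the update ΔW ∝ y · sign(e)|e|^(p−1). The case p = 2 recovers the PCA-like Hebbian rule; other exponents give Exploratory Projection Pursuit behaviour. Parameter values below are illustrative.

```python
import numpy as np

# Illustrative MLHL sketch: negative feedback network with the update
# derived from the assumed residual density p(e) ∝ exp(-|e|^p).
rng = np.random.default_rng(2)
eta, p = 0.005, 1.0                  # p selects the member of the family
W = rng.normal(scale=0.1, size=(2, 5))

def mlhl_step(x, W):
    y = W @ x                        # feedforward
    e = x - W.T @ y                  # negative feedback residual
    g = np.sign(e) * np.abs(e) ** (p - 1)  # likelihood score of the residual
    return W + eta * np.outer(y, g)

for x in rng.normal(size=(500, 5)):
    W = mlhl_step(x, W)
```

The negative feedback keeps the rule self-stabilising: once the projection reconstructs the input well, the residual score shrinks or flips sign and weight growth stops.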

    Gaining deep knowledge of Android malware families through dimensionality reduction techniques

    [Abstract] This research proposes the analysis and subsequent characterisation of Android malware families by means of low-dimensional visualisations produced by dimensionality reduction techniques. The well-known Malgenome data set, from the Android Malware Genome Project, has been thoroughly analysed with the following six dimensionality reduction techniques: Principal Component Analysis, Maximum Likelihood Hebbian Learning, Cooperative Maximum Likelihood Hebbian Learning, Curvilinear Component Analysis, Isomap and Self Organizing Map. The results enable a clear visual analysis of the structure of this high-dimensional data set, letting us gain deep knowledge about the nature of these Android malware families. Interesting conclusions are obtained from the real-life data set under analysis.
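
The kind of analysis described above can be sketched with plain PCA (the paper also uses MLHL, CMLHL, Curvilinear Component Analysis, Isomap and SOM): project a labelled high-dimensional dataset to 2-D and inspect whether the classes stay separated. The synthetic "families" below are stand-ins for the Malgenome malware families, not real data.

```python
import numpy as np

# Project labelled high-dimensional data to 2-D and check class structure.
rng = np.random.default_rng(3)
centers = np.array([[0., 0., 0., 0.],
                    [5., 5., 0., 0.],
                    [0., 5., 5., 0.]])       # three synthetic "families"
labels = np.repeat([0, 1, 2], 50)
X = centers[labels] + rng.normal(scale=0.5, size=(150, 4))

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
X2 = (X - mu) @ Vt[:2].T                     # 2-D projection for inspection

# families should remain separated in the low-dimensional view
centroids = np.array([X2[labels == k].mean(axis=0) for k in range(3)])
gaps = [np.linalg.norm(centroids[i] - centroids[j])
        for i in range(3) for j in range(i + 1, 3)]
```

When between-class scatter dominates, the top principal axes approximately span the plane of the class centres, so the 2-D view preserves the family structure well.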

    Análisis y detección de ataques informáticos mediante sistemas inteligentes de reducción dimensional

    Programa Oficial de Doutoramento en Enerxía e Propulsión Mariña. 5014P01. [Abstract] This research work addresses the study and development of a methodology for the detection of computer attacks using intelligent systems and dimensionality reduction techniques in the field of cybersecurity. The proposal divides the problem into two phases. The first consists of a dimensional reduction of the original input space, projecting the data onto a lower-dimensional output space using linear or non-linear transformations that allow a better visualization of the internal structure of the dataset. In the second phase, a human expert contributes his or her knowledge by labelling the samples based on the projections obtained and on experience of the problem. This innovative proposal makes a simple tool available to the end user and provides intuitive and easily interpretable results, making it possible to face new threats to which the user has not previously been exposed, and it has obtained highly satisfactory results in all the real cases to which it has been applied. The developed system has been validated on three different real case studies, in which progress has been made in terms of knowledge with a clear thread of positive progress of the proposal. In the first case, a well-known Android malware dataset is analysed and the various malware families are characterised using classical dimensionality reduction techniques. The second proposal works on the same dataset, but applies more advanced and emerging dimensionality reduction and visualization techniques, significantly improving the results. The last work takes advantage of the knowledge gained in the two previous works and applies it to intrusion detection in computer systems on network data, in which attacks of different kinds occur during normal network operation.

    A novel ensemble Beta-scale invariant map algorithm

    [Abstract]: This research presents a novel topology preserving map (TPM) called the Weighted Voting Supervision Beta-Scale Invariant Map (WeVoS-Beta-SIM), based on applying the Weighted Voting Supervision (WeVoS) meta-algorithm to a novel family of learning rules called the Beta-Scale Invariant Map (Beta-SIM). The aim of the novel TPM is to improve the original models (SIM and Beta-SIM) in terms of stability and topology preservation while preserving their original features, especially on radial datasets, where they are all designed to perform at their best; these scale invariant TPMs have produced very satisfactory results in previous research. This is achieved by generating accurate topology maps effectively and efficiently. The WeVoS meta-algorithm trains an ensemble of networks and combines them into a single map that includes the best features of each network in the ensemble. WeVoS-Beta-SIM is thoroughly analyzed and successfully demonstrated in this study over 14 diverse real benchmark datasets with varying numbers of samples and features, using three well-known quality measures. To present a complete study of its capabilities, results are compared with those of other topology preserving models: Self Organizing Maps, Scale Invariant Map, Maximum Likelihood Hebbian Learning-SIM, Visualization Induced SOM, Growing Neural Gas and Beta-Scale Invariant Map. The results confirm that the novel algorithm improves the quality of the single Beta-SIM algorithm in terms of topology preservation and stability without losing performance (Beta-SIM itself having been shown to outperform other well-known algorithms). This improvement is more remarkable as the complexity of the datasets increases, in terms of number of features and samples, and especially on radial datasets, where the Topographic Error improves.
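
The ensemble-and-fuse idea can be sketched with a deliberately tiny stand-in: train several 1-D SOMs (not Beta-SIM) on bootstrap resamples from a shared aligned initialisation, then fuse homologous units across the maps by a quality-weighted average, using data hits per unit as the quality measure. This is a hedged illustration of the WeVoS scheme, not the paper's algorithm or settings.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0., 10., size=(60, 1))          # toy 1-D data
n_units = 5
W0 = np.linspace(X.min(), X.max(), n_units)[:, None]  # shared aligned init

def train_som(X, W0, epochs=20, eta=0.3, sigma=1.0):
    W = W0.copy()
    for _ in range(epochs):
        for x in X:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best matching unit
            d = np.arange(len(W)) - bmu          # distance on the 1-D grid
            h = np.exp(-d ** 2 / (2 * sigma ** 2))  # neighbourhood function
            W += eta * h[:, None] * (x - W)
    hits = np.bincount([int(np.argmin(np.linalg.norm(W - x, axis=1))) for x in X],
                       minlength=len(W))
    return W, hits

ensemble = [train_som(X[rng.choice(60, 60)], W0) for _ in range(5)]  # bagging

# WeVoS-style fusion: per grid position, quality-weighted average of units
Ws = np.stack([W for W, _ in ensemble])          # (maps, units, dim)
Q = np.stack([h for _, h in ensemble])[:, :, None] + 1e-9  # hits as quality
W_fused = (Q * Ws).sum(axis=0) / Q.sum(axis=0)
```

Starting every ensemble member from the same ordered initialisation keeps units at the same grid position comparable across maps, which is what makes per-position fusion meaningful.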

    DIPKIP: A connectionist Knowledge Management System to Identify Knowledge Deficits in Practical Cases

    This study presents a novel, multidisciplinary research project entitled DIPKIP (data acquisition, intelligent processing, knowledge identification and proposal), which is a Knowledge Management (KM) system that profiles the KM status of a company. Qualitative data is fed into the system, allowing it not only to assess the KM situation in the company in a straightforward and intuitive manner, but also to propose corrective actions to improve that situation. DIPKIP is based on four separate steps. An initial “Data Acquisition” step, in which key data is captured, is followed by an “Intelligent Processing” step using neural projection architectures. Subsequently, the “Knowledge Identification” step catalogues the company into three categories, which define a set of possible theoretical strategic knowledge situations: knowledge deficit, partial knowledge deficit, and no knowledge deficit. Finally, a “Proposal” step is performed, in which the “knowledge processes” (creation/acquisition, transference/distribution, and putting into practice/updating) are appraised to arrive at a coherent recommendation. The knowledge updating process (increasing the knowledge held and removing obsolete knowledge) is in itself a novel contribution. DIPKIP may be applied as a decision support system which, under the supervision of a KM expert, can provide useful and practical proposals to senior management for the improvement of KM, leading to flexibility, cost savings, and greater competitiveness. The research also analyses the prospects for powerful neural projection models in the emerging field of KM by reviewing a variety of robust unsupervised projection architectures, all of which are used to visualize the intrinsic structure of high-dimensional data sets. The main projection architecture in this research, known as Cooperative Maximum-Likelihood Hebbian Learning (CMLHL), manages to capture a degree of KM topological ordering based on the application of cooperative lateral connections. The results of two real-life case studies in very different industrial sectors corroborated the relevance and viability of the DIPKIP system and the concepts upon which it is founded.
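
A schematic sketch of CMLHL as characterised above: an MLHL-style negative feedback network whose outputs first interact through lateral connections before the residual is computed. The lateral matrix A, the rectified relaxation loop, and every parameter value here are illustrative assumptions, not the system's actual settings.

```python
import numpy as np

# Schematic CMLHL sketch: feedforward activation, cooperative lateral
# dynamics on the outputs, then an MLHL update from the feedback residual.
rng = np.random.default_rng(5)
eta, p, tau, n_lat = 0.002, 1.0, 0.1, 5
W = rng.normal(scale=0.1, size=(3, 6))
# illustrative lateral weight matrix (self term plus mild coupling)
A = np.eye(3) - 0.3 * (np.ones((3, 3)) - np.eye(3))

def cmlhl_step(x, W):
    b = W @ x                                   # feedforward activation
    y = b.copy()
    for _ in range(n_lat):                      # lateral activation passing
        y = np.maximum(0.0, y + tau * (b - A @ y))
    e = x - W.T @ y                             # negative feedback residual
    g = np.sign(e) * np.abs(e) ** (p - 1)       # MLHL score, p(e) ∝ exp(-|e|^p)
    return W + eta * np.outer(y, g)

for x in rng.normal(size=(300, 6)):
    W = cmlhl_step(x, W)
```

The lateral step couples the output units before feedback, which is what lets the network impose a degree of topological ordering on its projections.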