
    Gaining deep knowledge of Android malware families through dimensionality reduction techniques

    This research proposes the analysis and subsequent characterisation of Android malware families by means of low-dimensional visualisations obtained with dimensionality reduction techniques. The well-known Malgenome data set, coming from the Android Malware Genome Project, has been thoroughly analysed through the following six dimensionality reduction techniques: Principal Component Analysis, Maximum Likelihood Hebbian Learning, Cooperative Maximum Likelihood Hebbian Learning, Curvilinear Component Analysis, Isomap and Self-Organizing Map. The results obtained enable a clear visual analysis of the structure of this high-dimensionality data set, letting us gain deep knowledge about the nature of these Android malware families. Interesting conclusions are drawn from the real-life data set under analysis.
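    As a rough illustration of the kind of low-dimensional visual analysis described above, the sketch below projects a feature matrix to two dimensions with PCA and Isomap using scikit-learn; the synthetic data, feature count and family labels are placeholders, not the Malgenome data set.

```python
# Minimal sketch: projecting a high-dimensional malware feature matrix to 2D
# with PCA and Isomap for visual inspection (synthetic placeholder data,
# not the Malgenome data set).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
# Pretend feature matrix: 300 samples x 50 features, 3 hypothetical "families".
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 50)) for c in (0.0, 3.0, 6.0)])
families = np.repeat([0, 1, 2], 100)

pca_2d = PCA(n_components=2).fit_transform(X)                       # linear projection
iso_2d = Isomap(n_neighbors=10, n_components=2).fit_transform(X)    # manifold projection

# Each row of pca_2d / iso_2d is a 2D point that can be scatter-plotted and
# coloured by family label to inspect the structure of the data set.
print(pca_2d.shape, iso_2d.shape)  # (300, 2) (300, 2)
```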

    Improved EMD Using doubly-iterative sifting and high order spline interpolation

    Empirical mode decomposition (EMD) is a signal analysis method which has received much attention lately due to its application in a number of fields. The main disadvantage of EMD is that it lacks a theoretical analysis and, therefore, our understanding of EMD comes from an intuitive and experimental validation of the method. Recent research on EMD revealed improved criteria for interpolation-point selection. More specifically, it was shown that the performance of EMD can be significantly enhanced if, instead of the signal extrema, the extrema of the subsignal having the higher instantaneous frequency are used as interpolation points. Even if the extrema of the subsignal with the higher instantaneous frequency are not known in advance, this new interpolation-point criterion can be effectively exploited in doubly-iterative sifting schemes, leading to improved decomposition performance. In this paper, the possibilities and limitations of the developments above are explored and the new methods are compared with the conventional EMD.
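    To make the sifting idea concrete, here is a minimal sketch of one conventional sifting pass (extrema detection plus cubic-spline envelopes), i.e. the baseline that the doubly-iterative scheme above refines; it is not the paper's improved algorithm, and the signal is a toy example.

```python
# Minimal sketch of a single conventional EMD sifting pass: envelopes are
# interpolated through the signal extrema and their mean is subtracted.
# This is the baseline that the doubly-iterative criterion refines; it is
# not the improved method from the paper.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return x  # not enough extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)   # upper envelope
    lower = CubicSpline(t[minima], x[minima])(t)   # lower envelope
    return x - (upper + lower) / 2.0               # candidate IMF after one pass

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
h = sift_once(t, x)
print(h.shape)
```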

    HyRA: A Hybrid Recommendation Algorithm Focused on Smart POI. Ceutí as a Study Scenario

    Nowadays, the Physical Web, together with the increase in the use of mobile devices, the Global Positioning System (GPS), and Social Networking Sites (SNS), has led users to share enriched information on the Web such as their tourist experiences. Tourism is therefore an area that has been significantly improved by using the contextual information provided by these technologies. The main goals of this work are to propose and develop an algorithm focused on the recommendation of Smart Points of Interaction (Smart POIs) for a specific user according to his/her preferences and the Smart POIs' context. Hence, a novel Hybrid Recommendation Algorithm (HyRA) is presented, incorporating an aggregation operator into the user-based Collaborative Filtering (CF) algorithm as well as including the Smart POIs' categories and geographical information. For the experimental phase, two real-world datasets have been collected and preprocessed, and one Smart POIs' categories dataset was built. As a result, a dataset composed of 16 Smart POIs, another constituted by the explicit preferences of 200 respondents, and a last dataset integrating 13 Smart POIs' categories are provided. The experimental results show that the recommendations suggested by HyRA are promising. The SmartSDK project is co-funded by the EU's Horizon 2020 programme under agreement number 723174 (© 2016 EC) and by CONACYT under agreement number 737373.
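    For orientation, the sketch below shows plain user-based collaborative filtering of the kind HyRA builds on, with a simple distance-based down-weighting standing in for the Smart POIs' geographical information; the ratings matrix, POI coordinates and weighting choice are illustrative assumptions, not the HyRA algorithm itself.

```python
# Minimal sketch of user-based collaborative filtering plus a crude geographic
# down-weighting; an illustrative stand-in, not the HyRA algorithm.
import numpy as np

# Toy ratings matrix: rows = users, columns = Smart POIs (0 = unrated).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
], dtype=float)
# Hypothetical POI coordinates and the target user's current position.
poi_xy = np.array([[0.0, 0.0], [0.1, 0.0], [0.5, 0.5], [1.0, 1.0]])
user_xy = np.array([0.05, 0.0])

def predict(R, user, item):
    """Predict R[user, item] from other users who rated the item (cosine similarity)."""
    rated = R[:, item] > 0
    rated[user] = False
    if not rated.any():
        return 0.0
    sims = np.array([
        R[user] @ R[v] / (np.linalg.norm(R[user]) * np.linalg.norm(R[v]) + 1e-9)
        for v in np.where(rated)[0]
    ])
    return sims @ R[rated, item] / (sims.sum() + 1e-9)

# Combine the CF score with a simple proximity weight (closer POIs score higher).
dist = np.linalg.norm(poi_xy - user_xy, axis=1)
scores = [predict(R, 0, i) / (1.0 + dist[i]) for i in range(R.shape[1])]
print(np.argsort(scores)[::-1])  # POIs ranked for user 0
```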

    An improved Ant Colony System for the Sequential Ordering Problem

    It is not rare that the performance of one metaheuristic algorithm can be improved by incorporating ideas taken from another. In this article we present how Simulated Annealing (SA) can be used to improve the efficiency of the Ant Colony System (ACS) and Enhanced ACS when solving the Sequential Ordering Problem (SOP). Moreover, we show how the very same ideas can be applied to improve the convergence of a dedicated local search, i.e. the SOP-3-exchange algorithm. A statistical analysis of the proposed algorithms, both in terms of finding suitable parameter values and the quality of the generated solutions, is presented based on a series of computational experiments conducted on SOP instances from the well-known TSPLIB and SOPLIB2006 repositories. The proposed ACS-SA and EACS-SA algorithms often generate solutions of better quality than the ACS and EACS, respectively. Moreover, the EACS-SA algorithm combined with the proposed SOP-3-exchange-SA local search was able to find 10 new best solutions for the SOP instances from the SOPLIB2006 repository, thus improving the state-of-the-art results as known from the literature. Overall, the best known or improved solutions were found in 41 out of 48 cases.
    Comment: 30 pages, 8 tables, 11 figures
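    To show the flavour of the hybridisation, the sketch below wraps a Metropolis-style simulated-annealing acceptance test around random segment reversals on a toy permutation problem; the cost function, neighbourhood move and cooling schedule are simplistic placeholders and do not reflect the actual ACS-SA / SOP-3-exchange-SA algorithms or the precedence constraints of the SOP.

```python
# Minimal sketch of the simulated-annealing acceptance rule of the kind the
# paper embeds into ACS/EACS and the SOP-3-exchange local search; the toy cost
# function and move are placeholders, with no SOP precedence constraints.
import math
import random

random.seed(0)
n = 20
dist = [[abs(i - j) + 1 for j in range(n)] for i in range(n)]  # toy distance matrix

def cost(tour):
    return sum(dist[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))

tour = list(range(n))
random.shuffle(tour)
best, best_cost = tour[:], cost(tour)
T = 10.0                                # initial temperature
for _ in range(5000):
    i, j = sorted(random.sample(range(n), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt-style reversal
    delta = cost(cand) - cost(tour)
    # Metropolis acceptance: always take improvements, sometimes take worse moves.
    if delta <= 0 or random.random() < math.exp(-delta / T):
        tour = cand
        if cost(tour) < best_cost:
            best, best_cost = tour[:], cost(tour)
    T *= 0.999                          # geometric cooling schedule
print(best_cost)
```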

    Domain ontology learning from the web

    Ontology learning is defined as the set of methods used for building from scratch, enriching or adapting an existing ontology in a semi-automatic fashion using heterogeneous information sources. This data-driven procedure uses text, electronic dictionaries, linguistic ontologies, and structured and semi-structured information to acquire knowledge. Recently, with the enormous growth of the Information Society, the Web has become a valuable source of information for almost every possible domain of knowledge. This has motivated researchers to start considering the Web as a valid repository for Information Retrieval and Knowledge Acquisition. However, the Web suffers from problems that are not typically observed in classical information repositories: human-oriented presentation, noise, untrusted sources, high dynamicity and overwhelming size. Even so, it also presents characteristics that can be interesting for knowledge acquisition: due to its huge size and heterogeneity, the Web can be assumed to approximate the real distribution of information in humankind. The present work introduces a novel approach for ontology learning, presenting new methods for knowledge acquisition from the Web. The proposal is distinguished from previous works mainly by the particular adaptation of several well-known learning techniques to the Web corpus and by the exploitation of the interesting characteristics of the Web environment, composing an automatic, unsupervised and domain-independent approach. With respect to the ontology-building process, the following methods have been developed: i) extraction and selection of domain-related terms, organising them in a taxonomical way; ii) discovery and labelling of non-taxonomical relationships between concepts; iii) additional methods for improving the final structure, including the detection of named entities, class features, multiple inheritance and also a certain degree of semantic disambiguation. The full learning methodology has been implemented in a distributed, agent-based fashion, providing a scalable solution. It has been evaluated for several well-distinguished domains of knowledge, obtaining good-quality results. Finally, several direct applications have been developed, including the automatic structuring of digital libraries and web resources, and ontology-based Web Information Retrieval.
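    As a toy illustration of the first step listed above (extraction and selection of domain-related terms), the sketch below pulls candidate terms from a small text sample and ranks them by frequency; the sample text, tokenisation and scoring are deliberately naive stand-ins for the web-scale, statistics-driven methods described in the abstract.

```python
# Minimal sketch of domain-term candidate extraction: tokenize a text sample,
# drop stop words, and rank the remaining terms by frequency. A naive stand-in
# for the web-scale term selection step described in the thesis abstract.
import re
from collections import Counter

STOP = {"the", "of", "and", "to", "a", "in", "is", "for", "on", "with", "as", "are", "from", "that"}

def candidate_terms(text, top_k=10):
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP and len(t) > 2)
    return counts.most_common(top_k)

sample = (
    "Ontology learning builds domain ontologies from heterogeneous sources. "
    "The web corpus provides domain terms, taxonomic relations and named entities "
    "that support unsupervised knowledge acquisition for each domain."
)
print(candidate_terms(sample))
```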

    Intelligent techniques for improved engine fuel economy

    This thesis presents an investigation into a novel method of estimating the trajectory (future direction and elevation) of a vehicle, and subsequently influencing the control of an engine. The technique represents a convenient and robust method of achieving road prediction, forming a fuzzy system that "looks ahead", leading potentially to improved fuel consumption and a consequent reduction in exhaust emissions. The work described in this thesis brings together two modern technologies, neuro-fuzzy techniques and the Global Positioning System, and applies them to engine/vehicle control. The intelligent GPS-based control system presented in this thesis utilises information about the current vehicle position and the upcoming terrain in order to reduce vehicle fuel consumption as well as improve road safety and comfort. The development of such in-vehicle control systems has provided static and dynamic road information. The vehicle running parameters have been mathematically defined whilst the engine
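    To make the "look ahead" idea tangible, here is a minimal sketch of a hand-written fuzzy rule that scales a throttle correction from the road gradient predicted ahead of the vehicle; the membership functions, gradient ranges and output levels are invented for illustration and are not the thesis's neuro-fuzzy controller.

```python
# Minimal sketch of a "look ahead" fuzzy rule: the predicted upcoming road
# gradient (e.g. derived from GPS elevation data) is fuzzified into downhill /
# flat / uphill sets and mapped to a throttle correction. Membership functions
# and output levels are invented for illustration, not the thesis controller.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def throttle_correction(gradient_percent):
    mu_down = tri(gradient_percent, -10.0, -5.0, 0.0)
    mu_flat = tri(gradient_percent, -2.0, 0.0, 2.0)
    mu_up = tri(gradient_percent, 0.0, 5.0, 10.0)
    # Weighted-average defuzzification over three crisp correction levels.
    levels = (-0.2, 0.0, 0.2)          # ease off downhill, add throttle uphill
    weights = (mu_down, mu_flat, mu_up)
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, levels)) / total if total else 0.0

for g in (-6.0, 0.0, 4.0):
    print(g, round(throttle_correction(g), 3))
```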