359 research outputs found

    An Intelligent Help-Desk Framework for Effective Troubleshooting

    Nowadays, technological infrastructure requires an intelligent virtual environment based on decision processes. These processes coordinate individual elements and the tasks that connect them, so incident resolution must be efficient and effective to achieve maximum productivity. In this paper, we present the design and implementation of an intelligent decision-support system applied to the technology infrastructure of the University of Seville (Spain). We use a Case-Based Reasoning (CBR) methodology and an ontology to develop an intelligent system for supporting expert diagnosis and intelligent management of incidents. This is an innovative and interdisciplinary approach to knowledge management in problem-solving processes related to environmental issues. Our system provides automatic semantic indexing for generating question/answer pairs, a case-based reasoning technique for finding similar questions, and integration of external information sources via ontologies. A real ontology-based question/answer platform named ExpertSOS is presented as a proof of concept. The intelligent diagnosis platform is able to identify and isolate the most likely cause of infrastructure failure in the event of a faulty operation.
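The core CBR step described above retrieves the stored incident most similar to a new problem description. A minimal sketch of such retrieval, using token-overlap (Jaccard) similarity; the case base, function names, and cases are illustrative, not taken from ExpertSOS:

```python
# Minimal case-based retrieval sketch: find the stored incident most
# similar to a new problem description via token-overlap (Jaccard) similarity.
# All names and example cases are illustrative, not from ExpertSOS.

def jaccard(a, b):
    """Jaccard similarity between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(case_base, query, k=1):
    """Return the k most similar (question, solution) cases to the query."""
    ranked = sorted(case_base, key=lambda c: jaccard(c[0], query), reverse=True)
    return ranked[:k]

case_base = [
    ("printer offline after driver update", "roll back the printer driver"),
    ("vpn connection drops every hour", "renew the DHCP lease on the VPN adapter"),
    ("email client cannot sync inbox", "re-authenticate the IMAP account"),
]

best = retrieve(case_base, "printer shows offline since the last driver update")[0]
print(best[1])  # prints the solution of the most similar stored case
```

A production system would replace the bag-of-words similarity with the paper's semantic indexing over the ontology, but the retrieve-most-similar-case loop is the same.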

    Intelligent data mining assistance via case-based reasoning and a formal ontology


    A formal ontology for data mining : principles, design, and evolution


    SEASALTexp - an explanation-aware architecture for extracting and case-based processing of experiences from internet communities

    This paper briefly describes SEASALTexp, an extension of the application-independent SEASALT architecture (Sharing Experience using an Agent-based explanation-aware System Architecture LayouT), which offers knowledge acquisition from Internet communities, knowledge modularisation, and agent-based knowledge maintenance, complemented with agent-based explanation facilities.

    A case-based reasoning system for recommendation of data cleaning algorithms in classification and regression tasks

    Recent advances in Information Technologies (social networks, mobile applications, the Internet of Things, etc.) generate a deluge of digital data, but converting these data into useful information for business decisions is a growing challenge. Exploiting this massive amount of data through the knowledge discovery (KD) process means identifying valid, novel, potentially useful, and understandable patterns in a huge volume of data. However, preparing the data is a non-trivial refinement task that requires technical expertise in data cleaning methods and algorithms. Consequently, selecting a suitable data analysis technique is a headache for inexpert users. To address these problems, we propose a case-based reasoning (CBR) system that recommends data cleaning algorithms for classification and regression tasks. In our approach, the problem space is represented by the meta-features of the dataset, its attributes, and the target variable; the solution space contains the data cleaning algorithms used for each dataset. Cases are represented through a Data Cleaning Ontology. The case retrieval mechanism is composed of a filter phase and a similarity phase. In the first phase, we define two filter approaches, based on clustering and on quartile analysis, which retrieve a reduced number of relevant cases. The second phase ranks the cases retrieved by the filters, scoring the similarity between the new case and each retrieved case. The retrieval mechanism was evaluated by a panel of judges, who scored the similarity between a query case and all cases in the case base (ground truth). Against the judges' ranking, the retrieval mechanism reaches an average precision of 94.5% in the top 3, 84.55% in the top 7, and 78.35% in the top 10.
    The authors are grateful to the research groups Control Learning Systems Optimization Group (CAOS) of the Carlos III University of Madrid and Telematics Engineering Group (GIT) of the University of Cauca for the technical support. In addition, the authors are grateful to COLCIENCIAS for the PhD scholarship granted to David Camilo Corrales. This work has also been supported by the project Alternativas Innovadoras de Agricultura Inteligente para sistemas productivos agrícolas del departamento del Cauca soportado en entornos de IoT, financed by Convocatoria 04C-2018 Banco de Proyectos Conjuntos UEES-Sostenibilidad of the project Red de formación de talento humano para la innovación social y productiva en el Departamento del Cauca InnovAcción Cauca, ID-3848, and by the Spanish Ministry of Economy, Industry and Competitiveness (Projects TRA2015-63708-R and TRA2016-78886-C3-1-R).
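The two-phase retrieval described above can be sketched in a few lines: a quartile filter on a dataset meta-feature narrows the case base, then a distance-based score ranks the surviving cases. The meta-features, cases, and cleaning algorithms below are invented examples, not the paper's actual case base:

```python
# Two-phase case retrieval sketch: quartile filter, then similarity ranking.
# Meta-features and recommended algorithms are illustrative examples only.
from statistics import quantiles

# each case: (meta-features of a past dataset, cleaning algorithm that worked)
cases = [
    ({"n_rows": 1000,  "missing_ratio": 0.02}, "mean imputation"),
    ({"n_rows": 50000, "missing_ratio": 0.30}, "kNN imputation"),
    ({"n_rows": 800,   "missing_ratio": 0.25}, "listwise deletion"),
    ({"n_rows": 60000, "missing_ratio": 0.01}, "mean imputation"),
]

def quartile_filter(cases, query, key):
    """Keep cases whose meta-feature falls in the same quartile as the query."""
    q1, q2, q3 = quantiles([c[0][key] for c in cases], n=4)
    bucket = lambda v: sum(v > q for q in (q1, q2, q3))
    return [c for c in cases if bucket(c[0][key]) == bucket(query[key])]

def similarity(a, b):
    """Negated normalised L1 distance over shared meta-features (higher = closer)."""
    return -sum(abs(a[k] - b[k]) / (abs(a[k]) + abs(b[k]) + 1e-9) for k in a)

def recommend(cases, query, key="missing_ratio"):
    candidates = quartile_filter(cases, query, key) or cases
    return max(candidates, key=lambda c: similarity(query, c[0]))[1]

print(recommend(cases, {"n_rows": 45000, "missing_ratio": 0.28}))
```

The paper's system additionally offers a clustering-based filter and grounds case representation in the Data Cleaning Ontology; the sketch only shows the quartile branch of the retrieval mechanism.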

    Framework for data quality in knowledge discovery tasks

    The creation and consumption of data continue to grow by leaps and bounds. Due to advances in Information and Communication Technologies (ICT), the data explosion in the digital universe is a new trend, and Knowledge Discovery in Databases (KDD) gains importance because of the abundance of data. A successful knowledge discovery process requires data preparation: experts affirm that the preprocessing phase takes 50% to 70% of the total time of a knowledge discovery process. Software tools based on knowledge discovery methodologies offer algorithms for data preprocessing. According to the Gartner 2018 Magic Quadrant for Data Science and Machine Learning Platforms, KNIME, RapidMiner, SAS, Alteryx, and H20.ai are the leading tools for knowledge discovery. These tools provide different techniques and facilitate the evaluation of datasets; however, they lack any guidance as to which techniques can or should be used in which contexts. Consequently, selecting suitable data cleaning techniques is a headache for inexpert users, who have no idea which methods can be confidently used and often resort to trial and error. This thesis presents three contributions to address these problems: (i) a conceptual framework that guides the user in addressing data quality issues in knowledge discovery tasks, (ii) a case-based reasoning system that recommends suitable algorithms for data cleaning, and (iii) an ontology that represents knowledge about data quality issues and data cleaning methods. The ontology also supports the case-based reasoning system in case representation and in the reuse phase.
    Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Chair: Fernando Fernández Rebollo. Secretary: Gustavo Adolfo Ramírez. Examiner: Juan Pedro Caraça-Valente Hernánde

    Proposing to use artificial neural networks for NoSQL attack detection

    [EN] Relational databases have enjoyed a boom in the software world until now. These days, with the rise of modern applications and the production of unstructured data, traditional databases no longer meet the needs of all systems. To address these issues, NoSQL databases have been developed and are a good alternative, but their security aspects lag behind. Injection attacks are the most serious class of web attacks, yet they are not taken seriously in NoSQL. This paper presents a neural network model for detecting NoSQL injection. The method attempts to use the best and most effective features to identify an injection. The features are divided into two categories: the first based on the content of the request, and the second on request meta-parameters, independent of the content. To detect attack-payload features, we perform character-level analysis to obtain the malicious rate of user inputs. The results demonstrate that our model detects more attack payloads than models that use a blacklist approach at the keyword level.
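The character-level analysis mentioned above can be illustrated by mapping each request value to a small numeric vector that a classifier could consume. The specific features, character set, and operator list below are illustrative assumptions, not the feature set used in the paper (whose classifier is a neural network):

```python
# Sketch of character-level feature extraction for NoSQL-injection detection.
# Each user input becomes a small numeric vector that a neural network could
# then classify. Features, character sets, and operator tokens are illustrative.
import string

NOSQL_TOKENS = ("$where", "$ne", "$gt", "$regex", "||", ";")  # common operator strings

def char_features(value):
    """Character-level statistics of a single user input."""
    n = max(len(value), 1)
    specials = sum(1 for c in value if c in "${}[]'\";|&")
    digits = sum(1 for c in value if c in string.digits)
    return {
        "special_ratio": specials / n,   # share of injection-prone characters
        "digit_ratio": digits / n,
        "has_operator": any(t in value for t in NOSQL_TOKENS),
    }

benign = char_features("alice.smith@example.com")
attack = char_features("admin' || '1'=='1'; return {$where: '1'}")
print(benign["special_ratio"], attack["special_ratio"], attack["has_operator"])
```

The benign address yields a near-zero ratio of special characters, while the injection string scores high on both the special-character ratio and the operator flag; a trained model would weight such signals rather than apply a fixed keyword blacklist.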