73 research outputs found

    Are statistics and machine learning enough to make predictions and forecasts?

    Currently, the techniques used to predict the future are statistical and machine-learning techniques. The first extrapolates the trend of historical data; the second learns from training on previous cases. Both use historical information but do not take into account key factors that can change the final result. A knowledge-based framework is presented that allows predictions of certain kinds of events to be made using artificial intelligence techniques. It requires an expert to enter into the system the key factors that can change the trend of the historical data. The framework has been applied, prior to the events occurring, to two use cases, obtaining good preliminary results, within the development of a PhD thesis. Instituto de Investigación en Informática
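    The idea described above can be sketched minimally: a purely statistical forecast extrapolates the historical trend, and expert-supplied "key factors" then adjust that baseline. All names and the multiplicative-adjustment scheme below are illustrative assumptions, not the framework's actual design.

    ```python
    def linear_trend_forecast(history, steps_ahead):
        """Least-squares linear fit of the series, extrapolated forward."""
        n = len(history)
        xs = range(n)
        x_mean = sum(xs) / n
        y_mean = sum(history) / n
        slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
                / sum((x - x_mean) ** 2 for x in xs)
        intercept = y_mean - slope * x_mean
        return intercept + slope * (n - 1 + steps_ahead)

    def expert_adjusted_forecast(history, steps_ahead, key_factors):
        """Scale the trend baseline by expert-entered factors (hypothetical scheme)."""
        baseline = linear_trend_forecast(history, steps_ahead)
        for factor in key_factors.values():
            baseline *= factor
        return baseline

    sales = [100, 110, 120, 130, 140]            # clean upward trend
    print(linear_trend_forecast(sales, 1))        # -> 150.0
    # An expert flags a disruptive event the statistics cannot see:
    print(expert_adjusted_forecast(sales, 1, {"new_competitor": 0.8}))  # -> 120.0
    ```

    The point of the contrast is that the pure trend model would keep predicting 150 regardless of any known upcoming disruption; the knowledge-based layer lets domain expertise override the historical pattern.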

    AI for Hate Speech Detection in Social Media

    The main goal of this work is to solve the problem of analyzing data coming from Social Media and to explore mechanisms for extracting and representing knowledge from the different disciplines outside the world of Information Technologies. Soft Computing and Big Data techniques are used to deal with these challenges. This paper presents a mechanism to detect hate speech in Social Media using Soft Computing and Sentiment Analysis, and it also establishes the basis of a doctoral thesis. Instituto de Investigación en Informática
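    A minimal sketch of the kind of soft (fuzzy) scoring such an approach might use: instead of a hard yes/no lexicon match, each term carries a graded intensity and the text receives a degree of membership in the "hateful" class. The lexicon, scores, and threshold below are invented for illustration and are not the authors' actual resources.

    ```python
    # Hypothetical term -> fuzzy intensity lexicon (illustrative only).
    HATE_LEXICON = {"hate": 0.9, "stupid": 0.6, "awful": 0.4}

    def hate_score(text):
        """Average fuzzy intensity of lexicon terms present in the text."""
        tokens = text.lower().split()
        hits = [HATE_LEXICON[t] for t in tokens if t in HATE_LEXICON]
        return sum(hits) / len(hits) if hits else 0.0

    def is_hateful(text, threshold=0.5):
        """Defuzzify: compare the soft score against a crisp threshold."""
        return hate_score(text) >= threshold

    print(hate_score("I hate this stupid idea"))  # -> 0.75
    print(is_hateful("what a nice day"))          # -> False
    ```

    A real system would of course combine such lexicon signals with sentiment analysis and learned models; the sketch only shows why a graded score is more informative than a binary keyword match.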

    An application of the FIS-CRM model to the FISS metasearcher: Using fuzzy synonymy and fuzzy generality for representing concepts in documents

    The main objective of this work is to improve the quality of the results produced by Internet search engines. To achieve this, the FIS-CRM model (Fuzzy Interrelations and Synonymy based Concept Representation Model) is proposed as a mechanism for representing the concepts (not only the terms) contained in any kind of document. This model, based on the vector space model, incorporates a fuzzy readjustment process of the term weights of each document. The readjustment relies on the study of two types of fuzzy interrelations between terms: the fuzzy synonymy interrelation and the fuzzy generality interrelations ("broader than" and "narrower than"). The model has been implemented in the FISS metasearcher (Fuzzy Interrelations and Synonymy based Searcher), which, using a soft-clustering algorithm (based on the SISC algorithm), dynamically produces a hierarchical structure of groups of "conceptually related" documents (snippets of web pages, in this case).
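    The flavor of such a fuzzy readjustment can be sketched as follows: a term's vector-space weight is propagated to interrelated terms in proportion to a fuzzy synonymy degree, so a document can score on a concept even when the exact term never occurs. The synonymy degrees and the simple additive update below are assumptions for illustration; FIS-CRM's actual formulas differ in detail.

    ```python
    # Illustrative fuzzy synonymy degrees between term pairs (symmetric).
    SYNONYMY = {("car", "automobile"): 0.9, ("car", "vehicle"): 0.6}

    def degree(a, b):
        """Fuzzy synonymy degree between two terms, 0.0 if unrelated."""
        return SYNONYMY.get((a, b)) or SYNONYMY.get((b, a)) or 0.0

    def readjust(weights):
        """Add to each term a share of the weight of its fuzzy synonyms."""
        adjusted = dict(weights)
        for term in weights:
            for other, w in weights.items():
                if other != term:
                    adjusted[term] += degree(term, other) * w
        return adjusted

    doc = {"car": 2.0, "automobile": 0.0, "vehicle": 1.0}
    print(readjust(doc))
    # "automobile" now carries weight 0.9 * 2.0 == 1.8 even though the
    # term never occurs in the document, so a query for "automobile"
    # can still match it at the concept level.
    ```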

    La Sobre-evaluación (Over-Assessment)

    This work summarizes a negative experience that occurred while teaching a subject framed within the European Higher Education Area (EHEA), due to poor planning of the assessment, which led to what we will call over-assessment. Over-assessment is understood as the excessive number of tests to which students are subjected, forcing them to spend more time preparing for those tests than acquiring or consolidating knowledge. As a consequence of this negative experience, we present the measures taken to avoid repeating it in the same subject the following year, as well as some of the conclusions reached by the subject's teaching staff. These measures improved the results obtained by the students, mainly thanks to better planning of the assessment tasks. Peer Reviewed

    Preliminary approach about using nowadays knowledge engineering in artificial intelligence: a literature overview

    This paper presents a first literature overview of the current use of Knowledge Engineering in the development of solutions in the field of Artificial Intelligence. Under the assumption that the conceptualization, formalization, and modeling of data are fundamental in this type of project, it is argued that Knowledge Engineering can actively collaborate in orienting and guiding part of these activities. In this context, a doctoral research line is proposed on the use of Knowledge Engineering to perform Intelligent Data Analysis. This line of research arises from the fact that existing methodologies in the fields of Artificial Intelligence and Data Mining do not incorporate complete domain and context knowledge in data analysis. Incorporating this knowledge makes it possible to contrast the hypotheses obtained from the data and enriches the analysis, leading to better results. Instituto de Investigación en Informática

    Text pre-processing tool to increase the exactness of experimental results in summarization solutions

    For years, and nowadays even more so because of the ease of access to information, countless scientific documents covering all branches of human knowledge have been generated. These documents, consisting mostly of text, are stored in digital libraries that increasingly permit access and manipulation. This has allowed such document repositories to be used for research of great interest, particularly work related to the evaluation of automatic summaries through experimentation. In this area of computer science, the experimental results of many published works are obtained using document collections, some well known and others less so, but without specifying all the special considerations needed to achieve those results. This produces unfair competition when comparing experimental results and prevents objective conclusions. This paper presents a text document manipulation tool that increases the exactness of results when obtaining, evaluating, and comparing automatic summaries from different corpora. The work has been motivated by the need for a tool that can process documents, split their content properly, and ensure that each text snippet does not lose its contextual information. Applying the proposed model to a set of free-access scientific papers has been successful. XV Workshop Bases de Datos y Minería de Datos (WBDDM). Red de Universidades con Carreras en Informática (RedUNCI)
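    One way to read "split their content properly without losing contextual information" is to tag every extracted snippet with the section it came from. The sketch below is an illustrative interpretation, not the authors' tool: it uses a naive ALL-CAPS heuristic to spot section headers and a simple regex to split sentences.

    ```python
    import re

    def split_with_context(document):
        """Yield (section, sentence) pairs; section headers are ALL-CAPS lines."""
        section = "BODY"               # default context before any header
        snippets = []
        for line in document.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.isupper():         # naive header heuristic (assumption)
                section = line
                continue
            # Split on sentence-ending punctuation followed by whitespace.
            for sentence in re.split(r"(?<=[.!?])\s+", line):
                if sentence:
                    snippets.append((section, sentence))
        return snippets

    doc = "ABSTRACT\nWe study X. Results are good.\nMETHODS\nWe used Y."
    print(split_with_context(doc))
    # -> [('ABSTRACT', 'We study X.'), ('ABSTRACT', 'Results are good.'),
    #     ('METHODS', 'We used Y.')]
    ```

    Keeping the `(section, sentence)` pairing means a downstream summarizer or evaluator can always recover where a snippet originated, which is the contextual guarantee the abstract emphasizes.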

    Circulating levels of butyrate are inversely related to portal hypertension, endotoxemia, and systemic inflammation in patients with cirrhosis

    Short-chain fatty acids (SCFAs) are gut microbiota-derived products that participate in maintaining gut barrier integrity and the host's immune response. We hypothesized that reduced SCFA levels are associated with systemic inflammation, endotoxemia, and more severe hemodynamic alterations in cirrhosis. Patients with cirrhosis referred for a hepatic venous pressure gradient (HVPG) measurement (n = 62) or a transjugular intrahepatic portosystemic shunt placement (n = 12) were included. SCFAs were measured in portal (when available), hepatic, and peripheral blood samples by GC-MS. Serum endotoxins, proinflammatory cytokines, and NO levels were quantified. SCFA levels were significantly higher in portal vs. hepatic and peripheral blood. There were inverse relationships between SCFAs and the severity of disease. SCFAs (mainly butyric acid) inversely correlated with the model for end-stage liver disease score and were further reduced in patients with a history of ascites, hepatic encephalopathy, or spontaneous bacterial peritonitis. There was an inverse relationship between butyric acid and HVPG values. SCFAs were directly related to systemic vascular resistance and inversely to cardiac index. Butyric acid inversely correlated with inflammatory markers and serum endotoxin. A global reduction in the blood levels of SCFAs in patients with cirrhosis is associated with more advanced liver disease, suggesting a contribution to disease progression. Juanola, O., Ferrusquía-Acosta, J., García-Villalba, R., Zapater, P., Magaz, M., Marín, A., Olivas, P., Baiges, A., Bellot, P., Turon, F., Hernández-Gea, V., González-Navajas, J. M., Tomás-Barberán, F. A., García-Pagán, J. C., Francés, R. Circulating levels of butyrate are inversely related to portal hypertension, endotoxemia, and systemic inflammation in patients with cirrhosis.
