109 research outputs found

    CASP-DM: Context Aware Standard Process for Data Mining

    Get PDF
    We propose an extension of the Cross Industry Standard Process for Data Mining (CRISP-DM) that addresses specific challenges of machine learning and data mining in handling context and model reuse. This new general context-aware process model is mapped onto the CRISP-DM reference model, proposing several new or enhanced outputs.

    Learning with configurable operators and RL-based heuristics

    Full text link
    In this paper, we push forward the idea of machine learning systems whose operators can be modified and fine-tuned for each problem. This allows us to propose a learning paradigm where users can write (or adapt) their operators according to the problem, the data representation and the way the information should be navigated. To achieve this goal, data instances, background knowledge, rules, programs and operators are all written in the same functional language, Erlang. Since changing operators affects how the search space needs to be explored, heuristics are learnt as the result of a decision process based on reinforcement learning, where each action is defined as a choice of operator and rule. As a result, the architecture can be seen as a 'system for writing machine learning systems' or a way to explore new operators. This work was supported by the MEC projects CONSOLIDER-INGENIO 26706 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, and the REFRAME project, granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA) and funded by the Ministerio de Economía y Competitividad in Spain. F. Martínez-Plumed is supported by FPI-ME grant BES-2011-045099. Martínez-Plumed, F.; Ferri Ramírez, C.; Hernández-Orallo, J.; Ramírez Quintana, MJ. (2013). Learning with configurable operators and RL-based heuristics. In: New Frontiers in Mining Complex Patterns. Springer Verlag (Germany). 7765:1-16. https://doi.org/10.1007/978-3-642-37382-4_1
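The reinforcement-learning loop described in this abstract, where heuristics emerge from choosing which operator to apply next, can be sketched as follows. This is an illustrative Q-learning toy, not the paper's system (which is written in Erlang); the operator names, states and reward are hypothetical placeholders, and the action space is simplified to operators only (the paper's actions are operator-rule pairs).

```python
import random

# Toy tabular Q-learning where each action is the choice of an operator,
# echoing the idea of learning heuristics over configurable operators.
# States, operators and the reward function are hypothetical placeholders.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
operators = ["generalise", "specialise", "add_condition"]  # hypothetical
Q = {}  # (state, operator) -> estimated value

def choose(state):
    """Epsilon-greedy choice of the next operator for this state."""
    if random.random() < EPSILON:
        return random.choice(operators)
    return max(operators, key=lambda op: Q.get((state, op), 0.0))

def update(state, op, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, o), 0.0) for o in operators)
    old = Q.get((state, op), 0.0)
    Q[(state, op)] = old + ALPHA * (reward + GAMMA * best_next - old)

random.seed(0)
for episode in range(200):
    state = 0
    for step in range(5):
        op = choose(state)
        # Hypothetical reward: generalising helps early, specialising later.
        reward = 1.0 if (op == "generalise") == (state < 2) else 0.0
        next_state = state + 1
        update(state, op, reward, next_state)
        state = next_state

# After training, the greedy policy at the initial state prefers "generalise".
print(max(operators, key=lambda op: Q.get((0, op), 0.0)))
```

The learnt Q-table plays the role of the search heuristic: it biases which operator the system tries first in each state of the search.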

    A computational analysis of general intelligence tests for evaluating cognitive development

    Full text link
    [EN] The progression of the same subjects through several cognitive tests at different ages provides valuable information about their cognitive development. One question that has attracted recent interest is whether the same approach can be used to assess the cognitive development of artificial systems. In particular, can we assess whether the fluid or crystallised intelligence of an artificial cognitive system changes during its cognitive development as a result of acquiring more concepts? In this paper, we address several IQ test problems (odd-one-out problems, Raven's Progressive Matrices and Thurstone's letter series) with a general learning system that is not specifically designed to solve intelligence tests. The goal is to better understand the role of the basic cognitive operational constructs (such as identity, difference, order, counting, logic, etc.) that are needed to solve these intelligence test problems, and to serve as a proof of concept for evaluation in other developmental problems. From here, we gain some insights into the characteristics and usefulness of these tests and how careful we need to be when applying human test problems to assess the abilities and cognitive development of robots and other artificial cognitive systems. This work has been partially supported by the EU (FEDER) and the Spanish MINECO under grants TIN 2015-69175-C4-1-R and TIN 2013-45732-C4-1-P, and by Generalitat Valenciana under grant PROMETEOII/2015/013. Martínez-Plumed, F.; Ferri Ramírez, C.; Hernández-Orallo, J.; Ramírez Quintana, MJ. (2017). A computational analysis of general intelligence tests for evaluating cognitive development. Cognitive Systems Research. 43:100-118. https://doi.org/10.1016/j.cogsys.2017.01.006
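Two of the basic operational constructs the abstract mentions, identity and order (successor over the alphabet), are enough to solve the simplest Thurstone-style letter series. The toy solver below illustrates only that point; it is a minimal sketch, not the general learning system used in the paper.

```python
# Toy solver for Thurstone-style letter series using two basic constructs:
# identity (constant series, step 0) and order/successor (constant step > 0).
# Illustrative sketch only; not the system evaluated in the paper.
def next_letter(series: str) -> str:
    codes = [ord(c) - ord('a') for c in series]
    diffs = [(b - a) % 26 for a, b in zip(codes, codes[1:])]
    if len(set(diffs)) == 1:  # one constant step explains the whole series
        step = diffs[0]
        return chr(ord('a') + (codes[-1] + step) % 26)
    raise ValueError("series not explained by identity/successor alone")

print(next_letter("aaaa"))   # identity: step 0 -> 'a'
print(next_letter("abcd"))   # successor: step 1 -> 'e'
print(next_letter("acegi"))  # constant step 2 -> 'k'
```

Real test batteries also need difference, counting and period detection, which is exactly why the paper studies which constructs a system must command to pass each test.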

    Can language models automate data wrangling?

    Full text link
    [EN] The automation of data science and other data manipulation processes depends on the integration and formatting of 'messy' data. Data wrangling is an umbrella term for these tedious and time-consuming tasks. Tasks such as transforming dates, units or names expressed in different formats have been challenging for machine learning because users expect to solve them with short cues or few examples, and the problems depend heavily on domain knowledge. Interestingly, today's large language models infer from very few examples or even a short clue in natural language, and integrate vast amounts of domain knowledge. It is then an important research question to analyse whether language models are a promising approach for data wrangling, especially as their capabilities continue growing. In this paper we apply different language model variants of GPT to data wrangling problems, comparing their results to those of specialised data wrangling tools, and also analysing the trends, variations and further possibilities and risks of language models in this task. Our major finding is that they appear to be a powerful tool for a wide range of data wrangling tasks, but reliability may be an important issue to overcome. Jaimovitch-López, G.; Ferri, C.; Hernández-Orallo, J.; Martínez-Plumed, F.; Ramírez-Quintana, MJ. (2021). Can language models automate data wrangling?. http://hdl.handle.net/10251/18502

    Can language models automate data wrangling?

    Full text link
    [EN] The automation of data science and other data manipulation processes depends on the integration and formatting of 'messy' data. Data wrangling is an umbrella term for these tedious and time-consuming tasks. Tasks such as transforming dates, units or names expressed in different formats have been challenging for machine learning because (1) users expect to solve them with short cues or few examples, and (2) the problems depend heavily on domain knowledge. Interestingly, large language models today (1) can infer from very few examples or even a short clue in natural language, and (2) can integrate vast amounts of domain knowledge. It is then an important research question to analyse whether language models are a promising approach for data wrangling, especially as their capabilities continue growing. In this paper we apply different variants of the language model Generative Pre-trained Transformer (GPT) to five batteries covering a wide range of data wrangling problems. We compare the effect of prompts and few-shot regimes on their results, and how they compare with specialised data wrangling systems and other tools. Our major finding is that they appear to be a powerful tool for a wide range of data wrangling tasks. We provide some guidelines about how they can be integrated into data processing pipelines, provided that users can take advantage of their flexibility and the diversity of tasks to be addressed. However, reliability is still an important issue to overcome. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work was funded by the Future of Life Institute, FLI, under grant RFP2-152, the MIT-Spain - INDITEX Sustainability Seed Fund under project COST-OMIZE, the EU (FEDER) and Spanish MINECO under RTI2018-094403-B-C32 and PID2021-122830OB-C42, Generalitat Valenciana under PROMETEO/2019/098 and INNEST/2021/317, the EU's Horizon 2020 research and innovation programme under grant agreement No. 952215 (TAILOR), and US DARPA HR00112120007 ReCOG-AI. Acknowledgements: We thank Lidia Contreras for her help with the Data Wrangling Dataset Repository. We thank the anonymous reviewers from the ECMLPKDD Workshop on Automating Data Science (ADS2021) and the anonymous reviewers of this special issue for their comments. Jaimovitch-López, G.; Ferri Ramírez, C.; Hernández-Orallo, J.; Martínez-Plumed, F.; Ramírez Quintana, MJ. (2023). Can language models automate data wrangling?. Machine Learning. 112(6):2053-2082. https://doi.org/10.1007/s10994-022-06259-9
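The few-shot regime these two abstracts describe amounts to casting a wrangling task as text completion: a handful of input-output pairs followed by the query. The sketch below only builds such a prompt; the date-format task and example pairs are invented for illustration, and no specific model API is assumed (the string would be sent to whichever GPT variant is being evaluated).

```python
# Build a few-shot prompt for a data wrangling task, in the spirit of the
# papers above: wrangling as text completion. The date-format examples are
# illustrative; the completed output would come from a language model.
def build_prompt(examples, query):
    """Format (input, output) pairs plus an open query for completion."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

date_examples = [
    ("25-03-1979", "March 25, 1979"),
    ("07-11-2001", "November 7, 2001"),
]
prompt = build_prompt(date_examples, "12-02-2015")
print(prompt)
```

The papers' reliability concern shows up precisely here: the model's completion may be plausible but wrong (e.g. swapping day and month), so pipeline integration needs validation of the returned string.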

    Analysis of Arterial Blood Gas Values Based on Storage Time Since Sampling: An Observational Study

    Get PDF
    [Abstract] Aim: To evaluate the influence of storage time on arterial blood gas values after artery puncture is performed. Method: Prospective longitudinal observational study carried out with gasometric samples from 86 patients, taken at different time intervals (0 (T0), 15 (T15), 30 (T30) and 60 (T60) min), from 21 October 2019 to 21 October 2020. The study variables were: partial pressure of carbon dioxide, bicarbonate, hematocrit, hemoglobin, potassium, lactic acid, pH, partial pressure of oxygen, saturation of oxygen, sodium and glucose. Results: The initial sample consisted of a total of 90 patients. Four participants were discarded because they did not understand the purpose of the study, leaving a total of 86 participants, 51% of whom were men, with a mean age of 72.59 years (SD: 16.23). In the intra-group analysis, differences in PCO2, HCO3, hematocrit, Hb, K+ and lactic acid were observed between the initial time of the test and the 15, 30 and 60 min intervals. In addition, changes in pH, pO2, SO2, Na and glucose were noted 30 min after the initial sample had been taken. Conclusions: The variation in the values, despite being statistically significant, has no clinical relevance. Consequently, the recommendation continues to be to analyse the arterial blood gas sample at the earliest point, to ensure the highest reliability of the data and to provide the patient with the most appropriate treatment based on those results.
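The intra-group analysis described above compares each patient's value at a later time point against their own baseline (T0). A minimal sketch of that paired-difference computation, with invented PCO2 readings that are not data from the study:

```python
# Paired (intra-group) differences: later time point minus the same
# patient's baseline (T0). The readings below are invented placeholders.
from statistics import mean, stdev

pco2 = {  # hypothetical PCO2 readings (mmHg) for three patients
    "T0":  [40.1, 38.5, 42.0],
    "T15": [40.6, 39.1, 42.4],
}
diffs = [later - base for base, later in zip(pco2["T0"], pco2["T15"])]
print(round(mean(diffs), 2), round(stdev(diffs), 2))
```

A paired test on these differences is what distinguishes a statistically significant drift over storage time from one that is also clinically relevant, which is the study's central distinction.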

    Cervical pregnancy in assisted reproduction: an analysis of risk factors in 91,067 ongoing pregnancies

    Get PDF
    OBJECTIVE: To assess the frequency of cervical pregnancy (CP) in women undergoing assisted reproductive techniques (ART) and to ascertain its risk factors. DESIGN: Case-control study. Two control groups were established: tubal ectopic pregnancies and intrauterine pregnancies. SETTING: 25 private assisted reproduction clinics run by the same group in Spain. PATIENT(S): Women undergoing ART (artificial insemination, or IVF with own or donor oocytes). INTERVENTION(S): None. MAIN OUTCOME MEASURE(S): Frequency of CP; ascertainment of demographic and clinical risk factors; assessment of the influence of IVF parameters on CP risk. RESULT(S): There were 32 CPs out of 91,067 ongoing pregnancies, yielding a rate of 3.5/10,000. CPs represented 2.02% of all ectopic pregnancies (32/1582). The main risk factors were: ≥2 previous pregnancies (OR = 2.68; CI = 1.18-6.07), ≥2 previous miscarriages (OR = 4.21; CI = 1.70-10.43), ≥2 previous curettages (OR = 4.71; CI = 1.19-18.66) and smoking (OR = 2.82; CI = 1.14-6.94). A history of cesarean section or tubal pregnancy was not associated with an elevated CP risk. Infertility conditions and endometrial thickness were similar across the three groups. The proportion of women from whom <10 oocytes were retrieved was higher in the CP group than in either of the control groups. CONCLUSION(S): In ART, the main risk factors for cervical pregnancy are a history of at least 2 pregnancies, miscarriages or curettages, and smoking. IVF parameters do not seem to influence the development of CP. CP is less common in ART than previously reported, likely attributable to improvements in ART, although a publication bias cannot be ruled out in early IVF reports.
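The odds ratios and 95% confidence intervals reported above come from 2x2 case-control tables. For readers unfamiliar with the computation, the sketch below shows the standard cross-product formula with Wald confidence limits; the counts are invented for illustration only and are not the study's data.

```python
# Odds ratio (OR) and Wald 95% CI from a 2x2 table. Counts are hypothetical.
from math import log, exp, sqrt

a, b = 12, 20   # cases: exposed, unexposed
c, d = 30, 140  # controls: exposed, unexposed

or_ = (a * d) / (b * c)           # cross-product odds ratio
se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
lo = exp(log(or_) - 1.96 * se)    # lower 95% confidence limit
hi = exp(log(or_) + 1.96 * se)    # upper 95% confidence limit
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI whose lower bound stays above 1 (as for all four risk factors reported) is what marks the association as statistically significant at the 5% level.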