22 research outputs found

    Data and code for the paper New heuristic to choose a cylindrical algebraic decomposition variable ordering motivated by complexity analysis

    This repository contains all the data and code necessary to generate the results and figures presented in the 2022 CASC paper of Tereso del Río and Matthew England. It contains a dataset in a .csv file, a .tex file describing the dataset, and seven .py files containing the code used to analyse the dataset. To generate the figures in the paper and the dataset used to create the table in the paper, run the Python script 'run_for_paper.py' inside the folder 'Code'.

    Research data supporting the thesis "Systems level analysis of non-model organisms: a tool for understanding environmental stress"

    Data created as part of a PhD research project. The data contain test results describing site effects for mussels sampled from an industrial harbour. This process is based on challenging the model created from reference-site data.

    Explainable AI Insights for Symbolic Computation: A case study on selecting the variable ordering for cylindrical algebraic decomposition

    This toolbox supports the results in the following publication: Pickering, L., del Río, T., England, M. and Cohen, K., 2023. Explainable AI Insights for Symbolic Computation: A case study on selecting the variable ordering for cylindrical algebraic decomposition. arXiv preprint arXiv:2304.12154. Abstract: In recent years there has been increased use of machine learning (ML) techniques within mathematics, including symbolic computation, where ML may be applied safely to optimise or select algorithms. This paper explores whether using explainable AI (XAI) techniques on such ML models can offer new insight for symbolic computation, inspiring new implementations within computer algebra systems that do not directly call upon AI tools. We present a case study on the use of ML to select the variable ordering for cylindrical algebraic decomposition. It has already been demonstrated that ML can make the choice well, but here we show how the SHAP tool for explainability can be used to inform new heuristics of a size and complexity similar to the human-designed heuristics currently in common use in symbolic computation.