38 research outputs found

    Invariant Manifolds for Competitive Systems in the Plane

    Full text link
    Let T be a competitive map on a rectangular region R ⊂ ℝ², and assume T is C¹ in a neighborhood of a fixed point x̄ ∈ R. The main results of this paper give conditions on T that guarantee the existence of an invariant curve C emanating from x̄ when both eigenvalues of the Jacobian of T at x̄ are nonzero and at least one of them has absolute value less than one, and establish that C is an increasing curve that separates R into invariant regions. The results apply to many hyperbolic and non-hyperbolic cases and can be effectively used to determine basins of attraction of fixed points of competitive maps or, equivalently, of equilibria of competitive systems of difference equations. Several applications to planar systems of difference equations with non-hyperbolic equilibria are given. Comment: 20 pages, 2 figures
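
    The eigenvalue hypotheses above can be checked numerically for a concrete competitive map. The sketch below is our own illustration, not code from the paper: it takes a planar Leslie-Gower competition model (a standard example of a competitive map, with parameters chosen by us so the interior fixed point is known in closed form), approximates the Jacobian there by central finite differences, and computes its eigenvalues with the 2x2 quadratic formula.

```python
import math

# Illustrative example (ours, not the paper's): a Leslie-Gower
# competition map T on the first quadrant of the plane.
def T(x, y, a=3.0, b=0.5, c=3.0, d=0.5):
    return a * x / (1 + x + b * y), c * y / (1 + d * x + y)

def jacobian(f, x, y, h=1e-6):
    """Central finite-difference Jacobian of a planar map at (x, y)."""
    fx1, fy1 = f(x + h, y)
    fx0, fy0 = f(x - h, y)
    gx1, gy1 = f(x, y + h)
    gx0, gy0 = f(x, y - h)
    return [[(fx1 - fx0) / (2 * h), (gx1 - gx0) / (2 * h)],
            [(fy1 - fy0) / (2 * h), (gy1 - gy0) / (2 * h)]]

def eigenvalues_2x2(J):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = math.sqrt(tr * tr - 4 * det)  # real for this Jacobian
    return (tr - disc) / 2, (tr + disc) / 2

# Interior fixed point: solves x + 0.5*y = 2 and 0.5*x + y = 2.
x_bar = y_bar = 4.0 / 3.0
lam1, lam2 = eigenvalues_2x2(jacobian(T, x_bar, y_bar))
# Both eigenvalues (1/3 and 7/9) are nonzero, and both have modulus
# below one, so the spectral conditions quoted above are satisfied.
```

For this choice of parameters the eigenvalues come out to 1/3 and 7/9, so the fixed point meets the eigenvalue conditions stated in the abstract (the theorem of course also requires the map to be competitive and C¹, which this model is).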

    Partial Robustness in Team Formation: Bridging the Gap between Robustness and Resilience

    No full text
    Team formation is the problem of deploying the least expensive team of agents while covering a set of skills. Once a team has been formed, some of the agents considered at the start may turn out to be defective, leaving some skills uncovered. Two solution concepts have recently been introduced to deal with this issue in a proactive manner: one may form a team that is robust to changes, so that after some agent losses all skills remain covered; or one may opt for a recoverable team, i.e., one that can be “repaired” in the worst case by hiring new agents while keeping the overall deployment cost minimal. In this paper, we introduce the problem of partially robust team formation (PR-TF). Partial robustness is a weaker form of robustness which guarantees a certain degree of skill coverage after some agents are lost. We analyze the computational complexity of PR-TF and provide a complete algorithm for it. The performance of our algorithm is empirically compared with existing methods for robust and recoverable team formation on a number of existing benchmarks and some newly introduced ones. Partial robustness is shown to be an interesting trade-off between (full) robustness and recoverability in terms of computational efficiency, skill coverage guarantees after agent losses, and repairability.
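
    To make the notion concrete, here is a brute-force checker under a hypothetical formalisation of ours (the paper's exact definitions may differ): a team is partially robust for parameters (k, alpha) if, after any loss of up to k agents, the survivors still cover at least a fraction alpha of the required skills.

```python
from itertools import combinations

def is_partially_robust(team, skills, k, alpha):
    """Check that after ANY loss of up to k agents, the surviving
    agents still cover at least a fraction alpha of `skills`.
    team: list of frozensets of skills (one per agent)."""
    agents = list(team)
    for lost in range(k + 1):
        for removed in combinations(range(len(agents)), lost):
            survivors = [a for i, a in enumerate(agents) if i not in removed]
            covered = set().union(*survivors) & skills if survivors else set()
            if len(covered) < alpha * len(skills):
                return False
    return True

# Toy instance (ours): four agents, four required skills.
skills = {"db", "ml", "ui", "ops"}
team = [frozenset({"db", "ml"}), frozenset({"ml", "ui"}),
        frozenset({"ui", "ops"}), frozenset({"db", "ops"})]
```

In this toy instance the team is fully robust to one agent loss (alpha = 1), but after two losses only a 75% coverage guarantee survives; this weakened guarantee is exactly what partial robustness captures, at exponentially fewer subset checks than full robustness for larger loss bounds would suggest only in the worst case.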

    Fair and Optimal Decision Trees: A Dynamic Programming Approach

    No full text
    Interpretable and fair machine learning models are required for many applications, such as credit assessment and criminal justice. Decision trees offer this interpretability, especially when they are small. Optimal decision trees are of particular interest because they offer the best possible performance for a given size. However, state-of-the-art algorithms for fair and optimal decision trees have scalability issues, often requiring several hours to find such trees even for small datasets. Previous research has shown that dynamic programming (DP) performs well for optimizing decision trees because it can exploit the tree structure. However, adding a global fairness constraint to a DP approach is not straightforward, because the global constraint violates the condition that subproblems should be independent. We show how such a constraint can be incorporated by introducing upper and lower bounds on final fairness values for partial solutions of subproblems, which enables early comparison and pruning. Our results show that our model can find fair and optimal trees several orders of magnitude faster than previous methods, and now also for larger datasets that were previously beyond reach. Moreover, we show that with this substantial improvement our method can find the full Pareto front in the trade-off between accuracy and fairness.
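
    For intuition on the kind of global fairness quantity involved, the snippet below (our illustration; the paper's exact metric and bounding scheme may differ) computes demographic parity difference, the gap in positive-prediction rates between sensitive groups. Because this quantity depends on all predictions at once, a partial tree only fixes some of them, which is why upper and lower bounds on its final value are needed before subtrees can be pruned.

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate
    across sensitive groups (0 = perfectly fair on this metric)."""
    rate = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rate[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

preds = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
# Both groups receive positive predictions at rate 2/3 -> difference 0.0
```

A bound for a partial solution follows the same pattern: the yet-undecided predictions are set all to 0 (or all to 1) per group, giving the extreme rates any completion could reach.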

    Learning Variable Activity Initialisation for Lazy Clause Generation Solvers

    No full text
    Contemporary research explores the possibilities of integrating machine learning (ML) approaches with traditional combinatorial optimisation solvers. Since optimisation hybrid solvers, which combine propositional satisfiability (SAT) and constraint programming (CP), dominate recent benchmarks, it is surprising that the literature has paid limited attention to machine learning approaches for hybrid CP–SAT solvers. We identify the technique of minimal unsatisfiable subsets as promising to improve the performance of the hybrid CP–SAT lazy clause generation solver Chuffed. We leverage a graph convolutional network (GCN) model, trained on an adapted version of the MiniZinc benchmark suite. The GCN predicts which variables belong to an unsatisfiable subset on CP instances; these predictions are used to initialise the activity score of Chuffed’s Variable-State Independent Decaying Sum (VSIDS) heuristic. We benchmark the ML-aided Chuffed on the MiniZinc benchmark suite and find a robust 2.5% gain over baseline Chuffed on MRCPSP instances. This paper thus presents the first, to our knowledge, successful application of machine learning to improve hybrid CP–SAT solvers, a step towards improved automatic solving of CP models. Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
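
    The initialisation idea can be sketched in a few lines (a mock-up of ours; Chuffed's internals differ): ML-predicted MUS-membership probabilities seed the activity table, after which the usual VSIDS bump-and-decay dynamics take over.

```python
def init_activities(mus_probs, scale=1.0):
    """Seed VSIDS activities from predicted probabilities that each
    variable belongs to a minimal unsatisfiable subset (MUS)."""
    return {v: scale * p for v, p in mus_probs.items()}

def bump_and_decay(activity, conflict_vars, bump=1.0, decay=0.95):
    """Standard VSIDS update: decay every score, then bump the
    variables involved in the latest conflict."""
    for v in activity:
        activity[v] *= decay
    for v in conflict_vars:
        activity[v] = activity.get(v, 0.0) + bump
    return activity

def pick_branch_var(activity, unassigned):
    """Branch on the unassigned variable with the highest activity."""
    return max(unassigned, key=lambda v: activity.get(v, 0.0))

# Hypothetical GCN output: x1 is most likely to sit in an unsat core,
# so it is explored first instead of starting from uniform activities.
activity = init_activities({"x1": 0.9, "x2": 0.1, "x3": 0.4})
```

The point of the initialisation is only to bias the early search; as conflicts accumulate, the bump-and-decay updates dominate and the solver's own experience takes over from the prediction.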

    Algorithms for partially robust team formation

    No full text
    In one of its simplest forms, team formation involves deploying the least expensive team of agents while covering a set of skills. While current algorithms are reasonably successful in computing the best teams, the resilience of such solutions to change remains an important concern: once a team has been formed, some of the agents considered at the start may turn out to be defective, leaving some skills uncovered. Two recently introduced solution concepts deal with this issue proactively: 1) form a team which is robust to changes, so that after some agent losses all skills remain covered; or 2) opt for a recoverable team, i.e., one that can be "repaired" in the worst case by hiring new agents while keeping the overall deployment cost minimal. In this paper, we introduce the problem of partially robust team formation (PR-TF). Partial robustness is a weaker form of robustness which guarantees a certain degree of skill coverage after some agents are lost. We analyze the computational complexity of PR-TF and provide two complete algorithms for it. We compare the performance of our algorithms with the existing methods for robust and recoverable team formation on several existing and newly introduced benchmarks. Our empirical study demonstrates that partial robustness offers an interesting trade-off between (full) robustness and recoverability in terms of computational efficiency, skill coverage guaranteed after agent losses, and repairability. This paper is an extended and revised version of the work reported by Schwind et al. (Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS’21), pp. 1154–1162, 2021).

    Expression and prognostic role of syndecan-2 in prostate cancer

    Get PDF
    Syndecans are a four-member family of transmembrane heparan sulphate proteoglycans with different functions in cell signalling, adhesion, cytoskeleton organization, migration, proliferation, and angiogenesis. Several studies have investigated the role of syndecan-2 (SDC2) in different carcinomas; however, only one of them focused on SDC2 in prostate cancer. SDC2 expression and its relationship with established prognostic features were assessed in a cohort of 86 patients treated with radical prostatectomy for clinically localized prostate adenocarcinoma. SDC2 expression was present in the majority of prostate cancers and absent in only 11.6% of cases. SDC2 expression was also recorded in cells of prostatic intraepithelial neoplasia, whereas normal prostatic epithelial tissue and stroma did not express SDC2. SDC2 overexpression in prostate cancer was significantly associated with established features indicative of worse prognosis, such as higher preoperative PSA (P=0.011), higher Gleason score (P<0.001), positive surgical margins (P<0.003), and extraprostatic extension of disease (P<0.003). Moreover, expression of SDC2 was associated with biochemical disease progression on univariate analysis (P<0.001). The study results support the potential role of SDC2 in prostatic carcinogenesis and cancer progression. Moreover, SDC2 could serve as an additional prognostic marker that might help further stratify the risk of disease progression in patients with prostate cancer.

    MurTree: Optimal Decision Trees via Dynamic Programming and Search

    No full text
    Decision tree learning is a widely used approach in machine learning, favoured in applications that require concise and interpretable models. Heuristic methods are traditionally used to quickly produce models with reasonably high accuracy. A commonly criticised point, however, is that the resulting trees may not necessarily be the best representation of the data in terms of accuracy and size. In recent years, this motivated the development of optimal classification tree algorithms that globally optimise the decision tree in contrast to heuristic methods that perform a sequence of locally optimal decisions. We follow this line of work and provide a novel algorithm for learning optimal classification trees based on dynamic programming and search. Our algorithm supports constraints on the depth of the tree and number of nodes. The success of our approach is attributed to a series of specialised techniques that exploit properties unique to classification trees. Whereas algorithms for optimal classification trees have traditionally been plagued by high runtimes and limited scalability, we show in a detailed experimental study that our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances, providing several orders of magnitude improvements and notably contributing towards the practical use of optimal decision trees.
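
    The dynamic-programming core of such approaches can be illustrated with a toy recursion (ours, far simpler than MurTree): choose the feature split minimising the sum of the optimal misclassification counts of the two independent subproblems, memoising on the instance subset and remaining depth.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_misclassification(instances, depth):
    """Minimum misclassifications achievable by a depth-bounded tree
    over binary features. instances: tuple of (features_tuple, label)."""
    labels = [y for _, y in instances]
    # Cost of stopping here with a majority-vote leaf.
    leaf = len(labels) - max(labels.count(0), labels.count(1)) if labels else 0
    if depth == 0 or leaf == 0:
        return leaf
    n_feat = len(instances[0][0])
    best = leaf
    for f in range(n_feat):
        left = tuple(r for r in instances if r[0][f] == 0)
        right = tuple(r for r in instances if r[0][f] == 1)
        # Subproblems are independent, so each side is solved optimally
        # on its own and the results simply add up.
        best = min(best, best_misclassification(left, depth - 1)
                         + best_misclassification(right, depth - 1))
    return best

data = (((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0))  # XOR
```

On the XOR dataset, a depth-1 tree (a single split) must misclassify two instances, while depth 2 achieves zero errors. MurTree's contribution lies in the specialised bounds, caching, and search techniques that make this basic recursion scale to tens of thousands of instances.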