    Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME

    We present a heuristic-based algorithm to induce non-monotonic logic programs that explain the behavior of XGBoost-trained classifiers. We use LIME to locally select the most important features contributing to each classification decision. Then, to explain the model's global behavior, we propose LIME-FOLD, a heuristic-based inductive logic programming (ILP) algorithm capable of learning non-monotonic logic programs, which we apply to a transformed dataset produced by LIME. Our approach is agnostic to the choice of ILP algorithm. Experiments on standard UCI benchmarks suggest a significant improvement in classification evaluation metrics, while the number of induced rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system.
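    The local-explanation step described in the abstract can be sketched in a few lines: perturb an instance, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients rank the features. This is a minimal stand-in for the idea behind LIME, not the actual LIME library or the authors' LIME-FOLD pipeline; the kernel width, perturbation scale, and toy black-box function are illustrative assumptions.

    ```python
    import numpy as np

    def local_feature_ranking(predict_fn, x, n_samples=500, kernel_width=1.0, seed=0):
        """Rank features by local importance, LIME-style: perturb x,
        weight samples by proximity to x, fit a weighted linear surrogate."""
        rng = np.random.default_rng(seed)
        X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
        y = np.array([predict_fn(row) for row in X])
        d = np.linalg.norm(X - x, axis=1)
        w = np.exp(-(d ** 2) / kernel_width ** 2)        # proximity kernel
        A = np.hstack([X, np.ones((n_samples, 1))])      # add intercept column
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
        return np.argsort(-np.abs(coef[:-1]))            # features by |weight|

    # Toy black box standing in for an XGBoost model: feature 2 dominates locally.
    f = lambda row: 3.0 * row[2] + 0.1 * row[0]
    order = local_feature_ranking(f, np.zeros(4))
    ```

    In the paper's setting, `predict_fn` would be the trained XGBoost classifier and the top-ranked features would feed the dataset transformation consumed by the ILP learner.
    
    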

    Surplus Identification with Non-Linear Returns. ESRI WP522. December 2015

    We present evidence from two experiments designed to quantify the impact of cognitive constraints on consumers' ability to identify surpluses. Participants made repeated forced-choice decisions about whether products conferred surpluses, comparing one or two plainly perceptible attributes against displayed prices. Returns to attributes varied in linearity, scale and relative weight. Despite the apparent simplicity of this task, in which participants were incentivised and able to attend fully to all relevant information, surplus identification was surprisingly imprecise and subject to systematic bias. Performance was unaffected by monotonic non-linearities in returns, but non-monotonic non-linearities reduced the likelihood of detecting a surplus. Regardless of the shape of returns, learning was minimal and largely confined to initial exposures. Although product value was objectively determined, participants exhibited biases previously observed in subjective discrete choice, suggesting common cognitive mechanisms. These findings have implications for consumer choice models and for ongoing attempts to account for cognitive constraints in applied microeconomic contexts.

    Constrained Monotonic Neural Networks

    Wider adoption of neural networks in critical domains such as finance and healthcare is hindered by the need to explain their predictions and to impose additional constraints on them. A monotonicity constraint is one of the most frequently requested properties in real-world scenarios and is the focus of this paper. One of the oldest ways to construct a monotonic fully connected neural network is to constrain the signs of its weights. Unfortunately, this construction does not work with popular non-saturated activation functions, as it can only approximate convex functions. We show this shortcoming can be fixed by constructing two additional activation functions from a typical unsaturated monotonic activation function and employing each of them on a part of the neurons. Our experiments show this approach to building monotonic neural networks achieves better accuracy than other state-of-the-art methods, while being the simplest in the sense of having the fewest parameters and requiring no modifications to the learning procedure or post-learning steps. Finally, we prove it can approximate any continuous monotone function on a compact subset of R^n.
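    The sign-constraint idea from the abstract can be illustrated with a small sketch: weights are forced non-negative via an absolute value, and hidden units are split between an unsaturated activation (ReLU here) and its point reflection -relu(-z), so that both convex and concave shapes are representable while monotonicity is preserved. This is an assumed simplification, not the authors' exact construction; the abstract describes two additional activations, and this sketch uses only one of them for brevity.

    ```python
    import numpy as np

    relu = lambda z: np.maximum(z, 0.0)

    def monotone_mlp(x, W1, b1, w2, b2):
        """One hidden layer with non-negative weights; even-indexed units
        use relu (convex), odd-indexed units use its point reflection
        -relu(-z) (concave). Every piece is nondecreasing in x, so the
        whole network is monotone nondecreasing."""
        W1, w2 = np.abs(W1), np.abs(w2)   # enforce the weight-sign constraint
        z = x @ W1 + b1
        h = np.where(np.arange(z.shape[-1]) % 2 == 0, relu(z), -relu(-z))
        return h @ w2 + b2

    # Monotonicity holds for arbitrary (unconstrained) parameter draws.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)
    w2, b2 = rng.normal(size=8), rng.normal()
    xs = np.linspace(-3.0, 3.0, 200)[:, None]
    ys = monotone_mlp(xs, W1, b1, w2, b2)
    ```

    Because the constraint is a reparameterization (taking absolute values of the weights) rather than a training-time penalty, ordinary gradient-based training applies unchanged, which matches the abstract's claim of needing no modified learning procedure.
    
    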