The Relative Rigidity of Monopoly Pricing
This paper seeks to explain why monopolies keep their nominal prices constant for longer periods than do tight oligopolies. We provide two possible explanations: the first is based on the presence of a small fixed cost of changing prices; the second, on small costs of discovering the optimal price. The incentive to change price for duopolists producing differentiated products exceeds that of a single monopolistic firm which produces the same range of products as the duopoly.
DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods.
Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counterfactual explainer. Here counterfactuals are formed by replacing feature values from a nearest unlike neighbour (NUN) until an actionable change is observed. We show how widely adopted feature relevance-based explainers (e.g. LIME, SHAP) can inform DisCERN to identify the minimum subset of 'actionable features'. We demonstrate our DisCERN algorithm on five datasets in a comparative study with the widely used optimisation-based counterfactual approach DiCE. Our results demonstrate that DisCERN is an effective strategy to minimise the actionable changes necessary to create good counterfactual explanations.
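The substitution loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `nun_counterfactual`, the toy threshold classifier, and the hand-picked relevance ordering are all assumptions for the example (in DisCERN the ordering would come from LIME or SHAP relevance weights).

```python
def nun_counterfactual(query, nun, feature_order, predict, target):
    """Replace query feature values with NUN values, in relevance order,
    until the classifier's prediction flips to the target class."""
    candidate = list(query)
    changed = []
    for f in feature_order:
        candidate[f] = nun[f]
        changed.append(f)
        if predict(candidate) == target:
            break  # actionable change observed; stop substituting
    return candidate, changed

# Toy usage: a threshold "classifier" over three binary features.
predict = lambda x: 1 if sum(x) >= 2 else 0
cf, feats = nun_counterfactual([0, 0, 0], [1, 1, 1], [2, 1, 0], predict, 1)
# cf == [0, 1, 1]: only two of the three features needed changing
```

Because substitution stops as soon as the prediction flips, a good relevance ordering keeps the counterfactual close to the original query, which is the "minimum subset of actionable features" goal the abstract describes.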
Explanation-Based Auditing
To comply with emerging privacy laws and regulations, it has become common for applications like electronic health records systems (EHRs) to collect access logs, which record each time a user (e.g., a hospital employee) accesses a piece of sensitive data (e.g., a patient record). Using the access log, it is easy to answer simple queries (e.g., Who accessed Alice's medical record?), but this often does not provide enough information. In addition to learning who accessed their medical records, patients will likely want to understand why each access occurred. In this paper, we introduce the problem of generating explanations for individual records in an access log. The problem is motivated by user-centric auditing applications, and it also provides a novel approach to misuse detection. We develop a framework for modeling explanations which is based on a fundamental observation: for certain classes of databases, including EHRs, the reason for most data accesses can be inferred from data stored elsewhere in the database. For example, if Alice has an appointment with Dr. Dave, this information is stored in the database, and it explains why Dr. Dave looked at Alice's record. Large numbers of data accesses can be explained using general forms called explanation templates. Rather than requiring an administrator to manually specify explanation templates, we propose a set of algorithms for automatically discovering frequent templates from the database (i.e., those that explain a large number of accesses). We also propose techniques for inferring collaborative user groups, which can be used to enhance the quality of the discovered explanations. Finally, we have evaluated our proposed techniques using an access log and data from the University of Michigan Health System. Our results demonstrate that in practice we can provide explanations for over 94% of data accesses in the log. Comment: VLDB201
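The appointment example in the abstract can be sketched as a single explanation template: an access by user U to patient P's record is explained if the database stores an appointment linking P to U. The function below is a hypothetical illustration of that check, not the paper's framework; the names and the tuple schemas are assumptions made for the example.

```python
def explain_accesses(accesses, appointments):
    """Flag each access-log entry (user, patient) as explained when the
    database holds an appointment linking that patient to that user."""
    known = set(appointments)  # (patient, user) pairs stored elsewhere in the DB
    return [(user, patient, (patient, user) in known)
            for user, patient in accesses]

# Toy data: Dr. Dave's access is explained by Alice's appointment;
# Eve's access has no supporting record and would be flagged for review.
log = [("Dr. Dave", "Alice"), ("Eve", "Alice")]
appts = [("Alice", "Dr. Dave")]
explain_accesses(log, appts)
# -> [("Dr. Dave", "Alice", True), ("Eve", "Alice", False)]
```

The paper's contribution is discovering such templates automatically rather than hand-writing joins like this one; unexplained accesses are the interesting residue for misuse detection.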
Assumption 0 analysis: comparative phylogenetic studies in the age of complexity
Darwin's panoramic view of biology encompassed two metaphors: the phylogenetic tree, pointing to relatively linear (and divergent) complexity, and the tangled bank, pointing to reticulated (and convergent) complexity. The emergence of phylogenetic systematics half a century ago made it possible to investigate linear complexity in biology. Assumption 0, first proposed in 1986, is not needed for cases of simple evolutionary patterns, but must be invoked when there are complex evolutionary patterns whose hallmark is reticulated relationships. A corollary of Assumption 0, the duplication convention, was proposed in 1990, permitting standard phylogenetic systematic ontology to be used in discovering reticulated evolutionary histories. In 2004, a new algorithm, phylogenetic analysis for comparing trees (PACT), was developed specifically for use in analyses invoking Assumption 0. PACT can help discern complex evolutionary explanations for historical biogeographical, coevolutionary, phylogenetic, and tokogenetic processes.