
    Derivation of Monotone Decision Models from Non-Monotone Data

    The objective of data mining is the extraction of knowledge from databases. In practice, one often encounters difficulties with models that are constructed purely by search, without incorporating knowledge about the domain of application. In economic decision making, such as credit loan approval or risk analysis, one often requires models that are monotone with respect to the decision variables involved. If the model is obtained by a blind search through the data, it mostly does not have this property, even if the underlying database is monotone. In this paper, we present methods to enforce monotonicity of decision models. We propose measures to express the degree of monotonicity of the data and an algorithm to make data sets monotone. In addition, it is shown that monotone decision trees derived from cleaned data perform better than trees derived from raw data.
    Keywords: decision models; knowledge; decision theory; operational research; data mining
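
    The abstract does not spell out its monotonicity measure or cleaning algorithm, so the following Python sketch is only one natural reading of both: the degree of monotonicity as the fraction of comparable example pairs that respect the label ordering, and cleaning as the upward monotone closure of the labels. All function names are hypothetical.

```python
# Illustrative sketch only: the paper's exact measure and cleaning
# algorithm are not given in the abstract.
from itertools import combinations

def dominates(x, y):
    """True if x <= y in every attribute (componentwise partial order)."""
    return all(a <= b for a, b in zip(x, y))

def degree_of_monotonicity(X, y):
    """Fraction of comparable example pairs that respect monotonicity:
    whenever X[a] <= X[b] componentwise, we expect y[a] <= y[b]."""
    comparable = violated = 0
    for i, j in combinations(range(len(X)), 2):
        for a, b in ((i, j), (j, i)):
            if dominates(X[a], X[b]):
                comparable += 1
                if y[a] > y[b]:
                    violated += 1
    return 1.0 if comparable == 0 else 1.0 - violated / comparable

def make_monotone(X, y):
    """Raise each label to the maximum label among the examples the
    point dominates (the upward monotone closure). Because dominance
    is transitive, one pass removes all violations."""
    y = list(y)
    for i in range(len(X)):
        for j in range(len(X)):
            if dominates(X[j], X[i]) and y[j] > y[i]:
                y[i] = y[j]
    return y
```

    On an already-monotone data set, degree_of_monotonicity returns 1.0 and make_monotone leaves the labels unchanged; on noisy data, the cleaned labels can then be fed to an ordinary decision-tree learner.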

    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

    Sampling Correctors

    In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, this paper introduces the concept of a sampling corrector. Such algorithms use structure that the distribution is purported to have in order to allow one to make "on-the-fly" corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy data and the end user. We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms as well as to achieve improved algorithms for those tasks. As a first step, we show how to design sampling correctors using proper learning algorithms. We then focus on the question of whether algorithms for sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case when also granted query access to the cumulative distribution function. We also obtain sampling correctors for monotonicity without this stronger type of access, provided that the distribution is originally very close to monotone (namely, at a distance $O(1/\log^2 n)$). In addition, we consider a restricted error model that aims at capturing "missing data" corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than achievable by the learning approach. We also consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process.
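
    The abstract's first step, building a sampling corrector from a proper learner, can be made concrete. The Python sketch below is an illustrative assumption rather than the paper's construction: it learns an empirical histogram over {0, ..., n-1} from samples, projects it onto non-increasing sequences with the pool-adjacent-violators algorithm, and then answers sample requests from the projected (hence monotone) distribution.

```python
# Hypothetical learning-based corrector: learn, project onto the
# monotone cone, resample. All names are illustrative.
import random
from collections import Counter

def pav_nonincreasing(v):
    """L2 projection of v onto non-increasing sequences
    (pool-adjacent-violators)."""
    blocks = []  # list of (sum, count) for merged blocks
    for x in v:
        blocks.append((x, 1))
        # Merge while block means increase left to right (a violation).
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] < blocks[-1][0] / blocks[-1][1]):
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

class MonotoneSamplingCorrector:
    """Filter between noisy samples and the end user: emits samples
    from the monotone distribution nearest (in L2) to the estimate."""

    def __init__(self, samples, n):
        # Assumes samples are integers in range(n).
        counts = Counter(samples)
        est = [counts[i] / len(samples) for i in range(n)]
        proj = pav_nonincreasing(est)  # preserves the total mass of 1
        total = sum(proj)
        self.p = [x / total for x in proj]
        self.support = list(range(n))

    def sample(self):
        """Return one sample from the corrected distribution."""
        return random.choices(self.support, weights=self.p, k=1)[0]
```

    Note that this learning-based route pays the full sample complexity of learning up front; the abstract's point is precisely that, under closeness-to-monotone or missing-data assumptions, correction can be done more cheaply than this baseline.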

    Compressive Sensing Technique for Mitigating Nonlinear Memory Effects in Radar Receivers


    Transmission phase lapses in quantum dots: the role of dot-lead coupling asymmetry

    Lapses of transmission phase in transport through quantum dots are ubiquitous already in the absence of interaction, in which case their precise location is determined by the signs and magnitudes of the tunnelling matrix elements. However, actual measurements for a quantum dot embedded in an Aharonov-Bohm interferometer show systematic sequences of phase lapses separated by Coulomb peaks -- an issue that attracted much attention and generated controversy. Using a two-level quantum dot as an example, we show that this phenomenon can be accounted for by the combined effect of asymmetric dot-lead couplings (left lead/right lead asymmetry as well as different level broadening for different levels) and interaction-induced "population switching" of the levels, rendering this behaviour generic. We construct and analyse a mean field scheme for an interacting quantum dot, and investigate the properties of the mean field solution, paying special attention to the character of its dependence (continuous vs. discontinuous) on the chemical potential or gate voltage.
    Comment: 34 LaTeX pages in IOP format, 9 figures; misprints corrected
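
    The "population switching" the abstract invokes can be illustrated with a minimal numerical sketch. The Python code below assumes a generic Hartree-style mean field for a two-level dot with very different level broadenings: each level's occupation follows a Lorentzian filling formula and is shifted by the interaction times the other level's occupation. The formula and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical Hartree-style mean-field sketch of population switching
# in a two-level dot with asymmetric level broadenings.
import math

def occupation(eps_eff, gamma, mu):
    """Zero-temperature occupation of a Lorentzian-broadened level."""
    return 0.5 + math.atan2(mu - eps_eff, gamma / 2) / math.pi

def mean_field_occupations(eps1, eps2, gamma1, gamma2, U, mu,
                           tol=1e-10, max_iter=10_000, mix=0.1):
    """Self-consistent occupations: each level is shifted by U times
    the occupation of the other level (Hartree decoupling)."""
    n1 = n2 = 0.5
    for _ in range(max_iter):
        new1 = occupation(eps1 + U * n2, gamma1, mu)
        new2 = occupation(eps2 + U * n1, gamma2, mu)
        # Linear mixing for stability near discontinuous switches.
        new1 = (1 - mix) * n1 + mix * new1
        new2 = (1 - mix) * n2 + mix * new2
        if abs(new1 - n1) + abs(new2 - n2) < tol:
            return new1, new2
        n1, n2 = new1, new2
    return n1, n2

# Sweep a gate voltage (rigid downward shift of both levels) and watch
# for the broad level filling first, then handing its charge to the
# narrow level -- the population switch associated with a phase lapse.
for step in range(-30, 31):
    vg = step * 0.1
    n1, n2 = mean_field_occupations(eps1=-vg, eps2=0.2 - vg,
                                    gamma1=1.0, gamma2=0.05, U=2.0, mu=0.0)
    print(f"Vg={vg:+.1f}  n_broad={n1:.3f}  n_narrow={n2:.3f}")
```

    Whether the occupations evolve continuously or jump during the sweep is exactly the continuous-vs-discontinuous dependence on gate voltage that the abstract says the paper analyses.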
