
    Estimating Kullback-Leibler Divergence Using Kernel Machines

    Recently, a method called the Mutual Information Neural Estimator (MINE), which uses neural networks, has been proposed to estimate mutual information and, more generally, the Kullback-Leibler (KL) divergence between two distributions. The method uses the Donsker-Varadhan representation to arrive at the estimate of the KL divergence and is better than existing estimators in terms of scalability and flexibility. However, the output of the MINE algorithm is not guaranteed to be a consistent estimator. We propose a new estimator that, instead of searching among functions characterized by neural networks, searches over functions in a Reproducing Kernel Hilbert Space. We prove that the proposed estimator is consistent. We carry out simulations and show that when the datasets are small the proposed estimator is more reliable than the MINE estimator, and when the datasets are large the performance of the two methods is comparable.
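
    For reference, the Donsker-Varadhan representation that both estimators optimize can be written (in our notation) as

        D_{KL}(P \| Q) = \sup_{T \in \mathcal{F}} \; \mathbb{E}_{P}[T] - \log \mathbb{E}_{Q}\!\left[ e^{T} \right],

    where MINE takes the function class \mathcal{F} to be a family of neural networks, while the estimator proposed here restricts \mathcal{F} to a Reproducing Kernel Hilbert Space.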

    Joint Concordance Index

    Existing metrics in competing-risks survival analysis, such as concordance and accuracy, do not evaluate a model's ability to jointly predict the event type and the event time. To address these limitations, we propose a new metric, which we call the joint concordance. The joint concordance measures a model's ability to predict the overall risk profile, i.e., the risk of death from different event types. We develop a consistent estimator for the new metric that accounts for the censoring bias. We use the new metric to develop a variable importance ranking approach. Using real and synthetic data experiments, we show that models selected using the existing metrics are worse than those selected using joint concordance at jointly predicting the event type and event time. We show that the existing approaches for variable importance ranking often fail to recognize the importance of event-specific risk factors, whereas the proposed approach does not, since it compares risk factors based on their contribution to the prediction of the different event types. To summarize, joint concordance is helpful for model comparisons and variable importance ranking, and has the potential to impact applications such as risk-stratification and treatment planning in multimorbid populations.
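
    As a rough illustration only - not the paper's estimator, which additionally corrects for censoring bias - a naive cause-specific concordance on fully observed data could be computed as in the sketch below; the function name and data layout are our own assumptions.

        import numpy as np

        def naive_eventwise_concordance(event_times, event_types, risk_scores):
            """Naive concordance per event type on uncensored data.

            event_types are integer codes 0..K-1 indexing the columns of
            risk_scores, where risk_scores[i, k] is the predicted risk of
            subject i for event type k. Censoring is deliberately ignored,
            unlike the consistent estimator proposed in the paper."""
            results = {}
            for k in np.unique(event_types):
                concordant, comparable = 0, 0
                for i in np.where(event_types == k)[0]:      # subjects with event k
                    for j in range(len(event_times)):
                        if event_times[i] < event_times[j]:  # i failed before j
                            comparable += 1
                            if risk_scores[i, k] > risk_scores[j, k]:
                                concordant += 1
                results[int(k)] = concordant / comparable if comparable else float("nan")
            return results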

    Risk-Stratify: Confident Stratification Of Patients Based On Risk

    A clinician desires to use a risk-stratification method that achieves confident risk-stratification - the risk estimates of the different patients reflect the true risks with high probability. This allows him/her to use these risks to make accurate predictions about prognosis and decisions about screening and treatment for the current patient. We develop Risk-stratify, a two-phase algorithm designed to achieve confident risk-stratification. In the first phase, we grow a tree to partition the covariate space. Each node in the tree is split using statistical tests that determine whether the risks of the child nodes are different; the choice of test depends on whether the data is censored (log-rank test) or not (U-test). The set of leaves of the tree forms a partition. The risk distribution of patients that belong to a leaf differs from that of its sibling leaf, but not necessarily from the rest of the leaves; therefore, some leaves with similar underlying risks are incorrectly specified as having different risks. In the second phase, we develop a novel recursive graph decomposition approach to address this problem: we merge the leaves of the tree that have similar risks to form new leaves that constitute the final output. We apply Risk-stratify to a cohort of patients (with no history of cardiovascular disease) from UK Biobank and assess their risk for cardiovascular disease. Risk-stratify significantly improves risk-stratification, i.e., a lower fraction of the groups have over- or under-estimated risks (measured in terms of false discovery rate; 33% reduction) in comparison to state-of-the-art methods for cardiovascular prediction (random forests, Cox model, etc.). We find that the Cox model significantly overestimates the risk of 21,621 patients out of 216,211 patients. Risk-stratify can accurately categorize 2,987 of these 21,621 patients as low-risk individuals.
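
    The first (tree-growing) phase can be sketched for the uncensored case as below; this is our simplified illustration using a Mann-Whitney U-test on median splits, not the paper's algorithm, which also handles censored data via the log-rank test and adds the second leaf-merging phase.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def grow_tree(X, y, alpha=0.01, min_leaf=50):
            """Recursively partition the covariate space, keeping a split only if
            the outcome distributions of the two children differ (p < alpha)."""
            best = None
            for f in range(X.shape[1]):
                thr = np.median(X[:, f])
                left, right = X[:, f] <= thr, X[:, f] > thr
                if left.sum() < min_leaf or right.sum() < min_leaf:
                    continue
                p = mannwhitneyu(y[left], y[right]).pvalue
                if p < alpha and (best is None or p < best[0]):
                    best = (p, f, thr, left, right)
            if best is None:                                 # no significant split: leaf
                return {"leaf": True, "n": len(y), "risk": float(np.mean(y))}
            _, f, thr, left, right = best
            return {"leaf": False, "feature": f, "threshold": float(thr),
                    "left": grow_tree(X[left], y[left], alpha, min_leaf),
                    "right": grow_tree(X[right], y[right], alpha, min_leaf)}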

    Dynamic Matching and Allocation of Tasks

    In many two-sided markets, the parties to be matched have incomplete information about their characteristics. We consider settings where the parties engaged are extremely patient and are interested in long-term partnerships; hence, once the final matches are determined, they persist for a long time. Each side has an opportunity to learn (some) relevant information about the other before final matches are made. For instance, clients seeking workers to perform tasks often conduct interviews that require the workers to perform some tasks and thereby provide information to both sides. The performance of a worker in such an interview - and hence the information revealed - depends both on the inherent characteristics of the worker and the task and also on the actions taken by the worker (e.g., the effort expended), which are not observed by the client. Thus there is moral hazard. Our goal is to derive a dynamic matching mechanism that facilitates learning on both sides before final matches are achieved and ensures that the worker side does not have an incentive to obscure the learning of their characteristics through their actions. We derive such a mechanism and show that it leads to final matchings that achieve optimal performance (revenue) in equilibrium. We show that the equilibrium strategy is long-run coalitionally stable, which means that no subset of workers and clients can gain by deviating from the equilibrium strategy. We derive all the results under the modeling assumption that the utilities of the agents are defined as the limit of means of the utility obtained in each interaction.

    Optimal Piecewise Local-Linear Approximations

    Existing works on "black-box" model interpretation use local-linear approximations to explain the predictions made for each data instance in terms of the importance assigned to the different features for arriving at the prediction. These works provide instance-wise explanations and thus give a local view of the model. To be able to trust the model, it is important to understand the global model behavior, and there are relatively few works that do so. Piecewise local-linear models provide a natural way to extend local-linear models to explain the global behavior of the model. In this work, we provide a dynamic programming based framework to obtain piecewise approximations of the black-box model. We also provide provable fidelity guarantees, i.e., guarantees on how well the explanations reflect the black-box model. We carry out simulations on synthetic and real datasets to show the utility of the proposed approach. Finally, we show that the ideas developed for our framework can also be used to address the problem of clustering one-dimensional data. We give a polynomial-time algorithm and prove that it achieves optimal clustering.
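
    To make the dynamic-programming idea concrete, here is a compact sketch of the one-dimensional clustering problem mentioned above: partitioning sorted 1-D data into k contiguous segments so as to minimize total within-segment squared error. This is a standard O(k n^2) DP written for illustration, not the paper's exact fidelity-optimizing routine.

        import numpy as np

        def optimal_1d_clustering(x, k):
            """Optimal partition of 1-D data into k contiguous clusters of the
            sorted values, minimizing the sum of within-cluster squared errors."""
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            pre = np.concatenate(([0.0], np.cumsum(x)))
            pre2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

            def sse(i, j):                   # cost of placing x[i:j] in one cluster
                s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
                return s2 - s * s / m

            cost = np.full((k + 1, n + 1), np.inf)
            split = np.zeros((k + 1, n + 1), dtype=int)
            cost[0, 0] = 0.0
            for c in range(1, k + 1):
                for j in range(c, n + 1):
                    for i in range(c - 1, j):
                        cand = cost[c - 1, i] + sse(i, j)
                        if cand < cost[c, j]:
                            cost[c, j], split[c, j] = cand, i
            # Backtrack to recover the cluster boundaries.
            bounds, j = [], n
            for c in range(k, 0, -1):
                i = split[c, j]
                bounds.append((i, j))
                j = i
            return float(cost[k, n]), bounds[::-1]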

    The user base dynamics of websites

    In this work we study for the first time the interaction between marketing and network effects. We build a model in which an online firm starts with an initial user base and controls the growth of the user base by choosing the intensity of advertisements and referrals to potential users. A large user base provides more profits to the online firm, but building a large user base through advertisements and referrals is costly; therefore, the optimal policy must balance the marginal benefits of adding users against the marginal costs of sending advertisements and referrals. Our work offers three main insights: (1) The optimal policy prescribes that a new online firm should offer many advertisements and referrals initially, but then decrease them over time. (2) If the network effects decrease, then the change in the optimal policy depends heavily on two factors: (i) the level of patience of the online firm, where patient online firms are oriented towards long-term profits and impatient online firms are oriented towards short-term profits, and (ii) the size of the user base. If the online firm is very patient (impatient) and the network effects decrease, then the optimal policy prescribes being more (less) aggressive in posting advertisements and referrals at low user-base levels and less (more) aggressive at high user-base levels. (3) The change in the optimal policy when network effects decrease also depends heavily on the heterogeneity in the user base, as measured in terms of the revenue generated by each user. An online firm that generates most of its revenue from a core group of users should be more aggressive and protective of its user base than a firm that generates revenue uniformly from its users.

    Towards a Theory of Societal Co-Evolution: Individualism versus Collectivism

    Substantial empirical research has shown that the level of individualism vs. collectivism is one of the most important determinants of societal traits, such as economic growth, economic institutions and health conditions. But the exact nature of this impact has thus far not been well understood in an analytical setting. In this work, we develop one of the first theoretical models that analytically studies the impact of individualism-collectivism on society. We model the growth of an individual's welfare (wealth, resources and health) as depending not only on the individual himself, but also on the level of collectivism, i.e., the level of dependence on the rest of the individuals in the society, which leads to a co-evolutionary setting. Based on our model, we are able to predict the impact of individualism-collectivism on various societal metrics, such as average welfare, average lifetime, total population, cumulative welfare and average inequality. We analytically show that individualism has a positive impact on average welfare and cumulative welfare, but comes with the drawbacks of lower average lifetime, lower total population and higher average inequality.

    Efficient Interference Management Policies for Femtocell Networks

    Managing interference in a network of macrocells underlaid with femtocells presents an important yet challenging problem. A majority of spatial (frequency/time) reuse based approaches partition the users based on coloring the interference graph, which is shown to be suboptimal. Some spatial-time reuse based approaches schedule the maximal independent sets (MISs) in a cyclic, (weighted) round-robin fashion, which is inefficient for delay-sensitive applications. Our proposed policies schedule the MISs in a non-cyclic fashion and aim to optimize any given network performance criterion for delay-sensitive applications while fulfilling minimum throughput requirements of the users. Importantly, we do not take the interference graph as given, as in existing works; we propose an optimal construction of the interference graph. We prove that, under certain conditions, the proposed policy achieves the optimal network performance. For large networks, we propose a low-complexity algorithm for computing the proposed policy. We show that the computed policy achieves a constant competitive ratio (with respect to the optimal network performance) that is independent of the network size, under a wide range of deployment scenarios. The policy can be implemented in a decentralized manner by the users. Compared to existing policies, our proposed policies can achieve improvements of up to 130% in large-scale deployments.
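
    As a toy illustration of the scheduling object involved - not the proposed policy itself - the sketch below greedily builds a few maximal independent sets of an interference graph and, in each slot, serves the heaviest one, where the node weights would come from the chosen network performance criterion; the graph representation and weights are our assumptions.

        import random

        def greedy_mis(adj, order):
            """Build one maximal independent set by scanning nodes in the given order."""
            chosen = set()
            for v in order:
                if all(u not in chosen for u in adj[v]):
                    chosen.add(v)
            return frozenset(chosen)

        def schedule_slot(adj, weights, num_candidates=20):
            """Among a few randomly generated maximal independent sets, serve the one
            with the largest total weight (a toy non-cyclic scheduling step)."""
            nodes = list(adj)
            candidates = {greedy_mis(adj, random.sample(nodes, len(nodes)))
                          for _ in range(num_candidates)}
            return max(candidates, key=lambda s: sum(weights[v] for v in s))

        # Example: 4 users on a line; edges denote harmful interference.
        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        weights = {0: 1.0, 1: 3.0, 2: 0.5, 3: 2.0}
        print(schedule_slot(adj, weights))   # almost surely frozenset({1, 3}), the heaviest MIS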

    Self-organizing Networks of Information Gathering Cognitive Agents

    In many scenarios, networks emerge endogenously as cognitive agents establish links in order to exchange information. Network formation has been widely studied in economics, but only on the basis of simplistic models that assume that the value of each additional piece of information is constant. In this paper we present a first model and associated analysis for network formation under the much more realistic assumption that the value of each additional piece of information depends on the type of that piece of information and on the information already possessed: information may be complementary or redundant. We model the formation of a network as a non-cooperative game in which the actions are the formation of links and the benefit of forming a link is the value of the information exchanged minus the cost of forming the link. We characterize the topologies of the networks emerging at a Nash equilibrium (NE) of this game and compare the efficiency of equilibrium networks with the efficiency of centrally designed networks. To quantify the impact of information redundancy and linking cost on social information loss, we provide estimates for the Price of Anarchy (PoA); to quantify the impact on individual information loss, we introduce and provide estimates for a measure we call Maximum Information Loss (MIL). Finally, we consider the setting in which agents are not endowed with information, but must produce it. We show that the validity of the well-known "law of the few" depends on how information aggregates; in particular, the "law of the few" fails when information displays complementarities.
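
    To illustrate the complementary-versus-redundant distinction in code - a toy construction of ours, not the paper's model - suppose each agent holds a set of information types and the value of new information is the number of distinct types not already available; an agent then forms a link only when that marginal value exceeds the linking cost.

        def marginal_value(own_info, neighbor_info, candidate_info):
            """Redundant types add nothing: only types not already possessed or
            obtained through existing neighbors count as new information."""
            already = set(own_info).union(*neighbor_info)
            return len(set(candidate_info) - already)

        def best_response_links(agent_info, others, cost):
            """Greedily add links while the marginal information value exceeds the
            linking cost (a toy myopic best response, most informative candidates first)."""
            links, neighbor_info = [], []
            for name, info in sorted(others.items(), key=lambda kv: -len(kv[1])):
                if marginal_value(agent_info, neighbor_info, info) > cost:
                    links.append(name)
                    neighbor_info.append(info)
            return links

        # Agent holding type {1}; C's information is redundant given a link to B.
        others = {"B": {1, 2, 3}, "C": {2, 3}, "D": {4}}
        print(best_response_links({1}, others, cost=0.5))   # ['B', 'D']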

    Evolution of Social Networks: A Microfounded Model

    Many societies are organized in networks that are formed by people who meet and interact over time. In this paper, we present a first model to capture the micro-foundations of social network evolution, where boundedly rational agents of different types join the network, meet other agents stochastically over time, and consequently decide to form social ties. A basic premise of our model is that in real-world networks, agents form links by reasoning about the benefits that agents they meet over time can bestow. We study the evolution of the emerging networks in terms of friendship and popularity acquisition given the following exogenous parameters: structural opportunism, type distribution, homophily, and social gregariousness. We show that the time needed for an agent to find "friends" is influenced by the exogenous parameters: agents who are more gregarious, more homophilic, less opportunistic, or belong to a type "minority" spend a longer time on average searching for friendships. Moreover, we show that preferential attachment is a consequence of an emerging doubly preferential meeting process: a process that guides agents of a certain type to meet more popular similar-type agents with a higher probability, thereby creating asymmetries in the popularity evolution of different types of agents.
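
    The "doubly preferential meeting" effect can be illustrated with a small simulation of our own construction (not the paper's exact process): each newcomer of a random type meets an existing agent with probability proportional to that agent's popularity, boosted by a homophily factor for same-type agents.

        import random
        from collections import Counter

        def simulate(steps=1000, types=("A", "B"), type_probs=(0.7, 0.3), homophily=3.0):
            """Toy popularity dynamics: meeting probability is popularity-weighted and
            further boosted when the newcomer and the existing agent share a type."""
            agents = [{"type": t, "popularity": 1} for t in types]   # seed one agent per type
            for _ in range(steps):
                t = random.choices(types, type_probs)[0]
                weights = [a["popularity"] * (homophily if a["type"] == t else 1.0)
                           for a in agents]
                partner = random.choices(agents, weights)[0]
                partner["popularity"] += 1                           # forming a tie raises popularity
                agents.append({"type": t, "popularity": 1})
            return Counter(a["type"] for a in agents), max(agents, key=lambda a: a["popularity"])

        counts, star = simulate()
        print(counts, star["type"])   # popular agents tend to come from the majority type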