
    A non-technical guide to instrumental variables and regressor-error dependencies (in Russian)

    We provide a non-technical summary of most of the recent results that have appeared in the econometric literature on instrumental variables estimation for the linear regression model. Standard estimators, such as OLS, are biased and inconsistent when the regressors are correlated with the error term. Instrumental variables methods were developed to overcome this problem, but finding instruments of good quality is difficult in practice, and empirical researchers are often confronted with weak instruments. We review most of the recent studies on weak instruments and point to several methods that have been proposed to deal with such instruments, including "frugal" IV alternatives that do not rely on observed instruments to identify the regression parameters in the presence of regressor-error dependencies.
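The bias-and-correction logic described above can be sketched with a one-regressor simulation. The data-generating process, coefficient values, and sample size below are illustrative assumptions, not taken from the paper:

```python
import random

rng = random.Random(1)
n = 5000
beta_true = 1.0

# z is a valid instrument: it shifts x but is independent of the error u
z = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]                      # error term
x = [zi + ui + rng.gauss(0, 0.5) for zi, ui in zip(z, u)]    # endogenous regressor
y = [beta_true * xi + ui for xi, ui in zip(x, u)]

def cov(a, b):
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / m

beta_ols = cov(x, y) / cov(x, x)   # biased upward, since x and u are correlated
beta_iv = cov(z, y) / cov(z, x)    # consistent for beta_true
```

The weak-instrument problem is visible in the last line: when cov(z, x) is close to zero, the IV estimate divides by a near-zero quantity and becomes erratic.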

    Solving and Testing for Regressor-Error (in)Dependence When no Instrumental Variables are Available: With New Evidence for the Effect of Education on Income

    This paper has two main contributions. First, we introduce a new approach, the latent instrumental variables (LIV) method, to estimate regression coefficients consistently in a simple linear regression model where regressor-error correlations (endogeneity) are likely to be present. The LIV method utilizes a discrete latent variable model that accounts for dependencies between regressors and the error term. As a result, additional ‘valid’ observed instrumental variables are not required. Furthermore, we propose a specification test based on Hausman (1978) to test for these regressor-error correlations. A simulation study demonstrates that the LIV method yields consistent estimates and the proposed test statistic has reasonable power over a wide range of regressor-error correlations and several distributions of the instruments.
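The identification idea behind LIV — a discrete latent variable that shifts the regressor but not the error — can be illustrated with an "oracle" version in which the latent group labels are known; in the actual method the groups are unobserved and estimated as a finite mixture. All numbers below are illustrative assumptions:

```python
import random

rng = random.Random(7)
n = 4000
beta_true = 1.0

g = [rng.random() < 0.5 for _ in range(n)]                   # latent binary instrument
u = [rng.gauss(0, 1) for _ in range(n)]                      # error, correlated with x
x = [2.0 * gi + ui + rng.gauss(0, 0.5) for gi, ui in zip(g, u)]
y = [beta_true * xi + ui for xi, ui in zip(x, u)]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)   # biased: x and u share the u component

# Wald-style grouping estimator using the latent groups (known here by construction)
y1 = mean([yi for yi, gi in zip(y, g) if gi])
y0 = mean([yi for yi, gi in zip(y, g) if not gi])
x1 = mean([xi for xi, gi in zip(x, g) if gi])
x0 = mean([xi for xi, gi in zip(x, g) if not gi])
beta_liv = (y1 - y0) / (x1 - x0)   # close to beta_true
```

Because the group shift affects x but not u, the ratio of group-mean differences recovers the structural coefficient without any observed instrument.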

    Addressing Endogeneity in International Marketing Applications of Partial Least Squares Structural Equation Modeling

    Partial least squares structural equation modeling (PLS-SEM) has become a key method in international marketing research. Users of PLS-SEM have, however, largely overlooked the issue of endogeneity, which has become an integral component of regression analysis applications. This lack of attention is surprising because the PLS-SEM method is grounded in regression analysis, for which numerous approaches for handling endogeneity have been proposed. To create awareness of the issue and to identify and treat endogeneity, this study introduces a systematic procedure that translates control variables, instrumental variables, and Gaussian copulas into a PLS-SEM framework. We illustrate the procedure's efficacy by means of empirical data and offer recommendations to guide international marketing researchers on how to effectively address endogeneity concerns in their PLS-SEM analyses.
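The Gaussian copula ingredient mentioned above has a simple mechanical core: the endogenous regressor is passed through its empirical CDF and then the inverse standard-normal CDF, and the resulting term is added as an extra control in the structural equation. The sketch below shows only that construction; the function name and the rank-based ECDF are illustrative choices, not the paper's exact procedure:

```python
from statistics import NormalDist

def gaussian_copula_term(x):
    """Map a continuous endogenous regressor to its Gaussian copula control term."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    nd = NormalDist()
    # ECDF scaled into (0, 1) to avoid infinite quantiles, then inverse-normal mapped
    return [nd.inv_cdf(r / (n + 1)) for r in ranks]
```

The term is monotone in the original regressor, and including it alongside the regressor absorbs the part of the error that is correlated with it (under the method's non-normality assumption on the regressor).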

    Latent instrumental variables: a new approach to solve for endogeneity

    This thesis aims at resolving problems surrounding classical independence assumptions in mixed linear models. Those assumptions involve independence of the regressors and the random coefficients and independence of the regressors and the (model) error term. To tackle the dependence between regressors and error terms we develop a general instrumental variable approach, the latent instrumental variable (LIV) method, where the instruments are unobserved and are estimated from the data. This leads to a finite mixture formulation. We prove identifiability and discuss estimation of the model parameters. Furthermore, we propose methodologies to investigate regressor and error dependencies. We present results of various simulation studies and illustrate the LIV method on previously published datasets. Our simulation results show that the LIV method yields consistent estimates for the model parameters without having observable instrumental variables at hand. We reanalyze data of three studies that examine the effect of education on income, where the variable ‘education’ is potentially endogenous due to omitted ‘ability’ or other causes. In all three applications we find an upward bias in the OLS estimates of approximately 7%.

    Properties of Instrumental Variables Estimation in Logit-Based Demand Models: Finite Sample Results

    Endogeneity problems in demand models occur when certain factors, unobserved by the researcher, affect both demand and the values of a marketing mix variable set by managers. For example, unobserved factors such as style, prestige, or reputation might result in higher prices for a product and higher demand for that product. If not addressed properly, endogeneity can bias the elasticities of the endogenous variable and subsequent optimization of the marketing mix. In practice, instrumental variables estimation techniques are often used to remedy an endogeneity problem. It is well known that, for linear regression models, the use of instrumental variables techniques with poor quality instruments can produce very poor parameter estimates, in some circumstances even worse than those that result from ignoring the endogeneity problem altogether. The literature has not addressed the consequences of using poor quality instruments to remedy endogeneity problems in nonlinear models, such as logit-based demand models. Using simulation methods, we investigate the effects of using poor quality instruments to remedy endogeneity in logit-based demand models applied to finite-sample datasets. The results show that, even when the conditions for lack of parameter identification due to poor quality instruments do not hold exactly, estimates of price elasticities can still be quite poor. That being the case, we investigate the relative performance of several nonlinear instrumental variables estimation procedures utilizing readily available instruments in finite samples. Our study highlights the attractiveness of the control function approach (Petrin and Train 2010) and readily available instruments, which together reduce the mean squared elasticity errors substantially for experimental conditions in which the theory-backed instruments are poor in quality. We find important effects for sample size, in particular for the number of brands: endogeneity problems are exacerbated as the number of brands increases, especially when poor quality instruments are used. In addition, the number of stores is found to be important for likelihood ratio testing. The results of the simulation are shown to generalize to situations under Nash pricing in oligopolistic markets, to conditions in which cross-sectional preference heterogeneity exists, and to nested logit and probit-based demand specifications as well. Based on the results of the simulation, we suggest a procedure for managing a potential endogeneity problem in logit-based demand models.
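The control function approach the abstract highlights (Petrin and Train 2010) has a simple first step: regress the endogenous price on the instrument, keep the residual, and include that residual as an extra covariate in the logit utility. The sketch below shows just that first stage; the data and function name are illustrative assumptions:

```python
def first_stage_residuals(price, instrument):
    """OLS of price on a single instrument; the residuals carry the endogenous variation."""
    n = len(price)
    mz = sum(instrument) / n
    mp = sum(price) / n
    b = sum((z - mz) * (p - mp) for z, p in zip(instrument, price)) / \
        sum((z - mz) ** 2 for z in instrument)
    a = mp - b * mz
    return [p - (a + b * z) for p, z in zip(price, instrument)]

# The residuals then enter the logit utility as an extra term, e.g.
# v_j = alpha * price_j + lam * resid_j + ..., so that alpha is purged of the
# price-error correlation.
```

Because the first stage includes an intercept, the residuals sum to zero by construction, which is a quick sanity check on the implementation.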

    Subgraph Sampling Methods for Social Networks: The Good, the Bad, and the Ugly

    The trajectories of social processes (e.g., peer pressure, imitation, and assimilation) that take place on social networks depend on the structure of those networks. Thus, to understand a social process or to predict the associated outcomes accurately, marketers would need good knowledge of the social network structure. However, many social networks of relevance to marketers are large, complex, or hidden, making it prohibitively expensive to map out an entire social network. Instead, marketers often need to work with a sample (i.e., a subgraph) of a social network. In this paper we evaluate the efficacy of nine different sampling methods for generating subgraphs that recover four structural characteristics of importance to marketers, namely, the distributions of degree, clustering coefficient, betweenness centrality, and closeness centrality, which are important for understanding how social network structure influences outcomes of processes that take place on the network. Via extensive simulations, we find that sampling methods differ substantially in their ability to recover network characteristics. Traditional sampling procedures, such as random node sampling, result in poor subgraphs. When the focus is on understanding local network effects (e.g., peer influence), forest fire sampling with a medium burn rate performs the best, i.e., it is most effective for recovering the distributions of degree and clustering coefficient. When the focus is on global network effects (e.g., speed of diffusion, identifying influential nodes, or the “multiplier” effects of network seeding), random-walk sampling (i.e., forest fire sampling with a low burn rate) performs the best, and it is most effective for recovering the distributions of betweenness and closeness centrality. Further, we show that accurate recovery of social network structure in a sample is important for inferring the properties of a network process, when one observes only the process in the sampled network. We validate our findings on four different real-world networks, including a Facebook network and a co-authorship network, and conclude with recommendations for practice.
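Forest fire sampling, which the abstract recommends for local network effects at a medium burn rate, can be sketched as follows. The restart rule, parameter names, and per-neighbor burn probability below are simplifying assumptions, not the paper's exact algorithm:

```python
import random

def forest_fire_sample(adj, target_size, burn_prob=0.4, seed=0):
    """Sample a subgraph by 'burning' from a seed node through random neighbors.

    adj: dict mapping each node to a list of its neighbors.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    target_size = min(target_size, len(nodes))
    visited = set()
    while len(visited) < target_size:
        # (re)ignite at an unvisited node whenever the fire dies out
        start = rng.choice([v for v in nodes if v not in visited])
        visited.add(start)
        frontier = [start]
        while frontier and len(visited) < target_size:
            node = frontier.pop()
            for nb in adj[node]:
                # each unvisited neighbor burns independently with burn_prob
                if nb not in visited and rng.random() < burn_prob:
                    visited.add(nb)
                    frontier.append(nb)
    return visited
```

With a low burn probability the fire rarely branches and the traversal behaves like the random-walk-style sampling the abstract associates with global network effects; a higher burn probability explores local neighborhoods more densely.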

    Beyond the target customer: social effects of customer relationship management campaigns

    Customer relationship management (CRM) campaigns have traditionally focused on maximizing the profitability of the targeted customers. The authors demonstrate that in business settings characterized by network externalities, a CRM campaign that is aimed at changing the behavior of specific customers propagates through the social network, thereby also affecting the behavior of nontargeted customers. Using a randomized field experiment involving nearly 6,000 customers of a mobile telecommunication provider, they find that the social connections of targeted customers increase their consumption and become less likely to churn, due to a campaign that was neither targeted at them nor offered them any direct incentives. The authors estimate a social multiplier of 1.28. That is, the effect of the campaign on first-degree connections of targeted customers is 28% of the effect of the campaign on the targeted customers. By further leveraging the randomized experimental design, the authors show that, consistent with a network externality account, the increase in activity among the nontargeted but connected customers is driven by the increase in communication between the targeted customers and their connections, making the local network of the nontargeted customers more valuable. These findings suggest that in targeting CRM marketing campaigns, firms should consider not only the profitability of the targeted customer but also the potential spillover of the campaign to nontargeted but connected customers.
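The multiplier arithmetic is straightforward: with the reported 28% spillover and, as a simplifying assumption, one affected first-degree connection per targeted customer, the campaign's total effect is 1.28 times its direct effect:

```python
direct_effect = 1.0            # effect on a targeted customer (normalized)
spillover_ratio = 0.28         # reported: connection effect is 28% of the direct effect
connections_per_target = 1.0   # assumption, chosen so the total reproduces 1.28

total_effect = direct_effect * (1 + spillover_ratio * connections_per_target)
multiplier = total_effect / direct_effect
```

With more than one affected connection per targeted customer, the same spillover ratio would imply a proportionally larger multiplier, which is why valuing only the targeted customer understates campaign returns.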

    Attribute-Level Heterogeneity

    Modeling consumer heterogeneity helps practitioners understand market structures and devise effective marketing strategies. In this research we study finite mixture specifications for modeling consumer heterogeneity where each regression coefficient has its own finite mixture, that is, an attribute finite mixture model. An important challenge of such an approach to modeling heterogeneity lies in its estimation. A proposed Bayesian estimation approach, based on recent advances in reversible jump Markov chain Monte Carlo (MCMC) methods, can estimate parameters for the attribute-based finite mixture model, assuming that the number of components for each finite mixture is a discrete random variable. An attribute specification has several advantages over traditional, vector-based, finite mixture specifications; specifically, the attribute mixture model offers a more appropriate aggregation of information than the vector specification, facilitating estimation. In an extensive simulation study and an empirical application, we show that the attribute model can recover complex heterogeneity structures, making it dominant over traditional (vector) finite mixture regression models and a strong contender compared with mixture-of-normals models for modeling heterogeneity.
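The structural difference between the two specifications can be shown directly: giving each coefficient its own small mixture implicitly spans the cross product of component combinations, which a vector specification would have to enumerate as one component per combination. The numbers below are illustrative, and the construction assumes independence between the coefficient-level mixtures:

```python
from itertools import product

# attribute-level specification: each coefficient gets its own (value, weight) mixture
b0_mixture = [(-1.0, 0.4), (1.0, 0.6)]   # intercept: 2 components
b1_mixture = [(0.5, 0.5), (2.0, 0.5)]    # slope: 2 components

# equivalent vector-mixture representation: one component per combination,
# with weights multiplying under the independence assumption
vector_mixture = [((v0, v1), w0 * w1)
                  for (v0, w0), (v1, w1) in product(b0_mixture, b1_mixture)]
```

Two attribute-level mixtures with two components each already imply four vector components; with more coefficients the vector representation grows multiplicatively, which is one way to see why the attribute specification aggregates information more parsimoniously.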