
    Information Design and Sensitivity to Market Fundamentals

    I study the problem of firms that publicly disclose verifiable information to each other, in the form of Blackwell experiments, before engaging in strategic decisions. The designed signals can be interpreted either as statistical reports or as slices of physical quantities, i.e., market segments. Before the state of the world is realized, each firm chooses a signal policy, an estimation technique, about a private individual payoff state, and is then forced to publicize the results of the investigation to all other firms before engaging in price or quantity competition. Because signals are made public, when a firm tries to assess its own payoff state, it also ends up revealing the same information to its opponents. Full Disclosure enables companies to adapt to local market fundamentals at the expense of releasing crucial information to competitors. Partial Revelation, on the other hand, makes companies lose optimality of their decisions with respect to the true state of the world, but enables them to commit to an aggressive preclusion policy that increases the frequency of a favorable distribution of players' actions. Whereas Partial Revelation acts as a commitment device and precludes entry in otherwise competitive markets, inducing insensitivity of decisions to local fundamentals, decentralized decision making is a dominant strategy when the profile of competitors is constant across markets or when a company cannot influence the competitor's extensive-margin entry decision with more or less disclosure of information. Since decentralization acts as a way to correlate decisions with local market fundamentals, while running a single policy in multiple states of the world acts as a commitment device to deter competitors, I describe a trade-off between commitment over a distribution of actions and correlation with the states of the world.
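A Blackwell experiment can be written as a Markov kernel from payoff states to signal realizations; the informativeness trade-off in the abstract amounts to how far a receiver's posterior can move from the prior. The following minimal sketch (with an assumed uniform prior and an illustrative two-state, two-signal garbling) computes posteriors under a fully revealing experiment versus a partially revealing one:

```python
import numpy as np

# A Blackwell experiment as a Markov kernel from states to signals:
# rows index payoff states, columns index signal realizations.
prior = np.array([0.5, 0.5])          # assumed prior over two payoff states

full_disclosure = np.eye(2)           # each state sends a distinct signal
partial = np.array([[0.8, 0.2],       # garbled experiment: signals overlap,
                    [0.4, 0.6]])      # so posteriors stay closer to the prior

def posteriors(experiment, prior):
    """Posterior belief over states after each signal (Bayes' rule)."""
    joint = experiment * prior[:, None]        # P(state, signal)
    marginal = joint.sum(axis=0)               # P(signal)
    return (joint / marginal).T                # row i: P(state | signal i)

print(posteriors(full_disclosure, prior))      # degenerate posteriors
print(posteriors(partial, prior))              # posteriors bounded away from 0/1
```

Under full disclosure the posteriors are degenerate (competitors learn the state exactly), while the garbled experiment keeps every posterior interior, which is the sense in which partial revelation withholds information from opponents.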

    Count data models: an application to the demand for health care

    This work aims to identify and quantify the effect of the determinants of the demand for physician visits in Brazil. Using the health supplement of the 2003 PNAD national household survey, several count data models were estimated and compared according to statistical criteria. The chosen model was the Hurdle Negative Binomial, which is related to principal-agent theory as applied to Health Economics. The specification included socioeconomic variables such as income, gender, age, education, race, and region, as well as health-related variables such as morbidity, self-assessed health, type of provision (public or private), and health-insurance coverage. In addition, a shadow price was estimated for the public sector in order to incorporate the price of health services into the estimation. Overall, the work finds inequities in access to health care favoring high-income individuals.
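The hurdle negative binomial combines a binary model for whether any visit occurs with a zero-truncated negative binomial for how many visits occur given at least one. A minimal sketch of the probability mass function and a simulator, with illustrative (not estimated) parameter values:

```python
import numpy as np
from scipy.stats import nbinom, bernoulli

rng = np.random.default_rng(0)

def hurdle_nb_pmf(y, pi, r, p):
    """P(Y = y) under a hurdle model: `pi` is the probability of crossing
    the hurdle (any doctor visit); positive counts follow a zero-truncated
    negative binomial with size r and success probability p."""
    y = np.asarray(y)
    p_zero_nb = nbinom.pmf(0, r, p)                  # NB mass at zero
    trunc = nbinom.pmf(y, r, p) / (1.0 - p_zero_nb)  # zero-truncated pmf
    return np.where(y == 0, 1.0 - pi, pi * trunc)

def simulate(n, pi, r, p):
    """Draw visit counts: a Bernoulli hurdle, then a truncated NB."""
    cross = bernoulli.rvs(pi, size=n, random_state=rng)
    counts = np.zeros(n, dtype=int)
    for i in np.nonzero(cross)[0]:
        y = 0
        while y == 0:                                # rejection sampling
            y = nbinom.rvs(r, p, random_state=rng)
        counts[i] = y
    return counts

y = simulate(5000, pi=0.6, r=2.0, p=0.4)
print((y == 0).mean())   # close to 1 - pi = 0.4
```

The separation into two parts is what connects the model to agency theory: the patient decides whether to contact a physician (the hurdle), while the physician largely drives the number of subsequent visits.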

    Dynamic sparsity in time-varying covariance matrices via Cholesky decomposition

    In the present work, we consider variable selection and shrinkage for Gaussian Dynamic Linear Models (DLMs) within a Bayesian framework. In particular, we propose a novel method that accommodates time-varying sparsity, based on an extension of spike-and-slab priors to dynamic models. This is done by assigning appropriate priors to the time-varying coefficients' variances, extending the previous work of Ishwaran and Rao (2005). Our approach is similar to the Normal Gamma Autoregressive (NGAR) process of Kalli and Griffin (2014); nevertheless, we assume a Markov switching structure for the process variances instead of a Gamma Autoregressive (GAR) process. Furthermore, we investigate different priors, including the common Inverted Gamma prior for the process variances, and other mixture priors such as Gamma priors for both the spike and the slab, which leads to a mixture of Normal-Gamma priors (Griffin et al. (2010)) for the coefficients, as well as different distributions for the spike and the slab. In this sense, our prior can be viewed as a dynamic variable selection prior in which the coefficients either take values away from zero, following a more dispersed distribution (through the slab), or are shrunk towards zero (through the spike) at each time point. The MCMC scheme used for posterior computation employs Markov latent variables that can assume binary regimes at each time point to generate the coefficients' variances. In that way, our model is a dynamic mixture model, so we use the algorithm of Gerlach et al. (2000) to generate the latent variables without conditioning on the states (the time-varying coefficients). Finally, our approach is exemplified through simulated examples and a real data application.
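The data-generating mechanism described above can be illustrated with a short simulation: a time-varying coefficient follows a random walk whose innovation variance switches between a near-zero "spike" and a diffuse "slab" according to a two-state Markov chain. All parameter values below are illustrative assumptions, not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

T = 300
spike_var, slab_var = 1e-6, 0.25        # assumed spike-and-slab variances
P = np.array([[0.95, 0.05],             # transition matrix of the latent
              [0.10, 0.90]])            # binary regime s_t

s = np.zeros(T, dtype=int)              # Markov switching regimes
beta = np.zeros(T)                      # time-varying coefficient
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])           # regime transition
    var = slab_var if s[t] == 1 else spike_var    # pick current variance
    beta[t] = beta[t - 1] + rng.normal(0.0, np.sqrt(var))

x = rng.normal(size=T)
y = beta * x + rng.normal(0.0, 0.1, size=T)       # observed regression data

# In spike regimes the coefficient barely moves between time points,
# which is the "dynamic sparsity" the prior is designed to recover:
moves = np.abs(np.diff(beta))
print(moves[s[1:] == 0].max(), moves[s[1:] == 1].mean())
```

A posterior sampler would reverse this construction: given `y` and `x`, draw the regime path `s` (e.g. with the Gerlach et al. (2000) algorithm, which integrates out the states) and then the coefficient path `beta` conditional on the variances.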