
    A very absolute Π¹₂ real singleton


    Delegating Infrastructure Projects with Open Access

    This paper provides a simple model that examines a firm's incentive to invest in network infrastructure through coalition formation in an open-access environment with a deregulated retail market. A regulator faces a dilemma between inducing an incentive for efficient investment and reducing the distortion generated by imperfect competition. We show that, in such a case, the degree of the investment's cost-reducing effect is crucial from a welfare point of view. In particular, when network investment through coalition formation creates a large (small) cost-reducing effect, the regulator can (should not) delegate the investment decision to firms with an appropriate level of access charge.
    Keywords: network infrastructure, coalition, access charge, delegation

    Some new results on decidability for elementary algebra and geometry

    We carry out a systematic study of decidability for theories of (a) real vector spaces, inner product spaces, and Hilbert spaces and (b) normed spaces, Banach spaces, and metric spaces, all formalised using a 2-sorted first-order language. The theories for list (a) turn out to be decidable, while the theories for list (b) are not even arithmetical: the theory of 2-dimensional Banach spaces, for example, has the same many-one degree as the set of truths of second-order arithmetic. We find that the purely universal and purely existential fragments of the theory of normed spaces are decidable, as is the AE (∀∃) fragment of the theory of metric spaces. These results are sharp of their type: reductions of Hilbert's 10th problem show that the EA (∃∀) fragments for metric and normed spaces and the AE fragment for normed spaces are all undecidable.
    Comment: 79 pages, 9 figures. v2: numerous minor improvements; neater proofs of Theorems 8 and 29. v3: fixed subscripts in proof of Lemma 3.
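
    To make the quantifier-prefix classes concrete, here is a hedged illustration (these example sentences are mine, not the paper's): an AE sentence has all universal quantifiers before all existential ones, and an EA sentence the reverse, as in the following sentences about a metric space with distance function d.

```latex
% AE (forall-exists): every point has other points arbitrarily close to it
\forall x\,\forall \varepsilon\,\exists y\;
  \bigl(\varepsilon > 0 \rightarrow (x \neq y \wedge d(x,y) < \varepsilon)\bigr)

% EA (exists-forall): some point is within distance 1 of every point
\exists x\,\forall y\; d(x,y) \leq 1
```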

    Liquidity risk and its measurement: modelling, analysis and computation

    The recent turbulence in financial markets, of which a famous casualty is the collapse of the Long-Term Capital Management hedge fund, has made market liquidity an issue of high concern to investors and risk managers. The latter group in particular has realised that financial models based on the assumption of perfectly liquid markets, where investors can trade large amounts of assets without affecting their prices, may fail miserably in circumstances where market liquidity vanishes. Understanding the robustness and reliability of models used for trading and risk management purposes is therefore crucially important in risk analysis. Part I of this thesis studies liquidity risk and its measurement via mean-reverting jump-diffusion processes. An efficient Monte Carlo method is suggested to find approximate VaR and CVaR for all percentiles with one set of samples from the loss distribution; the method applies to portfolios of securities as well as to a single security. Part II investigates the computational efficiency and flexibility of FFT-based option pricing methodologies. First, an empirical test of alternative two-factor stochastic volatility affine jump-diffusion models is conducted against an extensive S&P 500 index options data set, using a nonlinear ordinary least squares estimation framework. It is then shown how the two-dimensional FFT may be applied to the pricing of spread options, which have payoff functions and exercise regions that are nonlinear in the underlying log-asset prices. Furthermore, a non-affine four-factor stochastic volatility diffusion model is considered and an approximate CCF specification is derived.
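
    A minimal numpy sketch of the single-sort idea the abstract alludes to (the function name and the heavy-tailed toy loss model are mine; the thesis's actual estimator for mean-reverting jump-diffusion dynamics is not reproduced here): one sort of the simulated losses yields empirical VaR and CVaR at every percentile simultaneously.

```python
import numpy as np

def var_cvar_all_levels(losses):
    """Approximate VaR and CVaR at every percentile from a single
    set of Monte Carlo loss samples, using one sort."""
    s = np.sort(losses)                     # losses in ascending order
    n = len(s)
    levels = np.arange(1, n + 1) / n        # confidence levels alpha = k/n
    var = s                                 # VaR_alpha ~ empirical alpha-quantile
    tail_sums = np.cumsum(s[::-1])[::-1]    # tail_sums[k] = s[k] + ... + s[n-1]
    cvar = tail_sums / np.arange(n, 0, -1)  # CVaR_alpha ~ mean loss beyond VaR_alpha
    return levels, var, cvar

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=100_000)  # heavy-tailed toy loss sample
alpha, var, cvar = var_cvar_all_levels(losses)
k = int(0.99 * len(losses)) - 1              # index of the 99% level
print(f"VaR_99% ~ {var[k]:.3f}, CVaR_99% ~ {cvar[k]:.3f}")
```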

    Underidentification?

    We develop methods for testing the hypothesis that an econometric model is underidentified and for inferring the nature of the failed identification. By adopting a generalized-method-of-moments perspective, we feature the structural relations directly and allow for nonlinearity in the econometric specification. We establish the link between a test for overidentification and our proposed test for underidentification: if, after attempting to replicate the structural relation, we find substantial evidence against the overidentifying restrictions of an augmented model, this is evidence against underidentification of the original model.
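
    As background for the overidentification side of this link, the standard GMM overidentification statistic takes the following textbook form (a generic sketch; the paper's specific augmented-model construction is not reproduced here):

```latex
J_n \;=\; n\,\bar g_n(\hat\theta)^{\top}\,\hat W\,\bar g_n(\hat\theta)
\;\xrightarrow{\ d\ }\; \chi^2_{m-k},
\qquad
\bar g_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} g(z_i,\theta),
```

    with m moment conditions, k parameters, and Ŵ a consistent estimate of the optimal weighting matrix. Under the scheme described in the abstract, rejecting the overidentifying restrictions of the augmented model counts as evidence against underidentification of the original one.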

    SES gradient in psychological distress revisited: a dynamic perspective on the mediating effect of financial strain and mastery

    There is a well-established literature linking both psychological distress and mental disorders to the gradient of socioeconomic status (SES). According to the stress process model, or the life stress paradigm, SES can affect mental health in at least two ways: first, by creating situations in which lower-SES people tend to experience stressors in greater quantity; second, by enhancing (e.g., through underexposure to stress for high-SES people) or undermining (e.g., through overexposure to stress for low-SES people) coping resources that are beneficial to psychological well-being. While the stress process model underscores an intra-personal process in which changes in stress, resources, and distress are hypothesized to be inter-correlated within the same individual over time, most previous research testing the relevant hypotheses has been cross-sectional by design, focusing on between-person differences in stress, resources, and distress across the SES spectrum. Even among the exceptions that collected data at multiple occasions in time, the prevailing analytic approaches have failed to take into account individual variation in the trajectories (growth or decline) of stress, resources, and distress across time. This study extends previous research by using panel data and latent growth curve (LGC) modeling to examine the extent to which intra-individual changes in depressive symptoms are related to fluctuations in financial strain and mastery, which, in turn, are conditioned by chronic level of income as a relatively stable SES attribute. This study also adds to previous research by investigating the causal sequence between psychological distress, as indexed by depressive symptoms, and a major form of personal resources, as reflected in one's sense of mastery, since the two appear to be causally reciprocal in their strong inverse correlation as part of the general sense of demoralization.
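
    As a hedged illustration of the LGC setup (a generic linear growth specification, not necessarily the exact model fitted in the study), each person i's depressive-symptom score at wave t can be decomposed into a latent intercept and slope that may depend on a stable covariate x_i such as chronic income level:

```latex
y_{it} = \alpha_i + \beta_i\, t + \varepsilon_{it},\qquad
\alpha_i = \mu_\alpha + \gamma_\alpha x_i + \zeta_{\alpha i},\qquad
\beta_i  = \mu_\beta  + \gamma_\beta  x_i + \zeta_{\beta i},
```

    where the ζ terms capture individual variation in starting level and trajectory, so between-person SES differences and within-person change are modeled jointly.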

    The Role of Information in Multi-Agent Decision Making

    Networked multi-agent systems have become an integral part of many engineering systems, and collaborative decision making in such systems poses many challenges. In this thesis, we study the impact of information, and its availability to agents, on collaborative decision making in multi-agent systems. We first consider the problem of detecting Markov and Gaussian models from observed data using two observers. There are two Markov chains and two observers; each observer observes a different function of the state of the true, unknown Markov chain. Given the observations, the aim is to determine which of the two Markov chains generated them. We formulate a block binary hypothesis testing problem for each observer and show that each observer's decision is a function of its local likelihood ratio. We present a consensus scheme for the observers to agree on their beliefs, and the asymptotic convergence of the consensus decision to the true hypothesis is proven. A similar framework is considered for the detection of Gaussian models with two observers: a sequential hypothesis testing problem is formulated for each observer and solved using the local likelihood ratio, and we present a consensus scheme that accounts for the random and asymmetric stopping times of the observers. The notion of "value of information" is introduced to understand the "usefulness" of the information exchanged to achieve consensus.

    Next, we consider the binary hypothesis testing problem with two observers. There are two possible states of nature, and two synchronous observers collect observations that are statistically related to the true state. Given the observations, the objective of the observers is to collaboratively find the true state of nature. We consider centralized and decentralized approaches to the problem, each with two phases: (1) probability space construction, in which the true hypothesis is known and observations are collected to build empirical joint distributions between the hypothesis and the observations; and (2) hypothesis testing, in which, given a new set of observations, problems are formulated for the observers to find their individual beliefs about the true hypothesis. Consensus schemes for the observers to agree on their beliefs are presented. The rate of decay of the probability of error in the centralized approach is compared with the rate of decay of the probability of agreement on the wrong belief in the decentralized approach, and numerical results comparing the two approaches are presented.

    Not all propositions from an agent's set of events in a multi-agent system may be simultaneously verifiable. We study the concepts of event-state-operation structure and relationship of incompatibility from the literature and use them as tools to study the structure of the set of events. We present an example from multi-agent hypothesis testing where the set of events does not form a Boolean algebra but does form an ortholattice. A possible construction of a "noncommutative probability space" accounting for incompatible events (events which cannot be simultaneously verified) is discussed. As a decision-making problem in such a probability space, we consider binary hypothesis testing and present two approaches.
    In the first approach, we represent the available data as coming from measurements modeled via projection-valued measures (PVMs) and recover the results of the underlying detection problem solved using classical probability models. In the second approach, we represent the measurements using positive operator-valued measures (POVMs). We prove that the minimum probability of error achieved in the second approach is the same as in the first. Finally, we consider the binary hypothesis testing problem with learning of empirical distributions: the true distributions of the observations under either hypothesis are unknown, empirical distributions are estimated from observations, and a sequence of detection problems is solved using the sequence of empirical distributions. The convergence of the information state and of the optimal detection cost under the empirical distributions to their counterparts under the true distributions is shown, and numerical results on the convergence of the optimal detection cost are presented.
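
    As a hedged sketch of the recurring primitive in these chapters (a toy version with i.i.d. Gaussian observations and naive belief averaging; the thesis's Markov-chain setting, stopping rules, and consensus protocols are richer than this), each observer computes a local log-likelihood ratio and the observers then average their beliefs to reach a common decision:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary hypothesis test: H0 -> N(0, 1), H1 -> N(1, 1).
def log_likelihood_ratio(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """Summed per-sample log likelihood ratio of H1 vs H0."""
    return np.sum(((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2))

true_mean = 1.0                              # nature picks H1
base = rng.normal(true_mean, 1.0, size=200)  # underlying process
obs1 = base + rng.normal(0, 0.1, size=200)   # observer 1's noisy view
obs2 = base + rng.normal(0, 0.1, size=200)   # observer 2's noisy view

# Local beliefs: P(H1 | data) from each observer's own log LR (flat prior).
beliefs = [1.0 / (1.0 + np.exp(-log_likelihood_ratio(o))) for o in (obs1, obs2)]

# Consensus step: repeated averaging of beliefs until agreement.
b = np.array(beliefs)
for _ in range(10):
    b = np.full_like(b, b.mean())

decision = 1 if b[0] > 0.5 else 0
print(f"local beliefs = {beliefs}, consensus decision = H{decision}")
```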
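
    For the quantum-probability chapter, a standard reference point is the Helstrom bound (a known result; that the thesis's equality of PVM and POVM error rates is stated relative to exactly this form is my assumption). It gives the minimum error probability for discriminating two density operators ρ₀, ρ₁ with priors p₀, p₁ over all POVMs:

```latex
P_e^{\min} \;=\; \tfrac{1}{2}\Bigl(1 - \bigl\lVert p_1 \rho_1 - p_0 \rho_0 \bigr\rVert_1\Bigr),
\qquad \lVert A \rVert_1 = \operatorname{Tr}\sqrt{A^{\dagger}A}.
```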