
    On Formal Consistency between Value and Coordination Models

    In information systems (IS) engineering, different techniques for modeling inter-organizational collaborations are applied. In particular, value models estimate the profitability for the involved stakeholders, whereas coordination models are used to agree upon the inter-organizational processes before implementing them. In addition, during the execution of the inter-organizational collaboration, event logs are collected by the individual organizations, representing another view of the IS. The combination of the two models and the event log represents the IS, and they should therefore be consistent, i.e., not contradict each other. Since these models are provided by different user groups at design time and the event log is collected at run time, consistency is not straightforward. Inconsistency occurs when models contain conflicting descriptions of the same information, i.e., there exists a conflicting overlap between the models. In this paper we introduce an abstraction of value models, coordination models, and event logs which allows ensuring and maintaining alignment between the models and the event log. We demonstrate its use by outlining a proof of an inconsistency resolution result based on this abstraction. Thus, the introduction of abstractions makes it possible to explore formal inter-model relations based on consistency.
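    A minimal sketch (not the paper's formalism) of the "conflicting overlap" idea: treat each artifact's view of the shared information as a set of labeled transfers and flag disagreements between any two of them. All labels and the set-based encoding below are assumptions chosen only for illustration.

```python
# Illustrative only: each artifact describes the same inter-organizational
# collaboration; a conflicting overlap is information that one artifact
# describes and another one contradicts by omission.
value_model = {"payment", "goods_delivery"}                    # value transfers between actors
coordination_model = {"payment", "goods_delivery", "order"}    # modeled interactions
event_log = {"payment", "order"}                               # interactions observed at run time

def disagreements(a, b):
    """Transfers described by one artifact but missing from the other."""
    return a ^ b

print("value vs coordination:", disagreements(value_model, coordination_model))
print("coordination vs event log:", disagreements(coordination_model, event_log))
```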

    Beliefs and Dynamic Consistency

    In this chapter, we adopt the decision-theoretic approach to the representation and updating of beliefs. We take up this issue and propose a reconsideration of Hammond's argument. After reviewing the argument more formally, we propose a weaker notion of dynamic consistency. We observe that this notion does not imply the full-fledged sure thing principle, thus leaving some room for models that are not based on expected utility maximization. However, these models still do not account for "imprecision averse" behavior such as the one exhibited in the Ellsberg experiment, which is captured by non-Bayesian models such as the multiple prior model. We therefore pursue the argument and establish that such non-Bayesian models possess the weak form of dynamic consistency when the information considered consists of a reduction in imprecision (in the Ellsberg example, some information about the proportion of Black and Yellow balls).
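    A hedged illustration of the multiple prior (maxmin expected utility) model in the three-colour Ellsberg urn the abstract refers to: 30 red balls and 60 balls that are black or yellow in an unknown proportion. The prior set and payoffs below are assumptions chosen only to reproduce the classic ambiguity-averse pattern; they are not taken from the chapter.

```python
# Maxmin expected utility: the value of an act is its worst expected payoff
# over a set of priors (the "multiple prior" model).
def maxmin_value(payoff, priors):
    return min(sum(p[c] * payoff[c] for c in payoff) for p in priors)

# P(red) = 1/3 is known; the number of black balls b ranges over 0..60.
priors = [{"red": 1/3, "black": b / 90, "yellow": (60 - b) / 90} for b in range(61)]

bet_red             = {"red": 100, "black": 0,   "yellow": 0}
bet_black           = {"red": 0,   "black": 100, "yellow": 0}
bet_red_or_yellow   = {"red": 100, "black": 0,   "yellow": 100}
bet_black_or_yellow = {"red": 0,   "black": 100, "yellow": 100}

print(maxmin_value(bet_red, priors), maxmin_value(bet_black, priors))                      # ~33.3 vs 0.0
print(maxmin_value(bet_red_or_yellow, priors), maxmin_value(bet_black_or_yellow, priors))  # ~33.3 vs ~66.7
```

    Learning the proportion of black and yellow balls shrinks the prior set; this reduction in imprecision is the kind of information under which, per the abstract, the weak form of dynamic consistency is retained.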

    Consistency of the maximum likelihood estimator for general hidden Markov models

    Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for V-uniformly ergodic Markov chains. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/10-AOS834 by the Institute of Mathematical Statistics (http://www.imstat.org).
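    A small sketch of what MLE consistency means in the simplest special case covered by the result: a two-state finite HMM with Bernoulli emissions and a single unknown emission parameter theta, everything else fixed. The model, parameter values, and grid search below are assumptions for illustration only; the paper's setting is far more general.

```python
# Simulate a two-state HMM, evaluate the likelihood with the forward algorithm,
# and maximize it over a grid; with enough data the maximizer lands near the truth.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.2, 0.8]])        # transition matrix (assumed known)

def simulate(theta, n):
    """Hidden chain; emission P(y=1 | state 0) = theta, P(y=1 | state 1) = 1 - theta."""
    x, ys = 0, []
    for _ in range(n):
        x = rng.choice(2, p=A[x])
        p1 = theta if x == 0 else 1 - theta
        ys.append(int(rng.random() < p1))
    return np.array(ys)

def log_likelihood(theta, ys):
    """Forward recursion with per-step normalisation (uniform initial distribution)."""
    alpha, ll = np.full(2, 0.5), 0.0
    for y in ys:
        emis = np.array([theta if y else 1 - theta, (1 - theta) if y else theta])
        alpha = (alpha @ A) * emis
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
    return ll

ys = simulate(theta=0.3, n=5000)
grid = np.linspace(0.05, 0.95, 181)
theta_hat = grid[np.argmax([log_likelihood(t, ys) for t in grid])]
print("MLE of theta:", theta_hat)   # should be close to the true value 0.3
```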

    Testing for Consistency using Artificial Regressions

    We consider several issues related to what Hausman (1978) called "specification tests", namely tests designed to verify the consistency of parameter estimates. We first review a number of results about these tests in linear regression models, and present some new material on their distribution when the model being tested is false, and on a simple way to improve their power in certain cases. We then show how, in a general nonlinear setting, they may be computed as "score" tests by means of slightly modified versions of any artificial linear regression that can be used to calculate Lagrange multiplier tests, and explore some implications of this result. We show how to create a variant of the information matrix test to test for parameter consistency. We examine conventional tests and our new variant in the context of binary choice models, and provide a simple way to compute both tests based on artificial regressions. Some Monte Carlo evidence suggests that the most common form of the information matrix test can be extremely badly behaved even in samples of quite large size.
    Keywords: Durbin-Hausman tests; information matrix tests; binary choice models; Lagrange multiplier tests
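    A hedged sketch of a Durbin-Wu-Hausman style consistency test computed through an auxiliary ("artificial") regression in the textbook linear case: regress the suspect regressor on an instrument, add the first-stage residuals to the structural equation, and test their coefficient. The data generating process below is an assumption used only to show the mechanics; it is not the paper's nonlinear construction.

```python
# A significant coefficient on the first-stage residuals signals that plain OLS
# on (1, x) would give inconsistent parameter estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)                          # instrument
u = rng.normal(size=n)                          # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)      # endogenous regressor (correlated with u)
y = 1.0 + 2.0 * x + u

def ols(X, y):
    """Coefficients, residuals, and conventional standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, resid, se

ones = np.ones(n)
_, v_hat, _ = ols(np.column_stack([ones, z]), x)            # first stage: x on the instrument
beta, _, se = ols(np.column_stack([ones, x, v_hat]), y)     # augmented structural regression
print("t-statistic on first-stage residuals:", beta[2] / se[2])
```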

    Semi-supervised cross-entropy clustering with information bottleneck constraint

    In this paper, we propose a semi-supervised clustering method, CEC-IB, that models data with a set of Gaussian distributions and that retrieves clusters based on a partial labeling provided by the user (partition-level side information). By combining the ideas from cross-entropy clustering (CEC) with those from the information bottleneck method (IB), our method trades off three conflicting goals: the accuracy with which the data set is modeled, the simplicity of the model, and the consistency of the clustering with the side information. Experiments demonstrate that CEC-IB has a performance comparable to Gaussian mixture models (GMM) in a classical semi-supervised scenario, but is faster, more robust to noisy labels, automatically determines the optimal number of clusters, and performs well when not all classes are present in the side information. Moreover, in contrast to other semi-supervised models, it can be successfully applied to discovering natural subgroups if the partition-level side information is derived from the top levels of a hierarchical clustering.
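    This is not CEC-IB itself; it is a hedged baseline illustrating the classical scenario the abstract compares against: fit a Gaussian mixture, then measure how consistent the recovered clusters are with a partial labeling (side information) on a small labeled subset. The dataset, the 10% labeling rate, and the NMI consistency measure are assumptions for illustration.

```python
# Baseline sketch: GMM clustering evaluated against partition-level side information.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.metrics import normalized_mutual_info_score

X, y = make_blobs(n_samples=600, centers=4, cluster_std=1.2, random_state=0)

# Partial labeling: only 10% of the points carry side information.
rng = np.random.default_rng(0)
labelled = rng.random(len(y)) < 0.10

gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
clusters = gmm.predict(X)

print("NMI on the labelled subset:",
      normalized_mutual_info_score(y[labelled], clusters[labelled]))
```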