42,488 research outputs found

    Robustness and edge addition strategy of air transport networks : a case study of 'the Belt and Road'

    Air transportation is of great importance in "the Belt and Road" (the B&R) region. The achievement of the B&R initiative relies on the availability, reliability, and safety of air transport infrastructure. A fundamental step is to identify the elements critical to network performance. Given the uneven distributions of population and economy, the current literature, which focuses on centrality measures in unweighted networks, is not sufficient for the B&R region. By differentiating power and centrality in the B&R region, our analysis leads to two conclusions: (1) Deactivating powerful nodes causes a larger decrease in efficiency than deactivating central nodes, indicating that powerful nodes in the B&R region are more critical than central nodes for network robustness. (2) Strategically adding edges between high-power and low-power nodes can enhance the network's ability to exchange resources efficiently. These findings can inform government policies on air transport configuration so as to achieve the best network performance in the most cost-effective way.
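
    The comparison in the abstract can be reproduced in miniature. Below is a minimal Python sketch (using networkx) contrasting the efficiency loss from deactivating a "powerful" node versus a "central" node. The paper's exact power measure is not given here, so node strength (total incident edge weight) stands in for power and betweenness for centrality; both choices, and the toy graph, are assumptions of this sketch.

        import itertools
        import networkx as nx

        def efficiency(G, weight="weight"):
            # Average inverse shortest-path length over ordered node pairs.
            n = G.number_of_nodes()
            dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
            total = sum(1.0 / dist[u][v] for u, v in itertools.permutations(G, 2)
                        if v in dist[u])
            return total / (n * (n - 1))

        def efficiency_drop(G, node):
            # Efficiency lost when a single node is deactivated.
            H = G.copy()
            H.remove_node(node)
            return efficiency(G) - efficiency(H)

        G = nx.Graph()
        G.add_weighted_edges_from([("A", "B", 3), ("B", "C", 1), ("A", "C", 2),
                                   ("C", "D", 5), ("D", "E", 1), ("B", "E", 4)])

        strength = {v: sum(d["weight"] for _, _, d in G.edges(v, data=True)) for v in G}
        bc = nx.betweenness_centrality(G, weight="weight")
        print("powerful node removed:", efficiency_drop(G, max(strength, key=strength.get)))
        print("central node removed:", efficiency_drop(G, max(bc, key=bc.get)))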

    General-to-Specific Model Selection Procedures for Structural Vector Autoregressions

    Structural vector autoregressive (SVAR) models have emerged as a dominant research strategy in empirical macroeconomics, but they suffer from the large number of parameters employed and the resulting estimation uncertainty associated with their impulse responses. In this paper we propose general-to-specific model selection procedures to overcome these limitations. After showing that single-equation procedures are efficient for the reduction of the SVAR, but generally not for the reduction of its reduced form, the proposed reduction procedure is computer-automated using PcGets and its small-sample properties are evaluated in a realistic Monte Carlo experiment. The model selection procedure is shown to recover the DGP specification from a large unrestricted SVAR model with controlled size and power. The impulse responses generated by the selected SVAR are compared to those of the unrestricted and reduced VAR and found to be more precise and accurate. The proposed reduction strategy is then applied to the US monetary system considered by Christiano, Eichenbaum and Evans (1996). Although the selection process is hampered by the misspecification of the unrestricted VAR, the results are consistent with the Monte Carlo evidence and question the validity of the impulse responses generated by the full system.
    Keywords: Model selection, Impulse responses, Vector autoregression, Structural VAR, Causal order, Data mining
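
    For intuition, here is a minimal Python sketch of the single-equation general-to-specific step: start from the unrestricted regression and repeatedly drop the regressor with the smallest |t|-ratio until all survivors clear a critical value. PcGets adds multi-path search and diagnostic testing that this sketch omits, and the data-generating process below is invented purely for illustration.

        import numpy as np

        def gets_reduce(y, X, names, t_crit=1.96):
            # Drop the least significant regressor until all |t| >= t_crit.
            keep = list(range(X.shape[1]))
            while len(keep) > 1:
                Xk = X[:, keep]
                beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
                resid = y - Xk @ beta
                sigma2 = resid @ resid / (len(y) - len(keep))
                se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
                t_ratios = np.abs(beta / se)
                worst = int(np.argmin(t_ratios))
                if t_ratios[worst] >= t_crit:
                    break                 # all retained regressors significant
                del keep[worst]           # eliminate the weakest regressor
            return [names[i] for i in keep]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(size=200)  # DGP uses x0, x2 only
        print(gets_reduce(y, X, ["x0", "x1", "x2", "x3"]))        # expect ['x0', 'x2']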

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 1: Modeling

    The literature on recursive estimation of structure and motion from monocular image sequences comprises a large number of different models and estimation techniques. We propose a framework that allows us to derive and compare all models by following the idea of dynamical system reduction. The "natural" dynamic model, derived from the rigidity constraint and perspective projection, is first reduced by explicitly decoupling structure (depth) from motion. Then implicit decoupling techniques are explored, which consist of imposing that some function of the unknown parameters is held constant. By appropriately choosing such a function, not only can we account for all models seen so far in the literature, but we can also derive novel ones.
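
    As a reference point, the "natural" model alluded to above couples rigid-body point dynamics with perspective projection; a standard continuous-time form (sign conventions for the motion vary across the literature) is:

        % Rigid motion of scene points X_i in the camera frame, observed
        % through perspective projection; omega and v are the rotational
        % and translational velocities, and X_{i,3} is the depth.
        \dot{X}_i = \omega \times X_i + v, \qquad
        y_i = \pi(X_i) = \left( \frac{X_{i,1}}{X_{i,3}},\; \frac{X_{i,2}}{X_{i,3}} \right),
        \qquad i = 1, \dots, N .

    Explicitly decoupling structure from motion then amounts to parameterizing X_i = Z_i (y_{i,1}, y_{i,2}, 1)^T and carrying the depths Z_i as separate states.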

    Algorithmic Identification of Probabilities

    The problem is to identify a probability associated with a set of natural numbers, given an infinite data sequence of elements from the set. If the given sequence is drawn i.i.d. and the probability mass function involved (the target) belongs to a computably enumerable (c.e.) or co-computably enumerable (co-c.e.) set of computable probability mass functions, then there is an algorithm to almost surely identify the target in the limit. The technical tool is the strong law of large numbers. If the set is finite and the elements of the sequence are dependent while the sequence is typical in the sense of Martin-L\"of for at least one measure belonging to a c.e. or co-c.e. set of computable measures, then there is an algorithm to identify in the limit a computable measure for which the sequence is typical (there may be more than one such measure). The technical tool is the theory of Kolmogorov complexity. We give the algorithms and consider the associated predictions.
    Comment: 19 pages, LaTeX. Corrected errors and rewrote the entire paper. arXiv admin note: text overlap with arXiv:1208.500
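
    The first result can be illustrated with a finite candidate set. The Python sketch below draws i.i.d. from a hidden target pmf over {0, 1, 2} and, after each observation, conjectures the candidate closest to the empirical frequencies; by the strong law of large numbers the conjecture almost surely settles on the target. The candidate list is invented for illustration, and the paper's c.e./co-c.e. setting is far more general.

        import random
        from collections import Counter

        candidates = [(0.5, 0.3, 0.2), (0.1, 0.1, 0.8), (0.3, 0.3, 0.4)]
        target = candidates[1]          # hidden pmf to be identified

        counts = Counter()
        for n in range(1, 5001):
            x = random.choices([0, 1, 2], weights=target)[0]  # i.i.d. draw
            counts[x] += 1
            empirical = tuple(counts[k] / n for k in (0, 1, 2))
            # Conjecture the candidate nearest the empirical frequencies.
            conjecture = min(candidates,
                             key=lambda p: max(abs(a - b)
                                               for a, b in zip(p, empirical)))
        print("conjecture after", n, "draws:", conjecture)  # a.s. = target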

    Identifying the consequences of dynamic treatment strategies: A decision-theoretic overview

    We consider the problem of learning about and comparing the consequences of dynamic treatment strategies on the basis of observational data. We formulate this within a probabilistic decision-theoretic framework. Our approach is compared with related work by Robins and others: in particular, we show how Robins's 'G-computation' algorithm arises naturally from this decision-theoretic perspective. Careful attention is paid to the mathematical and substantive conditions required to justify the use of this formula. These conditions revolve around a property we term stability, which relates the probabilistic behaviours of observational and interventional regimes. We show how an assumption of 'sequential randomization' (or 'no unmeasured confounders'), or an alternative assumption of 'sequential irrelevance', can be used to infer stability. Probabilistic influence diagrams are used to simplify manipulations, and their power and limitations are discussed. We compare our approach with alternative formulations based on causal DAGs or potential response models. We aim to show that formulating the problem of assessing dynamic treatment strategies as a problem of decision analysis brings clarity, simplicity and generality.
    Comment: 49 pages, 15 figures
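
    To make the G-computation formula concrete, here is a Python sketch for a two-stage problem with binary covariates L1, L2, treatments A1, A2 and outcome Y. All conditional laws below are toy numbers invented for illustration; under the stability conditions discussed in the abstract, the interventional mean of Y is obtained by fixing treatments according to the strategy while averaging the covariates under their observational laws.

        p_l1 = {0: 0.6, 1: 0.4}                      # P(L1 = l1)

        def p_l2(l1, a1):                            # P(L2 = 1 | L1, A1)
            return 0.2 + 0.3 * l1 + 0.2 * a1

        def mean_y(l1, a1, l2, a2):                  # E[Y | L1, A1, L2, A2]
            return 1.0 + a1 + 2 * a2 + 0.5 * l1 + l2

        def g_formula(g1, g2):
            # E[Y] when treatments are set by the strategy (g1, g2) while
            # covariates follow their observational conditional laws.
            total = 0.0
            for l1 in (0, 1):
                a1 = g1(l1)
                for l2 in (0, 1):
                    w = p_l1[l1] * (p_l2(l1, a1) if l2 == 1 else 1 - p_l2(l1, a1))
                    total += w * mean_y(l1, a1, l2, g2(l1, a1, l2))
            return total

        # "Treat when the latest covariate is raised" versus "never treat":
        print(g_formula(lambda l1: l1, lambda l1, a1, l2: l2))
        print(g_formula(lambda l1: 0, lambda l1, a1, l2: 0))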

    A Theory of Formal Synthesis via Inductive Learning

    Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants where the types of counterexamples generated by the oracle vary. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
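
    The CEGIS instance of OGIS is easy to sketch. In the toy Python example below the candidate space is the constants c in f(x) = x + c, the learner proposes any candidate consistent with the examples seen so far, and the verification oracle answers with a counterexample input; the specification and all names are invented for illustration.

        def spec(x):                      # hidden specification to be met
            return x + 7

        def learner(examples):            # propose a c consistent with examples
            for c in range(100):
                if all(x + c == y for x, y in examples):
                    return c
            return None

        def verifier(c):                  # return a counterexample input, or None
            for x in range(100):
                if x + c != spec(x):
                    return x
            return None

        examples = [(0, spec(0))]         # seed with one input-output example
        while True:
            c = learner(examples)
            cex = verifier(c)
            if cex is None:
                break                     # candidate meets the spec everywhere
            examples.append((cex, spec(cex)))   # feed the counterexample back
        print("synthesized: f(x) = x +", c)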

    Inductive Definition and Domain Theoretic Properties of Fully Abstract Models for PCF and PCF^+

    A construction of fully abstract typed models for PCF and PCF^+ (i.e., PCF + "parallel conditional function"), respectively, is presented. It is based on general notions of sequential computational strategies and wittingly consistent non-deterministic strategies introduced by the author in the seventies. Although these notions of strategies are old, the definition of the fully abstract models is new, in that it is given level-by-level in the finite type hierarchy. To prove full abstraction and non-dcpo domain theoretic properties of these models, a theory of computational strategies is developed. This is also an alternative and, in a sense, an analogue to the later game strategy semantics approaches of Abramsky, Jagadeesan, and Malacaria; Hyland and Ong; and Nickau. In both cases of PCF and PCF^+ there are definable universal (surjective) functionals from numerical functions to any given type, respectively, which also makes each of these models unique up to isomorphism. Although such models are non-omega-complete and therefore not continuous in the traditional terminology, they are also proved to be sequentially complete (a weakened form of omega-completeness), "naturally" continuous (with respect to existing directed "pointwise", or "natural" lubs) and also "naturally" omega-algebraic and "naturally" bounded complete -- appropriate generalisations of the ordinary notions of domain theory to the case of non-dcpos.
    Comment: 50 pages
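
    For readers unfamiliar with PCF^+, the "parallel conditional function" that separates it from PCF can be sketched in Python, with the undefined value (bottom) modelled as None; unlike the sequential conditional, it returns an answer even when the test diverges, provided the two branches agree.

        def pcond(b, x, y):
            # Parallel conditional: decided by the test when it converges,
            # and by agreement of the branches when it does not.
            if b is True:
                return x
            if b is False:
                return y
            if x is not None and x == y:   # test is bottom, branches agree
                return x
            return None                    # otherwise the result is bottom

        print(pcond(True, 1, 2))   # 1
        print(pcond(None, 3, 3))   # 3 -- a sequential conditional would diverge
        print(pcond(None, 3, 4))   # None (bottom)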