
    The growth of bilateralism

    One of the most notable international economic events of the past 20 years has been the proliferation of bilateral free trade agreements (FTAs). Bilateral agreements account for 80 percent of all agreements notified to the WTO, 94 percent of those signed or under negotiation, and currently 100 percent of those at the proposal stage. Some have argued that the growth of bilateralism is attributable to governments pursuing a policy of “competitive liberalization”, that is, implementing bilateral FTAs to offset potential trade diversion caused by the FTAs of “third-country-pairs”. The growth of bilateralism may also be attributable, however, to “tariff complementarity”, the incentive for FTA members to reduce their external tariffs on nonmembers. Guided by new comparative statics from the numerical general equilibrium monopolistic competition model of FTA economic determinants in Baier and Bergstrand (2004), we augment their parsimonious logit (and probit) model of the economic determinants of bilateral FTAs with theory-motivated indexes to examine the influence of existing FTA memberships on subsequent FTA formations. Using a sample of over 350,000 observations for pairings of 146 countries from 1960 to 2005, the model correctly predicts 90 percent of bilateral FTAs within five years of their formation, while still correctly predicting “No-FTA” in 90 percent of the observations in which no FTA exists. Even imposing the higher “No-FTA” correct prediction rate of 97 percent used in Baier and Bergstrand (2004), the parsimonious model still correctly predicts 75 percent of these rare FTA events; only 3 percent of the observations reflect a country pair having an FTA in any year. The results suggest that, while the evidence supports “competitive liberalization” as a force for bilateralism, the effect of a pair's own FTAs with other countries (i.e., tariff complementarity) on the likelihood that the pair forms an FTA is likely just as important for the growth of bilateralism as the effect of third-country-pairs' FTAs (i.e., competitive liberalization).
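    As a rough illustration of the kind of specification the abstract describes, the sketch below fits a binary logit on a hypothetical pair-year panel and tabulates correct predictions at a chosen probability cutoff. The file name and covariates (dist, rgdp_sum, rgdp_sim, own_fta_index, third_pair_fta_index) are illustrative placeholders, not the authors' actual data or variables.

```python
# Minimal sketch of a parsimonious logit model of bilateral FTA formation.
# All column names and the input file are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per country pair and year, with a 0/1 FTA indicator `fta`
df = pd.read_csv("fta_panel.csv")  # hypothetical panel

model = smf.logit(
    "fta ~ dist + rgdp_sum + rgdp_sim + own_fta_index + third_pair_fta_index",
    data=df,
).fit()
print(model.summary())

# Classify a pair-year as a predicted FTA when the fitted probability exceeds
# a cutoff; the trade-off between correctly predicting FTAs and correctly
# predicting "No-FTA" depends on where this cutoff is set.
pred = (model.predict(df) > 0.5).astype(int)
print(pd.crosstab(df["fta"], pred, rownames=["actual"], colnames=["predicted"]))
```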

    Learning From Labeled And Unlabeled Data: An Empirical Study Across Techniques And Domains

    There has been increased interest in devising learning techniques that combine unlabeled data with labeled data, i.e., semi-supervised learning. However, to the best of our knowledge, no study has been performed across various techniques and different types and amounts of labeled and unlabeled data. Moreover, most of the published work on semi-supervised learning techniques assumes that the labeled and unlabeled data come from the same distribution. It is possible for the labeling process to be associated with a selection bias such that the distributions of data points in the labeled and unlabeled sets differ. Not correcting for such bias can result in biased function approximation with potentially poor performance. In this paper, we present an empirical study of various semi-supervised learning techniques on a variety of datasets. We attempt to answer questions such as the effect of independence or relevance amongst features, the effect of the sizes of the labeled and unlabeled sets, and the effect of noise. We also investigate the impact of sample-selection bias on the semi-supervised learning techniques under study and implement a bivariate probit technique particularly designed to correct for such bias.
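    A minimal sketch of the labeled-plus-unlabeled setting the abstract describes, using scikit-learn's self-training wrapper on synthetic data; it illustrates the general idea of exploiting unlabeled points, not the particular techniques, datasets, or bias-correction method studied in the paper.

```python
# Compare a classifier trained on a small labeled subset against self-training
# that also uses the unlabeled portion. Data are synthetic; the 5% labeling
# rate is an arbitrary choice for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only 5% of training labels are observed; the rest are "unlabeled",
# encoded as -1 as scikit-learn's semi-supervised API expects.
rng = np.random.RandomState(0)
mask_unlabeled = rng.rand(len(y_train)) > 0.05
y_partial = y_train.copy()
y_partial[mask_unlabeled] = -1

# Supervised baseline trained only on the small labeled subset.
baseline = LogisticRegression(max_iter=1000).fit(
    X_train[~mask_unlabeled], y_train[~mask_unlabeled]
)

# Self-training: iteratively pseudo-label confident unlabeled points.
self_training = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
self_training.fit(X_train, y_partial)

print("labeled-only accuracy :", accuracy_score(y_test, baseline.predict(X_test)))
print("self-training accuracy:", accuracy_score(y_test, self_training.predict(X_test)))
```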

    DSL: Discriminative Subgraph Learning via Sparse Self-Representation

    The goal in network state prediction (NSP) is to classify the global state (label) associated with features embedded in a graph. This graph structure encoding feature relationships is the key distinctive aspect of NSP compared to classical supervised learning. NSP arises in various applications: gene expression samples embedded in a protein-protein interaction (PPI) network, temporal snapshots of infrastructure or sensor networks, and fMRI coherence network samples from multiple subjects, to name a few. Instances from these domains are typically “wide” (more features than samples), and thus feature sub-selection is required for robust and generalizable prediction. How to best employ the network structure in order to learn succinct connected subgraphs encompassing the most discriminative features becomes a central challenge in NSP. Prior work employs connected subgraph sampling or graph smoothing within optimization frameworks, resulting in either large variance of quality or weak control over the connectivity of selected subgraphs. In this work we propose an optimization framework for discriminative subgraph learning (DSL) which simultaneously enforces (i) sparsity, (ii) connectivity, and (iii) high discriminative power of the resulting subgraphs of features. Our optimization algorithm is a single-step solution for NSP and the associated feature selection problem. It is rooted in the rich literature on maximal-margin optimization, spectral graph methods, and sparse subspace self-representation. DSL simultaneously ensures solution interpretability and superior predictive power (up to 16% improvement in challenging instances compared to baselines), with execution times up to an hour for large instances.
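    The sketch below is a hedged stand-in for the general idea of selecting a sparse, connected, discriminative set of features on a feature graph: an L1-penalized logistic regression supplies sparsity and discriminative power, and a connected-component filter over a placeholder networkx graph enforces connectivity after the fact. It is not the DSL optimization framework itself; the feature graph and data are synthetic.

```python
# Toy "wide" NSP-style setting: more features than samples, with a feature
# graph standing in for, e.g., a PPI network. NOT the paper's DSL algorithm.
import networkx as nx
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic wide data: 60 samples, 200 features.
X, y = make_classification(n_samples=60, n_features=200, n_informative=10,
                           random_state=0)

# Placeholder feature graph; in practice this would encode known feature
# relationships such as protein-protein interactions.
G = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=0)

# Step 1: sparsity + discriminative power via an L1-penalized linear model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = {int(i) for i in np.flatnonzero(clf.coef_[0])}

# Step 2: keep only selected features that form the largest connected
# subgraph of the feature network, enforcing connectivity post hoc.
sub = G.subgraph(selected)
largest = max(nx.connected_components(sub), key=len) if sub.number_of_nodes() else set()

print("selected features:", sorted(selected))
print("largest connected selected subgraph:", sorted(largest))
```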