
    IMMANUEL WALLERSTEIN'S WORLD SYSTEM THEORY

    World-systems analysis is not a theory but an approach to social analysis and social change developed, among others, by Immanuel Wallerstein. Professor Wallerstein writes in three domains of world-systems analysis: the historical development of the modern world-system; the contemporary crisis of the capitalist world-economy; and the structures of knowledge. The American analyst rejects the notion of a "Third World", claiming there is only one world connected by a complex network of economic exchange relationships. Our world-system is characterized by mechanisms which bring about a redistribution of resources from the periphery to the core. His analytical approach has made a significant impact and established an institutional base devoted to the general approach.
    Keywords: world system, core, semi-periphery, periphery, external regions

    What Is a Macrostate? Subjective Observations and Objective Dynamics

    We consider the question of whether thermodynamic macrostates are objective consequences of dynamics, or subjective reflections of our ignorance of a physical system. We argue that they are both; more specifically, that the set of macrostates forms the unique maximal partition of phase space which 1) is consistent with our observations (a subjective fact about our ability to observe the system) and 2) obeys a Markov process (an objective fact about the system's dynamics). We review the ideas of computational mechanics, an information-theoretic method for finding optimal causal models of stochastic processes, and argue that macrostates coincide with the "causal states" of computational mechanics. Defining a set of macrostates thus consists of an inductive process where we start with a given set of observables, and then refine our partition of phase space until we reach a set of states which predict their own future, i.e. which are Markovian. Macrostates arrived at in this way are provably optimal statistical predictors of the future values of our observables.
    Comment: 15 pages, no figures
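
    The inductive refinement the abstract describes can be made concrete. Below is a minimal sketch, not the paper's construction: it assumes a finite alphabet, a fixed history length L, and an ad hoc total-variation merging threshold (all illustrative choices). It clusters the histories of a binary process by their empirical next-symbol distributions, an empirical proxy for the equivalence relation that defines causal states.

```python
import numpy as np
from collections import defaultdict

def predictive_dists(seq, L):
    """Empirical next-symbol distribution following each length-L history."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(L, len(seq)):
        counts[tuple(seq[t - L:t])][seq[t]] += 1
    alphabet = sorted(set(seq))
    return {h: np.array([c[a] for a in alphabet], float) / sum(c.values())
            for h, c in counts.items()}

def merge_histories(dists, tol=0.05):
    """Group histories whose predictive distributions agree to within `tol`
    in total variation; each group is a candidate causal state."""
    states = []                        # (representative distribution, members)
    for hist, p in sorted(dists.items()):
        for rep, members in states:
            if 0.5 * np.abs(p - rep).sum() < tol:
                members.append(hist)
                break
        else:
            states.append((p, [hist]))
    return states

# Toy data: the "even process" (1s occur in runs of even length), a standard
# example whose causal states track the parity of the trailing run of 1s.
rng = np.random.default_rng(0)
seq, parity = [], 0
for _ in range(100_000):
    if parity == 0 and rng.random() < 0.5:
        seq.append(0)
    else:
        seq.append(1)
        parity ^= 1

for rep, members in merge_histories(predictive_dists(seq, L=4)):
    print(np.round(rep, 2), f"<- {len(members)} histories")
```

    With these settings the histories collapse into a handful of groups (roughly: "last run of 1s has even parity, next symbol is a coin flip" versus "odd parity, next symbol must be 1"), illustrating how refining until the states are predictive recovers the macrostate partition.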

    Predictive PAC Learning and Process Decompositions

    We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example). A mixture of learnable processes need not be learnable itself, and certainly its generalization error need not decay at the same rate. In this paper, we argue that it is natural in predictive PAC to condition not on the past observations but on the mixture component of the sample path. This definition not only matches what a realistic learner might demand, but also allows us to sidestep several otherwise grave problems in learning from dependent data. In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. We also provide a characterization of mixtures of absolutely regular (β-mixing) processes, of independent probability-theoretic interest.
    Comment: 9 pages, accepted in NIPS 201
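
    A toy simulation makes the motivation vivid (a sketch only; the mixture of two IID Bernoulli processes and all constants are illustrative, the simplest instance the abstract mentions). Along a single sample path, the empirical mean concentrates on the parameter of the component that generated that path, not on the mixture average, which is why a bound conditional on the component is the natural statement.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = [0.2, 0.8]            # two IID Bernoulli components
n_paths, T = 2000, 5000

# Each sample path draws a component once and then stays in it forever.
comps = rng.integers(0, 2, size=n_paths)
paths = rng.random((n_paths, T)) < np.array(thetas)[comps][:, None]
means = paths.mean(axis=1)

# Unconditionally, the empirical mean does NOT concentrate around the
# mixture mean 0.5 -- it splits into two clusters:
print("fraction of paths with |mean - 0.5| > 0.2:",
      np.mean(np.abs(means - 0.5) > 0.2))

# Conditioned on the component, it concentrates as an IID bound predicts:
for k, theta in enumerate(thetas):
    err = np.abs(means[comps == k] - theta)
    print(f"component {k}: max |mean - {theta}| =", err.max().round(3))
```

    The first line prints a fraction near 1, while the per-component deviations are a few hundredths: conditioning on the mixture component restores the IID-style concentration that an unconditional statement cannot offer.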

    Consistency of Maximum Likelihood for Continuous-Space Network Models

    Network analysis needs tools to infer distributions over graphs of arbitrary size from a single graph. Assuming the distribution is generated by a continuous latent space model which obeys certain natural symmetry and smoothness properties, we establish three levels of consistency for non-parametric maximum likelihood inference as the number of nodes grows: (i) the estimated locations of all nodes converge in probability on their true locations; (ii) the distribution over locations in the latent space converges on the true distribution; and (iii) the distribution over graphs of arbitrary size converges.
    Comment: 21 pages
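
    As a concrete and purely illustrative instance of the model class: take latent positions in the plane, a logistic link P(edge between i and j) = sigmoid(a - ||x_i - x_j||), and fit the node locations by gradient ascent on the Bernoulli log-likelihood. Everything below (the link function, constants, step size) is an assumption for the sketch, not the paper's estimator; since positions are identifiable only up to isometry, we check consistency level (i) through inter-point distances.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim, a = 200, 2, 1.0                  # nodes, latent dimension, intercept

def pairwise_dist(X):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D + np.eye(len(X))            # dummy 1s on the diagonal

def link_probs(X):
    return 1.0 / (1.0 + np.exp(pairwise_dist(X) - a))   # sigmoid(a - d)

# Ground truth: Gaussian latent positions, edges drawn independently.
X_true = rng.normal(size=(n, dim))
U = rng.random((n, n)) < link_probs(X_true)
A = np.triu(U, 1).astype(float)
A = A + A.T                              # symmetric adjacency, no self-loops

# Fit positions by gradient ascent on the Bernoulli log-likelihood:
# d(log-lik)/dx_i = -sum_j (A_ij - P_ij) (x_i - x_j) / d_ij.
X = 0.1 * rng.normal(size=(n, dim))
for _ in range(500):
    D = pairwise_dist(X)
    R = A - link_probs(X)
    np.fill_diagonal(R, 0.0)
    diff = X[:, None, :] - X[None, :, :]
    grad = -((R / D)[:, :, None] * diff).sum(axis=1)
    X += (0.5 / n) * grad                # step size tuned by hand

# Positions are identifiable only up to isometry, so compare distances.
iu = np.triu_indices(n, 1)
err = np.abs(pairwise_dist(X)[iu] - pairwise_dist(X_true)[iu])
print("mean abs error in recovered distances:", err.mean().round(3))
```

    This is only a local-optimum fit of one specific smooth model; the abstract's result is the much stronger statement that, across this whole model class, the non-parametric MLE is consistent at all three levels as the number of nodes grows.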