
    Measuring and Optimizing Cultural Markets

    Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation for why experts routinely fail at predicting the commercial success of cultural products. To counteract the difficulty of making accurate predictions, "measure and react" strategies have been advocated, but finding a concrete strategy that scales to very large markets has so far remained elusive. Here we propose a "measure and optimize" strategy based on an optimization policy that uses product quality, appeal, and social influence to maximize expected profits in the market at each decision point. Our computational experiments show that our policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms, in expectation, any policy that does not display social information. Our results contrast with earlier work, which focused on the unpredictability and inequalities created by social influence. We show for the first time that dynamically showing consumers positive social information under our policy increases the expected performance of the seller in cultural markets, and that, in reasonable settings, our policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefit of the market.
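    To make the idea of ranking products by expected profit from quality, appeal, and the observed social signal concrete, here is a minimal sketch of such a policy. The scoring function and all names below are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def measure_and_optimize_rank(quality, appeal, downloads):
    """Rank products for display by a simple expected-profit proxy.

    A minimal sketch of a 'measure and optimize' style policy: the score
    combines intrinsic quality, appeal, and the observed social signal
    (download counts). The scoring rule below is an illustrative
    assumption, not the paper's exact policy.
    """
    quality = np.asarray(quality, dtype=float)
    appeal = np.asarray(appeal, dtype=float)
    downloads = np.asarray(downloads, dtype=float)

    # Probability a consumer samples the product (appeal) times the
    # probability they purchase it after sampling (quality), boosted by
    # the positive social information shown so far.
    social = downloads / (1.0 + downloads.sum())
    expected_profit = appeal * quality * (1.0 + social)

    # Display products in decreasing order of expected profit.
    return np.argsort(-expected_profit)

# Toy usage: four products with uniform appeal, varying quality and history.
order = measure_and_optimize_rank(
    quality=[0.9, 0.5, 0.7, 0.2],
    appeal=[0.25, 0.25, 0.25, 0.25],
    downloads=[10, 3, 40, 1],
)
print(order)
```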

    Identifying spatial invasion of pandemics on metapopulation networks via anatomizing arrival history

    The spatial spread of infectious diseases among populations via human mobility is highly stochastic and heterogeneous, and accurately forecasting or reconstructing the spread process is hard to achieve with statistical or mechanistic models. Here we propose a new inverse problem: identifying the stochastic spatial spread process itself from observable information about the arrival history of infectious cases in each subpopulation. We solve the problem by developing an efficient optimization algorithm based on dynamic programming, which comprises three procedures: (i) anatomizing the whole spread process among all subpopulations into disjoint componential patches; (ii) inferring the most probable invasion pathways underlying each patch via maximum likelihood estimation; (iii) recovering the whole process by iteratively assembling the invasion pathways of each patch, without the burden of parameter calibration or computer simulation. Based on entropy theory, we introduce an identifiability measure to assess how difficult an invasion pathway is to identify. Results on both artificial and empirical metapopulation networks show robust performance in identifying the actual invasion pathways driving pandemic spread. Comment: 14 pages, 8 figures; accepted by IEEE Transactions on Cybernetics
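    As a rough illustration of step (ii), the sketch below picks, for each subpopulation, the earlier-infected neighbour that maximizes a crude likelihood score built from a mobility matrix and the arrival times. This is a heavily simplified stand-in under assumed inputs, not the paper's dynamic-programming algorithm or its likelihood.

```python
import numpy as np

def infer_invasion_sources(arrival_time, mobility):
    """Guess a most-probable invasion source for each subpopulation.

    Simplifying assumption: the score of candidate source i for target j is
    proportional to the mobility rate mobility[i, j] times the exposure
    window between i's arrival and j's arrival.
    """
    arrival_time = np.asarray(arrival_time, dtype=float)
    mobility = np.asarray(mobility, dtype=float)
    n = len(arrival_time)
    source = {}
    for j in range(n):
        candidates = [i for i in range(n) if arrival_time[i] < arrival_time[j]]
        if not candidates:
            source[j] = None  # seed subpopulation with no earlier-infected neighbour
            continue
        scores = [mobility[i, j] * (arrival_time[j] - arrival_time[i]) for i in candidates]
        source[j] = candidates[int(np.argmax(scores))]
    return source

# Toy usage: four subpopulations on a small symmetric mobility network.
m = np.array([[0, 5, 1, 0],
              [5, 0, 2, 3],
              [1, 2, 0, 4],
              [0, 3, 4, 0]], dtype=float)
print(infer_invasion_sources(arrival_time=[0, 2, 3, 5], mobility=m))
```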

    Majorizing measures for the optimizer

    The theory of majorizing measures, extensively developed by Fernique, Talagrand, and many others, provides one of the most general frameworks for controlling the behavior of stochastic processes. In particular, it can be applied to derive quantitative bounds on the expected suprema and the degree of continuity of sample paths for many processes. One of the crowning achievements of the theory is Talagrand's tight alternative characterization of the suprema of Gaussian processes in terms of majorizing measures. The proof of this theorem was difficult, and considerable effort was therefore put into developing shorter and easier-to-understand proofs. A major reason for this difficulty was considered to be the theory of majorizing measures itself, which had the reputation of being opaque and mysterious. As a consequence, most recent treatments of the theory (including by Talagrand himself) have eschewed majorizing measures in favor of a purely combinatorial approach (the generic chaining), where objects based on sequences of partitions provide roughly matching upper and lower bounds on the desired expected supremum. In this paper, we return to majorizing measures as a primary object of study and give a viewpoint that we think is natural and clarifying from an optimization perspective. As our main contribution, we give an algorithmic proof of the majorizing measures theorem based on two parts: we make the simple (but apparently new) observation that finding the best majorizing measure can be cast as a convex program, which also allows the measure to be computed efficiently using off-the-shelf methods from convex optimization; and we obtain tree-based upper and lower bound certificates by rounding, in a series of steps, the primal and dual solutions to this convex program. While duality has conceptually been part of the theory since its beginnings, as far as we are aware no explicit link to convex optimization has previously been made.
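    For reference, the quantity being optimized over is the standard majorizing-measure functional of the Fernique–Talagrand theory (the notation below is the usual one, not necessarily the paper's): for a centered Gaussian process $(X_t)_{t \in T}$ with canonical metric $d(s,t) = \sqrt{\mathbb{E}(X_s - X_t)^2}$,

    \mathbb{E}\,\sup_{t \in T} X_t \;\le\; C \,\inf_{\mu} \,\sup_{t \in T} \int_0^{\infty} \sqrt{\log \frac{1}{\mu\big(B_d(t,\varepsilon)\big)}}\; d\varepsilon,

    where the infimum ranges over probability measures $\mu$ on $T$, $B_d(t,\varepsilon)$ is the $d$-ball of radius $\varepsilon$ around $t$, and $C$ is a universal constant; Talagrand's majorizing measure theorem states that a matching lower bound holds up to another universal constant.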

    Linking Search Space Structure, Run-Time Dynamics, and Problem Difficulty: A Step Toward Demystifying Tabu Search

    Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite its widespread success, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of its high-level run-time dynamics and the identification of the search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding the behavior of tabu search. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, the tabu tenure should be selected to be as small as possible while avoiding search stagnation; values larger than necessary lead to significant degradations in performance.
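    To make the role of the tabu tenure concrete, here is a minimal generic tabu search skeleton on a toy permutation problem. It is a hedged illustration of the tenure mechanism, not Taillard's JSP algorithm; the objective and move structure are invented for the example.

```python
import random

def tabu_search(cost, neighbors, x0, tenure=7, iters=500):
    """Minimal generic tabu search skeleton (not Taillard's JSP algorithm).

    Recently applied moves are forbidden for `tenure` iterations to avoid
    cycling; a tabu move is still allowed if it improves on the best
    solution found so far (aspiration criterion).
    """
    x, best = x0, x0
    tabu = {}  # move -> iteration index until which the move is tabu
    for it in range(iters):
        candidates = []
        for move, y in neighbors(x):
            if tabu.get(move, -1) < it or cost(y) < cost(best):
                candidates.append((cost(y), move, y))
        if not candidates:
            break
        _, move, x = min(candidates)   # best admissible neighbour
        tabu[move] = it + tenure       # forbid reversing this move for `tenure` steps
        if cost(x) < cost(best):
            best = x
    return best

# Toy usage: minimize displacement of a permutation from the identity
# ordering under adjacent-swap moves.
def cost(p):
    return sum(abs(v - i) for i, v in enumerate(p))

def neighbors(p):
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        yield (i, tuple(q))

start = tuple(random.sample(range(8), 8))
print(tabu_search(cost, neighbors, start, tenure=5))
```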

    Bayesian Inference using the Proximal Mapping: Uncertainty Quantification under Varying Dimensionality

    In statistical applications, it is common to encounter parameters supported on a space of varying or unknown dimension. Examples include fused lasso regression and matrix recovery under an unknown low rank. Despite the ease of obtaining a point estimate via optimization, it is much more challenging to quantify its uncertainty. In the Bayesian framework, a major difficulty is that if one assigns a prior associated with a p-dimensional measure, then there is zero posterior probability on any lower-dimensional subset of dimension d < p; to avoid this caveat, one needs to choose another dimension-selection prior on d, which often involves a highly combinatorial problem. To significantly reduce the modeling burden, we propose a new generative process for the prior: starting from a continuous random variable such as a multivariate Gaussian, we transform it into a varying-dimensional space using the proximal mapping. This leads to a large class of new Bayesian models that can directly exploit popular frequentist regularizations and their algorithms, such as the nuclear norm penalty and the alternating direction method of multipliers, while providing principled and probabilistic uncertainty estimation. We show that this framework is well justified by geometric measure theory and enjoys convenient posterior computation via standard Hamiltonian Monte Carlo. We demonstrate its use in the analysis of dynamic flow network data. Comment: 26 pages, 4 figures
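    As a small illustration of the generative process, the sketch below draws a continuous Gaussian variable and pushes it through a proximal mapping so that the result has exact zeros, i.e. a random support of varying dimension. The choice of the l1 penalty (whose proximal mapping is soft-thresholding) and the threshold value are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal mapping of the l1 penalty lam * ||theta||_1:
    prox(z) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Minimal sketch of a prior defined through a proximal mapping, under an
# illustrative penalty (l1) and threshold (lam): a continuous Gaussian draw
# is mapped to a sparse vector, so the prior puts mass on lower-dimensional
# subsets without an explicit dimension-selection prior.
rng = np.random.default_rng(0)
p, lam = 10, 0.8
z = rng.standard_normal(p)        # continuous p-dimensional Gaussian draw
theta = soft_threshold(z, lam)    # prior draw with exact zeros (varying support)
print("support size:", int(np.count_nonzero(theta)))
```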