
    From Black-Scholes to Online Learning: Dynamic Hedging under Adversarial Environments

    We consider a non-stochastic online learning approach to pricing financial options, modeling the market dynamics as a repeated game between nature (the adversary) and the investor. We demonstrate that this framework yields a structure analogous to that of the Black-Scholes model, the widely used option pricing model in stochastic finance, for both European and American options with convex payoffs. In the case of non-convex options, we construct approximate pricing algorithms and demonstrate that their efficiency can be analyzed through the introduction of an artificial probability measure, parallel to the so-called risk-neutral measure in the finance literature, even though our framework is completely adversarial. Continuous-time convergence results and extensions incorporating price jumps are also presented.
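
    For reference, the stochastic baseline the paper compares against can be computed in a few lines. Below is a minimal sketch of the classical Black-Scholes formula for a European call (the standard textbook formula, not the paper's adversarial pricing algorithm); the parameters are illustrative.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, r, sigma, T):
    """Classical Black-Scholes price of a European call option.

    S: spot price, K: strike, r: risk-free rate,
    sigma: volatility, T: time to maturity in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative parameters (not taken from the paper).
print(black_scholes_call(S=100.0, K=105.0, r=0.01, sigma=0.2, T=1.0))
```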

    Technology diffusion in communication networks

    The deployment of new technologies in the Internet is notoriously difficult, as evidenced by the myriad of well-developed networking technologies that still have not seen widespread adoption (e.g., secure routing, IPv6, etc.). A key hurdle is the fact that the Internet lacks a centralized authority that can mandate the deployment of a new technology. Instead, the Internet consists of thousands of nodes, each controlled by an autonomous, profit-seeking firm that will deploy a new networking technology only if it obtains sufficient local utility by doing so. For the technologies we study here, local utility depends on the set of nodes that can be reached by traversing paths consisting only of nodes that have already deployed the new technology. To understand technology diffusion in the Internet, we propose a new model inspired by work on the spread of influence in social networks. Unlike traditional models, where a node's utility depends only on its immediate neighbors, in our model a node can be influenced by the actions of remote nodes. Specifically, we assume node v activates (i.e., deploys the new technology) when it is adjacent to a sufficiently large connected component in the subgraph induced by the set of active nodes, namely, of size exceeding node v's threshold value \theta(v). We are interested in the problem of choosing the right seed set of nodes to activate initially, so that the rest of the nodes in the network have sufficient local utility to follow suit. We take the graph and threshold values as input to our problem. We show that our problem is NP-hard and does not admit a (1-o(1))\ln|V| approximation on general graphs. Then, we restrict our study to technology diffusion problems where (a) the maximum distance between any pair of nodes in the graph is r, and (b) there are at most \ell possible threshold values. Our set of restrictions is quite natural, given that (a) the Internet graph has constant diameter, and (b) limiting the granularity of the threshold values makes sense given the difficulty of obtaining empirical data that parameterizes deployment costs and benefits. We present an algorithm that obtains a solution with a guaranteed approximation ratio of O(r^2 \ell \log|V|), which is asymptotically optimal given our hardness results. Our approximation algorithm is a linear-programming relaxation of a 0-1 integer program along with a novel randomized rounding scheme.
    National Science Foundation (S-1017907, CCF-0915922)
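
    To make the activation rule concrete, here is a minimal sketch that simulates the diffusion dynamic to a fixed point, assuming networkx and reading the threshold condition as a component of size at least \theta(v). The graph, seed set, and thresholds are illustrative; this simulates spread rather than solving the NP-hard seed-selection problem.

```python
import networkx as nx

def diffuse(G, seeds, theta):
    """Run the threshold diffusion dynamic to a fixed point.

    G: undirected graph; seeds: initially active nodes;
    theta: dict mapping each node to its threshold value.
    Returns the final set of active nodes.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        # Connected components of the subgraph induced by active nodes.
        components = list(nx.connected_components(G.subgraph(active)))
        for v in set(G) - active:
            for comp in components:
                # v activates when adjacent to an active component that
                # reaches its threshold theta[v].
                if len(comp) >= theta[v] and any(u in comp for u in G[v]):
                    active.add(v)
                    changed = True
                    break
    return active

# Illustrative example: a path graph with uniform thresholds.
G = nx.path_graph(6)
print(diffuse(G, seeds={0, 1}, theta={v: 2 for v in G}))
```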

    From which world is your graph?

    Discovering statistical structure from links is a fundamental problem in the analysis of social networks. Choosing a misspecified model, or equivalently an incorrect inference algorithm, will result in an invalid analysis, or may even falsely uncover patterns that are in fact artifacts of the model. This work focuses on unifying two of the most widely used link-formation models: the stochastic blockmodel (SBM) and the small world (or latent space) model (SWM). Integrating techniques from kernel learning, spectral graph theory, and nonlinear dimensionality reduction, we develop the first statistically sound polynomial-time algorithm to discover latent patterns in sparse graphs for both models. When the network comes from an SBM, the algorithm outputs a block structure. When it is from an SWM, the algorithm outputs estimates of each node's latent position.
    Comment: To appear in NIPS 201
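
    The abstract does not spell out the algorithm, so the following is only a generic spectral-clustering toy in its spirit: embed nodes via the top eigenvectors of the adjacency matrix and cluster the embedding (assuming numpy and scikit-learn). It recovers block structure on a dense SBM sample but is not the authors' statistically sound procedure for sparse graphs.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_blocks(A, k):
    """Toy spectral clustering: embed nodes via the top-k eigenvectors
    of the (symmetric) adjacency matrix, then run k-means on the
    embedding. A generic illustration, not the paper's algorithm."""
    vals, vecs = np.linalg.eigh(A)
    top = vecs[:, np.argsort(np.abs(vals))[-k:]]  # top-k by |eigenvalue|
    return KMeans(n_clusters=k, n_init=10).fit_predict(top)

# Sample a small two-block SBM and recover the blocks.
rng = np.random.default_rng(0)
n, p, q = 60, 0.5, 0.05
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], p, q)
A = np.triu(rng.random((n, n)) < P, 1)
A = (A + A.T).astype(float)  # symmetric 0/1 adjacency matrix
print(spectral_blocks(A, k=2))
```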

    AMS Without 4-Wise Independence on Product Domains

    In their seminal work, Alon, Matias, and Szegedy introduced several sketching techniques, including showing that 4-wise independence is sufficient to obtain good approximations of the second frequency moment. In this work, we show that their sketching technique can be extended to product domains [n]^k by using the product of 4-wise independent functions on [n]. Our work extends that of Indyk and McGregor, who showed the result for k = 2. Their primary motivation was the problem of identifying correlations in data streams. In their model, a stream of pairs (i,j) \in [n]^2 arrives, giving a joint distribution (X,Y), and they find approximation algorithms for how close the joint distribution is to the product of the marginal distributions under various metrics, which naturally corresponds to how close X and Y are to being independent. By using our technique, we obtain a new result for the problem of approximating the \ell_2 distance between the joint distribution and the product of the marginal distributions for k-ary vectors, instead of just pairs, in a single pass. Our analysis gives a randomized algorithm that is a (1 \pm \epsilon) approximation (with probability 1-\delta) that requires space logarithmic in n and m and proportional to 3^k.
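
    As a concrete instance of the construction, the sketch below estimates the second frequency moment of a stream of k-tuples using products of per-coordinate 4-wise independent ±1 hashes. The degree-3 polynomial hash family is standard; the parity-based sign mapping and plain averaging are simplifications and do not reproduce the paper's space bounds.

```python
import random

P = 2_147_483_647  # prime modulus, larger than the domain size n

def four_wise_sign():
    """4-wise independent hash family: a random degree-3 polynomial
    over GF(P), folded to a sign in {-1, +1}. (The parity-based sign
    is a slightly biased simplification, fine for illustration.)"""
    a = [random.randrange(P) for _ in range(4)]
    return lambda x: 1 if (((a[0] * x + a[1]) * x + a[2]) * x + a[3]) % P & 1 else -1

def ams_f2_product(stream, k, reps=64):
    """Estimate the second frequency moment F2 of a stream of k-tuples
    over [n]^k, using the product of per-coordinate 4-wise independent
    signs, as in the extension described above."""
    estimates = []
    for _ in range(reps):
        hs = [four_wise_sign() for _ in range(k)]  # one hash per coordinate
        z = 0
        for item in stream:
            sign = 1
            for coord, h in zip(item, hs):
                sign *= h(coord)  # product sign for the k-tuple
            z += sign
        estimates.append(z * z)  # E[z^2] equals F2
    return sum(estimates) / reps

# Illustrative stream of pairs (k = 2); true F2 is 3^2 + 1^2 = 10.
stream = [(1, 2), (1, 2), (3, 4), (1, 2)]
print(ams_f2_product(stream, k=2))
```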

    Adaptive Reduced Rank Regression

    We study the low rank regression problem \mathbf{y} = M\mathbf{x} + \epsilon, where \mathbf{x} and \mathbf{y} are d_1 and d_2 dimensional vectors, respectively. We consider the extreme high-dimensional setting where the number of observations n is less than d_1 + d_2. Existing algorithms are designed for settings where n is typically as large as \mathrm{rank}(M)(d_1+d_2). This work provides an efficient algorithm which involves only two SVDs, and establishes statistical guarantees on its performance. The algorithm decouples the problem by first estimating the precision matrix of the features, and then solving the matrix denoising problem. To complement the upper bound, we introduce new techniques for establishing lower bounds on the performance of any algorithm for this problem. Our preliminary experiments confirm that our algorithm often outperforms existing baselines, and is always at least competitive.
    Comment: 40 pages
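
    A minimal sketch of the two-SVD pipeline described above, assuming numpy: whiten the features with an estimated precision-matrix square root, then SVD-truncate the whitened cross-covariance. The details are a hypothetical simplification of the paper's estimator, and the toy check uses n > d_1 + d_2 so the empirical covariance is invertible, unlike the paper's extreme high-dimensional regime.

```python
import numpy as np

def adaptive_rrr(X, Y, rank):
    """Two-SVD sketch for y = Mx + eps: (1) whiten the features with an
    estimated precision-matrix square root, (2) SVD-truncate the whitened
    cross-covariance (matrix denoising). Returns an estimate of M."""
    n = X.shape[0]
    cov = X.T @ X / n                            # empirical feature covariance
    U, s, _ = np.linalg.svd(cov)                 # first SVD
    W = U @ np.diag(1.0 / np.sqrt(s)) @ U.T      # cov^{-1/2}
    B = Y.T @ (X @ W) / n                        # approximately M cov^{1/2}
    U2, s2, V2 = np.linalg.svd(B, full_matrices=False)  # second SVD
    B_low = (U2[:, :rank] * s2[:rank]) @ V2[:rank, :]   # rank truncation
    return B_low @ W

# Tiny synthetic check (well-conditioned toy regime, unlike the paper's
# n < d_1 + d_2 setting).
rng = np.random.default_rng(0)
n, d1, d2, r = 200, 10, 8, 2
M = rng.standard_normal((d2, r)) @ rng.standard_normal((r, d1))
X = rng.standard_normal((n, d1))
Y = X @ M.T + 0.1 * rng.standard_normal((n, d2))
print(np.linalg.norm(adaptive_rrr(X, Y, rank=r) - M))
```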