
    Approximation Algorithms for Route Planning with Nonlinear Objectives

    We consider optimal route planning when the objective function is a general nonlinear and non-monotonic function. Such an objective models user behavior more accurately, for example when a user is risk-averse or the utility function must capture a penalty for early arrival. It is known that the problem becomes NP-hard once nonlinearity arises, and little is known about computing optimal solutions when, in addition, there is no monotonicity guarantee. We show that an approximately optimal non-simple path can be computed efficiently under some natural constraints. In particular, we provide a fully polynomial-time approximation scheme under hop constraints. Our approximation algorithm extends to run in pseudo-polynomial time under a more general linear constraint that is sometimes useful. As a by-product, we show that our algorithm can be applied to the problem of finding a path that is most likely to be on time for a given deadline.
    Comment: 9 pages, 2 figures; the main part of this paper is to appear in AAAI'1
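    A minimal sketch, not the paper's algorithm, of the standard idea behind such hop-constrained approximation schemes: run a dynamic program over (node, hop-count) states, keep the set of attainable accumulated costs per state, and round costs onto a geometric grid so the sets stay polynomially small. The graph, objective, and epsilon below are illustrative placeholders.

```python
import math

def approx_min_nonlinear_path(graph, source, target, max_hops, f, eps):
    """Approximately minimize f(path cost) over (possibly non-simple)
    source-target paths with at most max_hops edges.

    graph: dict mapping node -> list of (neighbor, nonnegative edge cost).
    f: arbitrary (possibly non-monotone) objective on the total cost.
    """
    delta = eps / max(1, max_hops)  # per-hop rounding tolerance

    def snap(c):
        # Round an accumulated cost down onto the grid (1+delta)^k so each
        # (node, hops) state keeps only polynomially many distinct values.
        if c <= 0.0:
            return 0.0
        k = math.floor(math.log(c, 1.0 + delta))
        return (1.0 + delta) ** k

    # Costs attainable with exactly h hops; revisits are allowed, so the
    # paths considered may be non-simple.
    frontier = {source: {0.0}}
    best = f(0.0) if source == target else math.inf
    for _ in range(max_hops):
        nxt = {}
        for node, costs in frontier.items():
            for nbr, w in graph.get(node, []):
                bucket = nxt.setdefault(nbr, set())
                for c in costs:
                    bucket.add(snap(c + w))
        frontier = nxt
        for c in frontier.get(target, ()):
            best = min(best, f(c))
    return best

# Example: a non-monotone objective penalizing both early and late arrival.
g = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 2.0)], "c": []}
print(approx_min_nonlinear_path(g, "a", "c", max_hops=3,
                                f=lambda c: (c - 4.0) ** 2, eps=0.1))
```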

    Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering

    Using a network of cache-enabled small cells, traffic during peak hours can be reduced considerably by proactively fetching the content that is most likely to be requested. In this paper, we explore the impact of proactive caching on an important metric for future-generation networks, namely energy efficiency (EE). We argue that exploiting the correlation in user content-popularity profiles, in addition to the spatial distribution of users with comparable request patterns, can considerably improve the achievable energy efficiency of the network. The problem of optimizing EE is decoupled into two related subproblems. The first addresses content-popularity modeling: while most existing works assume similar popularity profiles for all users in the network, we consider an alternative caching framework in which users are clustered according to their content-popularity profiles. To showcase the utility of the proposed clustering scheme, we use a statistical model-selection criterion, namely the Akaike information criterion (AIC). Using stochastic geometry, we derive a closed-form expression for the achievable EE and find the optimal active small-cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial distribution of users with comparable request patterns. Considering a snapshot of the network, we formulate a combinatorial optimization problem that optimizes content placement so that the transmission power used is minimized. Numerical results show that the clustering scheme considerably improves the cache hit probability, and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves both the energy efficiency and the hit probability.
    Comment: 30 pages, 5 figures, submitted to Transactions on Wireless Communications (15-Dec-2016)
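    A minimal sketch of the AIC-based model selection the abstract mentions, assuming Gaussian-mixture clustering over synthetic popularity profiles; the data and candidate cluster counts below are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic popularity profiles: 200 users x 50 contents, two latent tastes.
profiles = np.vstack([
    rng.dirichlet(np.linspace(5.0, 0.5, 50), size=100),
    rng.dirichlet(np.linspace(0.5, 5.0, 50), size=100),
])

best_k, best_aic = None, np.inf
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(profiles)
    aic = gmm.aic(profiles)  # lower AIC = better fit/complexity trade-off
    if aic < best_aic:
        best_k, best_aic = k, aic

clusters = GaussianMixture(n_components=best_k, covariance_type="diag",
                           random_state=0).fit_predict(profiles)
print(f"AIC-selected number of user clusters: {best_k}")
```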

    Coordinated Multicasting with Opportunistic User Selection in Multicell Wireless Systems

    Physical-layer multicasting with opportunistic user selection (OUS) is examined for multicell multi-antenna wireless systems. By adopting a two-layer encoding scheme, a rate-adaptive channel code is applied in each fading block to enable successful decoding by a chosen subset of users (which varies over blocks), and an application-layer erasure code is employed across multiple blocks to ensure that every user can recover the message after decoding successfully in a sufficient number of blocks. The transmit signal and code-rate in each block opportunistically determine the subset of users able to decode successfully, and can be chosen to maximize the long-term multicast efficiency. OUS not only avoids the rate limitation imposed by the user with the worst channel, but also helps coordinate interference among different cells and multicast groups. In this work, efficient algorithms are proposed for the design of the transmit covariance matrices, the physical-layer code-rates, and the target user subsets in each block. In the single-group scenario, the system parameters are determined by maximizing the group-rate, defined as the physical-layer code-rate times the fraction of users that can decode successfully in each block. In the multi-group scenario, the system parameters are determined by a group-rate balancing optimization problem, which is solved by a successive convex approximation (SCA) approach. To further reduce the feedback overhead, we also consider the case where only some of the users feed back their channel vectors in each block, and propose a design based on balancing the expected group-rates. In addition to SCA, a sample average approximation technique is introduced to handle the probabilistic terms arising in this problem. The effectiveness of the proposed schemes is demonstrated by computer simulations.
    Comment: Accepted by IEEE Transactions on Signal Processing
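    A minimal sketch of the single-group code-rate choice described above: since the group-rate is the code-rate times the fraction of users that can decode, it suffices (for a fixed transmit signal) to scan the users' supportable rates. The rates below are illustrative inputs, not outputs of the paper's SCA design.

```python
import numpy as np

def best_group_rate(user_rates):
    """user_rates: each user's maximum decodable rate in the current block.
    Returns the (code-rate, group-rate) pair maximizing the group-rate."""
    rates = np.sort(np.asarray(user_rates, dtype=float))[::-1]  # descending
    n = rates.size
    # Choosing code-rate rates[k] lets exactly the k+1 strongest users decode.
    group_rates = rates * (np.arange(1, n + 1) / n)
    k = int(np.argmax(group_rates))
    return rates[k], group_rates[k]

rate, grate = best_group_rate([3.1, 2.4, 2.2, 0.6, 0.5])
print(f"code-rate {rate:.2f} -> group-rate {grate:.2f}")
```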

    Budget Feasible Mechanisms for Experimental Design

    In the classical experimental design setting, an experimenter E has access to a population of $n$ potential experiment subjects $i \in \{1,\dots,n\}$, each associated with a vector of features $x_i \in \mathbb{R}^d$. Conducting an experiment with subject $i$ reveals an unknown value $y_i \in \mathbb{R}$ to E. E typically assumes some hypothetical relationship between the $x_i$'s and $y_i$'s, e.g., $y_i \approx \beta x_i$, and estimates $\beta$ from experiments, e.g., through linear regression. As a proxy for various practical constraints, E may select only a subset of subjects on which to conduct the experiment. We initiate the study of budgeted mechanisms for experimental design. In this setting, E has a budget $B$. Each subject $i$ declares an associated cost $c_i > 0$ to be part of the experiment, and must be paid at least her cost. In particular, the Experimental Design Problem (EDP) is to find a set $S$ of subjects for the experiment that maximizes $V(S) = \log\det(I_d + \sum_{i\in S} x_i x_i^\top)$ under the constraint $\sum_{i\in S} c_i \leq B$; our objective function corresponds to the information gain in the parameter $\beta$ learned through linear regression methods, and is related to the so-called $D$-optimality criterion. Further, the subjects are strategic and may lie about their costs. We present a deterministic, polynomial-time, budget-feasible mechanism scheme that is approximately truthful and yields a constant-factor approximation to EDP. In particular, for any small $\delta > 0$ and $\epsilon > 0$, we can construct a $(12.98, \epsilon)$-approximate mechanism that is $\delta$-truthful and runs in time polynomial in both $n$ and $\log\log\frac{B}{\epsilon\delta}$. We also establish that no truthful, budget-feasible mechanism can achieve an approximation factor better than 2, and show how to generalize our approach to a wide class of learning problems beyond linear regression.
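    A minimal, non-strategic sketch of the EDP objective itself: greedily add the subject with the best marginal log-det gain per unit cost until the budget is exhausted. The paper's mechanism must additionally handle payments and (approximate) truthfulness, which this illustration ignores.

```python
import numpy as np

def edp_value(X, S):
    # V(S) = log det(I_d + sum_{i in S} x_i x_i^T)
    M = np.eye(X.shape[1])
    for i in S:
        M += np.outer(X[i], X[i])
    return np.linalg.slogdet(M)[1]

def greedy_edp(X, costs, budget):
    S, spent = [], 0.0
    remaining = set(range(len(costs)))
    while True:
        base = edp_value(X, S)
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + costs[i] > budget:
                continue
            ratio = (edp_value(X, S + [i]) - base) / costs[i]
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return S
        S.append(best)
        spent += costs[best]
        remaining.discard(best)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))            # feature vectors x_i
costs = rng.uniform(0.5, 2.0, size=20)  # declared costs c_i
print(greedy_edp(X, costs, budget=5.0))
```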

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms, aiding the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I describe two recent examples that drew on ideas from both areas: one concerns selecting good columns or features from a (DNA single-nucleotide polymorphism) data matrix, and the other concerns selecting good clusters or communities from a data graph representing a social or information network. These examples may serve as a model for exploiting complementary algorithmic and statistical perspectives to solve applied large-scale data analysis problems.
    Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
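    Column selection in this line of work is often approached via statistical leverage scores; a minimal sketch, with a generic numeric matrix and rank parameter standing in for the SNP data:

```python
import numpy as np

def leverage_score_columns(A, k, c, rng=None):
    """Sample c distinct columns of A with probability proportional to
    their rank-k leverage scores (squared column norms of the top-k
    right-singular-vector matrix)."""
    rng = rng or np.random.default_rng()
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sum(Vt[:k] ** 2, axis=0)  # leverage score of each column
    return rng.choice(A.shape[1], size=c, replace=False,
                      p=scores / scores.sum())

A = np.random.default_rng(2).normal(size=(100, 40))
print(sorted(leverage_score_columns(A, k=5, c=8)))
```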

    Chance-Constrained Outage Scheduling using a Machine Learning Proxy

    Outage scheduling aims at deciding, over a horizon of several months to years, when different components needing maintenance should be taken out of operation. Its objective is to minimize the expected operating cost while satisfying reliability-related constraints. We propose a distributed, scenario-based, chance-constrained optimization formulation for this problem. To tackle the tractability issues arising in large networks, we use machine learning to build a proxy for predicting the outcomes of power system operation processes in this context. On the IEEE-RTS79 and IEEE-RTS96 networks, our solution obtains cheaper and more reliable plans than the other candidates.
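    A minimal sketch, assuming a scikit-learn-style classifier as the learned proxy: estimate each plan's violation probability over sampled scenarios and keep the plans whose empirical violation rate respects the chance-constraint level. All names here (the proxy, plans, scenarios, and feature encoding) are illustrative placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feasible_plans(plans, scenarios, proxy, encode, alpha=0.05):
    """Keep plans whose estimated violation rate is at most alpha.
    proxy.predict_proba returns P(violation) in column 1; encode maps a
    (plan, scenario) pair to a feature vector."""
    kept = []
    for plan in plans:
        X = np.array([encode(plan, s) for s in scenarios])
        p_violation = proxy.predict_proba(X)[:, 1]
        if np.mean(p_violation > 0.5) <= alpha:  # empirical chance constraint
            kept.append(plan)
    return kept

# Toy stand-in proxy trained on synthetic operation outcomes.
rng = np.random.default_rng(3)
Xtr = rng.normal(size=(500, 4))
ytr = (Xtr.sum(axis=1) > 1.0).astype(int)  # 1 = reliability violation
proxy = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

plans = [0.0, 0.5, 2.0]                    # scalar stand-ins for plans
scenarios = rng.normal(size=(50, 3))
encode = lambda plan, s: np.append(s, plan)
print(feasible_plans(plans, scenarios, proxy, encode))
```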

    From Sparse Signals to Sparse Residuals for Robust Sensing

    One of the key challenges in sensor networks is extracting information by fusing data from a multitude of distinct, but possibly unreliable, sensors. Recovering information from the maximum number of dependable sensors while identifying the unreliable ones is critical for robust sensing. This sensing task is formulated here as finding the maximum number of feasible subsystems of linear equations, and is proved to be NP-hard. Useful links are established with compressive sampling, which aims at recovering vectors that are sparse. In contrast, the signals here are not sparse, but they give rise to sparse residuals. Capitalizing on this form of sparsity, four sensing schemes with complementary strengths are developed. The first scheme is a convex relaxation of the original problem, expressed as a second-order cone program (SOCP). It is shown that when the involved sensing matrices are Gaussian and the reliable measurements are sufficiently many, the SOCP recovers the optimal solution with overwhelming probability. The second scheme is obtained by replacing the initial objective function with a concave one. The third and fourth schemes are tailored to noisy sensor data. The noisy case is cast as a combinatorial problem that is subsequently surrogated by a (weighted) SOCP. Interestingly, the derived cost functions fall into the framework of robust multivariate linear regression, and an efficient block-coordinate descent algorithm is developed for their minimization. The robust sensing capabilities of all schemes are verified by simulated tests.
    Comment: Under review for publication in the IEEE Transactions on Signal Processing (revised version)
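    A minimal cvxpy sketch of the flavor of convex relaxation described above, with one scalar measurement per sensor for simplicity (the paper works with subsystems of equations): minimizing the sum of residual magnitudes is SOC-representable and lets a few unreliable sensors absorb large residuals while the rest fit exactly. Data and dimensions are illustrative.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
M, d = 30, 5
A = rng.normal(size=(M, d))
y = A @ rng.normal(size=d)
y[:3] += rng.normal(scale=10.0, size=3)  # three unreliable sensors

x = cp.Variable(d)
# Sum of absolute residuals: a convex surrogate promoting sparse residuals.
cp.Problem(cp.Minimize(cp.sum(cp.abs(y - A @ x)))).solve()

print("flagged unreliable sensors:",
      np.where(np.abs(y - A @ x.value) > 1e-3)[0])
```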

    A Randomized Greedy Algorithm for Near-Optimal Sensor Scheduling in Large-Scale Sensor Networks

    We study the problem of scheduling sensors in a resource-constrained linear dynamical system, where the objective is to select a small subset of sensors from a large network to perform the state estimation task. We formulate this problem as the maximization of a monotone set function under a matroid constraint. We propose a randomized greedy algorithm that is significantly faster than state-of-the-art methods. By introducing the notion of curvature, which quantifies how close a function is to being submodular, we analyze the performance of the proposed algorithm and bound the expected mean square error (MSE) of the estimator that uses the selected sensors in terms of the optimal MSE. Moreover, we derive a probabilistic bound on the curvature for the scenario where the measurements are i.i.d. random vectors with bounded $\ell_2$ norm. Simulation results demonstrate the efficacy of the randomized greedy algorithm in comparison with greedy and semidefinite programming relaxation methods.
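    A minimal sketch of randomized greedy selection under a cardinality (uniform-matroid) constraint: at each step, score only a small random sample of the remaining sensors by their log-det information gain instead of scanning all of them. The measurement vectors and sizes are illustrative stand-ins for the paper's sensing model.

```python
import numpy as np

def randomized_greedy(C, k, sample_size, rng=None):
    """C: (n, d) array whose row i is sensor i's measurement vector.
    Select k sensors approximately maximizing log det(I + sum c_i c_i^T)."""
    rng = rng or np.random.default_rng()
    n, d = C.shape
    remaining, chosen = list(range(n)), []
    M = np.eye(d)
    for _ in range(k):
        sample = rng.choice(remaining,
                            size=min(sample_size, len(remaining)),
                            replace=False)
        gains = [np.linalg.slogdet(M + np.outer(C[i], C[i]))[1]
                 for i in sample]
        best = int(sample[int(np.argmax(gains))])
        M += np.outer(C[best], C[best])
        chosen.append(best)
        remaining.remove(best)
    return chosen

C = np.random.default_rng(5).normal(size=(200, 6))
print(randomized_greedy(C, k=10, sample_size=20))
```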