
    The Geometry of Scheduling

    We consider the following general scheduling problem: The input consists of n jobs, each with an arbitrary release time, size, and a monotone function specifying the cost incurred when the job is completed at a particular time. The objective is to find a preemptive schedule of minimum aggregate cost. This formulation is general enough to include many natural scheduling objectives, such as weighted flow, weighted tardiness, and sum of flow squared. Our main result is a randomized polynomial-time algorithm with an approximation ratio O(log log nP), where P is the maximum job size. We also give an O(1) approximation in the special case when all jobs have identical release times. The main idea is to reduce this scheduling problem to a particular geometric set-cover problem, which is then solved using the local ratio technique and Varadarajan's quasi-uniform sampling technique. This general algorithmic approach improves the best known approximation ratios by at least an exponential factor (and much more in some cases) for essentially all of the nontrivial common special cases of this problem. Our geometric interpretation of scheduling may be of independent interest.
    Comment: Conference version in FOCS 2010.
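    To make the problem formulation concrete, here is a minimal Python sketch (all names hypothetical; a discrete horizon of unit-length time slots is assumed) that checks the feasibility of a preemptive schedule and evaluates its aggregate cost. Weighted flow time is the special case cost_fn(C) = w * (C - r).

```python
class Job:
    """One job: release time, processing requirement, and a monotone cost
    function of its completion time (illustrative names, not from the paper)."""
    def __init__(self, release, size, cost_fn):
        self.release = release
        self.size = size
        self.cost_fn = cost_fn

def aggregate_cost(jobs, schedule):
    """schedule[t] is the index of the job run in unit slot t, or None if idle.
    Preemption is free: a job's slots need not be contiguous. Returns the sum
    of each job's cost evaluated at its completion time."""
    remaining = [j.size for j in jobs]
    completion = [None] * len(jobs)
    for t, i in enumerate(schedule):
        if i is None:
            continue
        assert t >= jobs[i].release, "job run before its release time"
        remaining[i] -= 1
        if remaining[i] == 0:
            completion[i] = t + 1  # finishes at the end of slot t
    assert all(r == 0 for r in remaining), "some job is left unfinished"
    return sum(j.cost_fn(c) for j, c in zip(jobs, completion))

# Weighted flow time: cost_fn(C) = w * (C - release).
jobs = [Job(0, 2, lambda c: 3 * (c - 0)),
        Job(1, 1, lambda c: 1 * (c - 1))]
print(aggregate_cost(jobs, [0, 1, 0]))  # 3*3 + 1*1 = 10
```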

    On the Structure, Covering, and Learning of Poisson Multinomial Distributions

    An (n,k)-Poisson Multinomial Distribution (PMD) is the distribution of the sum of n independent random vectors supported on the set B_k = {e_1, ..., e_k} of standard basis vectors in R^k. We prove a structural characterization of these distributions, showing that, for all ε > 0, any (n,k)-Poisson multinomial random vector is ε-close, in total variation distance, to the sum of a discretized multidimensional Gaussian and an independent (poly(k/ε), k)-Poisson multinomial random vector. Our structural characterization extends the multi-dimensional CLT of Valiant and Valiant by simultaneously applying to all approximation requirements ε. In particular, it removes from the distance to a multidimensional Gaussian random variable factors depending on log n and, importantly, on the minimum eigenvalue of the PMD's covariance matrix. We use our structural characterization to obtain an ε-cover, in total variation distance, of the set of all (n,k)-PMDs, significantly improving the cover size of Daskalakis and Papadimitriou, and obtaining the same qualitative dependence of the cover size on n and ε as the k = 2 cover of Daskalakis and Papadimitriou. We further exploit this structure to show that (n,k)-PMDs can be learned to within ε in total variation distance from Õ_k(1/ε²) samples, which is near-optimal in terms of the dependence on ε and independent of n. In particular, our result generalizes the single-dimensional result of Daskalakis, Diakonikolas, and Servedio for Poisson Binomials to arbitrary dimension.
    Comment: 49 pages; an extended abstract appeared in FOCS 2015.
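    As a purely illustrative aid to the definition (names and parameters are my own, not the paper's), the following sketch draws samples from an (n,k)-PMD as a sum of n independent one-hot vectors; the empirical mean tracks the Gaussian mean sum_i p_i appearing in the structural result.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pmd(prob_rows, rng):
    """One draw from an (n,k)-PMD: prob_rows is an (n,k) array whose i-th row
    is the categorical law of the i-th one-hot vector; returns the sum in Z^k."""
    n, k = prob_rows.shape
    total = np.zeros(k, dtype=int)
    for p in prob_rows:
        total[rng.choice(k, p=p)] += 1
    return total

n, k = 1000, 3
P = rng.dirichlet(np.ones(k), size=n)  # n arbitrary categorical distributions
samples = np.array([sample_pmd(P, rng) for _ in range(500)])

# The structural theorem says this sum is ε-close in total variation to a
# discretized Gaussian plus a small PMD; the empirical mean already matches
# the Gaussian mean sum_i p_i (covariance: sum_i (diag(p_i) - p_i p_i^T)).
print(samples.mean(axis=0))
print(P.sum(axis=0))
```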

    Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions

    We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [53]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the 'curvature' of the submodular function, and we provide lower and upper bounds that refine and improve previous results [3, 16, 18, 52]. Our proof techniques are fairly generic: we either use a black-box transformation of the function (for approximation and learning) or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [7, 55], but its effect on minimization, approximation, and learning had hitherto been open. We complete this picture, and we also support our theoretical claims with empirical results.
    Comment: 21 pages. A shorter version appeared in NIPS 2013.
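    For readers unfamiliar with the quantity, here is a small sketch, assuming the standard definition of total curvature for a monotone submodular f with f(∅) = 0 and positive singleton values (helper names are mine): κ_f = 1 - min_{j in V} [f(V) - f(V \ {j})] / f({j}), computed below for a coverage function.

```python
def curvature(f, ground_set):
    """Total curvature of a monotone submodular f with f(empty) = 0:
    kappa = 1 - min_j [f(V) - f(V - {j})] / f({j}).
    kappa = 0 for modular (additive) f; kappa near 1 means highly curved.
    Assumes f({j}) > 0 for every element j."""
    V = frozenset(ground_set)
    return 1.0 - min((f(V) - f(V - {j})) / f(frozenset({j})) for j in V)

# Coverage functions f(S) = |union of the sets indexed by S| are a canonical
# monotone submodular family.
subsets = {0: {1, 2}, 1: {2, 3}, 2: {4}}

def coverage(S):
    covered = set()
    for j in S:
        covered |= subsets[j]
    return len(covered)

print(curvature(coverage, subsets))  # 0.5 for this instance
```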

    Detection of variable frequency signals using a fast chirp transform

    The detection of signals with varying frequency is important in many areas of physics and astrophysics. The current work was motivated by a desire to detect gravitational waves from the binary inspiral of neutron stars and black holes, a topic of significant interest for the new generation of interferometric gravitational-wave detectors such as LIGO. However, this work has significant generality beyond gravitational wave signal detection. We define a Fast Chirp Transform (FCT) analogous to the Fast Fourier Transform (FFT). Use of the FCT provides a simple and powerful formalism for the detection of signals with variable frequency, just as Fourier transform techniques provide a formalism for the detection of signals of constant frequency. In particular, use of the FCT can alleviate the requirement of generating complicated families of filter functions typically required in the conventional matched filtering process. We briefly discuss the application of the FCT to several signal detection problems of current interest.
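    The paper defines its FCT in full generality; as an illustration of the underlying de-chirping idea only, not the authors' exact construction, the sketch below matches a family of linear chirps exp(2πi(f t + q t^2)) by undoing each trial chirp rate q and then applying an FFT.

```python
import numpy as np

def chirp_transform(x, chirp_rates, dt=1.0):
    """For each trial chirp rate q, remove the quadratic phase exp(2*pi*i*q*t^2)
    and take an FFT; a linear chirp then concentrates into a single frequency
    bin. Returns an array of shape (len(chirp_rates), len(x))."""
    t = np.arange(len(x)) * dt
    return np.array([np.fft.fft(x * np.exp(-2j * np.pi * q * t**2))
                     for q in chirp_rates])

# A noisy linear chirp peaks sharply at its true (chirp rate, frequency) cell,
# without building an explicit bank of filter waveforms.
rng = np.random.default_rng(1)
n, f0, q0 = 1024, 0.05, 1e-4
t = np.arange(n)
x = np.exp(2j * np.pi * (f0 * t + q0 * t**2)) + 0.5 * rng.standard_normal(n)
rates = np.linspace(0.0, 2e-4, 41)
power = np.abs(chirp_transform(x, rates))
i, j = np.unravel_index(power.argmax(), power.shape)
print(rates[i], j / n)  # approximately (1e-4, 0.05)
```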