
    Geometric Facility Location Problems on Uncertain Data

    Facility location, an important topic in computer science and operations research, is concerned with placing facilities to serve demand points (each representing a customer) so as to minimize the service cost. In the real world, data is often associated with uncertainty because of measurement inaccuracy, sampling discrepancy, outdated data sources, resource limitations, etc. Hence, problems on uncertain data have attracted much attention. In this dissertation, we mainly study a classical facility location problem, the k-center problem, and several of its variations on uncertain points, each of which has multiple locations that follow a probability density function (pdf). We develop efficient algorithms for solving these problems. Since these problems all have a geometric flavor, computational geometry techniques are utilized to develop the algorithms. In particular, we first study the k-center problem on uncertain points on a line, which aims to find k centers on the line minimizing the maximum expected distance from all uncertain points to their expected closest centers. We develop efficient algorithms for both the continuous case, where the location of every uncertain point follows a continuous piecewise-uniform pdf, and the discrete case, where each uncertain point has multiple discrete locations, each associated with a probability. The time complexities of our algorithms are nearly linear and match those for the same problem on deterministic points. Then, we consider the one-center problem (i.e., k = 1) on a tree, where each uncertain point has multiple locations in the tree and we want to compute a center in the tree minimizing the maximum expected distance from it to all uncertain points. We solve the problem in linear time by proposing a new algorithmic scheme, called refined prune-and-search. Next, we consider the one-dimensional one-center problem on uncertain points with continuous pdfs, and the one-center problem in the plane under the rectilinear metric for uncertain points with discrete locations. We solve both problems in linear time, again by using the refined prune-and-search technique. In addition, we study the k-center problem on uncertain points in a tree. We present an efficient algorithm for the problem by proposing a new tree decomposition and developing several data structures; the tree decomposition and these data structures may be interesting in their own right. Finally, we consider the line-constrained k-center problem on deterministic points in the plane, where the centers are required to lie on a given line. Several distance metrics, including L1, L2, and L∞, are considered. We also study the line-constrained k-median and k-means problems in the plane. These problems have been studied before; based on geometric observations, we design new algorithms that improve the previous work. The algorithms and techniques developed in this dissertation may find other applications as well, in particular in solving other related problems on uncertain data.
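
    As a concrete (and much simpler) illustration of the objective involved, the sketch below solves the one-center problem on a line for uncertain points with discrete locations by ternary search: each expected distance is convex and piecewise linear in the center location, so their maximum is convex. This is only a numerical sketch for intuition, not the dissertation's linear-time refined prune-and-search algorithm; the data at the bottom is made up.

```python
# Minimal sketch: 1-center on a line for uncertain points with discrete
# locations. Each uncertain point is a list of (location, probability)
# pairs whose probabilities sum to 1.

def expected_distance(point, c):
    """Expected distance from one uncertain point to center c."""
    return sum(w * abs(x - c) for x, w in point)

def one_center_on_line(points, eps=1e-9):
    """Ternary search for the center minimizing the maximum expected
    distance; valid because max_i E[d(P_i, c)] is convex in c."""
    lo = min(x for p in points for x, _ in p)
    hi = max(x for p in points for x, _ in p)
    f = lambda c: max(expected_distance(p, c) for p in points)
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2, f((lo + hi) / 2)

# Example: two uncertain points, each with two possible locations.
pts = [[(0.0, 0.5), (2.0, 0.5)], [(5.0, 0.8), (9.0, 0.2)]]
center, radius = one_center_on_line(pts)
```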

    Sensor Deployment for Network-like Environments

    This paper considers the problem of optimally deploying omnidirectional sensors, with potentially limited sensing radius, in a network-like environment. This model provides a compact and effective description of complex environments as well as a proper representation of road or river networks. We present a two-step procedure based on a discrete-time gradient ascent algorithm to find a local optimum of this problem. The first step performs a coarse optimization in which sensors are allowed to move in the plane, to vary their sensing radius, and to make use of a reduced model of the environment called the collapsed network. It is made up of a finite, discrete set of points, called barycenters, produced by collapsing network edges. Sensors can also be clustered to reduce the complexity of this phase. The sensor positions found in the first step are then projected onto the network and used in the second, finer optimization, where sensors are constrained to move only on the network. The second step can be performed on-line, in a distributed fashion, by sensors moving in the real environment, and can make use of the full network as well as of the collapsed one. The adoption of a less constrained initial optimization has the merit of reducing the negative impact of the presence of a large number of local optima. The effectiveness of the presented procedure is illustrated by a simulated deployment problem in an airport environment.
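
    To make the coarse phase concrete, here is a toy sketch of discrete-time gradient ascent on a smooth coverage functional over a finite set of demand points (playing the role of the barycenters of a collapsed network). The Gaussian sensing model and the finite-difference gradient are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def coverage(sensors, Q, r):
    """Expected coverage of demand points Q by sensors of sensing radius r,
    under an assumed probabilistic model p_i(q) = exp(-|q - s_i|^2 / r^2)."""
    d2 = ((Q[:, None, :] - sensors[None, :, :]) ** 2).sum(-1)  # |Q| x m
    miss = np.prod(1.0 - np.exp(-d2 / r**2), axis=1)           # miss prob.
    return (1.0 - miss).sum()

def gradient_ascent(sensors, Q, r, step=0.05, iters=200, h=1e-5):
    """Plain discrete-time gradient ascent, with a central finite-difference
    approximation of the coverage gradient."""
    s = sensors.copy()
    for _ in range(iters):
        g = np.zeros_like(s)
        for i in range(s.size):
            e = np.zeros(s.size)
            e.flat[i] = h
            g.flat[i] = (coverage(s + e.reshape(s.shape), Q, r)
                         - coverage(s - e.reshape(s.shape), Q, r)) / (2 * h)
        s += step * g
    return s

# Example: 3 sensors covering 50 points of a hypothetical network skeleton.
rng = np.random.default_rng(0)
Q = rng.uniform(0, 10, size=(50, 2))
sensors = gradient_ascent(rng.uniform(0, 10, size=(3, 2)), Q, r=2.0)
```

    In the paper's second, finer phase, the resulting positions would then be projected onto the network and the ascent continued under that constraint.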

    Comparison of an X-ray selected sample of massive lensing clusters with the MareNostrum Universe LCDM simulation

    A long-standing problem of strong lensing by galaxy clusters regards the observed high rate of giant gravitational arcs as compared to the predictions of the "standard" cosmological model. Recently, a few other inconsistencies between theoretical expectations and observations have been claimed, regarding the large size of Einstein rings and the high concentrations of a few clusters with strong lensing features. All of these problems consistently indicate that observed galaxy clusters may be stronger gravitational lenses than expected. We use clusters extracted from the MareNostrum Universe to build mock catalogs of galaxy clusters selected through their X-ray flux. We use these objects to estimate the probability distributions of lensing cross sections, Einstein rings, and concentrations for the sample of 12 MACS clusters at z > 0.5 presented in Ebeling et al. (2007) and discussed in Zitrin et al. (2010). We find that simulated clusters produce ~50% fewer arcs than observed clusters do. The medians of the distributions of the Einstein ring sizes differ by ~25% between simulations and observations. We estimate that, due to cluster triaxiality and orientation biases affecting the lenses with the largest cross sections, the concentrations of the individual MACS clusters inferred from the lensing analysis should be up to a factor of ~2 larger than expected from the ΛCDM model. The arc statistics, Einstein ring, and concentration problems in strong lensing clusters are mitigated but not solved by our analysis. Nevertheless, due to the lack of redshifts for most of the multiple image systems used for modeling the MACS clusters, the results of this work will need to be verified with additional data. The upcoming CLASH program will provide an ideal sample for extending our comparison (abridged). Comment: 11 pages, 9 figures, accepted for publication in A&A
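
    The statistical comparison behind results like the ~25% median offset can be illustrated with a few lines of Python: draw mock samples of 12 cluster Einstein radii from a simulated distribution and ask how often the mock-sample median reaches the observed one. All numbers below are made up for illustration; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical simulated Einstein-radius distribution (arcsec) and a
# hypothetical observed median for a 12-cluster sample.
simulated = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=100_000)
observed_median = 25.0

# Medians of many mock 12-cluster samples drawn from the simulation.
mock_medians = np.median(
    rng.choice(simulated, size=(20_000, 12), replace=True), axis=1)
p_value = (mock_medians >= observed_median).mean()
# A small p-value would quantify the "Einstein ring problem": the observed
# clusters would be stronger lenses than their simulated counterparts.
```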

    Shrinkage and Variable Selection by Polytopes

    Constrained estimators that enforce variable selection and grouping of highly correlated data have been shown to be successful in finding sparse representations and obtaining good prediction performance. We consider polytopes as a general class of compact and convex constraint regions. Well-established procedures like the LASSO (Tibshirani, 1996) or OSCAR (Bondell and Reich, 2008) are shown to be based on specific subclasses of polytopes. The general framework of polytopes can be used to investigate the geometric structure that underlies these procedures. Moreover, we propose a specifically designed class of polytopes that enforces variable selection and grouping. Simulation studies and an application illustrate the usefulness of the proposed method.
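
    The polytope view is easy to make concrete for the LASSO: its constraint region ||beta||_1 <= t is a cross-polytope, and the constrained least-squares estimator can be computed by projected gradient descent. The sketch below uses the standard O(n log n) Euclidean projection onto the L1 ball (Duchi et al., 2008); it is an illustrative implementation, not code from the paper.

```python
import numpy as np

def project_l1_ball(v, t):
    """Euclidean projection of v onto the cross-polytope {x : ||x||_1 <= t}."""
    if np.abs(v).sum() <= t:
        return v
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - t) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - t) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_constrained(X, y, t, step=None, iters=500):
    """Least squares subject to the L1-polytope constraint ||beta||_1 <= t,
    via projected gradient descent."""
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        beta = project_l1_ball(beta - step * grad, t)
    return beta
```

    Other polytopes (e.g., the octagonal region underlying OSCAR) fit the same scheme; only the projection step changes.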

    Dust SEDs in the era of Herschel and Planck: a Hierarchical Bayesian fitting technique

    We present a hierarchical Bayesian method for fitting infrared spectral energy distributions (SEDs) of dust emission to observed fluxes. Under the standard assumption of optically thin, single-temperature (T) sources, the dust SED, represented by a power-law modified black body, is subject to a strong degeneracy between T and the spectral index beta. The traditional non-hierarchical approaches, typically based on chi-square minimization, are severely limited by this degeneracy, as it produces an artificial anti-correlation between T and beta even with modest levels of observational noise. The hierarchical Bayesian method rigorously and self-consistently treats measurement uncertainties, including calibration and noise, resulting in more precise SED fits. As a result, the Bayesian fits do not produce any spurious anti-correlations between the SED parameters due to measurement uncertainty. We demonstrate that the Bayesian method is substantially more accurate than the chi-square fit in recovering the SED parameters, as well as the correlations between them. As an illustration, we apply our method to Herschel and submillimeter ground-based observations of the star-forming Bok globule CB244. This source is a small, nearby molecular cloud containing a single low-mass protostar and a starless core. We find that T and beta are weakly positively correlated, in contradiction with the chi-square fits, which indicate a T-beta anti-correlation from the same data set. Additionally, in comparison to the chi-square fits, the Bayesian SED parameter estimates exhibit a reduced range in values. Comment: 20 pages, 9 figures, ApJ format, revised version matches ApJ-accepted version
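
    The hierarchical Bayesian sampler is beyond a short sketch, but the underlying model and the traditional fit it is compared against are simple to write down: a power-law modified black body S_nu = A * nu^beta * B_nu(T), fit by chi-square minimization. The frequencies, true parameters, and noise level below are illustrative values, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def modified_blackbody(nu, logA, T, beta):
    """Optically thin greybody: amplitude * nu^beta * Planck function B_nu(T)."""
    B = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return 10**logA * nu**beta * B

# Approximate Herschel band frequencies (350, 250, 160, 100, 70 micron), in Hz.
nu = np.array([0.857, 1.2, 1.875, 3.0, 4.283]) * 1e12
truth = (-4.0, 15.0, 1.8)                 # logA, T [K], beta (made up)
rng = np.random.default_rng(1)
flux = modified_blackbody(nu, *truth) * (1 + 0.1 * rng.standard_normal(nu.size))

# The traditional (non-hierarchical) chi-square fit.
popt, pcov = curve_fit(modified_blackbody, nu, flux, p0=(-4.0, 20.0, 2.0))
# Repeating this fit over many noise realizations traces out the spurious
# T-beta anti-correlation along the chi-square degeneracy ridge.
```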

    An Inequality Constrained SL/QP Method for Minimizing the Spectral Abscissa

    We consider a problem in eigenvalue optimization: finding a local minimizer of the spectral abscissa, that is, a parameter value at which the largest real part of the spectrum of a matrix system attains a locally smallest value. This is an important problem for the stabilization of control systems, many of which require the spectrum to lie in the left half-plane in order to be stable. The optimization problem, however, is difficult to solve because the underlying objective function is nonconvex, nonsmooth, and non-Lipschitz; in addition, local minima tend to occur at points of non-differentiability and locally non-Lipschitz behavior. We present a sequential linear and quadratic programming algorithm that solves a series of linear or quadratic subproblems formed by linearizing the surfaces corresponding to the largest eigenvalues, and we report numerical results comparing the algorithm to the state of the art.
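
    The quantity being linearized can be sketched in a few lines: the spectral abscissa alpha(A) = max_i Re(lambda_i(A)) and its derivative with respect to a parameter, from first-order eigenvalue perturbation theory (valid where the rightmost eigenvalue is simple). The parameterized family A(p) = A0 + p*E below is a made-up example, and this steepest-descent view is only the raw ingredient of the paper's SL/QP subproblems, not the method itself.

```python
import numpy as np
from scipy.linalg import eig

def spectral_abscissa(A):
    """Largest real part of the spectrum of A."""
    return np.linalg.eigvals(A).real.max()

def abscissa_and_gradient(A, dA):
    """alpha(A) and d alpha / dp for one parameter, where dA = dA/dp,
    using d lambda / dp = (y^H dA x) / (y^H x) at the rightmost eigenvalue."""
    lam, vl, vr = eig(A, left=True, right=True)
    k = np.argmax(lam.real)
    y, x = vl[:, k], vr[:, k]           # left and right eigenvectors
    dlam = (y.conj() @ dA @ x) / (y.conj() @ x)
    return lam[k].real, dlam.real

# Example: A(p) = A0 + p * E. A descent method would move p against the
# gradient; the paper instead solves linear/quadratic subproblems built
# from such linearizations of the largest-eigenvalue surfaces.
A0 = np.array([[0.0, 1.0], [-2.0, -0.1]])
E = np.array([[0.0, 0.0], [0.0, -1.0]])
alpha, dalpha = abscissa_and_gradient(A0, E)
```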