
    Improvements on the k-center problem for uncertain data

    In real applications, there are situations where we need to model some problems based on uncertain data. This leads us to define an uncertain model for some classical geometric optimization problems and propose algorithms to solve them. In this paper, we study the $k$-center problem for uncertain input. In our setting, each uncertain point $P_i$ is located, independently of the other points, in one of several possible locations $\{P_{i,1},\dots, P_{i,z_i}\}$ in a metric space with metric $d$, with specified probabilities, and the goal is to compute $k$ centers $\{c_1,\dots, c_k\}$ that minimize the expected cost $Ecost(c_1,\dots, c_k)=\sum_{R\in \Omega} prob(R)\max_{i=1,\dots, n}\min_{j=1,\dots, k} d(\hat{P}_i,c_j)$, where $\Omega$ is the probability space of all realizations $R=\{\hat{P}_1,\dots, \hat{P}_n\}$ of the given uncertain points and $prob(R)=\prod_{i=1}^n prob(\hat{P}_i)$. In the restricted assigned version of this problem, an assignment $A:\{P_1,\dots, P_n\}\rightarrow \{c_1,\dots, c_k\}$ is given for any choice of centers, and the goal is to minimize $Ecost_A(c_1,\dots, c_k)=\sum_{R\in \Omega} prob(R)\max_{i=1,\dots, n} d(\hat{P}_i,A(P_i))$. In the unrestricted version, the assignment is not specified and the goal is to compute $k$ centers $\{c_1,\dots, c_k\}$ and an assignment $A$ that minimize the above expected cost. We give several improved constant approximation factor algorithms for the assigned versions of this problem in a Euclidean space and in a general metric space. Our results significantly improve the results of \cite{guh} and generalize the results of \cite{wang} to any dimension. Our approach is to replace each uncertain point with a certain center point and to study the properties of these certain points. The proposed algorithms are efficient and simple to implement.
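
    The expected cost above can be evaluated exactly on small instances by enumerating all realizations. The sketch below (a minimal illustration, not the paper's algorithm; the instance data and function names are hypothetical) computes $Ecost$ for fixed centers in the Euclidean plane; the number of realizations grows as $\prod_i z_i$, so this is only feasible for tiny inputs.

        # Minimal sketch: evaluate the expected k-center cost of fixed centers
        # by enumerating all realizations of the uncertain points (feasible
        # only for tiny instances, since their number is prod_i z_i).
        from itertools import product
        from math import dist  # Euclidean metric d

        def expected_cost(uncertain_points, centers):
            """uncertain_points: one list of (location, probability) pairs per point.
            centers: list of center locations. Returns Ecost(c_1, ..., c_k)."""
            total = 0.0
            for realization in product(*uncertain_points):
                prob = 1.0
                worst = 0.0
                for loc, p in realization:
                    prob *= p
                    # distance from this realized point to its nearest center
                    worst = max(worst, min(dist(loc, c) for c in centers))
                total += prob * worst
            return total

        # toy example: two uncertain points in the plane, one center
        pts = [
            [((0.0, 0.0), 0.5), ((2.0, 0.0), 0.5)],   # P_1 with two possible locations
            [((0.0, 1.0), 1.0)],                       # P_2 is certain
        ]
        print(expected_cost(pts, centers=[(1.0, 0.0)]))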

    The Stellar-Dynamical Search for Supermassive Black Holes in Galactic Nuclei

    The robustness of stellar-dynamical black hole (BH) mass measurements is illustrated using 7 galaxies that have results from independent groups. Derived masses have remained constant to a factor of about 2 as spatial resolution has improved by factors of 2 - 330 and as the analysis has improved from spherical, isotropic models to axisymmetric, three-integral models. This gives us confidence that the masses are reliable and that the galaxies do not indulge in a wide variety of perverse orbital structures. Constraints on BH alternatives are also improving. In M31, Hubble Space Telescope (HST) spectroscopy shows that the central massive dark object (MDO) is in a tiny cluster of blue stars embedded in the P2 nucleus of the galaxy. The MDO must be less than about 0.06 arcsec in radius. M31 becomes the third galaxy in which dark clusters of brown dwarf stars or stellar remnants can be excluded. In our Galaxy, observations of almost-complete stellar orbits show that the MDO radius is less than about 0.0006 pc. Among BH alternatives, this excludes even neutrino balls. Therefore, measurements of central dark masses and the conclusion that these are BHs have both stood the test of time. Confidence in the BH paradigm for active galactic nuclei is correspondingly high. Compared to the radius of the BH sphere of influence, BHs are discovered at similar spatial resolution with HST as in ground-based work. The reason is that HST is used to observe more distant galaxies. Large, unbiased samples are accessible. As a result, HST has revolutionized the study of BH demographics.
    Comment: 20 pages, 5 figures + 2 tables embedded as figures, LaTeX2e with wrapping fixed, uses ociwsymp1.sty; to appear in "Carnegie Observatories Astrophysics Series, Vol. 1: Coevolution of Black Holes and Galaxies," ed. L. C. Ho (Cambridge: Cambridge Univ. Press)

    Using Monte Carlo Search With Data Aggregation to Improve Robot Soccer Policies

    RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated with the previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals. Together with this better performance, an overall more efficient positioning of the whole team within the field is achieved.
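
    As a rough sketch of the data-aggregation loop described above (not the authors' implementation; the rollout routine, the learner, and all names are placeholder assumptions), the policy can be retrained after each epoch on the union of the expert demonstrations and the Monte Carlo rollout data:

        # Sketch of a DAgger-style aggregation loop in the spirit of MCSDA.
        # `simulate_rollout` and the initial expert dataset are assumed to exist;
        # scikit-learn's DecisionTreeClassifier stands in for the supervised learner.
        from sklearn.tree import DecisionTreeClassifier

        def mcsda(expert_states, expert_actions, simulate_rollout, epochs=10, rollouts=50):
            states, actions = list(expert_states), list(expert_actions)
            policy = DecisionTreeClassifier().fit(states, actions)
            for _ in range(epochs):
                for _ in range(rollouts):
                    # Monte Carlo rollout: play the current policy and collect
                    # the state/action pairs it produces.
                    new_states, new_actions = simulate_rollout(policy)
                    states.extend(new_states)
                    actions.extend(new_actions)
                # aggregate old and new data, then retrain the discrete-action policy
                policy = DecisionTreeClassifier().fit(states, actions)
            return policy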

    Safe Approximations of Chance Constraints Using Historical Data

    This paper proposes a new way to construct uncertainty sets for robust optimization. Our approach uses the available historical data for the uncertain parameters and is based on goodness-of-fit statistics. It guarantees that the probability that the uncertain constraint holds is at least the prescribed value. Compared to existing safe approximation methods for chance constraints, our approach directly uses the historical-data information and leads to tighter uncertainty sets and therefore to better objective values. This improvement is especially significant when the number of uncertain parameters is low. Other advantages of our approach are that it can handle joint chance constraints easily, it can deal with uncertain parameters that are dependent, and it can be extended to nonlinear inequalities. Several numerical examples illustrate the validity of our approach.
    Keywords: robust optimization; chance constraint; phi-divergence; goodness-of-fit statistics
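
    As a hedged illustration of the kind of data-driven uncertainty set the abstract refers to, the sketch below builds a Kullback-Leibler (phi-divergence) ball around empirical cell frequencies, calibrated with the usual asymptotic chi-square quantile; this follows the generic phi-divergence construction and is not claimed to be the paper's exact recipe.

        # Sketch: a phi-divergence (here Kullback-Leibler) uncertainty set around
        # empirical cell frequencies, with the asymptotic chi-square radius.
        # Illustrative of the general construction, not the paper's method.
        import numpy as np
        from scipy.stats import chi2

        def kl_divergence(p, q):
            # sum_i p_i * log(p_i / q_i), with the convention 0 * log 0 = 0
            p, q = np.asarray(p, float), np.asarray(q, float)
            mask = p > 0
            return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

        def in_uncertainty_set(p, counts, alpha=0.05):
            """Is the candidate distribution p inside the (1 - alpha) KL ball
            around the empirical distribution built from historical counts?"""
            counts = np.asarray(counts, dtype=float)
            n = counts.sum()                     # number of historical observations
            q = counts / n                       # empirical frequencies
            m = len(counts)                      # number of cells
            rho = chi2.ppf(1 - alpha, df=m - 1) / (2 * n)   # asymptotic radius
            return kl_divergence(p, q) <= rho

        print(in_uncertainty_set([0.3, 0.3, 0.4], counts=[28, 35, 37]))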

    Learning relational dynamics of stochastic domains for planning

    Probabilistic planners are very flexible tools that can provide good solutions for difficult tasks. However, they rely on a model of the domain, which may be costly to either hand code or automatically learn for complex tasks. We propose a new learning approach that (a) requires only a set of state transitions to learn the model; (b) can cope with uncertainty in the effects; (c) uses a relational representation to generalize over different objects; and (d) in addition to action effects, can also learn exogenous effects that are not related to any action, e.g., moving objects, endogenous growth, and natural development. The proposed learning approach combines a multi-valued variant of inductive logic programming for the generation of candidate models with an optimization method to select the best set of planning operators to model a problem. Finally, experimental validation is provided that shows improvements over previous work.
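
    Purely as an illustration of the second stage described above, the sketch below scores candidate operator sets by a penalized log-likelihood and selects a subset greedily; the `likelihood` helper (probability of an observed transition under an operator set), the scoring, and the search are placeholder assumptions, not the paper's actual optimization.

        # Illustrative sketch only: greedy selection of planning operators by a
        # penalized log-likelihood score.  `likelihood(operators, s, a, s_next)`
        # is an assumed helper supplied by the candidate-generation stage.
        import math

        def score(operators, transitions, likelihood, penalty=1.0):
            ll = 0.0
            for s, a, s_next in transitions:
                p = max(likelihood(operators, s, a, s_next), 1e-9)  # avoid log(0)
                ll += math.log(p)
            return ll - penalty * len(operators)  # penalize model complexity

        def select_operators(candidates, transitions, likelihood):
            chosen, best = [], score([], transitions, likelihood)
            improved = True
            while improved:
                improved = False
                for op in candidates:
                    if op in chosen:
                        continue
                    trial = score(chosen + [op], transitions, likelihood)
                    if trial > best:
                        best, chosen, improved = trial, chosen + [op], True
            return chosen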