
    Finding Multiple New Optimal Locations in a Road Network

    We study the problem of optimal location querying for location-based services in road networks, which aims to find locations for new servers or facilities. Existing optimal solutions to this problem consider only the case of a single new server. When two or more new servers are to be set up, the problem under the minmax cost criterion, MinMax, becomes NP-hard. In this work we identify useful properties of the potential locations for the new servers, from which we derive a novel algorithm for MinMax, and show that it is efficient when the number of new servers is small. When the number of new servers is large, we propose an efficient 3-approximate algorithm. We verify with experiments on real road networks that our solutions are effective and attain significantly better result quality than the existing greedy algorithms.
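To make the MinMax objective concrete, here is a minimal brute-force sketch (not the paper's algorithm), with hypothetical names and Euclidean distances standing in for road-network distances. It enumerates every k-subset of candidate locations, which illustrates why exhaustive search becomes infeasible as the number of new servers grows:

```python
from itertools import combinations
import math

def minmax_cost(clients, servers):
    # MinMax cost: the largest distance from any client to its nearest server
    return max(min(math.dist(c, s) for s in servers) for c in clients)

def best_new_servers(clients, existing, candidates, k):
    # exhaustive search over all k-subsets of candidate locations;
    # exponential in k, hence the need for pruning and approximation
    best, best_cost = None, float("inf")
    for combo in combinations(candidates, k):
        cost = minmax_cost(clients, existing + list(combo))
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost
```

With clients at (0, 0) and (10, 0) and one existing server at (5, 5), the sketch picks the candidate (5, 0), which caps every client's distance at 5.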

    The Min-dist Location Selection and Facility Replacement Queries

    Abstract: We propose and study a new type of location optimization problem, the min-dist location selection problem: given a set of clients and a set of existing facilities, we select a location from a given set of potential locations for establishing a new facility, so that the average distance between a client and her nearest facility is minimized. The problem has a wide range of applications in urban development simulation, massively multiplayer online games, and decision support systems. We also investigate a variant of the problem where we consider replacing (instead of adding) a facility while achieving the same optimization goal. We call this variant the min-dist facility replacement problem. We explore two common approaches to location optimization problems and present methods based on those approaches for solving the min-dist location selection problem. However, those methods either need to maintain an extra index or fall short in efficiency. To address their drawbacks, we propose a novel method (named MND) that performs almost as well as the fastest of those methods but needs no extra index. We then apply the key idea behind MND to the min-dist facility replacement problem, which results in two algorithms, named MSND and RID. We provide a detailed comparative cost analysis and conduct extensive experiments on the various algorithms. The results show that MND and RID outperform their competitors by orders of magnitude.
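A minimal sketch of the min-dist objective itself, using hypothetical names and Euclidean distances for illustration. This is the naive candidate-by-candidate evaluation, not MND; it touches every client for every candidate, which is exactly the cost that MND is designed to avoid:

```python
import math

def avg_nearest_dist(clients, facilities):
    # average distance from each client to her nearest facility
    return sum(min(math.dist(c, f) for f in facilities)
               for c in clients) / len(clients)

def min_dist_location(clients, facilities, candidates):
    # naive approach: score every candidate location by the average
    # nearest-facility distance it would yield, and keep the best one
    return min(candidates,
               key=lambda p: avg_nearest_dist(clients, facilities + [p]))
```

For clients at (0, 0) and (4, 0) and an existing facility at (10, 0), the candidate (2, 0) wins because it lies between both clients and cuts the average distance to 2.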

    Towards realtime multiset correlation in large scale geosimulation

    © 2013 Dr. Jianzhong Qi. Geosimulation is a branch of study that emphasizes the spatial structures and behaviors of objects in computer simulation. Its applications include urban computing, geographic information systems (GIS), and geographic theory validation, where real-world experiments are infeasible due to the spatio-temporal scales involved. Geosimulation provides a unique perspective on urban dynamics by modeling the interaction of individual objects, such as people, businesses, and public facilities, at time scales approaching "realtime". As the scale of geosimulation grows, the cost of correlating the sets of objects for interaction simulation becomes significant, which calls for efficient multiset correlation algorithms. We study three key techniques for efficient multiset correlation: space-constraining, time-constraining, and dimensionality reduction. The space-constraining technique constrains multiset correlation based on spatial proximity. The intuition is that usually only objects that are close to each other can interact, so only those need to be considered in correlation. As a typical study we investigate the min-dist location selection and facility replacement queries, which correlate three sets of points representing the clients, existing facilities, and potential locations, respectively. The min-dist location selection query finds a location among the set of potential locations for a new facility to be established at, so that the average distance between the clients and their respective nearest facilities is minimized. The min-dist facility replacement query has the same optimization goal, but finds a potential location to establish a new facility that replaces an existing one.
To constrain the query processing costs, we only compute the impact of choosing a potential location on its nearby clients, since those are the only clients whose nearest facilities might change because of the chosen potential location. The time-constraining technique constrains multiset correlation based on time relevance. The intuition is that a correlation relationship usually stays valid for a short period of time, during which we do not need to recompute the correlation. As a typical study we investigate the continuous intersection join query, which reports the intersecting objects from two sets of moving objects with non-zero extents at every timestamp. To constrain the query processing costs, the key idea is to compute the intersection not only for the current timestamp but also for the near future according to the current object velocities, and to recompute the intersection only when the object velocities are updated. We design a cost model to help determine up to which timestamp in the near future we compute the intersection, so as to achieve the best balance between the cost of a single intersection computation and the total number of recomputations. The dimensionality reduction technique reduces the cost of multiset correlation by reducing data dimensionality. As a typical study we investigate mapping-based dimensionality reduction for similarity searches on time series data, which correlates time series based on similarity. We treat every time series as a point in a high-dimensional space and map it to a low-dimensional space, using its distances to a small number of reference data points in the original high-dimensional space as the coordinates. We then index the mapped time series in the low-dimensional space, which allows efficient processing of similarity searches. We conduct extensive experiments on our proposed techniques. The results confirm the superiority of our techniques over the baseline approaches.
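The reference-point mapping described above can be sketched as follows (a simplified illustration with hypothetical names, not the thesis' implementation). The pruning property rests on the triangle inequality:

```python
import math

def map_to_reference_space(series, references):
    # the coordinates of a series in the reduced space are its distances
    # to a few fixed reference series in the original space
    return tuple(math.dist(series, r) for r in references)

def lower_bound(mapped_x, mapped_y):
    # by the triangle inequality, |d(x,r) - d(y,r)| <= d(x,y) for every
    # reference r, so this gap never exceeds the true distance and can
    # be used to prune candidates without false dismissals
    return max(abs(a - b) for a, b in zip(mapped_x, mapped_y))
```

During a similarity search, any candidate whose lower bound already exceeds the query threshold can be discarded without ever computing the expensive high-dimensional distance.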