18 research outputs found

    Applying spatial reasoning to topographical data with a grounded geographical ontology

    Grounding an ontology upon geographical data has been proposed as a method of handling the vagueness in the domain more effectively. In order to do this, we require methods of reasoning about the spatial relations between the regions within the data. This stage can be computationally expensive, as we require information on the location of points in relation to each other. This paper illustrates how using knowledge about regions allows us to reduce the computation required in an efficient and easy-to-understand manner. Further, we show how this system can be implemented in co-ordination with segmented data to reason about…
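
    The abstract stops short of the reasoning mechanism itself, so the sketch below is only a minimal illustration of the idea it describes: when qualitative knowledge about regions already determines a spatial relation, no point-level computation is needed. The relation names, the tiny composition table and the region names are assumptions made for this example, not taken from the paper.

    # Minimal sketch: qualitative inference over region relations.  If the
    # relation between two regions can be deduced from already-known facts,
    # the expensive point-level geometric test is skipped.

    # COMPOSE[(r1, r2)] gives the relation of A to C when A r1 B and B r2 C.
    # The entries are illustrative assumptions, and the table is incomplete.
    COMPOSE = {
        ("inside", "inside"): "inside",      # A in B, B in C  =>  A in C
        ("inside", "disjoint"): "disjoint",  # A in B, B apart from C  =>  A apart from C
        ("disjoint", "contains"): "disjoint",
    }

    def infer_relation(known, a, b):
        """Deduce the relation of a to b by composing known facts, or return
        None, in which case a caller would fall back to explicit geometry."""
        if (a, b) in known:
            return known[(a, b)]
        for (x, y), r1 in known.items():
            if x == a and (y, b) in known:
                r2 = known[(y, b)]
                if (r1, r2) in COMPOSE:
                    return COMPOSE[(r1, r2)]
        return None

    known = {("lake", "floodplain"): "inside",
             ("floodplain", "urban_area"): "disjoint"}
    print(infer_relation(known, "lake", "urban_area"))  # -> disjoint, no geometry needed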

    Space sweep solves intersection of two convex polyhedra elegantly

    Plane-sweep algorithms form a fairly general approach to two-dimensional problems of computational geometry. No corresponding three-dimensional space-sweep algorithms for geometric problems in 3-space are known, however. We derive concepts for such space-sweep algorithms that yield an elegant solution to the problem of solving any set operation (union, intersection, ...) of two convex polyhedra. Moreover, our solution matches the best known time bound of O(n log n), where n is the combined number of corners of the two polyhedra.
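
    As background for the sweep paradigm that the paper lifts to 3-space, the snippet below illustrates the standard pattern in its simplest setting: sort the event points, advance the sweep, and maintain a status structure of objects currently crossed. It is a one-dimensional analogy for orientation only, not the paper's space-sweep algorithm.

    # Sweep-paradigm illustration (1-D analogue): count pairs of closed
    # intervals that intersect.  Events are interval endpoints; the status
    # structure is the set of intervals currently stabbed by the sweep point.
    def count_intersecting_pairs(intervals):
        events = []
        for idx, (lo, hi) in enumerate(intervals):
            events.append((lo, 0, idx))   # 0 = start event
            events.append((hi, 1, idx))   # 1 = end event (after starts at the same coordinate)
        events.sort()

        active = set()                    # intervals crossing the sweep point
        pairs = 0
        for _, kind, idx in events:
            if kind == 0:
                pairs += len(active)      # the new interval meets every active one
                active.add(idx)
            else:
                active.discard(idx)
        return pairs

    print(count_intersecting_pairs([(0, 3), (2, 5), (6, 8)]))  # -> 1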

    Geometric computing and uniform grid technique

    If computational geometry is to play an important role in professional environments (e.g. graphics and robotics), the data structures it advocates should be readily implementable and the algorithms efficient. In this paper, the uniform grid and a diverse set of geometric algorithms based on it are reviewed. The technique, invented by the second author, is a flat, and thus non-hierarchical, grid whose resolution adapts to the data. It is especially suitable for efficiently determining which pairs among a large number of short edges intersect. Several of the algorithms presented here exist as working programs (among them a visible-surface program for polyhedra) and can handle large data sets (i.e. many thousands of geometric objects). Furthermore, the uniform grid is appropriate for parallel processing; the parallel implementation presented gives very good speed-up results.
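
    The abstract describes the grid only in words, so here is a minimal sketch of the underlying idea: bucket every edge into the flat grid cells its bounding box covers, then test for intersection only among edges that share a cell. The cell-size choice and the intersection predicate below are generic assumptions for the example, not the authors' exact formulation.

    # Minimal sketch of a uniform (flat, non-hierarchical) grid for finding
    # intersecting pairs among many short edges.
    from collections import defaultdict
    from itertools import combinations

    def segments_intersect(p, q, r, s):
        """Standard orientation test for (proper or improper) segment intersection."""
        def orient(a, b, c):
            v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
            return (v > 0) - (v < 0)
        def on_seg(a, b, c):
            return min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and \
                   min(a[1], b[1]) <= c[1] <= max(a[1], b[1])
        o1, o2 = orient(p, q, r), orient(p, q, s)
        o3, o4 = orient(r, s, p), orient(r, s, q)
        if o1 != o2 and o3 != o4:
            return True
        return (o1 == 0 and on_seg(p, q, r)) or (o2 == 0 and on_seg(p, q, s)) or \
               (o3 == 0 and on_seg(r, s, p)) or (o4 == 0 and on_seg(r, s, q))

    def intersecting_pairs(edges, cell):
        """Bucket each edge into the grid cells covered by its bounding box,
        then test only pairs of edges that share a cell."""
        grid = defaultdict(list)
        for i, ((x1, y1), (x2, y2)) in enumerate(edges):
            for gx in range(int(min(x1, x2) // cell), int(max(x1, x2) // cell) + 1):
                for gy in range(int(min(y1, y2) // cell), int(max(y1, y2) // cell) + 1):
                    grid[(gx, gy)].append(i)
        found = set()
        for bucket in grid.values():
            for i, j in combinations(bucket, 2):
                if (i, j) not in found and segments_intersect(*edges[i], *edges[j]):
                    found.add((i, j))
        return found

    edges = [((0, 0), (1, 1)), ((0, 1), (1, 0)), ((5, 5), (6, 5))]
    print(intersecting_pairs(edges, cell=1.0))  # -> {(0, 1)}

    The abstract's emphasis on short edges is what makes such a scheme pay off: with a cell size on the order of the typical edge length, each edge falls into only a few cells and each cell holds only a few edges, so few candidate pairs are ever tested.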

    Smoothing the gap between NP and ER

    We study algorithmic problems that belong to the complexity class of the existential theory of the reals (ER). A problem is ER-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class ER is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. In this paper we prove a real RAM analogue of the Cook-Levin theorem, which shows that ER membership is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This gives an easy proof of ER membership, as verification algorithms on a real RAM are much more versatile than ETR formulas. We use this result to construct a framework for studying ER-complete problems under smoothed analysis. We show that for a wide class of ER-complete problems, the witness can be represented with logarithmic input precision by using smoothed analysis on the real RAM verification algorithm. This shows in a formal way that the boundary between NP and ER (formed by inputs whose solution witness needs high input precision) consists of contrived inputs. We apply our framework to well-studied ER-complete recognition problems which exhibit the exponential-bit phenomenon, such as the recognition of realizable order types or the Steinitz problem in fixed dimension.
    Comment: 31 pages, 11 figures, FOCS 2020, SICOMP 202
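
    For orientation, the two formal objects the abstract refers to can be written out in standard notation (this is the usual textbook formulation of an ETR instance and of the verification-based characterisation; the symbols are not taken verbatim from the paper). In LaTeX:

    % An ETR instance: decide whether a quantifier-free Boolean combination of
    % polynomial equalities and inequalities has a real solution.
    \exists x_1, \ldots, x_n \in \mathbb{R} :\; \Phi(x_1, \ldots, x_n),
    \quad \text{where } \Phi \text{ is built from atoms } p(x_1,\ldots,x_n) = 0
    \text{ or } p(x_1,\ldots,x_n) > 0 \text{ using } \wedge, \vee, \neg .

    % The real-RAM analogue of the Cook-Levin characterisation stated in the
    % abstract: membership in ER (written \exists\mathbb{R}) via a verifier.
    L \in \exists\mathbb{R} \iff \text{there are a polynomial } q
    \text{ and a polynomial-time real-RAM verifier } V \text{ such that}
    \quad x \in L \;\Leftrightarrow\; \exists\, w \in \mathbb{R}^{q(|x|)} : V(x, w) \text{ accepts}.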

    Galerkin projection of discrete fields via supermesh construction

    Interpolation of discrete fields arises frequently in computational physics. This thesis focuses on the novel implementation and analysis of Galerkin projection, an interpolation technique with three principal advantages over its competitors: it is optimally accurate in the L2 norm, it is conservative, and it is well-defined in the case of spaces of discontinuous functions. While these desirable properties have been known for some time, the implementation of Galerkin projection is challenging; this thesis reports the first successful general implementation. A thorough review of the history, development and current frontiers of adaptive remeshing is given. Adaptive remeshing is the primary motivation for the development of Galerkin projection, as its use necessitates the interpolation of discrete fields. The Galerkin projection is discussed and the geometric concept necessary for its implementation, the supermesh, is introduced. The efficient local construction of the supermesh of two meshes by the intersection of the elements of the input meshes is then described. Next, the element-element association problem of identifying which elements from the input meshes intersect is analysed. With efficient algorithms for its construction in hand, applications of supermeshing other than Galerkin projection are discussed, focusing on the computation of diagnostics of simulations which employ adaptive remeshing. Examples demonstrating the effectiveness and efficiency of the presented algorithms are given throughout. The thesis closes with some conclusions and possibilities for future work.
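
    To make the projection step concrete, the weak-form statement of Galerkin projection and the role of the supermesh in assembling it can be written as follows (standard notation for an L2 projection between finite element spaces; the thesis may use different symbols). In LaTeX:

    % Galerkin (L^2) projection of a source field u_A \in V_A onto a target
    % space V_B: find u_B \in V_B such that
    \int_\Omega u_B \, v \, \mathrm{d}x = \int_\Omega u_A \, v \, \mathrm{d}x
        \qquad \forall v \in V_B .

    % With a basis \{\phi_i\} of V_B this is the linear system  M q = b,
    M_{ij} = \int_\Omega \phi_i \, \phi_j \, \mathrm{d}x ,
    \qquad
    b_i = \int_\Omega u_A \, \phi_i \, \mathrm{d}x .

    % The integrand of b_i mixes functions defined on both meshes, so it is
    % assembled exactly by summing over supermesh elements S_k, each of which
    % lies inside exactly one element of mesh A and one element of mesh B:
    b_i = \sum_k \int_{S_k} u_A \, \phi_i \, \mathrm{d}x .

    Taking v to be the constant function (when it lies in V_B) in the first equation shows why the projection is conservative: the integral of the field over the domain is preserved.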

    Efficient Algorithms for Coastal Geographic Problems

    The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure in different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic costs. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.

    Finnish abstract (translated): The increasing performance of computers has made it possible to solve algorithmically problems that were previously handled with labour-intensive and possibly inaccurate methods. Attention must nevertheless still be paid at times to the performance of the algorithms, because of the large amount of input data or the computational difficulty of the problem. The articles included in the thesis examine two geographic problems. In the first of these, distances must be determined from points at sea to the nearest shoreline in predefined directions. With information about the distances and the strength of the wind it is possible to estimate, for example, the intensity of the waves. In the second problem, the input is a set of monitoring stations and data previously collected from them on various water-quality parameters such as turbidity and nutrient concentrations. The task is to select a subset of the stations such that water quality can still be monitored with sufficient accuracy when measurements at the other sites are discontinued to save costs. The thesis concentrates mainly on solving the first problem, the directed distances. The challenge is that the two-dimensional map under consideration typically represents the shoreline as a set of polygons consisting of millions of vertices, and distances must be computed for millions of study points in dozens of directions. Efficient solution methods are developed for the problem; one of them is approximate, the others exact apart from rounding errors. The solutions also differ in that three of the methods are designed to run serially or on a small number of CPU cores, whereas one of the methods, together with the improvements made to it, is also suitable for highly parallel devices such as GPUs.
    In the water-quality problem, the given set of stations has a large number of possible subsets. In addition, the task involves time-consuming operations such as linear regression, which further limits how many subsets can be examined. The solution therefore uses heuristics that do not necessarily produce an optimal result.
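
    The abstract leaves the geometric core implicit; the sketch below shows one straightforward way to compute a single fetch length as the distance from a study point to the nearest shoreline segment hit by a ray in a given direction. It is a brute-force illustration only, with invented function and parameter names; the thesis is about doing such queries efficiently at massive scale.

    # Brute-force sketch of one fetch-length query: cast a ray from a study
    # point in a given direction and return the distance to the nearest
    # shoreline segment it hits.
    import math

    def fetch_length(point, angle, shoreline_segments):
        px, py = point
        dx, dy = math.cos(angle), math.sin(angle)
        best = math.inf
        for (x1, y1), (x2, y2) in shoreline_segments:
            ex, ey = x2 - x1, y2 - y1
            denom = dx * ey - dy * ex            # cross product of the two directions
            if abs(denom) < 1e-12:               # parallel: no single crossing point
                continue
            # Solve point + t*(dx, dy) == (x1, y1) + u*(ex, ey) for t (ray) and u (segment).
            t = ((x1 - px) * ey - (y1 - py) * ex) / denom
            u = ((x1 - px) * dy - (y1 - py) * dx) / denom
            if t >= 0 and 0 <= u <= 1:
                best = min(best, t)              # t is the distance along the unit-length ray
        return best                              # math.inf if no shoreline is hit

    segments = [((0.0, 5.0), (10.0, 5.0))]       # a horizontal shoreline 5 units to the north
    print(fetch_length((3.0, 0.0), math.pi / 2, segments))  # -> 5.0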