
    Smoothing the gap between NP and ∃ℝ

    We study algorithmic problems that belong to the complexity class of the existential theory of the reals (∃ℝ). A problem is ∃ℝ-complete if it is as hard as the problem ETR and if it can be written as an ETR formula. Traditionally, these problems are studied in the real RAM, a model of computation that assumes that the storage and comparison of real-valued numbers can be done in constant space and time, with infinite precision. The complexity class ∃ℝ is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. In this paper we prove a real RAM analogue of the Cook-Levin theorem, which shows that membership in ∃ℝ is equivalent to having a verification algorithm that runs in polynomial time on a real RAM. This gives an easy way to prove ∃ℝ-membership, as verification algorithms on a real RAM are much more versatile than ETR formulas. We use this result to construct a framework for studying ∃ℝ-complete problems under smoothed analysis. We show that for a wide class of ∃ℝ-complete problems, the witness can be represented with logarithmic input-precision by applying smoothed analysis to the problem's real RAM verification algorithm. This shows in a formal way that the boundary between NP and ∃ℝ (formed by inputs whose solution witness needs high input-precision) consists of contrived inputs. We apply our framework to well-studied ∃ℝ-complete recognition problems that exhibit the exponential bit phenomenon, such as the recognition of realizable order types or the Steinitz problem in fixed dimension.
    Comment: 31 pages, 11 figures, FOCS 2020, SICOMP 202
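    For orientation, an ETR instance is a sentence of the form

        \exists x_1 \cdots \exists x_n \;\Phi(x_1, \dots, x_n),

    where \Phi is a quantifier-free Boolean combination of polynomial equations and inequalities with integer coefficients, and ETR asks whether real values for x_1, ..., x_n exist that make \Phi true. A small example of our own, not taken from the paper:

        \exists x \,\exists y \;\bigl( x^2 + y^2 = 1 \;\wedge\; x y > 1/3 \bigr),

    which is true, witnessed for instance by x = y = 1/\sqrt{2}.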

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Registration of histology and magnetic resonance imaging of the brain

    Combining histology and non-invasive imaging has long attracted the attention of the medical imaging community, due to its potential to correlate macroscopic information with the underlying microscopic properties of tissues. Histology is an invasive procedure that disrupts the spatial arrangement of the tissue components, but enables visualisation and characterisation at a cellular level. In contrast, macroscopic imaging allows non-invasive acquisition of volumetric information, but does not provide any microscopic detail. Through the establishment of spatial correspondences obtained via image registration, it is possible to compare micro- and macroscopic information and to recover the original histological arrangement in three dimensions. In this thesis, I present: (i) a survey of the literature on methods for histology reconstruction, with and without the help of 3D medical imaging; (ii) a graph-theoretic method for histology volume reconstruction from sets of 2D sections, without external information; and (iii) a method for multimodal 2D linear registration between histology and MRI based on partial matching of shape-informative boundaries.
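    As a loose illustration of ingredient (iii), 2D linear registration, the sketch below fits a least-squares similarity transform between corresponding boundary points (a generic Procrustes/Umeyama estimate of our own; the method in the thesis additionally handles partial matching, where correspondences are not given):

        import numpy as np

        def fit_similarity_2d(P, Q):
            """Least-squares similarity transform (scale s, rotation R,
            translation t) mapping 2D points P onto corresponding points Q.
            P, Q: (n, 2) arrays with row-wise correspondences."""
            mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
            Pc, Qc = P - mu_p, Q - mu_q
            # The SVD of the cross-covariance gives the optimal rotation.
            U, S, Vt = np.linalg.svd(Qc.T @ Pc)
            D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # force det(R) = +1
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()      # optimal scale
            t = mu_q - s * (R @ mu_p)
            return s, R, t

        # Toy check: recover a known transform from noise-free matched points.
        rng = np.random.default_rng(0)
        P = rng.normal(size=(100, 2))
        theta = 0.3
        R_true = np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])
        Q = 1.7 * P @ R_true.T + np.array([5.0, -2.0])
        s, R, t = fit_similarity_2d(P, Q)
        assert np.isclose(s, 1.7) and np.allclose(R, R_true) and np.allclose(t, [5.0, -2.0])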

    Efficient Algorithms for Coastal Geographic Problems

    The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or at a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.

    Finnish abstract (translated): The growth in the performance of computers has made it possible to solve algorithmically problems that were previously handled with labour-intensive, possibly inaccurate, methods. Attention must nevertheless still occasionally be paid to the performance of an algorithm, because of the large amount of source material or the computational difficulty of the problem. The articles included in this thesis examine two geographic problems. In the first, distances must be determined from points at sea to the nearest shoreline in predefined directions. With the help of these distances and information on wind strength it is possible to estimate, for example, the intensity of waves. In the second problem, the input is a set of monitoring stations together with data previously collected at them on various parameters describing water quality, such as turbidity and nutrient levels. The task is to select a subset of the stations such that water quality can still be monitored with sufficient accuracy once measurements at the other observation sites are stopped to save costs. The thesis concentrates mainly on solving the first problem, that of directed distances. The challenge is that the two-dimensional map under consideration typically describes the shoreline as a set of polygons consisting of millions of vertices, and distances have to be computed for millions of study points in dozens of directions. Efficient solution methods are developed for the problem; one of them is approximate, the others exact apart from rounding errors. The solutions also differ in that three of the methods are designed to run serially or on a small number of CPU cores, whereas one method, together with its further developments, is also suited to heavily parallel hardware such as GPUs. In the water quality problem, the given set of stations has a very large number of possible subsets. In addition, the task involves time-consuming operations such as linear regression, which further limits how many subsets can be examined. The solution therefore relies on heuristics that do not necessarily produce an optimal result.
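    To make the fetch length problem concrete, here is a brute-force sketch of its geometric core, a single ray cast against a shoreline polyline (our own illustration; the thesis develops algorithms that avoid this O(n) scan per query):

        import math

        def fetch_length(px, py, theta, shoreline):
            """Distance from study point (px, py) along direction theta
            (radians) to the nearest crossing of a shoreline polyline,
            given as a list of (x, y) vertices. Returns math.inf if the
            ray never reaches the shore."""
            dx, dy = math.cos(theta), math.sin(theta)
            best = math.inf
            for (x1, y1), (x2, y2) in zip(shoreline, shoreline[1:]):
                ex, ey = x2 - x1, y2 - y1
                denom = dx * ey - dy * ex        # 2x2 determinant
                if abs(denom) < 1e-12:           # ray parallel to segment
                    continue
                # Solve p + t*d = a + u*e for ray parameter t, segment parameter u.
                t = ((x1 - px) * ey - (y1 - py) * ex) / denom
                u = ((x1 - px) * dy - (y1 - py) * dx) / denom
                if t >= 0.0 and 0.0 <= u <= 1.0:
                    best = min(best, t)
            return best

        # Toy usage: study point at the origin, shore running vertically at x = 2.
        shore = [(2.0, -5.0), (2.0, 5.0)]
        print(fetch_length(0.0, 0.0, 0.0, shore))  # -> 2.0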

    Galerkin projection of discrete fields via supermesh construction

    Interpolation of discrete fields arises frequently in computational physics. This thesis focuses on the novel implementation and analysis of Galerkin projection, an interpolation technique with three principal advantages over its competitors: it is optimally accurate in the L2 norm, it is conservative, and it is well-defined in the case of spaces of discontinuous functions. While these desirable properties have been known for some time, the implementation of Galerkin projection is challenging; this thesis reports the first successful general implementation. A thorough review of the history, development and current frontiers of adaptive remeshing is given. Adaptive remeshing is the primary motivation for the development of Galerkin projection, as its use necessitates the interpolation of discrete fields. The Galerkin projection is discussed and the geometric concept necessary for its implementation, the supermesh, is introduced. The efficient local construction of the supermesh of two meshes by intersecting the elements of the input meshes is then described. Next, the element-element association problem of identifying which elements from the input meshes intersect is analysed. With efficient algorithms for its construction in hand, applications of supermeshing other than Galerkin projection are discussed, focusing on the computation of diagnostics of simulations that employ adaptive remeshing. Examples demonstrating the effectiveness and efficiency of the presented algorithms are given throughout. The thesis closes with some conclusions and possibilities for future work.
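    For reference, the defining property of the Galerkin projection discussed above, written in our own notation: given a donor field u_A in a finite element space V_A on one mesh and a target space V_B on another, the projection u_B \in V_B satisfies

        \int_\Omega u_B \, v \,\mathrm{d}x = \int_\Omega u_A \, v \,\mathrm{d}x \qquad \forall v \in V_B,

    which, in terms of bases \{\phi_i\} of V_B and \{\psi_j\} of V_A, is the linear system

        M_B \mathbf{u}_B = M_{BA} \mathbf{u}_A, \qquad (M_B)_{ij} = \int_\Omega \phi_i \phi_j \,\mathrm{d}x, \quad (M_{BA})_{ij} = \int_\Omega \phi_i \psi_j \,\mathrm{d}x.

    The mixed mass matrix M_{BA} couples basis functions living on different meshes; the supermesh makes its entries computable by standard quadrature, because each supermesh element lies inside exactly one element of each input mesh, where both integrands are polynomial.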

    Geometric algorithms for algebraic curves and surfaces

    This work presents novel geometric algorithms dealing with algebraic curves and surfaces of arbitrary degree. These algorithms are exact and complete: they return the mathematically true result for all input instances. Efficiency is achieved by cutting back on expensive symbolic computation and favoring combinatorial and adaptive numerical methods instead, without spoiling exactness in the overall result. We present an algorithm for computing planar arrangements induced by real algebraic curves. We show its efficiency both in theory, by a complexity analysis, and in practice, by experimental comparison with related methods. For the latter, our solution has been implemented in the context of the Cgal library. The results show that it constitutes the best current exact implementation available for arrangements, as well as for the related problem of computing the topology of a single algebraic curve. The algorithm is also applied to related problems, such as arrangements of rotated curves and arrangements embedded on a parameterized surface. In R³, we propose a new method to compute an isotopic triangulation of an algebraic surface. The triangulation is based on a stratification of the surface, which reveals topological and geometric information. Our implementation is the first for this problem that makes systematic use of numerical methods and still yields the exact topology of the surface.

    German abstract (translated): This work presents new algorithms for algebraic curves and surfaces of arbitrary degree. These algorithms deliver the mathematically correct result for all inputs. We achieve efficiency by avoiding expensive symbolic computation as far as possible and employing combinatorial and adaptive numerical methods instead, without destroying the exactness of the result. The main contribution is an algorithm for computing planar arrangements induced by real algebraic curves. We demonstrate the efficiency of the method both theoretically, through a complexity analysis, and practically, through experimental comparisons. To this end, we have implemented our method within the software library Cgal. The results show that we provide the best exact software currently available. The algorithm is also used for computing arrangements of rotated curves and for arrangements on parameterized surfaces. In R³, we give a new method for computing an isotopic triangulation of an algebraic surface. This triangulation is based on a stratification of the surface, which computes topological and geometric information. Our implementation is the first for this problem that makes systematic use of numerical methods and nevertheless delivers the exact topology of the surface.
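    As a toy illustration of the design principle in this abstract, a cheap numerical filter backed by exact arithmetic, the sketch below decides the sign of an integer polynomial at a rational point (the setting and all names are our own, not the algorithms of the thesis):

        from fractions import Fraction

        def sign_at(coeffs, x_num, x_den):
            """Sign of the integer polynomial with the given coefficients
            (highest degree first) at x = x_num / x_den. A float Horner
            evaluation with a crude running error bound acts as a filter;
            only inconclusive cases fall back to exact rational arithmetic."""
            x = x_num / x_den
            val, err = 0.0, 0.0
            for c in coeffs:
                val = val * x + c
                err = err * abs(x) + abs(val) * 1e-12   # conservative bound
            if abs(val) > err:                # filter succeeded: float sign is safe
                return (val > 0) - (val < 0)
            exact, xq = Fraction(0), Fraction(x_num, x_den)
            for c in coeffs:                  # exact Horner as the fallback
                exact = exact * xq + c
            return (exact > 0) - (exact < 0)

        # Toy usage: p(x) = x^2 - 2 at a rational slightly above sqrt(2).
        # The float filter cannot decide; the exact fallback returns +1.
        print(sign_at([1, 0, -2], 141421356237309505, 10**17))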