
    The Critical Radius in Sampling-based Motion Planning

    We develop a new analysis of sampling-based motion planning in Euclidean space with uniform random sampling, which significantly improves upon the celebrated result of Karaman and Frazzoli (2011) and subsequent work. In particular, we prove the existence of a critical connection radius proportional to Θ(n^{-1/d}) for n samples and d dimensions: below this value the planner is guaranteed to fail (as similarly shown by the aforementioned work). More importantly, for larger radius values the planner is asymptotically (near-)optimal. Furthermore, our analysis yields an explicit lower bound of 1 − O(n^{-1}) on the probability of success. A practical implication of our work is that asymptotic (near-)optimality is achieved when each sample is connected to only Θ(1) neighbors. This is in stark contrast to previous work, which requires Θ(log n) connections, induced by a radius of order (log n / n)^{1/d}. Our analysis is not restricted to PRM and applies to a variety of PRM-based planners, including RRG, FMT* and BTT. Continuum percolation plays an important role in our proofs. Lastly, we develop similar theory for all the aforementioned planners when constructed with deterministic samples, which are then sparsified in a randomized fashion. We believe that this new model, and its analysis, is interesting in its own right.
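The contrast between the two radius regimes can be observed numerically. The sketch below, with an illustrative constant γ that is not taken from the paper, samples points uniformly in the unit square and compares the average neighbor count under a Θ(n^{-1/d}) radius against the classical (log n / n)^{1/d} radius: the former stays bounded while the latter grows like log n.

```python
import math
import random

def avg_degree(points, radius):
    """Average number of neighbors within `radius` (excluding the point itself)."""
    n = len(points)
    total = 0
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if i != j and math.dist(p, q) <= radius:
                total += 1
    return total / n

random.seed(0)
n, d = 1000, 2
pts = [tuple(random.random() for _ in range(d)) for _ in range(n)]

gamma = 1.0                                   # illustrative constant, not from the paper
r_critical = gamma * n ** (-1 / d)            # Θ(n^{-1/d}) regime: O(1) expected neighbors
r_classic = (math.log(n) / n) ** (1 / d)      # (log n / n)^{1/d} regime: Θ(log n) neighbors

deg_critical = avg_degree(pts, r_critical)
deg_classic = avg_degree(pts, r_classic)
print(deg_critical)   # stays near a constant as n grows
print(deg_classic)    # grows with log n
```

With n = 1000 in the plane, the expected degree under r_critical is roughly πγ² (up to boundary effects), while under r_classic it is roughly π log n, an order of magnitude larger.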

    Intuitive Telemanipulation of Hyper-Redundant Snake Robots within Locomotion and Reorientation using Task-Priority Inverse Kinematics

    Snake robots offer considerable potential for endoscopic interventions due to their ability to follow curvilinear paths. Telemanipulation is an open problem due to hyper-redundancy, as input devices only allow the specification of six degrees of freedom. Our work addresses this by presenting a unified telemanipulation strategy that enables follow-the-leader locomotion and reorientation while keeping the shape change as small as possible. The basis for this is a novel shape-fitting approach that solves the inverse kinematics in only a few milliseconds. Shape fitting is performed by maximizing the similarity of two curves using the Fréchet distance while simultaneously specifying the position and orientation of the end effector. Telemanipulation performance is investigated in a study in which 14 participants controlled a simulated snake robot to locomote into a target area. In a final validation, pivot reorientation within the target area is addressed. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
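The similarity measure underlying the shape fitting, the discrete Fréchet distance between two sampled curves, can be computed with the classic Eiter–Mannila dynamic program. This sketch shows only that distance computation on toy polylines, not the paper's full shape-fitting optimization over robot configurations:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines given as lists of points."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): cheapest "leash length" needed to couple P[0..i] with Q[0..j].
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# Two curves that run close together everywhere:
P = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)]
Q = [(0.0, 0.1), (1.0, 0.0), (2.0, 0.1)]
print(discrete_frechet(P, Q))  # 0.1
```

Because the robot's backbone and the commanded path are both sampled curves, minimizing such a coupling distance keeps the fitted shape close to the reference everywhere, not just at the endpoints.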

    Spatiotemporal Data Augmentation of MODIS-LANDSAT Water Bodies Using Generative Adversarial Networks

    Monitoring the shape and area of a water body is an essential component of many Earth-science and hydrological applications, which require remote sensing data that supports accurate analysis of water bodies. This thesis attempts exactly that. First, we build a model that maps imagery captured by one satellite at 500 m resolution to imagery captured by a different satellite at 30 m resolution. To achieve this, we collected data from both satellites and translated data from one to the other using our proposed Hydro-GAN model. This translation recovers an accurate shape, boundary, and area for the water body. We evaluated the method using several similarity metrics for the area and shape of the water body. The second part of this thesis augments the data obtained from the Hydro-GAN model with the original data and uses this enriched dataset to predict the future area of a water body, taking the Great Salt Lake as a case study. The results indicate that our proposed model produces accurate areas and shapes of water bodies, and that generating data at 30 m resolution improves areal and shape accuracy. With more data at this resolution, one could better predict coastlines and boundaries, as well as support erosion monitoring.
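The abstract does not name its similarity metrics; as one common choice for comparing a predicted water extent with a reference extent (an assumption for illustration, not necessarily the thesis's metric), intersection-over-union on binary water masks can be computed as follows:

```python
def iou(mask_a, mask_b):
    """Intersection-over-Union of two binary masks (equal-sized grids of 0/1)."""
    inter = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    union = sum(a | b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    return inter / union if union else 1.0

# Toy 4x4 water masks (1 = water pixel); the reference has one extra pixel.
pred = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
ref  = [[0, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(iou(pred, ref))  # 6/7 ≈ 0.857
```

A score of 1.0 means the predicted and reference extents coincide exactly; the areal error is then simply the difference of the two mask sums times the pixel area.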

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    A Chronological Survey of Theoretical Advancements in Generative Adversarial Networks for Computer Vision

    Generative Adversarial Networks (GANs) have been the workhorse generative models of recent years, especially in the research field of computer vision. Accordingly, there have been many significant advancements in the theory and application of GAN models, which are notoriously hard to train but produce good results if trained well. There have been many surveys on GANs, organizing the vast GAN literature from various focuses and perspectives. However, none of these surveys brings out an important chronological aspect: how the multiple challenges of employing GAN models were solved one by one over time, across multiple landmark research works. This survey intends to bridge that gap and presents some of the landmark research works on the theory and application of GANs, in chronological order.

    Construction of a zero-coupon yield curve for the Nairobi Securities Exchange and its application in pricing derivatives

    Thesis submitted in partial fulfillment of the requirements for the degree of PhD in Financial Mathematics at Strathmore University. Yield curves are used to forecast interest rates for different products when their risk parameters are known, to calibrate no-arbitrage term structure models, and (mostly by investors) to detect whether an arbitrage opportunity exists. With yield curve information, investors can immunize/hedge their investment portfolios against financial risks when making an investment with a determined time to maturity. Private-sector firms look at yields of different maturities and then choose their borrowing strategy. The difference between long-maturity and short-maturity yields is an important indicator for a central bank to use in the monetary policy process. These differences may show the tightness of the government's monetary policy and can be monitored to predict a recession in coming years. A lot of research has been done in yield curve modeling and, as we will see later in the thesis, most of the models developed have one major shortcoming: non-differentiability at the interpolating knot points. The aim of this thesis is to construct a zero-coupon yield curve for the Nairobi Securities Exchange and to use the risk-free rates to price derivatives, with particular attention given to pricing coffee futures. This study looks into three methods of constructing yield curves: spline-based models, interpolation, and parametric models. We suggest an improvement to the interpolation methods used in the most celebrated spline-based model: monotonicity-preserving interpolation on r(t). We also use an operator form of numerical differentiation to estimate the forward rates at the knot points, at which the spot curve is non-differentiable. In derivative pricing, dynamical processes (Itô processes) are reviewed, and geometric Brownian motion is included, together with its properties and applications. 
Conventional techniques used to estimate the drift and volatility parameters, such as historical techniques, are reviewed and discussed. We also use the Hough transform, an artificial-intelligence method, to detect market patterns and estimate the drift and volatility parameters simultaneously. We look at different ways of calculating derivative prices. For option pricing, we use different methods, but we apply Bellalah's models in the calculation of coffee futures prices because they incorporate an incomplete-information parameter.
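The geometric Brownian motion reviewed above has the closed-form solution S_t = S_0 exp((μ − σ²/2)t + σW_t), so paths can be simulated exactly at discrete times. A minimal sketch, with made-up parameters that are not calibrated to NSE coffee futures data, checks the Monte Carlo mean of S_T against the analytic value E[S_T] = S_0 e^{μT}:

```python
import math
import random

def gbm_path(s0, mu, sigma, T, steps, rng):
    """Simulate one GBM path using the exact solution at each time step."""
    dt = T / steps
    s = s0
    path = [s]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        path.append(s)
    return path

rng = random.Random(42)
# Illustrative parameters only (not estimated from market data):
s0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
terminal = [gbm_path(s0, mu, sigma, T, 50, rng)[-1] for _ in range(4000)]
mc_mean = sum(terminal) / len(terminal)
analytic = s0 * math.exp(mu * T)
print(mc_mean)   # Monte Carlo estimate of E[S_T]
print(analytic)  # analytic E[S_T] = S_0 e^{mu T} ≈ 105.13
```

Because the exponential form is used, the discretization is exact in distribution at each grid point; the only error in the mean is the Monte Carlo sampling error, which shrinks like 1/√N.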

    Latent data augmentation and modular structure for improved generalization

    This thesis explores the nature of generalization in deep learning and several settings in which it fails. In particular, deep neural networks can struggle to generalize in settings with limited data, insufficient supervision, challenging long-range dependencies, or complex structure and subsystems. This thesis explores the nature of these challenges for generalization in deep learning and presents several algorithms which seek to address these challenges. In the first article, we show how training with interpolated hidden states can improve generalization and calibration in deep learning. We also introduce a theory showing how our algorithm, which we call Manifold Mixup, leads to a flattening of the per-class hidden representations, which can be seen as a compression of the information in the hidden states. The second article is related to the first and shows how interpolated examples can be used for semi-supervised learning. In addition to interpolating the input examples, the model’s interpolated predictions are used as targets for these examples. This improves results on standard benchmarks as well as classic 2D toy problems for semi-supervised learning. The third article studies how a recurrent neural network can be divided into multiple modules with different parameters and well separated hidden states, as well as a competition mechanism restricting updating of the hidden states to a subset of the most relevant modules on a specific time-step. This improves systematic generalization when the pattern distribution is changed between the training and evaluation phases. It also improves generalization in reinforcement learning. In the fourth article, we show that attention can be used to control the flow of information between successive layers in deep networks. This allows each layer to only process the subset of the previously computed layers’ outputs which are most relevant. 
This improves generalization on relational reasoning tasks as well as standard benchmark classification tasks.
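The interpolation at the heart of the first two articles is simple to state: mix a pair of examples (or, for Manifold Mixup, a pair of hidden states at a randomly chosen layer) and their targets with the same Beta-distributed coefficient. The sketch below shows only that interpolation step on plain vectors, not the full training procedure:

```python
import random

def mixup(x1, x2, y1, y2, alpha=0.2, rng=random):
    """Interpolate two examples (or hidden states) and their one-hot targets.

    Manifold Mixup applies this at a randomly selected hidden layer during
    training; here we apply it to bare vectors for illustration."""
    lam = rng.betavariate(alpha, alpha)  # mixing coefficient in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

rng = random.Random(0)
x_mix, y_mix, lam = mixup([1.0, 0.0], [0.0, 1.0], [1, 0], [0, 1], rng=rng)
print(lam, x_mix, y_mix)  # x_mix = [lam, 1 - lam]; targets mixed with the same lam
```

Using the same coefficient for inputs and targets is what makes the loss on mixed points encourage linear behavior between classes, which the first article connects to flattened per-class representations.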

    Algorithm engineering in geometric network planning and data mining

    The geometric nature of computational problems provides a rich source of solution strategies as well as complicating obstacles. This thesis considers three problems in the context of geometric network planning, data mining and spherical geometry. Geometric Network Planning: In the d-dimensional Generalized Minimum Manhattan Network problem (d-GMMN) one is interested in finding a minimum-cost rectilinear network N connecting a given set of n pairs of points in ℝ^d such that each pair is connected in N via a shortest Manhattan path. The decision version of this optimization problem is known to be NP-hard. The best known upper bound is an O(log^{d+1} n) approximation for d>2 and an O(log n) approximation for 2-GMMN. In this work we provide more insight into whether the problem admits constant-factor approximations in polynomial time. We develop two new algorithms. The first is a `scale-diversity aware' algorithm with an O(D) approximation guarantee for 2-GMMN, where D is a measure of the different `scales' that appear in the input; D ∈ O(log n), but potentially much smaller, depending on the problem instance. The other algorithm is based on a primal-dual scheme solving a more general combinatorial problem, which we call Path Cover. On 2-GMMN it performs well in practice, with good a posteriori, instance-based approximation guarantees. Furthermore, it can be extended to handle obstacle-avoidance requirements. We show that the Path Cover problem is at least as hard to approximate as the Hitting Set problem. Moreover, we show that solutions of the primal-dual algorithm are 4ω^2 approximations, where ω ≤ n denotes the maximum overlap of a problem instance. This implies that a potential proof of O(1)-inapproximability for 2-GMMN requires gadgets of many different scales and non-constant overlap in the construction. 
    Geometric Map Matching for Heterogeneous Data: For a given sequence of location measurements, the goal of geometric map matching is to compute a sequence of movements along edges of a spatially embedded graph which provides a `good explanation' for the measurements. The problem gets challenging as real-world data, like traces or graphs from the OpenStreetMap project, does not exhibit homogeneous data quality. Graph details and errors vary between areas, and each trace has changing noise and precision. Hence, formalizing what a `good explanation' is becomes quite difficult. We propose a novel map matching approach which locally adapts to the data quality by constructing what we call dominance decompositions. While our approach is computationally more expensive than previous approaches, our experiments show that it allows for high-quality map matching, even in the presence of highly variable data quality, without parameter tuning. Rational Points on the Unit Spheres: Each non-zero point in ℝ^d identifies a closest point x on the unit sphere S^{d-1}. We are interested in computing an ε-approximation y ∈ ℚ^d for x that is exactly on S^{d-1} and has low bit-size. We revise lower bounds on rational approximations and provide explicit spherical instances. We prove that floating-point numbers can only provide trivial solutions to the sphere equation in ℝ^2 and ℝ^3. However, we show how to construct a rational point with denominators of at most 10(d-1)/ε^2 for any given ε ∈ (0, 1/8], improving on a previous result. The method further benefits from algorithms for simultaneous Diophantine approximation. Our open-source implementation and experiments demonstrate the practicality of our approach in the context of massive data sets geo-referenced by latitude and longitude values.
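One standard way to obtain points that lie exactly on the sphere with rational coordinates (shown here for illustration; it is a classical construction and not necessarily the thesis's exact bit-size-optimal method) is inverse stereographic projection, which maps any u ∈ ℚ^{d-1} to a point of ℚ^d on S^{d-1} using only exact rational arithmetic:

```python
from fractions import Fraction

def stereographic_to_sphere(u):
    """Map a rational point u in Q^{d-1} to an exactly rational point on S^{d-1}
    via inverse stereographic projection from the north pole:
    u -> (2u, |u|^2 - 1) / (|u|^2 + 1)."""
    s = sum(c * c for c in u)                      # |u|^2, exact rational
    denom = s + 1
    return [2 * c / denom for c in u] + [(s - 1) / denom]

# An illustrative rational preimage in Q^2:
u = [Fraction(1, 3), Fraction(2, 3)]
p = stereographic_to_sphere(u)
print(p)                          # [3/7, 6/7, -2/7]
print(sum(c * c for c in p))      # exactly 1: the point lies on S^2
```

Because every step is a Fraction operation, the output satisfies the sphere equation exactly, with no floating-point rounding; controlling the denominators of u is what controls the bit-size of the resulting point.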
