27 research outputs found

    Direct symbolic transformation from 3D cartesian into hyperboloidal coordinates

    Get PDF
    A direct transformation from Cartesian coordinates into hyperboloidal coordinates (considered for biaxial hyperboloids) is presented in this paper. The transformation problem is reduced to finding the smallest positive root of a fourth-degree polynomial. The analysis of the polynomial's roots is performed by an algebraically complete stratification, based on symbolic techniques (mainly Sturm–Habicht sequences and their properties related to real root counting), of a planar region situated in the positive quadrant. Two approaches for computing the polynomial's roots are presented, one based on the Merriman method and the other obtained using the Computer Algebra System Maple. Our approach improves on the solution presented in Feltens (2011) [1], being reduced to a few evaluations of symbolic expressions.
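The root-selection step described above, picking the smallest positive root of a quartic, can be sketched generically in Python. This is a minimal illustration; the example polynomial below is arbitrary and is not the paper's actual coefficient formulas:

```python
import numpy as np

def smallest_positive_root(coeffs, tol=1e-12):
    """Return the smallest positive real root of a polynomial.

    `coeffs` are the polynomial coefficients in descending order,
    as accepted by numpy.roots.
    """
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < tol].real   # keep (numerically) real roots
    positive = real[real > tol]                   # keep strictly positive ones
    if positive.size == 0:
        raise ValueError("no positive real root")
    return positive.min()

# Example: t^4 - 5 t^2 + 4 = (t - 1)(t + 1)(t - 2)(t + 2);
# the smallest positive root is 1.
r = smallest_positive_root([1.0, 0.0, -5.0, 0.0, 4.0])
```

In the paper's setting the quartic coefficients would be symbolic expressions in the Cartesian point and the hyperboloid parameters; the stratification then decides, per region, which root formula applies.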

    The global nonlinear stability of Minkowski space. Einstein equations, f(R)-modified gravity, and Klein-Gordon fields

    Full text link
    We study the initial value problem for two fundamental theories of gravity, that is, Einstein's field equations of general relativity and the (fourth-order) field equations of f(R) modified gravity. For both of these physical theories, we investigate the global dynamics of a self-gravitating massive matter field when an initial data set is prescribed on an asymptotically flat and spacelike hypersurface, provided these data are sufficiently close to data in Minkowski spacetime. Under such conditions, we thus establish the global nonlinear stability of Minkowski spacetime in the presence of massive matter. In addition, we provide a rigorous mathematical validation of the f(R) theory based on analyzing a singular limit problem, when the function f(R) arising in the generalized Hilbert-Einstein functional approaches the scalar curvature function R of the standard Hilbert-Einstein functional. In this limit we prove that f(R) Cauchy developments converge to Einstein's Cauchy developments in the regime close to Minkowski space. Our proofs rely on a new strategy, introduced here and referred to as the Euclidian-Hyperboloidal Foliation Method (EHFM). This is a major extension of the Hyperboloidal Foliation Method (HFM), which we used earlier for the Einstein-massive field system but for a restricted class of initial data. Here, the data are solely assumed to satisfy an asymptotic flatness condition and be small in a weighted energy norm. These results for matter spacetimes provide a significant extension to the existing stability theory for vacuum spacetimes, developed by Christodoulou and Klainerman and revisited by Lindblad and Rodnianski. Comment: 127 pages. Selected chapters from a book.
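The singular limit mentioned above can be stated schematically. With one standard choice of deviation (the quadratic expansion below is an assumed illustrative form, controlled by a small parameter κ, not necessarily the paper's exact hypotheses), the generalized and standard actions read:

```latex
S_f[g] = \int_M f(R_g)\, dV_g,
\qquad
f(R) = R + \kappa\, R^2 + \mathcal{O}(\kappa^2),
\qquad
S_{EH}[g] = \int_M R_g\, dV_g,
```

and the convergence result says that Cauchy developments of the critical points of $S_f$ approach those of $S_{EH}$ as $\kappa \to 0^+$, in the near-Minkowski regime.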

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)

    Full text link
    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as its size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances.
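The triangulation of back-projected rays can be illustrated with the standard midpoint method: find the closest points on the two rays and return their midpoint. This is a generic sketch, not the authors' implementation; ray origins and directions are assumed to be given in a common frame:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays.

    o1, o2 : ray origins; d1, d2 : ray directions (need not be unit length).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2                # cosine of the angle between the rays
    denom = 1.0 - c * c        # vanishes only for parallel rays
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    t = (c * (d1 @ b) - d2 @ b) / denom
    s = (d1 @ b) + t * c
    p1 = o1 + s * d1           # closest point on ray 1
    p2 = o2 + t * d2           # closest point on ray 2
    return 0.5 * (p1 + p2)

# Two coplanar rays meeting at (1, 1, 0):
p = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]),
                         np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.0]))
```

In an omnistereo setup, o1 and o2 would be the two mirror viewpoints and d1, d2 the back-projected directions of a matched panoramic correspondence; the residual distance between p1 and p2 is a natural input to the uncertainty model.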

    Implementation of standard testbeds for numerical relativity

    Get PDF
    We discuss results obtained from the implementation of the initial round of testbeds for numerical relativity proposed in the first paper of the Apples with Apples Alliance. We present benchmark results for various codes, which provide templates for analyzing the testbeds and allow us to draw conclusions about various features of the codes. This allows us to sharpen the initial test specifications, design a new test, and add theoretical insight. Comment: Corrected version.

    Numerical Relativity in 3+1 Dimensions

    Get PDF

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    Full text link
    We explore low-cost solutions for efficiently improving the 3D pose estimation of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as egomotion estimation via computer vision is called -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the solutions proposed in this thesis. To achieve the portability goal with a single off-the-shelf camera, we have taken two approaches. The first, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack due to their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is possible for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy over the baseline method (i.e., using only grayscale or color information), with photometric error minimization at the heart of the "direct" tracking algorithm. Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges we attempted to solve have not previously been considered with the level of detail needed for successfully performing VO with a single camera, as the ultimate goal in both real-life and simulated scenes.
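The multichannel photometric objective at the core of direct tracking can be sketched as follows. This is a minimal illustration, assuming a user-supplied warp function and nearest-neighbour sampling; the thesis's actual pipeline (interpolation, robust weighting, coarse-to-fine optimization) is more elaborate:

```python
import numpy as np

def photometric_error(ref, cur, warp, pixels):
    """Sum of squared multichannel intensity differences.

    ref, cur : H x W x C images (C channels, e.g. grayscale plus gradients).
    warp     : function mapping a reference pixel (u, v) to (u', v') in `cur`,
               parameterized by the camera pose being optimized.
    pixels   : iterable of (u, v) reference pixels to evaluate.
    """
    total = 0.0
    for u, v in pixels:
        up, vp = warp(u, v)
        ui, vi = int(round(up)), int(round(vp))  # nearest-neighbour for brevity
        r = ref[v, u].astype(float) - cur[vi, ui].astype(float)
        total += float(r @ r)                    # accumulate over all channels
    return total

# Sanity check: the identity warp on identical images yields zero error.
img = np.random.rand(8, 8, 3)
err = photometric_error(img, img, lambda u, v: (u, v), [(2, 3), (5, 1)])
```

A direct tracker minimizes this error over the pose parameters inside `warp`; the "multichannel" gain comes from stacking extra channels into the residual rather than changing the optimizer.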

    On a tensor-based finite element model for the analysis of shell structures

    Get PDF
    In the present study, we propose a computational model for the linear and nonlinear analysis of shell structures. We consider a tensor-based finite element formulation which describes the mathematical shell model in a natural and simple way by using curvilinear coordinates. To avoid membrane and shear locking we develop a family of high-order elements with Lagrangian interpolations. The approach is first applied to linear deformations based on a novel and consistent third-order shear deformation shell theory for bending of composite shells. No simplification other than the assumption of linear elastic material is made in the computation of stress resultants and material stiffness coefficients. They are integrated numerically without any approximation in the shifter. Therefore, the formulation is valid for thin and thick shells. A conforming high-order element was derived with C0 continuity across the element boundaries. Next, we extend the formulation to the geometrically nonlinear analysis of multilayered composites and functionally graded shells. Again, Lagrangian elements with high-order interpolation polynomials are employed. The flexibility of these elements mitigates any locking problems. A first-order shell theory with seven parameters is derived with exact nonlinear deformations and under the framework of the Lagrangian description. This approach takes into account thickness changes and, therefore, 3D constitutive equations are utilized. Finally, extensive numerical simulations and comparisons of the present results with those found in the literature for typical benchmark problems, involving isotropic and laminated composites as well as functionally graded shells, show excellent agreement and confirm the validity of the developed finite element model. Moreover, the simplicity of this approach makes it attractive for future applications in different topics of research, such as contact mechanics, damage propagation and viscoelastic behavior of shells.
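The high-order Lagrangian interpolation used above to mitigate locking can be illustrated in one dimension. This is a generic sketch; the node locations and polynomial order are illustrative and not tied to the paper's elements:

```python
import numpy as np

def lagrange_shape_functions(nodes, x):
    """Evaluate the 1D Lagrange shape functions N_i(x) at point x.

    N_i equals 1 at node i and 0 at every other node, so the functions
    interpolate nodal values exactly and sum to 1 (partition of unity).
    """
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    N = np.ones(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                N[i] *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return N

# Quadratic element on [-1, 1]: at the middle node only N_1 is active.
N = lagrange_shape_functions([-1.0, 0.0, 1.0], 0.0)
```

Shell elements of the kind described use tensor products of such bases on the reference surface; raising the polynomial order enriches the in-plane and transverse fields together, which is what alleviates membrane and shear locking without reduced integration.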

    Method for the automatic registration of images based on distance-dependent planar projective transformations, oriented to images without common features

    Get PDF
    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática, defended 18-12-2015. Multisensory data fusion oriented to image-based applications improves the accuracy, quality and availability of the data, and consequently the performance of robotic systems, by combining the information of a scene acquired from multiple and different sources into a unified representation of the 3D world scene, which is more enlightening and enriching for the subsequent image processing, improving either the reliability by using the redundant information, or the capability by taking advantage of complementary information. Image registration is one of the most relevant steps in image fusion techniques. This procedure aims at the geometric alignment of two or more images. Normally, this process relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. For instance, in the combination of ToF and RGB cameras, robust feature matching is not reliable. Typically, the fusion of these two sensors has been addressed by computing the cameras' calibration parameters for coordinate transformation between them. As a result, a low-resolution colour depth map is provided. To improve the resolution of these maps and reduce the loss of colour information, extrapolation techniques are adopted. A crucial issue for computing high-quality and accurate dense maps is the presence of noise in the depth measurement from the ToF camera, which is normally reduced by means of sensor calibration and filtering techniques.
    However, the filtering methods implemented for data extrapolation and denoising usually over-smooth the data, consequently reducing the accuracy of the registration procedure...