38 research outputs found

    Exact from-region visibility culling

    To pre-process a scene for the purpose of visibility culling during walkthroughs, it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exact solution for general 3D scenes has often been regarded as computationally intractable. Our exact algorithm for finding the visible polygons in a scene from a region is a computationally tractable pre-process that can handle scenes on the order of millions of polygons. The essence of our idea is to represent 3D polygons and the stabbing lines connecting them in a 5D Euclidean space derived from PlĂŒcker space, and then to perform geometric subtractions of occluded lines from the set of potential stabbing lines. We have built a query architecture around this algorithm that allows for its practical application to large scenes. We have tested the algorithm on two different types of scene: despite a large constant computational overhead, it is highly scalable, with a time dependency close to linear in the output produced.
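
    As a rough illustration of the PlĂŒcker representation this approach relies on (not the authors' 5D embedding or their polytope subtraction), the Python sketch below maps a directed 3D line through two points to its PlĂŒcker coordinates and evaluates the permuted inner product whose sign tells on which side two lines pass one another; the function names and sample points are illustrative assumptions.

        import numpy as np

        def plucker(p, q):
            """PlĂŒcker coordinates (direction : moment) of the line through p -> q."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            return np.concatenate([q - p, np.cross(p, q)])

        def side(a, b):
            """Permuted inner product; zero means the two lines meet (or are coplanar),
            a non-zero sign says on which side they pass one another."""
            return np.dot(a[:3], b[3:]) + np.dot(a[3:], b[:3])

        l1 = plucker([0, 0, 0], [1, 0, 0])
        l2 = plucker([0, 1, 1], [0, 2, 1])
        print(side(l1, l2))   # -1.0: the two lines are skew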

    A Low Dimensional Framework for Exact Polygon-to-Polygon Occlusion Queries

    Despite the importance of from-region visibility computation in computer graphics, efficient analytic methods are still lacking in the general 3D case. Recently, different algorithms have appeared that maintain occlusion as a complex of polytopes in PlĂŒcker space. However, they suffer from high implementation complexity, as well as high computational and memory costs, limiting their usefulness in practice. In this paper, we present a new algorithm that simplifies implementation and computation by operating only on the skeletons of the polyhedra instead of the multi-dimensional face lattice usually used for exact occlusion queries in 3D. This algorithm is sensitive to the complexity of the silhouette of each occluding object, rather than its entire polygonal mesh. An intelligent feedback mechanism is presented that greatly enhances early termination by searching for apertures between the query polygons. We demonstrate that our technique is several times faster than the state of the art.
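
    The sketch below (with assumed helper names) illustrates only the kind of PlĂŒcker side test such occlusion queries build on: it checks whether a single sampled candidate stabbing line between the query polygons is blocked by one convex occluder. It is not the paper's skeleton-based algorithm, which reasons about whole sets of stabbing lines in PlĂŒcker space rather than about sampled segments.

        import numpy as np

        def plucker(p, q):
            p, q = np.asarray(p, float), np.asarray(q, float)
            return np.concatenate([q - p, np.cross(p, q)])

        def side(a, b):
            return np.dot(a[:3], b[3:]) + np.dot(a[3:], b[:3])

        def segment_blocked(s0, s1, occluder):
            """Does the segment s0 -> s1 pass through the interior of a convex,
            consistently (CCW) ordered occluder polygon?"""
            L = plucker(s0, s1)
            signs = [side(L, plucker(occluder[i], occluder[(i + 1) % len(occluder)]))
                     for i in range(len(occluder))]
            if not (all(s > 0 for s in signs) or all(s < 0 for s in signs)):
                return False              # supporting line misses the occluder interior
            n = np.cross(np.subtract(occluder[1], occluder[0]),
                         np.subtract(occluder[2], occluder[0]))
            d0 = np.dot(np.subtract(s0, occluder[0]), n)
            d1 = np.dot(np.subtract(s1, occluder[0]), n)
            return d0 * d1 < 0            # crossing lies between the endpoints

        square = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
        print(segment_blocked([0.2, 0.2, -1], [0.2, 0.2, 1], square))   # True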

    Shadow Computations using Robust Epsilon Visibility

    Analytic visibility algorithms, for example methods which compute a subdivided mesh to represent shadows, are notoriously non-robust and hard to use in practice. We present a new method based on a generalized definition of extremal stabbing lines, which are the extremities of shadow boundaries. We treat scenes containing multiple edges or vertices in degenerate configurations (e.g., collinear or coplanar). We introduce a robust epsilon method to determine whether each generalized extremal stabbing line is blocked, or is touched by these scene elements and thus added to the line's generators. We develop robust blocker predicates for polygons which are smaller than epsilon. For larger epsilon values, small shadow features merge and eventually disappear. We can thus robustly connect generalized extremal stabbing lines in degenerate scenes to form shadow boundaries. We show that our approach is consistent, and that shadow boundary connectivity is preserved when features merge. We have implemented our algorithm and show that we can robustly compute analytic shadow boundaries, to the precision of our chosen epsilon threshold, for non-trivial models containing numerous degeneracies.
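
    A minimal sketch of the kind of epsilon predicate described here, under an assumed name and an arbitrary tolerance: a scene edge is taken to touch a candidate extremal stabbing line when the distance between the two lines drops below epsilon. For brevity the test uses the edge's infinite supporting line; the generalized extremal stabbing lines and the bookkeeping of their generators are not reproduced.

        import numpy as np

        EPS = 1e-4   # tolerance; the threshold value is an assumption for this sketch

        def line_line_distance(p1, d1, p2, d2):
            """Distance between two infinite 3D lines (point p, direction d each)."""
            n = np.cross(d1, d2)
            if np.linalg.norm(n) < 1e-12:   # (nearly) parallel lines
                return np.linalg.norm(np.cross(p2 - p1, d1)) / np.linalg.norm(d1)
            return abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)

        def edge_touches(line_pt, line_dir, e0, e1, eps=EPS):
            """Epsilon predicate: does the edge e0 -> e1 touch the candidate line?"""
            e0, e1 = np.asarray(e0, float), np.asarray(e1, float)
            return line_line_distance(np.asarray(line_pt, float),
                                      np.asarray(line_dir, float),
                                      e0, e1 - e0) < eps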

    Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation

    In recent years, underwater imaging has gained a lot of popularity, partly due to the availability of off-the-shelf consumer cameras, but also due to a growing interest in the ocean floor by science and industry. Apart from capturing single images or sequences, the application of methods from the area of computer vision has gained interest as well. However, water affects image formation in two major ways. First, while traveling through the water, light is attenuated and scattered depending on its wavelength, causing the typical strong green or blue hue in underwater images. Second, cameras used in underwater scenarios need to be confined in an underwater housing, viewing the scene through a flat or dome-shaped glass port. The inside of the housing is filled with air. Consequently, light entering the housing needs to pass a water-glass interface and then a glass-air interface, and is thus refracted twice, which affects underwater image formation geometrically. In classic Structure-from-Motion (SfM) approaches, the perspective camera model is usually assumed; however, it can be shown that this model becomes invalid due to refraction in underwater scenarios. Therefore, this thesis proposes an adaptation of the SfM algorithm to underwater image formation with flat-port underwater housings, i.e. it introduces a method in which refraction at the underwater housing is modeled explicitly. This includes a calibration approach, algorithms for relative and absolute pose estimation, an efficient, non-linear error function that is utilized in bundle adjustment, and a refractive plane sweep algorithm. Finally, if calibration data for an underwater light propagation model exists, the dense depth maps can be used to correct texture colors. Experiments with a perspective and the proposed refractive approach to 3D reconstruction revealed that the perspective approach does indeed suffer from a systematic model error that depends on the distance between camera and glass and on a possible tilt of the glass with respect to the image sensor. The proposed method shows no such systematic error and thus provides more accurate results for underwater image sequences.
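
    As a hedged sketch of the double refraction modeled here (not the thesis's calibration, error function, or plane-sweep machinery), the snippet below applies the vector form of Snell's law twice to a camera ray leaving a flat-port housing: first from air into glass, then from glass into water. The refractive indices, the interface normal, and the sample ray are nominal example values.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at an interface with unit normal n (pointing
            towards the incident side), going from index n1 into index n2.
            Returns None on total internal reflection."""
            d, n = np.asarray(d, float), np.asarray(n, float)
            eta = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = eta * eta * (1.0 - cos_i * cos_i)
            if sin2_t > 1.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

        n_air, n_glass, n_water = 1.0, 1.5, 1.33            # nominal indices
        normal = np.array([0.0, 0.0, -1.0])                 # flat port, facing the camera
        ray = np.array([0.3, 0.0, 1.0]); ray /= np.linalg.norm(ray)
        in_glass = refract(ray, normal, n_air, n_glass)         # bent at the air-glass interface
        in_water = refract(in_glass, normal, n_glass, n_water)  # and again at glass-water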

    Compression, pose tracking, and halftoning

    In this thesis, we discuss image compression, pose tracking, and halftoning. Although these areas seem to be unrelated at first glance, they can be connected through video coding as an application scenario. Our first contribution is an image compression algorithm based on a rectangular subdivision scheme which stores only a small subset of the image points. From these points, the remainder of the image is reconstructed using partial differential equations. Afterwards, we present a pose tracking algorithm that is able to follow the 3D position and orientation of multiple objects simultaneously. The algorithm can deal with noisy sequences and naturally handles both occlusions between different objects and occlusions occurring within kinematic chains. Our third contribution is a halftoning algorithm based on electrostatic principles, which can easily be adjusted to different settings through a number of extensions; examples include modifications to handle varying dot sizes or hatching. In the final part of the thesis, we show how to combine our image compression, pose tracking, and halftoning algorithms into novel video compression codecs. In each of these four topics, our algorithms yield excellent results that outperform those of other state-of-the-art algorithms.
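
    A minimal sketch of the PDE-based reconstruction idea described above, assuming homogeneous diffusion (Laplace) inpainting: the stored pixels act as Dirichlet data and the remaining pixels are filled in by iterative averaging. The thesis's rectangular subdivision scheme and its actual diffusion operators are not reproduced, and the wrap-around boundary handling below is a simplification.

        import numpy as np

        def diffusion_inpaint(known_mask, known_values, iters=2000):
            """Reconstruct an image from a sparse set of stored pixels by iterating
            towards the solution of the Laplace equation on the unknown pixels."""
            u = np.where(known_mask, known_values, known_values[known_mask].mean())
            for _ in range(iters):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(known_mask, known_values, avg)   # stored pixels stay fixed
            return u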

    Perception de la géométrie de l'environnement pour la navigation autonome

    The goal of mobile robotics research is to give robots the capability to accomplish missions in an environment that is not perfectly known. A mission consists of executing a number of elementary actions (movement, manipulation of objects, ...) and requires accurate localisation as well as the construction of a good geometric model of the environment, built by exploiting the robot's own sensors, external sensors, information coming from other robots, and existing models, for example from a Geographic Information System. The common information is the geometry of the environment. The first part of the manuscript covers the different methods for extracting geometric information. The second part presents the construction of a geometric model using a graph, along with a method to extract information from the graph and allow the robot to localise itself in the environment.

    High-level environment representations for mobile robots

    In most robotic applications we are faced with the problem of building a digital representation of the environment that allows the robot to autonomously complete its tasks. This internal representation can be used by the robot to plan a motion trajectory for its mobile base and/or end-effector. For most man-made environments we do not have a digital representation, or it is inaccurate, so the robot must be able to build it autonomously by integrating incoming sensor measurements into an internal data structure. For this purpose, a common solution consists in solving the Simultaneous Localization and Mapping (SLAM) problem. The map obtained by solving a SLAM problem is called "metric" and describes the geometric structure of the environment. A metric map is typically made up of low-level primitives (such as points or voxels). This means that even though it represents the shape of the objects in the robot's workspace, it lacks the information of which object a surface belongs to. Having an object-level representation of the environment has the advantage of augmenting the set of possible tasks that a robot may accomplish. To this end, in this thesis we focus on two aspects. First, we propose a formalism to represent in a uniform manner 3D scenes consisting of different geometric primitives, including points, lines and planes, and we derive a local registration and a global optimization algorithm that can exploit this representation for robust estimation. Second, we present a Semantic Mapping system capable of building an object-based map that can be used for complex task planning and execution. Our system exploits effective reconstruction and recognition techniques that require no a priori information about the environment and can be used under general conditions.
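
    To make the idea of handling mixed primitives in one registration objective concrete, here is a hedged sketch that combines point-to-point, point-to-line, and point-to-plane residuals under a single rigid transform. The names and residual choices are generic assumptions for illustration, not the thesis's actual formalism or its global optimization.

        import numpy as np

        def point_residual(R, t, p, q):
            return R @ p + t - q                                   # 3-vector error

        def line_residual(R, t, p, line_pt, line_dir):
            d = R @ p + t - line_pt                                # line_dir assumed unit length
            return d - np.dot(d, line_dir) * line_dir              # reject the along-line part

        def plane_residual(R, t, p, plane_pt, plane_n):
            return np.dot(R @ p + t - plane_pt, plane_n)           # signed distance to the plane

        def total_cost(R, t, matches):
            """matches: list of (kind, source_point, target) tuples, where target is the
            matched point, (line_pt, line_dir), or (plane_pt, plane_n)."""
            cost = 0.0
            for kind, p, target in matches:
                if kind == "point":
                    r = point_residual(R, t, p, target)
                elif kind == "line":
                    r = line_residual(R, t, p, *target)
                else:
                    r = plane_residual(R, t, p, *target)
                r = np.atleast_1d(r)
                cost += float(r @ r)
            return cost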

    Interactive volume ray tracing

    Volume rendering is one of the most demanding and interesting topics in scientific visualization. In contrast to surface models, volumetric data represent a semi-transparent medium in a 3D field. Applications include medical examinations, simulation of physical processes, and visual art. Most of these applications demand interactivity with respect to the viewing and visualization parameters. The ray tracing algorithm, although it inherently simulates light interaction with participating media, was always considered too slow. Instead, most researchers followed object-order algorithms better suited to graphics adapters, although such approaches often suffer from either low quality or a lack of flexibility. Another alternative is to speed up the ray tracing algorithm to make it competitive for volumetric visualization tasks. Since the advent of modern graphics adapters, research in this area had somewhat slowed, although some limitations of GPUs, e.g. limited graphics board memory and a tedious programming model, are still a problem. The two methods discussed in this thesis are therefore purely software-based, since it is believed that software implementations allow for a far better optimization process before porting algorithms to hardware. The first method is called the implicit kd-tree, a hierarchical spatial acceleration structure originally developed for iso-surface rendering of regular data sets; it now also supports semi-transparent rendering and time-dependent data visualization, and has been used in non-volume-rendering applications as well. The second algorithm uses so-called PlĂŒcker coordinates, providing a fast incremental traversal for data sets consisting of tetrahedral or hexahedral primitives. Both algorithms are highly optimized to support interactive rendering of volumetric data sets and are therefore major contributions towards a flexible and interactive volume ray tracing framework.
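
    As a hedged, one-dimensional illustration of the min/max culling behind the implicit kd-tree (the actual structure is laid out implicitly over a regular 3D grid and traversed per ray in front-to-back order), the sketch below stores each subtree's scalar value range and skips any subtree that cannot contain the iso-value.

        def build(cells, lo, hi):
            """Node over cells[lo:hi); each cell is its (min, max) scalar value."""
            if hi - lo == 1:
                vmin, vmax = cells[lo]
                return {"lo": lo, "min": vmin, "max": vmax}
            mid = (lo + hi) // 2
            left, right = build(cells, lo, mid), build(cells, mid, hi)
            return {"left": left, "right": right,
                    "min": min(left["min"], right["min"]),
                    "max": max(left["max"], right["max"])}

        def visit(node, iso, out):
            """Descend only into subtrees whose value range can contain the iso-value."""
            if iso < node["min"] or iso > node["max"]:
                return                                   # whole subtree culled
            if "left" not in node:
                out.append(node["lo"])                   # leaf cell may hold the iso-surface
                return
            visit(node["left"], iso, out)
            visit(node["right"], iso, out)

        cells = [(0.1, 0.4), (0.3, 0.9), (0.8, 1.2), (1.1, 1.5)]   # per-cell value ranges
        hits = []
        visit(build(cells, 0, len(cells)), 1.0, hits)
        print(hits)   # [2] -- only the third cell can contain the iso-value 1.0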