8 research outputs found

    A Cloud Based Disaster Management System

    The combination of wireless sensor networks (WSNs) and 3D virtual environments opens a new paradigm for their use in natural disaster management applications. It is important to have a realistic virtual environment based on datasets received from WSNs to prepare a backup rescue scenario with an acceptable response time. This paper describes a complete cloud-based system that collects data from wireless sensor nodes deployed in real environments and then builds a 3D environment in near real-time to reflect the incident detected by the sensors (fire, gas leak, etc.). The system is intended as a training environment in which a rescue team can develop various rescue plans before applying them in real emergency situations. The proposed cloud architecture combines 3D data streaming and sensor data collection to build an efficient network infrastructure that meets the strict network latency requirements of 3D mobile disaster applications. Compared with other existing systems, the proposed system covers the complete pipeline. First, it collects data from sensor nodes and transfers it using an enhanced Routing Protocol for Low-Power and Lossy Networks (RPL). A modular 3D visualizer with a dynamic game engine was also developed in the cloud for near real-time 3D rendering, which benefits highly complex rendering algorithms and less powerful devices. An Extensible Markup Language (XML) atomic action concept is used to inject 3D scene modifications into the game engine without stopping or restarting it. Finally, a multi-objective multiple traveling salesman problem algorithm (AHP-MTSP) is proposed to generate an efficient rescue plan by assigning robots and multiple unmanned aerial vehicles to disaster target locations while minimizing a set of predefined objectives that depend on the situation. The results demonstrate that the immediate feedback obtained from the reconstructed 3D environment can help investigate what-if scenarios, allowing effective rescue plans to be prepared with appropriate management effort.
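    The abstract does not give the paper's XML schema or engine API, so the following is only an illustrative sketch of the atomic-action idea it describes: a small XML fragment encodes one scene modification, and a running scene object applies it in place without restarting the engine. All element names, attributes, and the Scene class are assumptions made for this example.

# Illustrative sketch only: the paper's actual XML schema and engine interface are not
# given in the abstract, so the element names ("action", "object", attributes) and the
# Scene class below are assumptions chosen to show the idea of injecting atomic
# scene modifications into a running engine.
import xml.etree.ElementTree as ET

class Scene:
    """Minimal stand-in for a running game-engine scene graph."""
    def __init__(self):
        self.objects = {}  # object id -> attribute dict

    def apply_atomic_action(self, action_xml: str) -> None:
        """Parse one atomic action and mutate the live scene in place."""
        action = ET.fromstring(action_xml)
        obj_id = action.get("object")
        if action.get("type") == "add":
            self.objects[obj_id] = dict(action.attrib)
        elif action.get("type") == "update":
            self.objects.setdefault(obj_id, {}).update(action.attrib)
        elif action.get("type") == "remove":
            self.objects.pop(obj_id, None)

# Example: a sensor reports a fire, and the visualizer injects a fire object
# into the running scene without stopping the engine.
scene = Scene()
scene.apply_atomic_action('<action type="add" object="fire_01" x="12.5" y="3.0" intensity="0.8"/>')
print(scene.objects)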

    3D Mesh Simplification Techniques for Enhanced Image Based Rendering

    Three-dimensional videos and virtual reality applications have gained popularity in recent years. Virtual reality creates the feeling of 'being there' and provides a more realistic experience than conventional 2D media. To achieve this immersive experience, it is important to satisfy two criteria: the visual quality of the video and timely rendering. However, it is quite impractical to satisfy both goals, especially on low-capability devices such as mobile phones. Careful analysis and further processing of the depth map can help considerably in achieving these goals. Advances in graphics hardware have greatly reduced the time required to render the images to be displayed. Alongside this development, however, the demand for more realism tends to increase the complexity of the virtual environment model. Complex models require millions of primitives, which in turn means millions of polygons to represent them. Careful selection of the rendering technique offers one way to reduce the rendering cost. Mesh-based rendering is one technique that renders faster than its counterpart, pixel-based rendering. However, due to the demand for a richer experience, the number of polygons required always seems to exceed the number of polygons the graphics hardware can efficiently render. In practice, it is not feasible to store a large number of polygons because of the storage limitations of mobile phone hardware. Furthermore, a larger number of polygons increases the rendering time, which would necessitate more powerful devices. Mesh simplification techniques offer a solution for dealing with complex models. These methods simplify unimportant and redundant parts of the model, which helps reduce the rendering cost without negatively affecting the visual quality of the scene. Mesh simplification has been studied extensively; however, it has not been applied in all areas. For example, depth maps are one area where generally available simplification methods are not well suited, as most of them do not handle depth discontinuities well. Moreover, some state-of-the-art methods cannot handle high-resolution depth maps. This thesis addresses the problem of combining depth maps with mesh simplification. The aim of the thesis is to reduce the computational cost of rendering by taking the homogeneous and planar areas of the depth map into account while still maintaining suitable visual quality of the rendered image. Different depth decimation techniques are implemented and compared with the available state-of-the-art methods. We demonstrate that the depth decimation technique that fits planes to depth areas and respects depth discontinuities clearly outperforms the state-of-the-art methods.
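    As a rough illustration of the plane-fitting idea described above, and not the thesis' actual method, the sketch below fits a plane to each block of a depth map and treats the block as simplifiable only when the fit error stays below a threshold, so that blocks containing depth discontinuities are kept at full resolution. The block size and error threshold are arbitrary choices for the example.

# A minimal sketch (not the thesis' implementation) of plane-fit depth decimation:
# blocks of the depth map that are well approximated by a single plane can be
# replaced by two triangles, while blocks with depth discontinuities are kept.
import numpy as np

def is_planar_block(depth_block: np.ndarray, max_error: float = 0.5) -> bool:
    """Fit z = a*x + b*y + c to the block and test the residual against a threshold."""
    h, w = depth_block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = depth_block.ravel()
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.abs(A @ coeffs - z).max()
    return residual < max_error

def decimate_depth(depth: np.ndarray, block: int = 16):
    """Return a list of (y, x, planar?) decisions, one per block."""
    decisions = []
    for y in range(0, depth.shape[0] - block + 1, block):
        for x in range(0, depth.shape[1] - block + 1, block):
            decisions.append((y, x, is_planar_block(depth[y:y + block, x:x + block])))
    return decisions

# Example: a ramp is planar and could be simplified, a depth edge is not.
ramp = np.tile(np.linspace(0, 10, 16), (16, 1))
step = np.hstack([np.zeros((16, 8)), np.full((16, 8), 100.0)])
print(is_planar_block(ramp), is_planar_block(step))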

    Real-Time Virtual Viewpoint Generation on the GPU for Scene Navigation


    Raum-Zeit Interpolationstechniken (Space-Time Interpolation Techniques)

    The photo-realistic modeling and animation of complex 3D scenes requires a great deal of work and skill from artists, even with modern acquisition techniques. This is especially true if the rendering should additionally be performed in real-time. In this thesis we follow a different direction in computer graphics and generate photo-realistic results from recorded video sequences of one or multiple cameras. We propose several methods to handle scenes showing natural phenomena as well as multi-view footage of general complex 3D scenes. In contrast to other approaches, we make use of relaxed geometric constraints and focus especially on the image properties that are important for creating perceptually plausible in-between images. The results are novel photo-realistic video sequences that are rendered in real-time and allow for interactive manipulation or for interactively exploring novel viewpoints and time points.
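    The thesis' interpolation techniques are not spelled out in the abstract; as a minimal sketch of the general idea of computing a perceptually plausible in-between image, the code below shifts pixel coordinates by a fraction t of a given dense correspondence field and cross-blends the two frames. The correspondence field, the nearest-neighbour sampling, and the toy example are assumptions made purely for illustration.

# A minimal sketch, under simplifying assumptions, of generating an in-between image
# from two frames and a dense correspondence (flow) field. The thesis' actual methods
# are more involved; here we simply shift coordinates by a fraction t of the flow
# and cross-blend the two frames.
import numpy as np

def in_between(img_a: np.ndarray, img_b: np.ndarray, flow: np.ndarray, t: float) -> np.ndarray:
    """flow[y, x] = (dy, dx) mapping a pixel of img_a to img_b; 0 <= t <= 1."""
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward-sample img_a at positions shifted by -t*flow and img_b at +(1-t)*flow
    # (nearest-neighbour sampling keeps the example short).
    ya = np.clip(np.rint(ys - t * flow[..., 0]), 0, h - 1).astype(int)
    xa = np.clip(np.rint(xs - t * flow[..., 1]), 0, w - 1).astype(int)
    yb = np.clip(np.rint(ys + (1 - t) * flow[..., 0]), 0, h - 1).astype(int)
    xb = np.clip(np.rint(xs + (1 - t) * flow[..., 1]), 0, w - 1).astype(int)
    return (1 - t) * img_a[ya, xa] + t * img_b[yb, xb]

# Example: a bright square moving 4 pixels to the right between two frames.
a = np.zeros((32, 32)); a[10:20, 10:20] = 1.0
b = np.zeros((32, 32)); b[10:20, 14:24] = 1.0
flow = np.zeros((32, 32, 2)); flow[..., 1] = 4.0   # uniform horizontal motion
mid = in_between(a, b, flow, 0.5)
print(mid.max(), mid[15, 16])  # the square appears roughly halfway between the two positions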

    Rendering from unstructured collections of images

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 157-163). By Christopher James Buehler.
    Computer graphics researchers recently have turned to image-based rendering to achieve the goal of photorealistic graphics. Instead of constructing a scene with millions of polygons, the scene is represented by a collection of photographs along with a greatly simplified geometric model. This simple representation allows traditional light transport simulations to be replaced with basic image-processing routines that combine multiple images together to produce never-before-seen images from new vantage points. This thesis presents a new image-based rendering algorithm called unstructured lumigraph rendering (ULR). ULR is an image-based rendering algorithm that is specifically designed to work with unstructured (i.e., irregularly arranged) collections of images. The algorithm is unique in that it is capable of using any amount of geometric or image information that is available about a scene. Specifically, the research in this thesis makes the following contributions:
    * An enumeration of image-based rendering properties that an ideal algorithm should attempt to satisfy. An algorithm that satisfies these properties should work as well as possible with any configuration of input images or geometric knowledge.
    * An optimal formulation of the basic image-based rendering problem, the solution to which is designed to satisfy the aforementioned properties.
    * The unstructured lumigraph rendering algorithm, which is an efficient approximation to the optimal image-based rendering solution.
    * A non-metric ULR algorithm, which generalizes the basic ULR algorithm to work with uncalibrated images.
    * A time-dependent ULR algorithm, which generalizes the basic ULR algorithm to work with time-dependent data.
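    The core of unstructured lumigraph rendering is a per-ray blending of nearby input cameras. The sketch below illustrates one simplified variant of that idea, weighting cameras by their angular deviation from the desired ray and letting the weight vanish at the k-th nearest camera; it omits the resolution and field-of-view penalties and is not the thesis' exact formulation. The camera layout in the example is invented.

# A hedged sketch of ULR-style camera blending for a single desired ray through a
# point on the geometric proxy. Only the angular penalty is used; the full algorithm
# also considers resolution and field-of-view penalties.
import numpy as np

def ulr_weights(proxy_point, desired_center, camera_centers, k: int = 4) -> np.ndarray:
    """Return one blending weight per input camera for a single desired ray."""
    def angle(c):
        a = desired_center - proxy_point
        b = c - proxy_point
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    penalties = np.array([angle(c) for c in camera_centers])
    order = np.argsort(penalties)
    thresh = penalties[order[min(k, len(penalties)) - 1]]  # k-th smallest penalty
    weights = np.zeros_like(penalties)
    for i in order[:k]:
        # Weight falls off with the penalty and reaches zero at the threshold,
        # so cameras enter and leave the blend smoothly as the viewpoint moves.
        weights[i] = max(0.0, 1.0 - penalties[i] / thresh) if thresh > 0 else 1.0
    total = weights.sum()
    return weights / total if total > 0 else weights

# Example: three input cameras around a desired viewpoint looking at one proxy point.
point = np.array([0.0, 0.0, 5.0])
desired = np.array([0.0, 0.0, 0.0])
cams = [np.array([0.2, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0])]
print(ulr_weights(point, desired, cams, k=3))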

    Ein Beitrag zur Entwicklung von Methoden zur Stereoanalyse und Bildsynthese im Anwendungskontext der Videokommunikation (A Contribution to the Development of Methods for Stereo Analysis and Image Synthesis in the Application Context of Video Communication)

    This thesis contributes to the research area of stereo vision and view synthesis in the field of private video communication. During private video communication, eye contact between the participants is typically lost because the camera and the video window are placed differently. The goal of this thesis is to re-establish eye contact by synthesizing the view of a virtual camera that faces the participant. The thesis first sketches the positive effect of eye contact in video communication. An in-depth review of the mathematical foundations of stereo vision and view synthesis follows. On this foundation, the thesis comprehensively covers the state of the art of image-based rendering in general and of eye-gaze correction via 3D analysis and synthesis in particular. In the first step of the method development, the thesis establishes a model of quality factors which guides decisions about camera placement and the recording system. Measurements with respect to synchronization and data storage are presented. Local and global algorithms for stereo vision are analyzed and adapted. The thesis contributes to the field of stereo vision algorithms through the development and combination of different cost functions, consistency-based inpainting, spatial and temporal smoothing, and segmentation, all tailored to the use case of private video communication. Using the extracted disparity map, two approaches for view synthesis - trifocal transfer and 3D warping - are employed and extended. An important contribution of the thesis is a contour-based inpainting algorithm as well as point-based image smoothing techniques. Two comprehensive subjective studies confirm the assumption that eye contact can be re-established by the proposed system. They demonstrate the well-perceived eye contact as well as the significantly improved acceptance and perceived quality achieved by the developed methods compared with the initial situation. The thesis concludes with a discussion of the results and a qualitative comparison with the state of the art.
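    As a simplified illustration of the 3D-warping step used for gaze correction, and not the thesis' implementation, the sketch below forward-shifts pixels of the left view by a fraction of their disparity to render a virtual camera between the two real cameras, and reports the holes that a subsequent inpainting step (such as the contour-based method developed in the thesis) would have to fill. Image sizes, disparities, and the fraction alpha are invented for the example.

# A simplified sketch of disparity-based forward warping toward a virtual camera
# located between the two real cameras. Occlusion handling and inpainting are
# omitted; unfilled pixels are simply reported as holes.
import numpy as np

def warp_to_virtual_view(left: np.ndarray, disparity: np.ndarray, alpha: float = 0.5):
    """Forward-warp the left view toward a virtual camera at fraction alpha of the baseline."""
    h, w = left.shape[:2]
    virtual = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - alpha * disparity[y, x]))  # shift along the baseline
            if 0 <= xv < w:
                virtual[y, xv] = left[y, x]
                filled[y, xv] = True
    return virtual, ~filled  # the second array marks holes that would need inpainting

# Example: a small foreground patch (large disparity) over a background (small disparity).
img = np.full((8, 16), 0.2); img[2:6, 6:10] = 1.0
disp = np.full((8, 16), 1.0); disp[2:6, 6:10] = 4.0
virt, holes = warp_to_virtual_view(img, disp, alpha=0.5)
print(holes.sum(), "hole pixels to inpaint")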

    Image interpolation by joint view triangulation
