29 research outputs found

    An asynchronous method for cloud-based rendering

    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, these systems exhibit a number of shortcomings: high network bandwidth is required for higher resolutions, and input lag caused by network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline in which direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting, which is asynchronously updated using an object-space representation; this allows us to achieve interactive rates that are unconstrained by network performance for a wide range of display resolutions, and that are robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over the participating clients.
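    A minimal client-side sketch of this split, in Python with a background thread standing in for the network machinery; fetch_indirect_from_cloud and the per-patch cache layout are illustrative assumptions, not the paper's actual protocol:

```python
import threading
import time

# Hypothetical stand-ins: an object-space indirect-lighting cache
# keyed by patch, and a cloud request returning indirect radiance.
indirect_cache = {}
cache_lock = threading.Lock()

def fetch_indirect_from_cloud(patches):
    # placeholder for the network round-trip to the cloud renderer
    return {p: 0.1 for p in patches}

def cache_updater(patches, interval=0.5):
    # runs in the background: network latency delays cache updates
    # but never blocks the client's render loop
    while True:
        update = fetch_indirect_from_cloud(patches)
        with cache_lock:
            indirect_cache.update(update)
        time.sleep(interval)

def shade(patch, direct_light):
    # direct lighting is computed per frame on the client; indirect
    # lighting is read from the (possibly stale) asynchronous cache
    with cache_lock:
        return direct_light + indirect_cache.get(patch, 0.0)

patches = ["floor", "wall"]
threading.Thread(target=cache_updater, args=(patches,), daemon=True).start()
print(shade("floor", direct_light=0.6))
```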

    Real-time Global Illumination by Simulating Photon Mapping


    Model-based camera tracking for augmented reality

    Ankara: Department of Computer Engineering and Graduate School of Engineering and Science, Bilkent University, 2014. Thesis (Master's), Bilkent University, 2014. Includes bibliographical references (leaves 45-49). Aman, Aytek. M.S.

    Augmented reality (AR) is the enhancement of real scenes with virtual entities. It is used to enhance user experience and interaction in various ways. Educational applications, architectural visualizations, military training scenarios and pure entertainment-based applications are often enhanced by augmented reality to provide a more immersive and interactive experience for the users. With hand-held devices getting more powerful and cheaper, such applications are becoming very popular. To provide natural AR experiences, extrinsic camera parameters (position and rotation) must be calculated in an accurate, robust and efficient way so that virtual entities can be overlaid onto the real environment correctly. Estimating extrinsic camera parameters in real time is a challenging task. In most camera tracking frameworks, visual tracking serves as the main method for estimating the camera pose. In visual tracking systems, keypoint and edge features are often used for pose estimation. For richly textured environments, keypoint-based methods work quite well and are heavily used. Edge-based tracking, on the other hand, is preferable when the environment is rich in geometry but has little or no visible texture. Pose estimation in edge-based tracking systems generally depends on control points that are assigned on the model edges. For accurate tracking, the visibility of these control points must be determined correctly, and control-point visibility determination is a computationally expensive process. We propose a method to reduce the computational cost of edge-based tracking by preprocessing the visibility information of the control points. For that purpose, we use persistent control points, which are generated in world space during a preprocessing step. Additionally, we use a more accurate adaptive projection algorithm for the persistent control points to provide a more uniform control-point distribution in screen space. We test our camera tracker in different environments to show the effectiveness and performance of the proposed algorithm. The preprocessed visibility information enables constant-time calculation of control-point visibility while preserving the accuracy of the tracker. We demonstrate a sample AR application with user interaction to present our AR framework, which is developed for a commercially available and widely used game engine.
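    A hedged sketch of the preprocessing idea (Python; sample_viewpoints, is_visible and the nearest-view lookup are illustrative stand-ins, not the thesis' actual algorithm):

```python
import numpy as np

def sample_viewpoints(n, radius):
    # roughly uniform viewpoints on a sphere (Fibonacci lattice)
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def precompute_visibility(control_points, views, is_visible):
    # offline step: for every sampled view, store the set of visible
    # control points; is_visible() stands in for the occlusion test
    return [{i for i, cp in enumerate(control_points) if is_visible(cp, v)}
            for v in views]

def visible_points(camera_pos, views, tables):
    # runtime step: a constant-time (per-view) table lookup replaces the
    # expensive per-frame visibility computation
    nearest = int(np.argmin(np.linalg.norm(views - camera_pos, axis=1)))
    return tables[nearest]
```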

    Brain mapping with EEG signals


    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science is increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field.

    Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, it uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators, and it makes both standard and novel visualization approaches available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I finalize the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the used visualization techniques for the neuroscientific community. We exemplified these using clinically relevant scenarios.

    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structures and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on the improvement of structural and spatial perception in visualization -- the improvement of this interface. Unfortunately, visual improvements that use computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique as a post-processing step. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize local details and global, spatial relations in dense line and point data.
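    As a loose illustration of the screen-space paradigm mentioned above, the following Python sketch darkens pixels that lie behind their local depth neighborhood, a crude post-process that emphasizes fine structures; the kernel size and strength are invented for illustration and this is not the thesis' actual shading method:

```python
import numpy as np

def depth_enhance(color, depth, k=5, strength=50.0):
    # color: (H, W, 3) image in [0, 1]; depth: (H, W) linear depth buffer.
    # Pixels lying behind their local depth neighborhood are darkened,
    # which visually separates overlapping lines and points.
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    local_mean = np.zeros_like(depth)
    for dy in range(k):
        for dx in range(k):
            local_mean += padded[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
    local_mean /= k * k
    occlusion = np.clip((depth - local_mean) * strength, 0.0, 1.0)
    return color * (1.0 - occlusion)[..., None]
```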

    Dynamic data structures and saliency-influenced rendering

    With the increasing heterogeneity of modern hardware, different requirements for 3d applications arise. Although real-time rendering of photo-realistic images is possible with today's graphics cards, it still requires large computational effort, and smart-phones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts, which may distract a human spectator gazing at the display and thus reduce the visual quality of the presented scene. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods utilize geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels Of Detail (LODs), which are made available to the rendering system. The rendering system then reduces the detail of an object using the precalculated LODs, e.g. when it is moved into the back of the scene. The appropriate LOD is selected using a metric and replaces the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition; otherwise, the exchange introduces discontinuities, which are easily discovered by a human spectator. After completion of the transition, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human.
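    To make the discrete-LOD mechanics concrete, here is a minimal Python sketch of distance-based level selection with a short cross-fade during the exchange; the thresholds and transition timing are illustrative assumptions, not values from the thesis:

```python
def select_lod(distance, thresholds=(10.0, 30.0, 60.0)):
    # pick the discrete level for a given camera distance (0 = full detail)
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)

def blend_weights(elapsed, duration=0.25):
    # during a transition both LOD versions are drawn simultaneously;
    # returns (opacity of outgoing LOD, opacity of incoming LOD)
    a = min(max(elapsed / duration, 0.0), 1.0)
    return 1.0 - a, a
```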
    Humans are limited in their vision. These limitations range from the inability to distinguish colors under varying illumination to being able to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of an applied compression; popular vision-based compression methods are MPEG and JPEG. A JPEG compression, for example, exploits the reduced sensitivity of humans regarding color and encodes colors with a lower resolution. Other fields, such as auditory perception, also allow the exploitation of human limitations: the MP3 compression, for example, reduces the quality of stored frequencies if they are masked by other frequencies. Various computer models exist for the representation of perception. In our rendering scenario, a model is advantageous that cannot be influenced by the human spectator, such as visual salience, or saliency. Saliency is a notion from psycho-physics that determines how an object "pops out" of its surroundings. These outstanding objects (or features) are important for human vision and are directly evaluated by our Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows an identification of regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering methods. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted if only small or unperceivable errors are expected to occur. Yet, saliency is commonly applied to 2d images, and an extension towards 3d objects has only partially been presented; some issues need to be addressed to accomplish a complete transfer.

    In this work, we present a smart rendering system that not only utilizes a 3d visual salience model but also applies the reduction in detail directly during rendering. As opposed to normal LOD methods, this detail reduction is not limited to a predefined set of levels; rather, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3d object is presented. The definition of this function allows us to precalculate and store object-related visual salience information. This stored data is applicable in any illumination scenario and allows the identification of regions of interest on the surface of a 3d object. Unlike preprocessed methods, which generate a view-independent LOD, this identification includes information about the scene as well; thus, we are able to define a perception-based, view-specific LOD. Performance measurements of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have proven the validity of the reduction.

    The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations, which define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, the cut is stored explicitly, and re-traversal of the tree during rendering is avoided. Due to the explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. These strategies evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being restricted in some way, e.g. in size. The latter evolves the cut to match a certain distribution; this is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut. The data layout separates the rendering data from the hierarchy to enable multi-threaded evaluation and display: the object is adapted over multiple frames while rendering is not interrupted by the evaluation strategy. Due to this design, the overhead imposed by the TreeCut data structure does not influence rendering performance, and linear time complexity for rendering is retained.
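    A minimal sketch of the explicit cut with its two core operations, refine and coarse, as described above (Python; the node layout and field names are illustrative, not the thesis' actual data layout):

```python
class Node:
    def __init__(self, payload, children=()):
        self.payload = payload            # e.g. geometry at this resolution
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

class TreeCut:
    def __init__(self, root):
        self.cut = [root]                 # explicit cut: the nodes drawn now

    def refine(self, node):
        # increase detail: replace a cut node with its children
        if node in self.cut and node.children:
            i = self.cut.index(node)
            self.cut[i:i + 1] = node.children

    def coarse(self, node):
        # decrease detail: replace a node and its siblings with their
        # parent; purely local, only parent/child information is needed
        parent = node.parent
        if parent and all(c in self.cut for c in parent.children):
            i = min(self.cut.index(c) for c in parent.children)
            self.cut = [n for n in self.cut if n not in parent.children]
            self.cut.insert(i, parent)

root = Node("coarse", [Node("left"), Node("right")])
cut = TreeCut(root)
cut.refine(root)                          # cut is now the two children
cut.coarse(root.children[0])              # cut collapses back to the root
```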
    The TreeCut is not limited to altering the geometrical detail of an object. It has successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points of varying density. In this case, the bucket-based evaluation strategy is utilized, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon. A combination of external priorities is used to derive the appropriate icon version; an application for this mechanism is a messaging system that accounts for the current user situation.

    When optimizing an object or scene, perceptual methods allow us to account for or exploit human limitations. To this end, visual salience approaches derive a saliency map, which encodes regions of interest in a 2d map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. abort a recursion when the current location is unsalient. Visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2d saliency and propose a universal function for 3d visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting the saliency from a 2d image and approximating the 3d information, we compute this information directly from the 3d data. We derive a list of equivalent features for the 3d scenario and add them to the BSWDF. As the BSWDF is universal, 2d images are covered as well, and the calculation of the important regions within images is possible.

    To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards are utilized in combination with an accumulation method for rendering. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surroundings to determine whether they "pop out". These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3d data; this increases processing speed because no transfer of the data is required. After computation, these object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define sampling schemes to determine the views that need to be evaluated for each object. With these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. The sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. Since the images are of equal size, the size of the lookup table grows only with the number of samples or the image size used for creation; thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted.
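    The center-surround "pop-out" test on surfels can be illustrated with a small CPU-side sketch (Python/NumPy; the neighborhood radius and scalar feature are illustrative assumptions, and the actual implementation runs in a shader program):

```python
import numpy as np

def surfel_saliency(positions, features, radius=0.1):
    # positions: (N, 3) surfel centers; features: (N,) scalar per surfel,
    # e.g. curvature or luminance accumulated per surface element
    saliency = np.zeros(len(features))
    for i, p in enumerate(positions):
        dist = np.linalg.norm(positions - p, axis=1)
        surround = features[(dist < radius) & (dist > 0.0)]
        if surround.size:
            # a surfel "pops out" when it deviates from its surround
            saliency[i] = abs(features[i] - surround.mean())
    peak = saliency.max() if len(saliency) else 0.0
    return saliency / peak if peak > 0 else saliency
```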
    If the GPU is used, the calculated data has to be transferred back from the graphics card. We therefore use the "transform feedback" capabilities, which allow high transfer rates and preserve the order of processed primitives. Thus, an identification of regions of interest based on the currently used primitives is achieved, and the TreeCut evaluation strategies are able to optimize the representation in a perception-based manner. As the adaptation utilizes information about the current scene, each change to an object can result in new visual salience information. A self-optimizing system is thereby defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution.

    To prove the usefulness of the saliency information, user tests were performed with the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression to a purely geometrical approach common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with purely geometrical methods: the participants were not able to distinguish between the objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. For larger size ratios, the saliency-based compression was rated with a higher score on average, and these results are highly significant according to statistical tests.

    The Feedback System extends a 3d object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation that utilizes a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system and, on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design; in this case, the TreeCut operations alter not only geometrical but also simulation detail. This increases calculation performance because both the renderer and the SDK operate on less data after the reduction has been completed. The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connections; an example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. All of these simulations scale with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Thus, the detail of the simulation is preserved while fewer nodes are simulated.
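    A loose sketch of one Feedback System iteration, building on the TreeCut sketch above (the node_saliency callback stands in for a BSWDF evaluation; the thresholds and budget are invented for illustration):

```python
def feedback_step(treecut, node_saliency, budget, hi=0.7, lo=0.2):
    # one iteration: refine where the viewer is likely to look, coarsen
    # unsalient regions; repeated per frame, the cut converges towards
    # a perception-optimized representation
    for node in list(treecut.cut):
        s = node_saliency(node)
        if s > hi and len(treecut.cut) < budget:
            treecut.refine(node)
        elif s < lo:
            treecut.coarse(node)
```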
    The incorporation of perception in real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived; these models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels in advance is required, and the transitions are performed locally. Furthermore, in combination with an identification of important regions by the BSWDF, a perceptual evaluation of a 3d object is achieved. As opposed to existing methods, which approximate data from 2d images, the perceptual information is acquired directly from the 3d data. Some of this data can be preprocessed if necessary, deferring additional computations from rendering to a preprocessing step. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved on modern hardware, and we have proven the validity of the reductions in several user tests. However, the presented system focuses only on specific aspects, and more research is required to capture even more of the capabilities that a perception-based rendering system can provide.