
    GPGPU-Enabled Physics Based Deformed Model Simulation

    Computer simulation techniques are widely adopted nowadays in many areas such as manufacturing, engineering, graphics, animation, and virtual reality. However, standard finite-element-based simulation is notorious for its expensive computation. To address this challenge, I present a GPU-based parallel implementation for simulating large elastic deformation. Classic modal analysis provides a set of orthonormal basis vectors, which span a spectral space encoding the dynamics of the elastic body. Since the basis vectors are mutually orthogonal, the computation is completely decoupled and maps well onto the modern GPGPU platform. We further exploit recent features of NVIDIA CUDA so that the result of the GPU computation can be used directly for subsequent rendering/visualization, avoiding the significant overhead of transferring data between the GPU and the host CPU over the PCI-Express bus. This technique makes real-time simulation possible for many cases that would otherwise be infeasible.
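
    As a rough illustration of the decoupling described above, the sketch below advances each modal coordinate independently (here with a simple semi-implicit Euler step for a damped oscillator) and reconstructs displacements as the superposition u = U q. The CPU-side C++ code, the integration scheme, and all names are assumptions for illustration; they are not the thesis' CUDA implementation, whose point is precisely that each loop iteration can become an independent GPU thread.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch: each mode i has natural frequency w[i] and damping d[i];
// its modal coordinate q[i] evolves independently of all other modes, which is
// why the per-mode updates can be assigned to independent GPU threads.
struct ModalState {
    std::vector<double> q, qdot;   // modal coordinates and velocities
    std::vector<double> w, d;      // per-mode frequency and damping (assumed given)
};

void stepModes(ModalState& s, const std::vector<double>& modalForce, double dt) {
    for (std::size_t i = 0; i < s.q.size(); ++i) {           // fully data-parallel loop
        double acc = modalForce[i] - 2.0 * s.d[i] * s.w[i] * s.qdot[i]
                     - s.w[i] * s.w[i] * s.q[i];              // decoupled 1-DOF oscillator
        s.qdot[i] += dt * acc;                                // semi-implicit Euler step
        s.q[i]    += dt * s.qdot[i];
    }
}

// Reconstruct vertex displacements u = U * q from the orthonormal basis U
// (numRows rows, numModes columns, stored row-major here for simplicity).
std::vector<double> reconstruct(const std::vector<double>& U,
                                const std::vector<double>& q,
                                std::size_t numRows) {
    const std::size_t numModes = q.size();
    std::vector<double> u(numRows, 0.0);
    for (std::size_t r = 0; r < numRows; ++r)
        for (std::size_t m = 0; m < numModes; ++m)
            u[r] += U[r * numModes + m] * q[m];
    return u;
}
```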

    Reducing redundancy of real time computer graphics in mobile systems

    The goal of this thesis is to propose novel and effective techniques to eliminate the redundant computations that waste energy in real-time computer graphics applications, with special focus on mobile GPU micro-architecture. Improving the energy efficiency of CPU/GPU systems is not only key to extending their battery life, but also allows their performance to increase because, to avoid overheating above thermal limits, SoCs tend to be throttled when the load is high for a long period of time. Prior studies pointed out that the CPU and especially the GPU are the principal energy consumers in the graphics subsystem, with off-chip main memory accesses and the processors inside the GPU being its primary energy consumers. First, we focus on reducing redundant fragment-processing computations by improving the culling of hidden surfaces. During real-time graphics rendering, objects are processed by the GPU in the order they are submitted by the CPU, and occluded surfaces are often processed even though they will not end up being part of the final image. By the time the GPU realizes that an object, or part of it, is not going to be visible, all the activity required to compute its color and store it has already been performed. We propose a novel architectural technique for mobile GPUs, Visibility Rendering Order (VRO), which reorders objects front-to-back entirely in hardware to maximize the culling effectiveness of the GPU and minimize overshading, hence reducing execution time and energy consumption. VRO exploits the fact that objects in animated graphics applications tend to keep their relative depth order across consecutive frames (temporal coherence) in order to provide the feeling of smooth transitions. VRO keeps the visibility information of a frame and uses it to reorder the objects of the following frame; it only requires adding a small hardware unit to capture the visibility information and use it later to guide the rendering of the next frame. Moreover, VRO works in parallel with the graphics pipeline, so it incurs negligible performance overhead. We illustrate the benefits of VRO using various unmodified commercial 3D applications, for which VRO achieves a 27% speed-up and a 14.8% energy reduction on average. Then, we focus on avoiding redundant computations related to CPU-based Collision Detection (CD). Graphics applications such as 3D games represent a large percentage of downloaded applications for mobile devices, and the trend is towards more complex and realistic scenes with accurate 3D physics simulations. CD is one of the most important algorithms in any physics kernel, since it identifies the contact points between the objects of a scene and determines when they collide. However, real-time accurate CD is very expensive in terms of energy consumption. We propose Render Based Collision Detection (RBCD), a novel energy-efficient, high-fidelity CD scheme that leverages some intermediate results of the rendering pipeline to perform CD, so that redundant tasks are done just once. Comparing RBCD with a conventional CD executed entirely on the CPU, we show that its execution time is reduced by almost three orders of magnitude (a 600x speedup), because most of the CD work in our model comes for free by reusing intermediate results of image rendering. Although not guaranteed, such a dramatic time improvement may also translate into a higher frame rate if physics simulation is on the critical path.
However, the most important advantage of our technique is the enormous energy savings that result from eliminating a long and costly CPU computation and replacing it with a few simple operations executed by specialized hardware within the GPU. Our results show that the energy consumed by CD is reduced on average by a factor of 448x (i.e., by 99.8%). These dramatic benefits are accompanied by a higher-fidelity CD analysis (i.e., with finer granularity), which improves the quality and realism of the application.
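
    The following sketch illustrates the temporal-coherence idea behind VRO in plain C++: draw commands are reordered front-to-back using per-object depths recorded while rendering the previous frame. The data layout, the depth metric, and the software sorting are illustrative assumptions only; the actual technique performs this reordering in dedicated hardware inside the GPU.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <unordered_map>
#include <vector>

// Illustrative draw command: only a stable object identifier matters here;
// geometry and state handles are omitted.
struct DrawCommand { std::uint32_t objectId; };

// Per-object nearest depth observed while rendering the previous frame
// (smaller = closer to the camera). The metric and the map-based layout are
// assumptions for this sketch, not the hardware design described above.
using DepthMap = std::unordered_map<std::uint32_t, float>;

// Reorder this frame's draw commands front-to-back according to last frame's
// depths. Temporal coherence makes this order a good guess for the current
// frame, so early depth tests cull most occluded fragments (less overshading).
void reorderFrontToBack(std::vector<DrawCommand>& frame, const DepthMap& prevDepth) {
    auto depthOf = [&](std::uint32_t id) {
        auto it = prevDepth.find(id);
        // Objects not seen last frame are drawn last (treated as farthest).
        return it == prevDepth.end() ? std::numeric_limits<float>::max() : it->second;
    };
    std::stable_sort(frame.begin(), frame.end(),
                     [&](const DrawCommand& a, const DrawCommand& b) {
                         return depthOf(a.objectId) < depthOf(b.objectId);
                     });
}
```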

    Dynamic task scheduling and binding for many-core systems through stream rewriting

    This thesis proposes a novel model of computation, called stream rewriting, for the specification and implementation of highly concurrent applications. The active tasks of an application and their dependencies are encoded as a token stream, which is iteratively modified by a set of rewriting rules at runtime. In order to estimate the performance and scalability of stream rewriting, a large number of experiments were evaluated on many-core systems, and the task management was implemented in both software and hardware.
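
    A toy C++ sketch of the stream-rewriting model is given below: active tasks are encoded as tokens, and each pass scans the stream and replaces matched tokens according to simple rules until a fixed point is reached. The token format and the rules are invented for illustration and are far simpler than the rule system developed in the thesis.

```cpp
#include <string>
#include <vector>

// Toy token type: an opcode plus an integer payload (illustrative only).
struct Token { std::string op; int value; };

// One rewriting pass: scan the token stream and expand/contract tokens.
// Here, "range(n)" expands into n "task" tokens, and each "task" token is
// rewritten into a "done" token carrying its (pretend) result.
std::vector<Token> rewriteOnce(const std::vector<Token>& in, bool& changed) {
    std::vector<Token> out;
    changed = false;
    for (const Token& t : in) {
        if (t.op == "range") {
            for (int i = 0; i < t.value; ++i) out.push_back({"task", i});
            changed = true;
        } else if (t.op == "task") {
            out.push_back({"done", t.value * t.value});   // placeholder computation
            changed = true;
        } else {
            out.push_back(t);                             // token left untouched
        }
    }
    return out;
}

// Iterate rewriting passes until a fixed point is reached; in the thesis it is
// the runtime (in software or hardware) that performs this loop on the stream.
std::vector<Token> run(std::vector<Token> stream) {
    bool changed = true;
    while (changed) stream = rewriteOnce(stream, changed);
    return stream;   // e.g. run({{"range", 4}}) yields four "done" tokens
}
```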

    Shader optimization and specialization

    In the field of real-time graphics for computer games, performance has a significant effect on the player's enjoyment and immersion. Graphics processing units (GPUs) are hardware accelerators that run small parallelized shader programs to speed up computationally expensive rendering calculations. This thesis examines optimizing shader programs and explores ways in which data patterns on both the CPU and GPU can be analyzed to automatically speed up rendering in games. Initially, the effect of traditional compiler optimizations on shader source code was explored. Techniques such as loop unrolling or arithmetic reassociation provided speed-ups on several devices, but different GPU hardware responded differently to each set of optimizations. Analyzing execution traces from numerous popular PC games revealed that much of the data passed from CPU-based API calls to GPU-based shaders is either unused or remains constant. A system was developed to capture this constant data and fold it into the shaders' source code. Re-running the games' rendering code using these specialized shader variants resulted in performance improvements in several commercial games without impacting their visual quality.
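
    A minimal sketch of the specialization step is shown below, assuming the constant uniforms have already been identified from captured traces: each constant uniform declaration is rewritten into a literal constant so the shader compiler can fold and eliminate code around it. The helper name, the handling of only float uniforms, and the plain string substitution are simplifying assumptions, not the system built in the thesis.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Uniforms observed to stay constant across many captured frames, mapped to
// their literal values (names and values here are illustrative assumptions).
using ConstantSet = std::unordered_map<std::string, std::string>;

// Naive specializer: rewrite "uniform float u_x;" into "const float u_x = V;"
// so the shader compiler can constant-fold and dead-code-eliminate around it.
std::string specializeShader(std::string src, const ConstantSet& constants) {
    for (const auto& [name, value] : constants) {
        const std::string decl   = "uniform float " + name + ";";
        const std::string folded = "const float " + name + " = " + value + ";";
        for (std::size_t pos = src.find(decl); pos != std::string::npos;
             pos = src.find(decl, pos + folded.size())) {
            src.replace(pos, decl.size(), folded);
        }
    }
    return src;
}

// Example: specializeShader(fragmentSource, {{"u_fogDensity", "0.02"}}) yields
// a variant that is only valid while the captured value holds; a runtime would
// fall back to the generic shader if the uniform ever starts changing.
```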

    Interactive web-based visualization

    The visualization of large amounts of data that cannot easily be copied for processing on a user's local machine is not yet a fully solved problem. Remote visualization represents one possible solution approach and has long been an important research topic. Depending on the device used, modern hardware such as high-performance GPUs is sometimes not available, which is another reason for the use of remote visualization. Additionally, due to growing global networking and collaboration among research groups, collaborative remote visualization solutions are becoming more important. The attractiveness of web-based remote visualization is greatly increased by the wide availability of web browsers, which today exist on almost all devices, from desktop computers to smartphones. In order to ensure interactivity, network bandwidth and latency are the biggest challenges that web-based visualization algorithms have to overcome. Available bandwidth improves steadily, but still significantly more slowly than, for example, processor performance, so the impact of this bottleneck keeps increasing. For example, visualization of large dynamic data in low-bandwidth environments can be challenging because it requires continuous data transfer. Moreover, improving bandwidth alone cannot reduce latency, because latency is also affected by factors such as the distance between server and client and the network utilization. To overcome these challenges, a combination of techniques is needed to customize the individual processing steps of the visualization pipeline, from efficient data representation to hardware-accelerated rendering on the client side. This thesis first discusses related work in the field of remote visualization, with a particular focus on interactive web-based visualization, and then presents techniques for interactive visualization in the browser using modern web standards such as WebGL and HTML5. These techniques enable the visualization of dynamic molecular data sets with more than one million atoms at interactive frame rates using GPU-based ray casting. Due to the limitations of a browser-based environment, the concrete implementation of the GPU-based ray casting had to be adapted. Evaluation of the resulting performance shows that GPU-based techniques enable the interactive rendering of large data sets and achieve higher image quality than polygon-based techniques. In order to reduce data transfer times and network latency and to improve rendering speed, efficient approaches for data representation and transmission are used. Furthermore, this thesis introduces a GPU-based volume ray marching technique based on WebGL 2.0, which uses progressive brick-wise data transfer as well as multiple levels of detail in order to achieve interactive volume rendering of data sets stored on a server. The concepts and results presented in this thesis contribute to the further spread of interactive web-based visualization. The algorithmic and technological advances that have been achieved form a basis for the further development of interactive browser-based visualization applications.
At the same time, this approach has the potential to enable future collaborative visualization in the cloud.
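
    The sketch below illustrates one way such progressive brick-wise transfer can be scheduled: brick requests are ordered so that coarse levels of detail and bricks near the camera are delivered first, letting the client render immediately and refine over time. The request structure and the ordering heuristic are assumptions for illustration, not the thesis' actual protocol.

```cpp
#include <queue>
#include <vector>

// Identifies one brick of the volume at a given level of detail; level 0 is
// the coarsest. Layout and naming are illustrative, not the thesis' format.
struct BrickRequest {
    int level;
    int x, y, z;
    float distance;   // distance of the brick center to the camera
};

// Ordering heuristic: coarser levels first, then bricks closer to the camera.
struct CoarserAndCloserFirst {
    bool operator()(const BrickRequest& a, const BrickRequest& b) const {
        if (a.level != b.level) return a.level > b.level;   // coarser first
        return a.distance > b.distance;                     // then closer first
    }
};

// Build a transfer schedule: the client eventually requests every brick of
// every level, but receives coarse, nearby data first so it can render right
// away and refine progressively as more bricks arrive from the server.
std::priority_queue<BrickRequest, std::vector<BrickRequest>, CoarserAndCloserFirst>
scheduleBricks(const std::vector<BrickRequest>& allBricks) {
    return {CoarserAndCloserFirst{}, allBricks};
}
```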

    Network streaming and compression for mixed reality tele-immersion

    Bulterman, D.C.A. [Promotor]; Cesar, P.S. [Copromotor]

    GPU data structures for graphics and vision

    Graphics hardware has become increasingly programmable in recent years, and its programming APIs use the stream processor model to expose massive parallelism to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for both video and volume processing to graphics hardware. Serial data dependencies that accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
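
    As an example of the kind of hierarchical structure mentioned above, the sketch below builds a mip-map-like reduction pyramid on the CPU; on the GPU, each level corresponds to one data-parallel pass with fixed write locations, which is exactly what the restricted stream model permits. The max-reduction and the power-of-two assumption are illustrative choices, not the specific data structures developed in the thesis.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// CPU sketch of a mip-map style reduction pyramid: each level halves the
// resolution and stores the maximum of the 2x2 block below it. Assumes a
// square, power-of-two image stored row-major.
std::vector<std::vector<float>> buildMaxPyramid(std::vector<float> base, int size) {
    std::vector<std::vector<float>> levels{std::move(base)};
    for (int s = size; s > 1; s /= 2) {
        const std::vector<float>& src = levels.back();
        std::vector<float> dst((s / 2) * (s / 2));
        for (int y = 0; y < s / 2; ++y)
            for (int x = 0; x < s / 2; ++x)
                dst[y * (s / 2) + x] = std::max(
                    std::max(src[(2 * y) * s + 2 * x],     src[(2 * y) * s + 2 * x + 1]),
                    std::max(src[(2 * y + 1) * s + 2 * x], src[(2 * y + 1) * s + 2 * x + 1]));
        levels.push_back(std::move(dst));
    }
    return levels;   // levels[0] is the full image, levels.back() a single value
}
```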

    Realistic Visualization of Accessories within Interactive Simulation Systems for Garment Prototyping

    In virtual garment prototyping, designers create a garment design using Computer Aided Design (CAD). In contrast to traditional CAD, the word "aided" here refers to the computer replicating the real-world behavior of garments, which allows the designer to interact naturally with the design. Designers have a wide range of expression in their work: they define details on a garment that are not limited to the type of cloth used. The way cloth patterns are sewn together and the style and use of surface details, such as appliqués, have a strong impact on the visual appearance of a garment. Therefore, virtual and real garments usually carry many such surface details. Interactive virtual garment prototyping itself is an interdisciplinary field, and several problems have to be solved to create an efficiently usable real-time virtual prototyping system for garment manufacturers. Such a system can be roughly separated into three sub-components. The first component deals with the acquisition of material and other data needed to let a simulation mimic plausible real-world behavior of the garment. The second component is the garment simulation process itself. Finally, the third component is centered on the visualization of the simulation results. The overall process therefore spans several scientific areas which have to take each other's needs into account in order to obtain an overall interactive system. My work especially targets the third component, the visualization. On the scientific side, developments in recent years have shown great improvements in both the speed and the reliability of simulation and rendering approaches suitable for the virtual prototyping of garments. However, with the currently existing approaches many problems remain to be solved, especially when interactive simulation and visualization need to work together and many object and surface details come into play, as is the case when using virtual prototyping in a production environment. The currently available approaches try to handle most of the surface details as part of the simulation. This generates a lot of data early in the pipeline which has to be transferred and processed, requires a lot of processing time, and easily stalls the pipeline formed by the simulation and visualization system. Additionally, real-world garments are already complicated in their cloth arrangement alone, which requires additional computational power. As a consequence, the interactive garment simulation tends to lose its capability to allow interactive handling of the garment. In my work I present a solution which solves this problem by moving the handling of design details from the simulation stage entirely to a completely GPU-based rendering stage. This way, the behavior of the garment and its visual appearance are separated: the simulation can fully concentrate on simulating the fabric behavior, while the visualization handles the placement of surface details, lighting, materials, and self-shadowing. Thus, a much higher degree of surface complexity can be achieved within an interactive virtual prototyping system than with currently existing approaches.
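
    A minimal sketch of the simulation/visualization separation is given below: the physics step touches only the base cloth mesh, while surface details are passed exclusively to the renderer, so simulation cost does not grow with surface decoration. All types and function names are hypothetical placeholders, not the thesis' actual system.

```cpp
#include <vector>

// Illustrative data split: the simulation sees only the base cloth geometry,
// while appliqués, seams, and other decorations live in a separate detail
// layer that only the GPU-side renderer consumes.
struct ClothMesh   { std::vector<float> positions; };
struct DetailLayer { std::vector<float> placementUVs; };   // decals, trims, ...

void simulateStep(ClothMesh& cloth, float dt) {
    (void)cloth; (void)dt;          // physics omitted; details are never touched here
}

void renderFrame(const ClothMesh& cloth, const DetailLayer& details) {
    (void)cloth; (void)details;     // a real renderer places and shades details on the GPU
}

void frame(ClothMesh& cloth, const DetailLayer& details, float dt) {
    simulateStep(cloth, dt);        // cost depends on cloth resolution only
    renderFrame(cloth, details);    // surface detail handled on the rendering side
}
```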

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and the difficulty of measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolution, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube-source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error of the measurement compared to traditional CT; however, they allow very high temporal resolution for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier and camera setup, which was shown to be capable of imaging at a rate of at least 1000 frames per second. While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system for using the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, this system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications. Therefore, multiple visualization applications can be built that are optimized for specific types of data but all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a Cave Automatic Virtual Environment (CAVE) to present scientists with the multiphase flow measurements in an intuitive and inherently three-dimensional manner.
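
    Purely as an illustration of reconstructing an approximate slice from only two projections, the sketch below uses multiplicative back-projection of two orthogonal 1D projections (row and column sums). This is a generic textbook-style approximation chosen for brevity; it is not one of the two algorithms analyzed in the thesis.

```cpp
#include <cstddef>
#include <vector>

// Approximate a 2D slice from two orthogonal 1D projections (its row sums and
// column sums) by multiplicative back-projection. This trades spatial accuracy
// for the ability to reconstruct at the full frame rate of the imaging setup.
std::vector<std::vector<double>> reconstructFromTwoViews(
        const std::vector<double>& rowProjection,    // one X-ray view
        const std::vector<double>& colProjection) {  // the orthogonal view
    double total = 0.0;
    for (double r : rowProjection) total += r;
    std::vector<std::vector<double>> slice(
        rowProjection.size(), std::vector<double>(colProjection.size(), 0.0));
    if (total <= 0.0) return slice;                  // empty projections -> empty slice
    for (std::size_t i = 0; i < rowProjection.size(); ++i)
        for (std::size_t j = 0; j < colProjection.size(); ++j)
            slice[i][j] = rowProjection[i] * colProjection[j] / total;
    return slice;
}
```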

    Mental vision: a computer graphics platform for virtual reality, science and education

    Despite the wide range of computer graphics frameworks and solutions available for virtual reality, it is still difficult to find one that fits, at the same time, the many constraints of research and educational contexts. Advanced functionality and user-friendliness, rendering speed and portability, or scalability and image quality are opposing characteristics rarely found in the same approach. Furthermore, the use of virtual-reality-specific devices like CAVEs or wearable systems is limited by their cost and accessibility, most of these innovations being reserved for institutions and specialists able to afford and manage them through a strong background in programming. Finally, computer graphics and virtual reality are complex and difficult matters to learn, due to the heterogeneity of notions a developer needs to practice with before attempting to implement a full virtual environment. In this thesis we describe our contributions to these topics, assembled in what we call the Mental Vision platform. Mental Vision is a framework composed of three main entities: first, a teaching/research-oriented graphics engine simplifying access to 2D/3D real-time rendering on mobile devices, personal computers and CAVE systems; second, a series of pedagogical modules to introduce and practice computer graphics and virtual reality techniques; and third, two advanced VR systems, namely a wearable, lightweight and hands-free mixed reality setup and a four-sided CAVE built from off-the-shelf hardware. In this dissertation we explain our conceptual, architectural and technical approach, pointing out how we managed to create a robust and coherent solution that reduces the complexity of cross-platform and multi-device 3D rendering while simultaneously answering the often contradictory needs of computer graphics and virtual reality researchers and students. A series of case studies evaluates how Mental Vision concretely satisfies these needs and achieves its goals in in vitro benchmarks and in vivo scientific and educational projects.