
    What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives

    Intelligent Mesh Generation (IMG) represents a novel and promising field of research, utilizing machine learning techniques to generate meshes. Despite its relative infancy, IMG has significantly broadened the adaptability and practicality of mesh generation techniques, delivering numerous breakthroughs and unveiling potential future pathways. However, a noticeable void exists in the contemporary literature concerning comprehensive surveys of IMG methods. This paper endeavors to fill this gap by providing a systematic and thorough survey of the current IMG landscape. With a focus on 113 preliminary IMG methods, we undertake a meticulous analysis from various angles, encompassing core algorithm techniques and their application scope, agent learning objectives, data types, targeted challenges, as well as advantages and limitations. We have curated and categorized the literature, proposing three unique taxonomies based on key techniques, output mesh unit elements, and relevant input data types. This paper also underscores several promising future research directions and challenges in IMG. To augment reader accessibility, a dedicated IMG project page is available at https://github.com/xzb030/IMG_Survey.

    A methodology to compare dimensionality reduction algorithms in terms of loss of quality

    Dimensionality Reduction (DR) is attracting increasing attention as a result of the growing need to handle huge amounts of data effectively. DR methods allow the number of initial features to be reduced considerably, until a subset is found that preserves the essential properties of the original data. However, their use entails an inherent loss of quality that is likely to affect the understanding of the data during analysis. This loss of quality can be decisive when selecting a DR method, because of the nature of each method. In this paper, we propose a methodology that allows different DR methods to be analyzed and compared with respect to the loss of quality they produce. The methodology uses the concept of preservation of geometry (quality assessment criteria) to assess this loss. Experiments have been carried out using the best-known DR algorithms and quality assessment criteria from the literature, applied to 12 real-world datasets. The results show that it is possible to establish a procedure for selecting the DR method that incurs the minimum loss of quality. The experiments also highlight some interesting relationships between the quality assessment criteria. Finally, the methodology allows the appropriate target dimensionality to be chosen while keeping the loss of quality to a minimum.
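
    As a rough illustration of the comparison idea, the sketch below scores two DR methods with a single quality-assessment criterion. The dataset, the chosen methods (PCA and Isomap), and the use of scikit-learn's trustworthiness measure are illustrative assumptions, not the paper's actual experimental setup.

```python
# A minimal sketch, assuming scikit-learn-style DR methods and using
# trustworthiness as one example quality-assessment criterion.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, trustworthiness

X, _ = load_digits(return_X_y=True)

methods = {
    "PCA": PCA(n_components=2),
    "Isomap": Isomap(n_components=2),
}

# Higher trustworthiness (close to 1.0) means the 2-D embedding better
# preserves local neighborhoods of the original space, i.e. less quality loss.
for name, method in methods.items():
    X_2d = method.fit_transform(X)
    t = trustworthiness(X, X_2d, n_neighbors=10)
    print(f"{name}: trustworthiness = {t:.3f}")
```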

    New insights into the suitability of the third dimension for visualizing multivariate/multidimensional data: a study based on loss of quality quantification

    Most visualization techniques have traditionally used two-dimensional rather than three-dimensional representations to visualize multidimensional and multivariate data. In this article, a way to demonstrate the potential superiority of three-dimensional over two-dimensional representations is proposed. Specifically, it is based on the inevitable quality degradation produced when reducing the data dimensionality. The problem is tackled from two different approaches: a visual and an analytical one. First, a set of statistical tests (point classification, distance perception, and outlier identification) using two-dimensional and three-dimensional visualizations is carried out on a group of 40 users. The results indicate an improvement in accuracy introduced by the inclusion of a third dimension; however, they do not allow definitive conclusions to be drawn on the superiority of three-dimensional representation. Therefore, in order to draw further conclusions, a deeper study based on an analytical approach is proposed. The aim is to quantify the real loss of quality produced when the data are visualized in two-dimensional and three-dimensional spaces, relative to the original data dimensionality, and to analyze the difference between them. To achieve this, a recently proposed methodology is used. The results obtained by the analytical approach show that the loss of quality reaches significantly high values only when switching from three-dimensional to two-dimensional representation. The considerable quality degradation suffered in the two-dimensional visualization strongly suggests the suitability of the third dimension for visualizing data.
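
    As an illustration of the analytical approach's core step, the following sketch quantifies quality loss for 2-D versus 3-D target spaces. PCA as the DR method, the dataset, and trustworthiness as the quality criterion are illustrative stand-ins for the criteria and methods used in the study.

```python
# A minimal sketch, assuming PCA and trustworthiness as stand-ins for the
# DR methods and quality-assessment criteria of the study.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

X, _ = load_wine(return_X_y=True)

# Quantify quality loss for 2-D and 3-D target spaces: if the 3-D score is
# markedly higher, the third dimension preserves the data structure better.
for dim in (2, 3):
    X_low = PCA(n_components=dim).fit_transform(X)
    t = trustworthiness(X, X_low, n_neighbors=10)
    print(f"{dim}-D embedding: trustworthiness = {t:.3f} (loss = {1 - t:.3f})")
```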

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science is increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field.

    Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, it uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscientists need specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Through OpenWalnut, both standard and novel visualization approaches are available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I finish the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of these visualization techniques for the neuroscientific community, exemplified using clinically relevant scenarios.

    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving this interface. Unfortunately, visual improvements that use computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, among others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize local details as well as global, spatial relations in dense line and point data.
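
    As a rough illustration of the screen-space paradigm, the sketch below applies a toy "depth darkening" post-processing pass to a depth buffer: because it operates only on per-pixel buffers, it is independent of the visualization that produced them. The function, parameters, and toy data are illustrative assumptions, not the shading techniques developed in the thesis.

```python
# A minimal sketch of a screen-space post-processing pass: it touches only
# per-pixel buffers (here a depth buffer), never the original geometry.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_darkening(depth, strength=2.0, sigma=3.0):
    """Darken pixels that lie behind their local neighborhood.

    depth: 2-D array of normalized depth values in [0, 1].
    Returns a per-pixel shading factor in [0, 1]."""
    smoothed = gaussian_filter(depth, sigma=sigma)
    # Positive difference = pixel is deeper than its surroundings, so it
    # receives a darkening halo that visually separates overlapping lines.
    diff = np.clip(depth - smoothed, 0.0, None)
    return np.clip(1.0 - strength * diff, 0.0, 1.0)

# Toy depth buffer: two "lines" at different depths.
depth = np.ones((64, 64))
depth[20:24, :] = 0.4   # front line
depth[:, 30:34] = 0.7   # back line
shade = depth_darkening(depth)
```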

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. They fall into three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Paving the path towards automatic hexahedral mesh generation

    This thesis deals with the development of hexahedral mesh generation technology. The process of generating hexahedral meshes is not fully automatic and is a time-consuming task that may take several hours of work by a specialized engineer. Therefore, it is important to develop tools that facilitate the generation of hexahedral meshes. To this end, a mesh projection method, a sweeping technique, a block-meshing algorithm, and an interactive mesh generation environment are presented and developed.

    Competitive implementations of the sweeping method use mesh projection techniques based on affine methods. Standard affine methods have several drawbacks related to the statement of rank-deficient sets of normal equations. To overcome these drawbacks, a new affine method that depends on two vector parameters is presented and analyzed, together with an automatic procedure that selects these two vectors. The resulting projection procedure preserves the shape of projected meshes. This procedure is then incorporated into a new sweeping tool that generates inner layers of nodes preserving the curvature of the cap surfaces. The sweeping tool is able to mesh extrusion geometries defined by non-linear sweeping trajectories, non-constant cross-sections along the sweep axis, non-parallel cap surfaces, and cap surfaces with different shape and curvature.

    In recent decades, several general-purpose approaches to automatically generate hexahedral meshes have been proposed. However, a fast and robust algorithm that automatically generates high-quality hexahedral meshes is still not available. A novel block-meshing approach that represents the geometry and the topology of the dual of a hexahedral mesh is presented. The algorithm first generates an initial coarse mesh of tetrahedral elements. Second, several planar polygons are added inside the elements of this coarse mesh. These polygons are referred to as local dual contributions and represent a discrete version of the dual of a hexahedral mesh. Finally, the dual representation is dualized to obtain the final block mesh. The block-meshing algorithm is applied to geometries that present different geometrical characteristics, such as planar surfaces, curved surfaces, thin configurations, holes, and vertices with valence greater than three.

    Meshes are usually generated with the help of interactive environments that integrate a CAD interface and several meshing algorithms. An overview of a new mesh generation environment focused on quadrilateral and hexahedral mesh generation is presented. This environment provides the technology required to implement the hexahedral meshing techniques presented in this thesis.
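
    As a rough illustration of the affine-projection idea behind sweeping, the sketch below fits an affine map between matched boundary nodes of two cap surfaces and applies it to an inner node. The data and function names are illustrative; the least-squares fit shown is the standard formulation whose normal equations become rank deficient for planar caps, which is the drawback the thesis addresses, not the thesis's own method.

```python
# A minimal sketch, assuming matched boundary nodes on the two cap surfaces.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map A x + b minimizing ||A src_i + b - dst_i||^2.

    src, dst: (n, 3) arrays of matched boundary nodes."""
    n = src.shape[0]
    H = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    # lstsq copes with the rank-deficient case (e.g. a planar cap, where the
    # z-column of H vanishes) by returning the minimum-norm solution; solving
    # the classical normal equations directly would fail here.
    M, *_ = np.linalg.lstsq(H, dst, rcond=None)
    A, b = M[:3].T, M[3]
    return A, b

# Toy caps: a planar source loop mapped onto a lifted, sheared target loop.
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
src = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
T = np.array([[0.9, -0.2, 0.0], [0.2, 0.9, 0.0], [0.0, 0.0, 1.0]])
dst = src @ T.T
dst[:, 2] += 1.0                               # lift the target cap
A, b = fit_affine(src, dst)
inner = np.array([[0.1, 0.2, 0.0]])
projected = inner @ A.T + b                    # project an inner node
```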