
    Le nuage de point intelligent

    Discrete spatial datasets known as point clouds often lay the groundwork for decision-making applications. For example, such data can serve as a reference for autonomous car and robot navigation, as a layer for floor-plan creation and building construction, and as a digital asset for environment modelling and incident prediction, among others. Applications are numerous, and potentially growing if we consider point clouds as digital reality assets. Yet this expansion faces technical limitations, mainly the lack of semantic information within point ensembles. Connecting knowledge sources is still a very manual and time-consuming process that suffers from error-prone human interpretation. This highlights a strong need for domain-related data analysis to create coherent and structured information. The thesis addresses automation problems in point cloud processing to create intelligent environments, i.e. virtual copies that can be used or integrated in fully autonomous reasoning services. We tackle point cloud questions associated with knowledge extraction (particularly segmentation and classification), structuration, visualisation, and interaction with cognitive decision systems. We propose to connect both point cloud properties and formalized knowledge to rapidly extract pertinent information using domain-centered graphs. The dissertation delivers the concept of a Smart Point Cloud (SPC) Infrastructure, which serves as an interoperable and modular architecture for unified processing. It permits easy integration into existing workflows and multi-domain specialization through device knowledge, analytic knowledge, or domain knowledge. Concepts, algorithms, code, and materials are given to replicate findings and extend current applications.
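    The domain-centered graph idea above, linking point cloud segments to formalized domain knowledge so that pertinent information can be queried, can be illustrated with a minimal sketch. The following Python example uses networkx; all node names, attributes, and the query helper are hypothetical and do not reproduce the SPC implementation.

```python
# Minimal sketch, not the SPC implementation: a domain-centered graph
# linking point cloud segments to formalized domain knowledge, then
# queried for pertinent information. All names and values are invented.
import networkx as nx

g = nx.DiGraph()

# Point cloud segments with geometric attributes (hypothetical values).
g.add_node("segment_01", kind="segment", points=12840, planarity=0.97)
g.add_node("segment_02", kind="segment", points=20315, planarity=0.31)

# Formalized domain knowledge: concepts from a building-domain ontology.
g.add_node("Wall", kind="concept", domain="building")
g.add_node("Furniture", kind="concept", domain="building")

# Classification results connect segments to domain concepts.
g.add_edge("segment_01", "Wall", relation="classified_as", confidence=0.92)
g.add_edge("segment_02", "Furniture", relation="classified_as", confidence=0.74)

def segments_of(graph, concept):
    """Return all segments classified as the given domain concept."""
    return [u for u, v, d in graph.edges(data=True)
            if v == concept and d["relation"] == "classified_as"]

print(segments_of(g, "Wall"))  # ['segment_01']
```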

    Whole-Mouse Brain Vascular Analysis Framework: Synthetic Model-Based Validation, Informatics Platform, and Queryable Database

    The past decade has seen innovative advancements in light microscopy instrumentation that have afforded the acquisition of whole-brain datasets at micrometer resolution. As the hardware and software used to automate the traditional neuroanatomical workflow become more accessible to researchers around the globe, so will the tools needed to analyze whole-brain datasets. Only recently has the focus begun to shift from the development of instrumentation towards platforms for data-driven quantitative analyses. As a consequence, the tools required for large-scale quantitative studies across the whole brain are few and far between. In this dissertation, we aim to change this through the development of a standardized, quantitative approach to the study of whole-brain cerebrovasculature datasets. Our approach has four components. The first is the construction of synthetic cerebrovasculature models that can be used in conjunction with the second component, a model-based validation system. Any cerebrovasculature study conducted using imaging data must first extract the filaments embedded within that dataset. The segmentation algorithms commonly used to do this are frequently validated on small-scale datasets that represent only a small selection of cerebrovasculature variability, which raises the question of how these algorithms perform when applied to large-scale datasets. Our model-based validation system uses biologically inspired, large-scale datasets to assess the accuracy of segmentation algorithm output against ground-truth data. Once the data is segmented, our third component, an informatics platform, calculates descriptive statistics across the entire volume. Attributes describing each vascular filament are also calculated, including measures of vascular radius, length, surface area, volume, tortuosity, and others. The result is a massive amount of data for the cerebrovasculature segments, which raises the question of how it can be analyzed sensibly. Given that both cerebrovasculature topology and geometry can be captured in graph form, we construct the fourth component of our system: a graph database that stores the cerebrovasculature. The graph model of cerebrovasculature that we have developed allows segments to be searched across the whole brain based on their attributes and/or location. We also implemented a means to reconstruct the segments returned by a specific query for visualization, so that a simple text-based query can retrieve the geometry and topology of the specified vasculature. For example, a query can return all vessels within the frontal cortex, those with specific attribute value ranges, or any combination of attribute and location. Complex graph algorithms can also be applied, such as the shortest path between two bifurcation points or measures of centrality that are important in determining the robust and fragile aspects of blood flow through the cerebrovasculature system. To illustrate the utility of our system, we construct a whole-brain database of vascular connectivity from the Knife-Edge Scanning Microscope India Ink dataset. Using our cerebrovasculature database, we were able to study the cerebrovasculature system by issuing text-based queries to extract the vessel segments of interest. The outcome of our investigation was a wealth of information about the cerebrovasculature system as a whole, and about the different classifications of vessels comprising it.
    The results returned from these simple queries even generated some interesting and biologically relevant questions; for instance, some classes of vessels showed profound spikes in their radius distributions that were absent in other classes. We expect that the methods described in this dissertation will open the door for data-driven, quantitative investigation across the whole brain. At the time of writing, and to the best of our knowledge, there was prior to this work no systematic way to assess segmentation algorithm performance, calculate attributes for each segment of vasculature extracted across the whole brain, and store those results in a queryable database that also stores the geometry and topology of the entire cerebrovasculature system. We believe that our method can and will set the standard for large-scale cerebrovasculature research. In conclusion, we state that our methods contribute a standardized, quantitative approach to the study of cerebrovasculature datasets acquired using modern imaging techniques.
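    As a rough illustration of the kind of attribute-and-location querying the abstract describes, the sketch below uses a small networkx graph to stand in for the actual graph database; bifurcation points are nodes, vessel segments are attributed edges, and all values and region labels are hypothetical.

```python
# Illustrative sketch only: the dissertation stores the vasculature in a
# queryable graph database; a small networkx graph stands in for it here.
# Region labels and attribute values are hypothetical.
import networkx as nx

vg = nx.Graph()

# Bifurcation points as nodes, vessel segments as attributed edges.
vg.add_edge("b1", "b2", radius_um=4.2, length_um=310.0, region="frontal cortex")
vg.add_edge("b2", "b3", radius_um=2.1, length_um=150.0, region="frontal cortex")
vg.add_edge("b3", "b4", radius_um=7.8, length_um=520.0, region="hippocampus")

def query_segments(graph, region, r_min, r_max):
    """All vessel segments in a region whose radius lies in [r_min, r_max]."""
    return [(u, v, d) for u, v, d in graph.edges(data=True)
            if d["region"] == region and r_min <= d["radius_um"] <= r_max]

# Attribute-and-location query, like the text-based queries described above.
print(query_segments(vg, "frontal cortex", 2.0, 5.0))

# Graph algorithms over the same structure: shortest path between two
# bifurcation points weighted by segment length, and node centrality.
print(nx.shortest_path(vg, "b1", "b4", weight="length_um"))
print(nx.betweenness_centrality(vg))
```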

    Interactive visualization of computational fluid dynamics data.

    This thesis describes a literature study and practical research in the area of flow visualization, with special emphasis on the interactive visualization of Computational Fluid Dynamics (CFD) datasets. Of the four main categories of flow visualization methodology (direct, geometric, texture-based, and feature-based), our research focuses on the direct, geometric, and feature-based techniques, with feature-based flow visualization highlighted in this thesis. After presenting an overview of the state of the art in flow visualization in higher spatial dimensions (2.5D, 3D, and 4D), we propose a fast, simple, and interactive glyph placement algorithm for investigating and visualizing boundary flow data based on unstructured, adaptive-resolution boundary meshes from CFD datasets. Afterwards, we propose a novel, automatic, mesh-driven vector field clustering algorithm which couples the properties of the vector field and the resolution of the underlying mesh into a unified distance measure, producing high-level, intuitive, and suggestive visualizations of vector fields based on large, unstructured, adaptive-resolution boundary CFD meshes. Next, we present a novel application with multiple coordinated views for interactive, information-assisted visualization of multidimensional marine turbine CFD data. Information visualization techniques are combined with user interaction to exploit our cognitive ability for intuitive extraction of flow features from CFD datasets. Later, we discuss the design and implementation of each visualization technique used in our interactive flow visualization framework, such as glyphs, streamlines, and parallel coordinate plots. In this thesis, we focus on the interactive visualization of real-world CFD datasets and present a number of new methods and algorithms to address several related challenges in flow visualization. We strongly believe that user interaction is a crucial part of effective data analysis and visualization of large and complex datasets such as the CFD datasets used in this thesis. To demonstrate the use of the proposed techniques, reviews by CFD domain experts are also provided.
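    The unified distance measure for mesh-driven vector field clustering is described only at a high level above; the sketch below shows a hypothetical form such a measure could take, combining vector direction, vector magnitude, and local mesh resolution. The particular terms and weights are assumptions for illustration, not the thesis's actual formula.

```python
# Hypothetical sketch of a "unified distance" for mesh-driven vector
# field clustering: it combines vector direction, magnitude, and local
# mesh resolution. The form and weights are assumptions, not the
# algorithm from the thesis.
import numpy as np

def unified_distance(v_a, v_b, cell_size_a, cell_size_b,
                     w_dir=1.0, w_mag=1.0, w_res=1.0):
    v_a, v_b = np.asarray(v_a, float), np.asarray(v_b, float)
    mag_a, mag_b = np.linalg.norm(v_a), np.linalg.norm(v_b)
    # Direction term: normalized angle between the vectors (0 if either is zero).
    if mag_a > 0 and mag_b > 0:
        cos = np.clip(np.dot(v_a, v_b) / (mag_a * mag_b), -1.0, 1.0)
        d_dir = np.arccos(cos) / np.pi
    else:
        d_dir = 0.0
    # Magnitude term: relative magnitude difference.
    d_mag = abs(mag_a - mag_b) / (mag_a + mag_b + 1e-12)
    # Resolution term: penalize merging cells of very different sizes, so
    # the adaptive mesh resolution steers the clustering.
    d_res = abs(cell_size_a - cell_size_b) / (cell_size_a + cell_size_b)
    return w_dir * d_dir + w_mag * d_mag + w_res * d_res

print(unified_distance([1.0, 0.0], [0.0, 1.0], 0.1, 0.2))
```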

    Neural Radiance Fields: Past, Present, and Future

    Aspects such as modeling and interpreting 3D environments and surroundings have enticed humans to advance their research in 3D Computer Vision, Computer Graphics, and Machine Learning. The NeRF (Neural Radiance Fields) paper by Mildenhall et al. led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality 3D models has gained traction among researchers, with more than 1000 NeRF-related preprints published. This paper serves as a bridge for people starting to study these fields by building from the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics up to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. The survey provides the history of rendering, Implicit Learning, and NeRFs; the progression of research on NeRFs; and the potential applications and implications of NeRFs in today's world. In doing so, it categorizes all NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications.
    Comment: 413 pages, 9 figures, 277 citations
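    One core ingredient of the NeRF formulation by Mildenhall et al. covered in this survey is the positional encoding, which lifts input coordinates to a set of higher frequencies before they enter the MLP so that high-frequency scene detail can be represented. A minimal NumPy version of that encoding:

```python
# Positional encoding from the original NeRF paper (Mildenhall et al.):
# gamma(p) = (sin(2^0 * pi * p), cos(2^0 * pi * p), ...,
#             sin(2^(L-1) * pi * p), cos(2^(L-1) * pi * p)),
# applied elementwise to each input coordinate.
import numpy as np

def positional_encoding(p, num_freqs=10):
    p = np.asarray(p, float)                  # e.g. a 3D point
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi
    angles = p[..., None] * freqs             # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)     # (..., dim * 2 * num_freqs)

x = np.array([0.25, -0.5, 0.1])
print(positional_encoding(x).shape)  # (60,) for 3 coordinates and L = 10
```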

    8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation

    The theme of the congress is: 'Advanced 3D documentation, modelling and reconstruction of cultural heritage objects, monuments and sites'. We invite researchers, professors, archaeologists, architects, engineers, and art historians who deal with cultural heritage through archaeology, computer graphics, and geomatics to share knowledge and experiences in the field of Virtual Archaeology. The participation of prestigious researchers and companies will be greatly appreciated. An attractive and interesting programme has been prepared for participants and visitors.
    Lerma García, J.L. (2016). 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation. Editorial Universitat Politècnica de València. http://hdl.handle.net/10251/73708

    The Use of Games and Crowdsourcing for the Fabrication-aware Design of Residential Buildings

    State-of-the-art participatory design acknowledges the true, ill-defined nature of design problems, taking into account stakeholders' values and preferences. However, it overburdens the architect, who has to synthesize far more constraints into a one-of-a-kind design. Generative Design promises to equip architects with great power to standardize and systematize the design process. However, the common trap of generative design is treating architecture simply as a tame problem. In this work, I investigate the use of games and crowdsourcing in architecture through two sets of explorative questions. First, if everyone can participate in the network-enabled creation of the built environment, what role will they play, and what tools will they need? Second, if anyone can use digital fabrication to build any building, how will we design it, and what design paradigms will govern this process? I present a map of design paradigms that lie at the intersections of Participatory Design, Generative Design, Game Design, and Crowd Wisdom. In four case studies, I explore techniques for employing practices from the four fields in the service of architecture. Generative Design can lower the difficulty of the design challenge by automating a large portion of the work. A newly formulated, unified taxonomy of generative design across the disciplines of architecture, computer science, and computer games forms the basis for the use of algorithms in the case studies. The work introduces Playable Voxel-Shape Grammars, a new type of generative technique, which enables Game Design to guide participants through a series of challenges, effectively increasing their skills by helping them understand the underlying principles of the design task at hand. Crowdsourcing in architecture can mean thousands of architects creating content for a generative design system, expanding and opening up its design space. It can also mean millions of people online creating designs that an architect or a homeowner can consult to better understand the complex issues at hand in a given design project and to make better decisions. At the same time, game design in architecture helps find the balance between algorithmically exploring pre-defined design alternatives and open-ended, free creativity. The research reveals a layered structure of entry points for crowd-contributed content, as well as the granular nature of authorship among four different roles: non-expert stakeholders, architects, the crowd, and the tool-makers.
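    As a loose illustration of the general idea behind grammar-based generation on voxels (not the Playable Voxel-Shape Grammars themselves, whose rules are not given in the abstract), the sketch below applies a single rewrite rule to a voxel facade; the cell labels and the rule are invented.

```python
# Loose sketch of a shape grammar acting on a voxel grid: a rule matches
# a local voxel pattern and rewrites it. This is a generic illustration,
# not the Playable Voxel-Shape Grammars from the dissertation.
import numpy as np

WALL, WINDOW = 1, 2

def apply_rule(grid, match, rewrite):
    """Rewrite every vertical run of cells equal to `match` with `rewrite`.

    Matches are tested against the partially rewritten grid, so
    applications of the rule do not overlap.
    """
    out = grid.copy()
    h, w = out.shape
    mh = len(match)
    for y in range(h - mh + 1):
        for x in range(w):
            if all(out[y + i, x] == match[i] for i in range(mh)):
                for i in range(mh):
                    out[y + i, x] = rewrite[i]
    return out

# Start from a solid 4x6 wall (one facade slice of a voxel building).
facade = np.full((4, 6), WALL)

# Rule: in every second column, rewrite a wall-over-wall pair into
# wall-over-window, producing a regular fenestration pattern.
facade[:, ::2] = apply_rule(facade[:, ::2], [WALL, WALL], [WALL, WINDOW])
print(facade)
```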