10 research outputs found

    Iterative Methods for Visualization of Implicit Surfaces on GPU

    The original publication is available at www.springerlink.com. Ray casting of implicit surfaces on the GPU has been explored in the last few years; however, until recently it was restricted to second-degree surfaces (quadrics). We present an iterative solution to ray cast cubics and quartics on the GPU. Our solution targets an efficient implementation, achieving interactive rendering of thousands of surfaces per frame. We give special attention to torus rendering, since the torus is a useful shape in many CAD models. We tested four different iterative methods, including a novel one, and compared them with the classical tessellation solution.
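    To make the idea concrete, the following is a minimal sketch (not the paper's code, and CPU-side Python rather than a GPU shader) of ray casting one quartic, a torus, with a Newton iteration on the implicit function evaluated along the ray. The torus parameters, iteration start point and tolerances are illustrative assumptions; a robust implementation would first bracket the root before iterating.

```python
# Illustrative sketch: Newton iteration for a ray/torus intersection.
# Torus centred at the origin with axis z, major radius R, minor radius r:
#   f(p) = (|p|^2 + R^2 - r^2)^2 - 4 R^2 (px^2 + py^2)
import numpy as np

def torus_f(p, R, r):
    k = p.dot(p) + R * R - r * r
    return k * k - 4.0 * R * R * (p[0] ** 2 + p[1] ** 2)

def torus_grad(p, R, r):
    k = p.dot(p) + R * R - r * r
    g = 4.0 * k * p
    g[0] -= 8.0 * R * R * p[0]
    g[1] -= 8.0 * R * R * p[1]
    return g

def ray_cast_torus(o, d, R=1.0, r=0.25, t0=0.0, iters=32, eps=1e-6):
    """Newton iteration on g(t) = f(o + t d); returns a hit parameter t or None."""
    t = t0
    for _ in range(iters):
        p = o + t * d
        g = torus_f(p, R, r)
        dg = torus_grad(p, R, r).dot(d)   # chain rule: g'(t) = grad f . d
        if abs(dg) < 1e-12:
            break
        t_next = t - g / dg
        if abs(t_next - t) < eps:
            return t_next if t_next > 0 else None
        t = t_next
    return None

# Example: a ray from x = -3 along +x hits the torus surface near t = 1.75.
print(ray_cast_torus(np.array([-3.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```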

    Eight Biennial Report: April 2005 – March 2007


    Surface and Sub-Surface Analyses for Bridge Inspection

    The development of bridge inspection solutions has been discussed in the recent past. This dissertation proposes significant developments and improvements on the state of the art in bridge inspection using multiple sensors (e.g., ground penetrating radar (GPR) and visual sensors). The first part of this research (discussed in Chapter 3) focuses on developing effective and novel methods for the detection and localization of steel rebars in sub-surface bridge inspection; the data were collected with a GPR sensor on real bridge decks. In this regard, a number of different approaches were developed successively, each improving the state of the art in this particular research area. The second part (discussed in Chapter 4) deals with the development of an automated steel bridge defect detection system using a Multi-Directional Bicycle Robot. The training data were acquired from actual bridges in Vietnam, and validation was performed on data collected by the Bicycle Robot on an actual bridge on Highway 80 in Lovelock, Nevada, USA. A number of proposed methods are discussed in Chapter 4. The final chapter concludes the findings from the different parts and discusses ways of improving on the existing work in the near future.

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal processing chain from radar detections to vehicle control, this work discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. Radar segmentation of the (static) environment is achieved by a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph SLAM SERALOC. On the basis of the semantic radar SLAM map, an exemplary autonomous parking capability is implemented in a real test vehicle. Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labelled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets with this level of annotation and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet achieves 28.97% mIoU on six classes. In addition, an automated radar labelling framework, SeRaLF, is presented, which supports radar labelling multimodally by means of reference cameras and LiDAR. For coherent mapping, a radar signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specifically adapted to radar, with radar odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thus improved, and the first semantic radar graph SLAM for arbitrary static environments is realized. Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph SLAM is evaluated by means of a purely radar-based autonomous parking capability. Averaged over 42 autonomous parking manoeuvres (∅ 3.73 km/h) with an average manoeuvre length of ∅ 172.75 m, a median absolute pose error of 0.235 m and a final pose error of 0.2443 m are achieved, surpassing comparable radar localization results by ≈ 50%. The map accuracy for changed, newly mapped locations over a mapping distance of ∅ 165 m yields a map consistency of ≈ 56% at a deviation of ∅ 0.163 m. For the autonomous parking, a given trajectory planner and controller approach was used.
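    The mIoU figure above is the standard point-wise segmentation metric; the snippet below is an illustrative Python sketch of how it is typically computed over six classes, not code from the thesis. The class count, array shapes and random toy labels are assumptions.

```python
# Illustrative mean-IoU computation for point-wise semantic labels.
import numpy as np

NUM_CLASSES = 6  # assumed number of semantic radar classes

def mean_iou(pred, gt, num_classes=NUM_CLASSES):
    """Per-class intersection-over-union, averaged over classes present in the data."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:          # class absent in both prediction and ground truth
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy usage with random point-wise labels for 1000 radar points.
rng = np.random.default_rng(0)
pred = rng.integers(0, NUM_CLASSES, size=1000)
gt = rng.integers(0, NUM_CLASSES, size=1000)
print(f"mIoU: {mean_iou(pred, gt):.4f}")
```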

    Algorithms for Reconstruction of Undersampled Atomic Force Microscopy Images


    The investigation of a method to generate conformal lattice structures for additive manufacturing

    Additive manufacturing (AM) allows a geometric complexity in products not seen in conventional manufacturing. This geometric freedom facilitates the design and fabrication of conformal hierarchical structures. Entire parts or regions of a part can be populated with lattice structure, designed to exhibit properties that differ from the solid material used in fabrication. Current computer aided design (CAD) software used to design products is not suitable for the generation of lattice structure models. Although conceptually simple, the memory requirements to store a virtual CAD model of a lattice structure are prohibitively high. Conventional CAD software defines geometry through boundary representation (B-rep): shapes are described by the connectivity of faces, edges and vertices. While useful for representing accurate models of complex shape, the sheer quantity of individual surfaces required to represent each of the relatively simple individual struts that comprise a lattice structure ensures that memory limitations are soon reached. Additionally, the conventional data flow from CAD to manufactured part is arduous, involving several conversions between file formats. As well as being a lengthy process, each conversion risks the generation of geometric errors that must be fixed before manufacture. A method was developed specifically to generate large arrays of lattice structures, based on a general voxel modelling method identified in the literature review. The method is much less sensitive to geometric complexity than conventional methods and thus facilitates the design of considerably more complex structures. The ability to grade structure designs across regions of a part (termed functional grading) was also investigated, as was a method to retain connectivity between boundary struts of a conformal structure. In addition, the method streamlines the data flow from design to manufacture: earlier steps of the data conversion process are bypassed entirely. The effect of the modelling method on the surface roughness of the parts produced was investigated, since voxel models define boundaries with discrete, stepped blocks. It was concluded that the effect of this stepping on surface roughness was minimal. This thesis concludes with suggestions for further work to improve the efficiency, capability and usability of the conformal structure method developed in this work.
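    As an illustration of the voxel modelling idea (a minimal sketch under assumed parameters, not the method developed in the thesis, which targets conformal strut lattices), the snippet below fills a fixed-resolution boolean voxel grid with a simple periodic lattice. The gyroid surface, cell size and resolution are arbitrary choices; the point is that memory use depends only on grid resolution, not on the number of struts or surfaces.

```python
# Illustrative voxel model of a periodic lattice on a fixed grid.
import numpy as np

def gyroid_lattice(shape=(128, 128, 128), cell=32, thickness=0.3):
    """Return a boolean voxel array: True where lattice material exists."""
    z, y, x = np.indices(shape)
    # Map voxel indices to one period of the unit cell [0, 2*pi).
    fx, fy, fz = (2 * np.pi * a / cell for a in (x, y, z))
    # Gyroid implicit surface; |g| < thickness gives a thin-walled lattice.
    g = np.sin(fx) * np.cos(fy) + np.sin(fy) * np.cos(fz) + np.sin(fz) * np.cos(fx)
    return np.abs(g) < thickness

voxels = gyroid_lattice()
# Memory cost is fixed by resolution, not by geometric complexity.
print(voxels.shape, voxels.nbytes / 1e6, "MB,", voxels.mean(), "fill fraction")
```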

    Towards Multi-Level Classification in Deep Plant Identification

    Graduation thesis (academic doctorate in Engineering), Instituto Tecnológico de Costa Rica, 2018. In the last decade, automatic identification of organisms based on computer vision techniques has been a hot topic for both biodiversity scientists and machine learning specialists. Early on, plants became particularly attractive as a subject of study for two main reasons. On the one hand, quick and accurate inventories of plants are critical for biodiversity conservation; for example, they are indispensable for conducting ecosystem inventories, defining models for environmental service payments, and tracking populations of invasive plant species, among others. On the other hand, plants are a more tractable group than, for instance, insects. First, the number of species is smaller (around 400,000, compared to more than 8 million insects). Second, they are better understood by the scientific community, particularly with respect to their morphometric features. Third, there are large, fast-growing databases of digital images of plants generated by both scientists and the general public. Finally, an incremental approach based first on "flat elements" such as leaves and then on the whole plant made it feasible to use computer vision techniques early on. As a result, even mobile apps for the general public are available nowadays. This document presents the key results obtained while tackling the general problem of fully automating the identification of plant species based solely on images. It describes the key findings of a research path that started with a restricted scope, namely the identification of plants from Costa Rica using a morphometric approach that considers images of fresh leaves only. Then, species from other regions of the world were included, still using hand-crafted feature extractors. A key methodological turn was the subsequent use of deep learning techniques on images of any component of a plant. We then studied the accuracy of a deep learning approach to identification based on datasets of fresh-plant images and, for the first time, compared it with datasets of herbarium sheet images. Among the results obtained during this research, potential biases in automatic plant identification datasets were found and characterized. The feasibility of transfer learning between different regions of the world was also proven. Even more importantly, it was demonstrated for the first time that herbarium sheets are a good resource for identifying plants mounted on herbarium sheets, which lends additional importance to herbaria around the globe. Finally, as a culmination of this research path, this document presents the results of developing a novel multi-level classification approach that uses knowledge about higher taxonomic levels to carry out not only family- and genus-level identifications but also to try to improve the accuracy of species-level identifications. This last step focuses on the creation of a hierarchical loss function based on known plant taxonomies, coupled with multi-level deep learning architectures, to guide model optimization with prior knowledge of a given class hierarchy.
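    To illustrate the final step, the snippet below is a hedged sketch (an assumption, not the thesis code) of a hierarchical loss that combines species-, genus- and family-level cross-entropy terms, with the higher-level targets derived from the species label through fixed taxonomy lookup tables. The class counts, loss weights and toy taxonomy are illustrative only.

```python
# Illustrative hierarchical (taxonomy-aware) loss with one head per level.
import torch
import torch.nn.functional as F

class HierarchicalLoss(torch.nn.Module):
    def __init__(self, species_to_genus, genus_to_family, weights=(1.0, 0.5, 0.25)):
        super().__init__()
        # Register lookup tables as buffers so they follow .to(device).
        self.register_buffer("s2g", torch.as_tensor(species_to_genus))
        self.register_buffer("g2f", torch.as_tensor(genus_to_family))
        self.w = weights  # (species, genus, family) loss weights

    def forward(self, species_logits, genus_logits, family_logits, species_target):
        genus_target = self.s2g[species_target]     # species -> genus label
        family_target = self.g2f[genus_target]      # genus -> family label
        return (self.w[0] * F.cross_entropy(species_logits, species_target)
                + self.w[1] * F.cross_entropy(genus_logits, genus_target)
                + self.w[2] * F.cross_entropy(family_logits, family_target))

# Toy usage: 10 species grouped into 4 genera and 2 families, batch of 8.
s2g = [0, 0, 1, 1, 1, 2, 2, 3, 3, 3]
g2f = [0, 0, 1, 1]
loss_fn = HierarchicalLoss(s2g, g2f)
logits = [torch.randn(8, n) for n in (10, 4, 2)]    # one output head per level
target = torch.randint(0, 10, (8,))
print(loss_fn(*logits, target))
```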

    Naval Research Program 2019 Annual Report

    NPS NRP Annual Report. The Naval Postgraduate School (NPS) Naval Research Program (NRP) is funded by the Chief of Naval Operations (CNO) and supports research projects for the Navy and Marine Corps. The NPS NRP serves as a launch point for new initiatives which posture naval forces to meet current and future operational warfighter challenges. NRP research projects are led by individual research teams that conduct research and through which NPS expertise is developed and maintained. The primary mechanism for obtaining NPS NRP support is through participation at NPS Naval Research Working Group (NRWG) meetings that bring together fleet topic sponsors, NPS faculty members, and students to discuss potential research topics and initiatives. This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098), https://nps.edu/nrp. Approved for public release; distribution is unlimited.