
    Scalable exploration of highly detailed and annotated 3D models

    With the widespread availability of mobile graphics terminals and WebGL-enabled browsers, 3D graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling systems, high-quality 3D models are becoming increasingly common, and are now potentially available for ubiquitous exploration. In current 3D repositories, such as Blend Swap, 3D Café or Archive3D, 3D models available for download are mostly presented through a few user-selected static images. Online exploration is limited to simple orbiting and/or low-fidelity exploration of simplified models, since photorealistic rendering of complex synthetic environments is still hardly achievable within the real-time constraints of interactive applications, especially on low-powered mobile devices or script-based Internet browsers. Moreover, navigating inside 3D environments, especially on the now pervasive touch devices, is a non-trivial task, and usability is consistently improved by employing assisted navigation controls. In addition, 3D annotations are often used to integrate and enhance the visual information by providing spatially coherent contextual information, typically at the expense of introducing visual clutter. In this thesis, we focus on efficient representations for interactive exploration and understanding of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several approaches that exploit constraints on the data representation to improve streaming and rendering performance, and camera movement constraints to provide scalable navigation methods for interactive exploration of complex 3D environments. Furthermore, we study visualization and interaction techniques that improve the exploration and understanding of complex 3D models by exploiting guided motion control to aid the user in discovering contextual information while avoiding cluttering the visualization. We demonstrate the effectiveness and scalability of our approaches both in large-screen museum installations and on mobile devices, by performing interactive exploration of models ranging from 9M to 940M triangles.
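    The abstract names the technique but gives no implementation detail, so the following is only a hedged sketch of what "exploiting constraints on the data representation" can mean for streaming and rendering massive meshes: selecting levels of detail from a multiresolution hierarchy by projected screen-space error. The Node structure, field names, and one-pixel tolerance are illustrative assumptions, not the thesis's actual representation.

    ```python
    # Minimal LOD-selection sketch for a multiresolution mesh hierarchy.
    # Hypothetical Node structure; not the thesis's actual data format.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        geometric_error: float       # object-space error of this LOD node
        distance_to_camera: float    # distance from the viewpoint
        children: list = field(default_factory=list)

    def select_lod(node, screen_height_px, fov_tan, max_pixel_error=1.0):
        """Return the nodes to render: refine while a node's projected
        error exceeds the on-screen tolerance (in pixels)."""
        # Project the node's object-space error to pixels at its distance;
        # fov_tan is tan(vertical_fov / 2) of the camera.
        projected = (node.geometric_error / max(node.distance_to_camera, 1e-6)
                     ) * screen_height_px / (2.0 * fov_tan)
        if projected <= max_pixel_error or not node.children:
            return [node]            # coarse enough, or a leaf: draw as-is
        selected = []
        for child in node.children:
            selected += select_lod(child, screen_height_px, fov_tan,
                                   max_pixel_error)
        return selected
    ```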

    Beyond sight : an approach for visual semantic navigation of mobile robots in an indoor environment

    Advisor: Eduardo Todt. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 22/02/2021. Includes references: p. 134-146. Area of concentration: Computer Science. Abstract: With the rise of automation, unmanned vehicles have become a hot topic, both as commercial products and as a subject of scientific research. They form a multidisciplinary field of robotics that encompasses embedded systems, control theory, path planning, Simultaneous Localization and Mapping (SLAM), scene reconstruction, and pattern recognition. In this work, we present exploratory research on how sensor data fusion and state-of-the-art machine learning algorithms can perform the Embodied Artificial Intelligence (E-AI) task called Visual Semantic Navigation, a.k.a. Object-Goal Navigation (ObjectNav): autonomous navigation that uses egocentric visual observations to reach an object of a target semantic class without prior knowledge of the environment. To perform experiments, we propose an embodiment named VRIBot. The robot was modeled so that it can be easily simulated, and the experiments are reproducible without the need for the physical robot. Three pipelines, EXchangeable, AUTOcrat, and BEyond, were proposed and evaluated. Our approach, named BEyond, reached 5th rank out of 12 on the val_mini set of the Habitat-Challenge 2020 ObjectNav when compared to other results reported on the competition's leaderboard. Our results show that data fusion combined with machine learning algorithms is a promising approach to the semantic navigation problem. Keywords: Visual-semantic-navigation. Deep-Learning. SLAM. Autonomous-navigation. Semantic-segmentation.
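    The abstract does not describe the internals of the BEyond pipeline, so the sketch below is only a generic, hedged skeleton of the kind of ObjectNav decision step it discusses: egocentric RGB-D observations pass through a semantic segmenter, and the agent stops when the target class appears close enough. `segment`, `explore_policy`, and the thresholds are hypothetical stand-ins, not the thesis's code.

    ```python
    # Generic ObjectNav-style decision step (not the BEyond pipeline):
    # egocentric RGB-D -> semantic segmentation -> stop/approach/explore.
    # `segment` and `explore_policy` are hypothetical stubs; rgb and
    # depth are assumed to be NumPy arrays of per-pixel values.
    STOP, FORWARD = "STOP", "MOVE_FORWARD"

    def objectnav_step(rgb, depth, target_class, segment, explore_policy,
                       success_dist=1.0):
        """One decision: stop if the target class is visible and close,
        approach it if visible but far, otherwise keep exploring."""
        mask = segment(rgb) == target_class   # per-pixel semantic labels
        if mask.any():
            # Mean depth over target pixels approximates distance (m).
            if float(depth[mask].mean()) <= success_dist:
                return STOP
            return FORWARD
        return explore_policy(rgb, depth)     # target not in view
    ```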

    Coastal Eye: Monitoring Coastal Environments Using Lightweight Drones

    Monitoring coastal environments is a challenging task, both because of the logistical demands of in-situ data collection and because of the dynamic nature of the coastal zone, where multiple processes operate over varying spatial and temporal scales. Remote sensing products derived from spaceborne and airborne platforms have proven highly useful in the monitoring of coastal ecosystems, but they often fail to capture fine-scale processes, and there remains a lack of cost-effective and flexible methods for coastal monitoring at these scales. Proximal sensing technology such as lightweight drones and kites has greatly improved the ability to capture fine spatial resolution data at user-dictated visit times. These approaches are democratising, allowing researchers and managers to collect data themselves at locations and times of their choosing. In this thesis I develop our scientific understanding of the application of proximal sensing within coastal environments. Two critical review pieces consolidate disparate information on the application of kites as a proximal sensing platform and on the often overlooked hurdles of conducting drone operations in challenging environments. The empirical work presented then tests the use of this technology in three different coastal environments spanning the land-sea interface. Firstly, I used kite aerial photography and uncertainty-assessed structure-from-motion multi-view stereo (SfM-MVS) processing to track changes in coastal dunes over time, reporting that sub-decimetre changes (both erosion and accretion) can be detected with this methodology. Secondly, I used lightweight drones to capture fine spatial resolution optical data of intertidal seagrass meadows, finding that estimations of plant cover were more similar to in-situ measures in sparsely populated meadows than in densely populated ones. Lastly, I developed a novel technique utilising lightweight drones and SfM-MVS to measure benthic structural complexity in tropical coral reefs, finding that structural complexity measures were obtainable from SfM-MVS derived point clouds, but that the technique was influenced by glint-type artefacts in the image data. Collectively, this work advances the knowledge of proximal sensing in the coastal zone, identifying both the strengths and weaknesses of its application across several ecosystems. Funded by the Natural Environment Research Council (NERC).
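    The thesis's exact complexity metrics are not given in the abstract, so as a hedged illustration of how structural complexity can be derived from SfM-MVS products, the sketch below computes surface rugosity (3D surface area divided by planar area), a common reef-complexity index, from a gridded elevation model; the function and parameter names are assumptions.

    ```python
    # Rugosity from a gridded DEM derived from an SfM-MVS point cloud.
    # Illustrative only; not the thesis's actual processing chain.
    import numpy as np

    def rugosity(dem, cell_size):
        """dem: 2D array of elevations (m); cell_size: grid spacing (m).
        Returns 3D surface area / planar footprint area (>= 1.0)."""
        dz_dy, dz_dx = np.gradient(dem, cell_size)
        # Area of each inclined cell relative to its planar footprint.
        surface = np.sqrt(1.0 + dz_dx**2 + dz_dy**2) * cell_size**2
        planar = dem.size * cell_size**2
        return surface.sum() / planar

    # e.g. a synthetic 1 m-spaced patch: perfectly flat terrain gives 1.0
    print(rugosity(np.zeros((100, 100)), cell_size=1.0))  # -> 1.0
    ```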

    Internet Explorer: The Creative Administration of Digital Geography

    This thesis is a creative response to the widespread uptake of Google Maps, Earth and Street View and their impact on the future of landscape as a cultural concept. ‘Creative administration’ is introduced as an idiosyncratic system for collecting and interpreting ideas about landscape. The artist’s virtual journeys through digital landscapes are revealed in a series of miniature paintings. Cultural geography contextualises these artworks and other artists’ responses within a broader understanding of contemporary landscape.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
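    To make the spoke-shift manipulation concrete, the hedged sketch below moves each rectangle ±1 degree of visual angle radially along its own spoke from fixation; the names and coordinate convention are illustrative, not taken from the paper.

    ```python
    # Radial ("spoke") displacement of stimulus positions about fixation.
    # Positions are (x, y) in degrees of visual angle relative to the
    # central fixation point; each moves +/- `shift` along its spoke.
    import math
    import random

    def spoke_shift(positions_deg, shift=1.0):
        shifted = []
        for x, y in positions_deg:
            r = math.hypot(x, y)             # eccentricity of the item
            theta = math.atan2(y, x)         # direction of its spoke
            r_new = r + random.choice((-shift, shift))  # move in or out
            shifted.append((r_new * math.cos(theta),
                            r_new * math.sin(theta)))
        return shifted
    ```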

    GAC-MAC-SGA 2023 Sudbury Meeting: Abstracts, Volume 46
