
    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that applies a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather visual and chemical data in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The interface must give the human supervisor and interveners a reduced but relevant overview of the ground and of the robots and human rescue workers on it.
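    As a hedged illustration of the two command levels the abstract distinguishes (high-level task requests that the robot executes autonomously, versus low-level teleoperation commands routed through the interface), the Python sketch below defines one message type for each. All names (`TaskRequest`, `LowLevelCommand`, `dispatch`) are hypothetical; the report does not publish its interface code.

```python
from dataclasses import dataclass

@dataclass
class TaskRequest:
    """High-level task: the robot plans and navigates autonomously."""
    robot_id: str
    task: str                                  # e.g. "inspect_area"
    area: tuple[float, float, float, float]    # bounding box in metres

@dataclass
class LowLevelCommand:
    """Direct teleoperation command that bypasses autonomous planning."""
    robot_id: str
    linear_velocity: float   # m/s
    angular_velocity: float  # rad/s

def dispatch(message) -> None:
    # In the real system this would be routed via the base station to the
    # addressed robot node; printing stands in for that transport here.
    print(f"-> {message.robot_id}: {message}")

dispatch(TaskRequest("ugv1", "inspect_area", (0.0, 0.0, 25.0, 10.0)))
dispatch(LowLevelCommand("ugv1", 0.2, 0.0))
```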

    Sensor Modalities and Fusion for Robust Indoor Localisation


    H4LO: Automation Platform for Efficient RF Fingerprinting using SLAM-derived Map and Poses

    © 2020 The Institution of Engineering and Technology. One of the main shortcomings of received-signal-strength-based indoor localisation techniques is the labour and time cost involved in acquiring labelled 'ground-truth' training data. This training data is often obtained through fingerprinting, which involves visiting all prescribed locations to capture sensor observations throughout the environment. In this work, the authors present a helmet for localisation optimisation (H4LO): a low-cost robotic system designed to cut down on said labour by utilising an off-the-shelf light detection and ranging device. This system allows for simultaneous localisation and mapping, providing the human user with accurate pose estimation and a corresponding map of the environment. The high-resolution location estimation can then be used to train a positioning model, where received signal strength data is acquired from a human-worn wearable device. The method is evaluated using live measurements recorded within a residential property. They compare the ground-truth location labels generated automatically by the H4LO system with a camera-based fingerprinting technique from previous work. They find that the system remains comparable in performance to the less efficient camera-based method, whilst removing the need for time-consuming labour associated with registering the user's location.
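    As a hedged sketch of the pipeline this abstract describes (SLAM pose estimates serving as automatic location labels for RSS samples, which then train a positioning model), the Python fragment below pairs each RSS sample with the nearest-in-time pose and fits a k-NN regressor. The function names, toy data, and the choice of k-NN as the positioning model are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def label_rss_with_slam(rss_times, pose_times, poses):
    """Attach a SLAM pose (x, y) to each RSS sample, approximating the
    nearest-in-time pose by the first pose at or after each timestamp."""
    idx = np.searchsorted(pose_times, rss_times)
    idx = np.clip(idx, 0, len(pose_times) - 1)
    return poses[idx]  # shape (n_samples, 2): one (x, y) label per RSS vector

# Toy data: 100 RSS vectors from 4 access points, SLAM poses at 10 Hz.
rng = np.random.default_rng(0)
pose_times = np.linspace(0, 60, 600)
poses = np.column_stack([np.linspace(0, 10, 600), np.sin(pose_times)])
rss_times = np.sort(rng.uniform(0, 60, 100))
rss_vectors = rng.normal(-60, 5, (100, 4))

labels = label_rss_with_slam(rss_times, pose_times, poses)
model = KNeighborsRegressor(n_neighbors=3).fit(rss_vectors, labels)
print(model.predict(rss_vectors[:1]))  # estimated (x, y) for an RSS sample
```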

    Augmented Reality and GPS-Based Resource Efficient Navigation System for Outdoor Environments: Integrating Device Camera, Sensors, and Storage

    Contemporary navigation systems rely upon localisation accuracy and large volumes of spatial data for navigational assistance. Such spatial-data sources may have access restrictions or quality issues, and they require massive storage space. Affordable, high-performance mobile consumer hardware and smart software have made AR and VR technologies popular, and these technologies can help to develop sustainable devices for navigation. This paper introduces a robust, memory-efficient, augmented-reality-based navigation system for outdoor environments using crowdsourced spatial data, a device camera, and mapping algorithms. The proposed system unifies basic map information, points of interest, and individual GPS trajectories of moving entities to generate and render the mapping information, and it can perform map localisation, pathfinding, and visualisation on a low-power mobile device. A case study was undertaken to evaluate the proposed system: it resulted in a 29 percent decrease in CPU load and a 35 percent drop in memory requirements. As spatial information was stored as comma-separated values, it required almost negligible storage space compared to traditional spatial databases. The proposed navigation system attained a maximum accuracy of 99 percent with a root mean square error of 0.113 and a minimum accuracy of 96 percent with a corresponding root mean square error of 0.17.
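    Since the abstract emphasises storing spatial information as comma-separated values rather than in a spatial database, a minimal sketch of such a store might look like the following: a POI file scanned with plain great-circle geometry. The file layout (`name,lat,lon` columns) and function names are assumptions, not the paper's implementation.

```python
import csv
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_poi(csv_path, lat, lon):
    """Scan a POI CSV (columns: name,lat,lon) and return the closest entry."""
    with open(csv_path, newline="") as f:
        rows = [(r["name"], float(r["lat"]), float(r["lon"]))
                for r in csv.DictReader(f)]
    return min(rows, key=lambda r: haversine_m(lat, lon, r[1], r[2]))
```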

    A Review of Pedestrian Indoor Positioning Systems for Mass Market Applications

    In the last decade, interest in Indoor Location Based Services (ILBS) has increased, stimulating the development of Indoor Positioning Systems (IPS). In particular, ILBS require positioning systems that can be applied anywhere in the world for millions of users; that is, there is a need to develop IPS for mass-market applications. Such systems must provide accurate position estimates with minimum infrastructure cost and easy scalability to different environments. This survey overviews the current state of the art of IPS and classifies the systems in terms of the infrastructure and methodology employed. Finally, each group is reviewed, analysing its advantages and disadvantages and its applicability to mass-market applications.

    Pose estimation system based on monocular cameras

    Our world is full of wonders. It is filled with mysteries and challenges which, through the ages, have inspired and pushed human civilisation to grow, both philosophically and sociologically. In time, humans reached their own physical limitations; nevertheless, we created technology to help us overcome them. Like the undiscovered lands of old, we are drawn to the discovery and innovation of our time. All of this is possible thanks to a very human characteristic: our imagination. The world that surrounds us is mostly already discovered, but with the power of computer vision (CV) and augmented reality (AR), we are able to live in multiple hidden universes alongside our own. With the increasing performance and capabilities of current mobile devices, AR can be what we dream it to be. Many obstacles remain, but this future is already our reality, and with evolving technologies closing the gap between the real and the virtual world, it will soon be possible to surround ourselves with other dimensions, or to fuse them with our own. This thesis focuses on the development of a system to estimate the camera's pose in the real world with respect to the virtual-world axes. The work was developed as a sub-module of the M5SAR project: Mobile Five Senses Augmented Reality System for Museums, aiming at a more immersive experience through the total or partial replacement of the environment's surroundings. It deals mainly with man-made indoor environments and their typical rectangular-cuboid shape. Knowing the direction of the user's camera, we can superimpose dynamic AR content, inviting the user to explore the hidden worlds. The M5SAR project introduced a new way to explore existing historical museums by engaging all five human senses: hearing, smell, taste, touch, and vision. With this innovative technology, users can enhance their visit and immerse themselves in a virtual world blended with our reality. A mobile application was built around an innovative framework, MIRAR - Mobile Image Recognition based Augmented Reality - comprising object recognition, navigation, and the projection of additional AR information in order to enrich the user's visit, providing intuitive and compelling information about the available artworks and exploring the senses of hearing and vision. A specially designed device was built to explore the other three senses: smell, taste, and touch. When attached to a mobile device, either smartphone or tablet, it pairs with it and reacts automatically in step with the narrative of the artwork, immersing the user in a sensorial experience. As mentioned above, the work presented in this thesis concerns the MIRAR sub-module for environment detection and the superimposition of AR content. With the main goal being the full replacement of the walls' contents, optionally keeping the artworks visible, an additional challenge was the restriction to monocular cameras. Without depth information, a 2D image of an environment does not, to a computer, convey the three-dimensional layout of the real world. Nevertheless, man-made buildings tend to follow a rectangular construction of their rooms, which makes it possible to predict where the vanishing point of an environment image lies, allowing the reconstruction of an environment's layout from a single 2D image.
    Furthermore, combining this information with an initial localisation, obtained through improved image recognition, to retrieve the camera's spatial position with respect to real-world and virtual-world coordinates, i.e., pose estimation, made it possible to superimpose localised AR content on the user's mobile-device frame, immersing a museum visitor, for instance, in the era of the exhibited artworks' historical period. The work developed for this thesis also contributed improved rectification and retrieval of planar surfaces in space, a hybrid and scalable multiple-image matching system, more stable outlier filtering applied to the camera axes, and a continuous tracking system that works with uncalibrated cameras and maintains surface superimposition even at particularly obtuse viewing angles. Furthermore, a novel method using deep-learning models for semantic segmentation was introduced for indoor layout estimation from monocular images. Contrary to previously developed methods, no geometric calculations are needed to achieve near-state-of-the-art performance with a fraction of the parameters required by similar methods. Unlike the earlier work presented in this thesis, this method performs well even in unseen and cluttered rooms, provided they follow the Manhattan assumption. An additional lightweight application that retrieves the camera pose estimate using the proposed method is also presented.
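    As a hedged illustration of the geometric idea this abstract builds on (under the Manhattan assumption, straight wall and ceiling edges in a monocular image converge towards a vanishing point), the sketch below votes for a vanishing point by intersecting detected line segments. The OpenCV pipeline, thresholds, and median vote are assumptions for illustration, not the thesis implementation.

```python
import cv2
import numpy as np

def estimate_vanishing_point(image_bgr):
    """Crude vanishing-point estimate: intersect Hough line segments and
    take the median of all pairwise intersections as a robust vote."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    segs = [l[0] for l in lines[:50]]  # cap segments to keep pairs cheap
    points = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            x1, y1, x2, y2 = segs[i]
            x3, y3, x4, y4 = segs[j]
            d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(d) < 1e-6:
                continue  # near-parallel segments: no stable intersection
            t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(np.median(np.array(points), axis=0)) if points else None
```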