565 research outputs found

    A Multi-Sensor Fusion-Based Underwater SLAM System

    This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies for autonomous robots to navigate unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure such as bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, an underwater robot is an excellent fit to build a map of the environment while simultaneously localizing itself within it. The main contribution of this dissertation is the design and development of a real-time, robust SLAM algorithm for small- and large-scale underwater environments. We present SVIn, a novel tightly-coupled keyframe-based nonlinear optimization framework fusing sonar, visual, inertial, and water-depth information with robust initialization, loop-closing, and relocalization capabilities. Introducing acoustic range information to aid the visual data improves reconstruction and localization. The availability of depth information from water pressure enables robust initialization, refines the scale factor, and helps reduce drift in the tightly-coupled integration. The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them an ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and numerous real-world scenarios. It has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle (AUV) Aqua2 in challenging underwater environments with poor visibility demonstrate accuracy and robustness not achieved before. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well-defined edges between well-lit areas and darkness. In particular, low-lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the resulting shadow contours. The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, using the denser 3D point cloud to detect obstacles and achieve higher-resolution reconstructions.
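
The abstract gives no implementation details, but the core idea of tightly-coupled fusion can be illustrated with a toy example: residuals from several sensors (camera bearing, sonar range, pressure depth) are stacked into a single nonlinear least-squares problem. The landmark positions, measurements, and noise weights below are made-up values; this is a conceptual sketch, not the SVIn system.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy world: one visual landmark and one sonar target at known positions.
LANDMARK = np.array([2.0, 1.0, -8.0])
SONAR_TARGET = np.array([0.0, 4.0, -2.0])

# Hypothetical measurements, roughly consistent with a position near (1, 2, -2.5).
bearing_meas = np.array([0.176, -0.176, -0.968])  # unit direction to LANDMARK (camera)
range_meas = 2.29                                  # sonar range to SONAR_TARGET (m)
depth_meas = -2.5                                  # depth from the pressure sensor (m)

def residuals(p):
    """Stack camera-bearing, sonar-range, and depth residuals for position p."""
    to_landmark = LANDMARK - p
    r_visual = (to_landmark / np.linalg.norm(to_landmark) - bearing_meas) / 0.01
    r_sonar = [(np.linalg.norm(SONAR_TARGET - p) - range_meas) / 0.10]
    r_depth = [(p[2] - depth_meas) / 0.02]          # pressure gives depth very precisely
    return np.concatenate([r_visual, r_sonar, r_depth])

estimate = least_squares(residuals, x0=np.zeros(3)).x
print("fused position estimate:", estimate.round(3))
```

Because all residuals are minimized jointly, the precise depth term anchors the vertical axis and the sonar range constrains scale, which is the qualitative benefit the abstract attributes to the tightly-coupled design.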

    Novel high performance techniques for high definition computer aided tomography

    International Doctorate Mention. Medical image processing is an interdisciplinary field in which multiple research areas are involved: image acquisition, scanner design, image reconstruction algorithms, visualization, etc. X-ray Computed Tomography (CT) is a medical imaging modality based on the attenuation suffered by X-rays as they pass through the body. Intrinsic differences in the attenuation properties of bone, air, and soft tissue result in high-contrast images of anatomical structures. The main objective of CT is to obtain tomographic images from radiographs acquired using X-ray scanners. The process of building a 3D image or volume from the 2D radiographs is known as reconstruction. One of the latest trends in CT is the reduction of the radiation dose delivered to patients through a decrease in the amount of acquired data. This reduction results in artefacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative reconstruction algorithms. There are numerous reconstruction algorithms available, among which two types stand out: traditional algorithms, which are fast but cannot produce high-quality images in situations of limited data; and iterative algorithms, which are slower but more reliable when traditional methods do not reach the required quality standard. One of the priorities of reconstruction is obtaining the final images in near real time, in order to reduce the time spent on diagnosis. To accomplish this objective, new high-performance techniques and methods for accelerating these types of algorithms are needed. This thesis addresses the challenges of both traditional and iterative reconstruction algorithms regarding acceleration and image quality. One common approach for accelerating these algorithms is the use of shared-memory and heterogeneous architectures. In this thesis, we propose a novel simulation/reconstruction framework, FUX-Sim. This framework follows the hypothesis that the development of new flexible X-ray systems can benefit from computer simulations, which may also allow performance to be checked before expensive real systems are implemented. Its modular design abstracts the complexities of programming for accelerated devices to facilitate the development and evaluation of the different configurations and geometries available. In order to obtain near real-time execution, low-level optimizations of the main components of the framework are provided for Graphics Processing Unit (GPU) architectures. Another alternative tackled in this thesis is the acceleration of iterative reconstruction algorithms using distributed-memory architectures. We present a novel architecture that unifies the two most important paradigms in scientific computing today: High Performance Computing (HPC) and Big Data. The proposed architecture combines Big Data frameworks with the advantages of accelerated computing. The methods presented in this thesis provide more flexible scanner configurations and offer an accelerated solution. Regarding performance, our approach is as competitive as the solutions found in the literature.
Additionally, we demonstrate that our solution scales with the size of the problem, enabling the reconstruction of high-resolution images. This work was mainly funded by an FPU fellowship (FPU14/03875) from the Spanish Ministry of Education. It was also partially supported by the grants DPI2016-79075-R "Nuevos escenarios de tomografía por rayos X", TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms", TIN2013-41350-P "Scalable Data Management Techniques for High-End Computing Systems", and RTC-2014-3028-1 NECRA "Nuevos escenarios clínicos con radiología avanzada", all from the Spanish Ministry of Economy and Competitiveness, and by a Short-Term Scientific Mission (STSM) grant from the NESUS COST Action IC1305.
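
As an illustration of the kind of iterative algorithm the thesis accelerates, the sketch below implements a SIRT-style algebraic update on a tiny random system. It is not FUX-Sim code; the projection matrix, problem sizes, and iteration count are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_rays = 64, 96
A = rng.random((n_rays, n_pixels))          # toy projection matrix (ray sums)
x_true = rng.random(n_pixels)               # toy image
b = A @ x_true                              # simulated (noise-free) projections

# Row and column sums normalize the backprojected residual, as in SIRT.
row_sum = A.sum(axis=1)
col_sum = A.sum(axis=0)

x = np.zeros(n_pixels)
for _ in range(200):
    residual = (b - A @ x) / row_sum        # forward-project and compare with data
    x += (A.T @ residual) / col_sum         # backproject the weighted residual
    np.clip(x, 0.0, None, out=x)            # enforce nonnegative attenuation values

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Each iteration is dominated by the forward projection `A @ x` and backprojection `A.T @ residual`, which is why these two operators are the natural targets for GPU and distributed-memory acceleration.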

    SEUSS: rapid serverless deployment using environment snapshots

    Modern FaaS systems perform well for repeat executions when function working sets stay small. However, these platforms are less effective when applied to more complex, large-scale, and dynamic workloads. In this paper, we introduce SEUSS (serverless execution via unikernel snapshot stacks), a new system-level approach for rapidly deploying serverless functions. Through our approach, we demonstrate orders-of-magnitude improvements in function start times and cacheability, which improves common re-execution paths while also unlocking previously unsupported large-scale bursty workloads.
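
SEUSS snapshots unikernels at the system level, but the idea it exploits can be sketched in a few lines: initialize a function's environment once, capture that state, and serve later invocations from cheap clones of it. Everything below (the fake boot routine, handler, and timings) is an illustrative assumption, not SEUSS itself.

```python
import copy
import time

def cold_boot_environment():
    """Stand-in for booting a runtime and importing a function's dependencies."""
    time.sleep(0.2)                          # pretend this is slow initialization
    return {"interpreter": "ready", "libs": ["numpy", "requests"], "counter": 0}

def handler(env, event):
    """The serverless function itself: runs against an initialized environment."""
    env["counter"] += 1
    return {"echo": event, "invocations": env["counter"]}

# Cold-start path: initialize from scratch for a single invocation.
t0 = time.perf_counter()
print(handler(cold_boot_environment(), "first request"))
print("cold start:", round(time.perf_counter() - t0, 3), "s")

# Snapshot path: initialize once, then serve each request from a cheap clone.
snapshot = cold_boot_environment()
t0 = time.perf_counter()
for i in range(3):
    env = copy.deepcopy(snapshot)            # analogous to restoring a snapshot
    handler(env, f"request {i}")
print("3 snapshot-restored starts:", round(time.perf_counter() - t0, 3), "s")
```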

    Counterfactual inference to predict causal knowledge graph for relational transfer learning by assimilating expert knowledge --Relational feature transfer learning algorithm

    Transfer learning (TL) is a machine learning (ML) method in which knowledge is transferred from existing models of related problems to the model for the problem at hand. Relational TL enables ML models to transfer relationship networks from one domain to another. However, it has two critical issues. One is determining the proper way of extracting and expressing relationships among data features in the source domain so that the relationships can be transferred to the target domain. The other is how to carry out the transfer procedure. Knowledge graphs (KGs) are knowledge bases that use data and logic to build graph-structured information; they are helpful tools for dealing with the first issue. The proposed relational feature transfer learning algorithm (RF-TL) employs an extended structural equation modelling (SEM) approach as a method for constructing KGs. Additionally, in fields related to people's lives, property, safety, and security, such as medicine, economics, and law, the knowledge of domain experts is the gold standard. This paper introduces causal analysis and counterfactual inference into the TL domain to direct the transfer procedure. Unlike traditional feature-based TL algorithms such as transfer component analysis (TCA) and CORrelation ALignment (CORAL), RF-TL not only considers relations between feature items but also utilizes causality knowledge, enabling it to perform well in practical cases. The algorithm was tested on two different healthcare-related datasets (sleep apnea questionnaire study data and COVID-19 case data on ICU admission), and its performance was compared with TCA and CORAL. The experimental results show that RF-TL generates better transferred models that give more accurate predictions with fewer input features.
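
For context on the feature-based baselines mentioned, the sketch below shows the standard CORAL alignment step, which matches second-order statistics only and ignores relational and causal structure; the data and the regularizer value are toy assumptions.

```python
import numpy as np

def sym_matrix_power(M, power):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** power) @ vecs.T

def coral_align(X_source, X_target, eps=1e-3):
    """Re-color source features so their covariance matches the target domain."""
    d = X_source.shape[1]
    cov_s = np.cov(X_source, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(X_target, rowvar=False) + eps * np.eye(d)
    whiten = sym_matrix_power(cov_s, -0.5)   # remove source correlations
    recolor = sym_matrix_power(cov_t, 0.5)   # impose target correlations
    return X_source @ whiten @ recolor

rng = np.random.default_rng(0)
X_s = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # toy source features
X_t = rng.normal(size=(150, 5)) @ rng.normal(size=(5, 5))  # toy target features
X_s_aligned = coral_align(X_s, X_t)

gap_before = np.linalg.norm(np.cov(X_s, rowvar=False) - np.cov(X_t, rowvar=False))
gap_after = np.linalg.norm(np.cov(X_s_aligned, rowvar=False) - np.cov(X_t, rowvar=False))
print(f"covariance gap: {gap_before:.3f} -> {gap_after:.3f}")
```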

    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in the context of a multi-robot network equipped with cameras and onboard processing, comparing them against centralised alternatives in which all data is transmitted to a single external node where processing occurs. With the rise of large-scale camera networks, and as low-cost on-board processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions solve these issues by spreading the work over the entire network, operating only on local calculations and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks, in which three main stages are identified: an initialisation stage, where calibration and localisation are performed in a distributed manner; a local tracking stage, where visual odometry is performed without inter-robot communication; and a global mapping stage, where global alignment and optimisation strategies are applied. In consideration of this framework, this research investigates how algorithms can be developed to produce fundamentally distributed solutions, designed to minimise computational complexity while maintaining excellent performance, and to operate effectively in the long term. Therefore, three primary objectives are sought, aligned with these three stages.
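
The dissertation's own algorithms are not given in the abstract; the sketch below only illustrates the stated principle of relying on local computation and neighbour-to-neighbour communication, using classic average consensus over a ring of nodes with made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 8
# Each node holds a local estimate of some shared quantity (toy values).
estimates = rng.normal(loc=1.0, scale=0.1, size=n_nodes)
initial = estimates.copy()
# Ring topology: each node talks only to its two immediate neighbours.
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
step = 0.3  # consensus gain; must stay below 1 / (maximum node degree)

for _ in range(100):
    updated = estimates.copy()
    for i in range(n_nodes):
        # Move toward the values heard from neighbours; no central node involved.
        updated[i] += step * sum(estimates[j] - estimates[i] for j in neighbours[i])
    estimates = updated

print("node estimates after consensus:", estimates.round(4))
print("centralised average for comparison:", round(float(initial.mean()), 4))
```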

    Implementation of RESTful web-services as a platform for exchanging information between grid operators

    This work addresses the information exchange between System Operators for operational planning purposes. These System Operators are responsible for the correct functioning of their electrical grids. With the increasing usage of renewable energy sources, the coordination and information exchange between system operators must be improved. Among the several potential areas of coordination between System Operators (TSO and DSO), the focus of this work is the exchange of information concerning the three-phase short-circuit currents at the interface buses between TSO and DSO. In order to carry out this information exchange, a web-service architecture is proposed. The architecture used was RESTful, which must follow the European Union's specifications for data exchange. A structure for the type of information to be exchanged between System Operators was developed and mapped to the Portuguese case; the information exchanged consisted of the Scc and Icc values (short-circuit power and short-circuit current) of a given electrical grid over a 24-hour period. An API (application programming interface) was developed that retrieves the information from a given grid and performs the required calculations. To test the implementation of the developed solution, a grid based on the IEEE 14-bus system was built using generation and load profiles based on real data from REN (Redes Energéticas Nacionais). To simulate a 24-hour scenario, 24 different cases were developed and tested in order to obtain a profile of the evolution of Scc. The RESTful server was tested with the API's output files to ensure the correct transmission of data between the API and the server. The results obtained for both the API and the server are positive and reflect the expected outcome. The information retrieved from the API is validated, and its output is well structured and contains the necessary information for the data exchange. As for the server, the tests successfully verified the performance and fidelity of the information exchange and all developed functionalities.
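
The thesis does not list its endpoints, so the snippet below is only a hypothetical sketch of a RESTful resource of the kind described: a GET endpoint returning the 24-hour Scc/Icc profile of an interface bus as JSON. The route, field names, and data are illustrative assumptions, and Flask is used here only for brevity.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical 24-hour profile of three-phase short-circuit values at one
# interface bus: Scc in MVA, Icc in kA, one entry per hour (made-up numbers).
SHORT_CIRCUIT_PROFILE = [
    {"hour": h, "bus": "IEEE14-B4", "scc_mva": 950.0 + 5 * h, "icc_ka": 7.9 + 0.05 * h}
    for h in range(24)
]

@app.route("/interface-buses/<bus_id>/short-circuit", methods=["GET"])
def get_short_circuit(bus_id):
    # Return the 24-hour short-circuit profile for the requested interface bus.
    records = [r for r in SHORT_CIRCUIT_PROFILE if r["bus"] == bus_id]
    if not records:
        return jsonify({"error": "unknown bus"}), 404
    return jsonify({"bus": bus_id, "profile": records})

if __name__ == "__main__":
    app.run(port=8080)
```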