302 research outputs found

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    Full text link
    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, assisting Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review in this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results from various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website: https://github.com/ShiPC-AI/LPR-Survey. Comment: 26 pages, 13 figures, 5 tables.
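
    For readers new to the area, the survey's mention of commonly used evaluation metrics can be made concrete with a small example. The sketch below computes Recall@1 for a retrieval-style LPR pipeline, assuming global descriptors have already been extracted per scan; the function name recall_at_1 and the 25 m revisit threshold are illustrative choices, not values taken from the survey.

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=25.0):
    """Fraction of queries whose nearest database descriptor lies within
    dist_thresh metres of the query's ground-truth position."""
    hits = 0
    for q_d, q_p in zip(query_desc, query_pos):
        # nearest neighbour in descriptor space (L2 distance)
        nn = np.argmin(np.linalg.norm(db_desc - q_d, axis=1))
        if np.linalg.norm(db_pos[nn] - q_p) <= dist_thresh:
            hits += 1
    return hits / len(query_desc)

# toy usage with random descriptors and 2D positions
rng = np.random.default_rng(0)
q_desc, db_desc = rng.normal(size=(10, 256)), rng.normal(size=(100, 256))
q_pos, db_pos = rng.uniform(0, 100, (10, 2)), rng.uniform(0, 100, (100, 2))
print(recall_at_1(q_desc, db_desc, q_pos, db_pos))
```

    Published evaluations typically also report Recall@N and precision-recall curves; the structure of the computation is the same.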

    Neural Reflectance Decomposition

    Get PDF
    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is the high ambiguity. The creation of images from 3D objects is well defined as rendering. However, multiple properties such as shape, illumination, and surface reflectiveness influence each other. Additionally, an integration of these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. However, solving the task is essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, or movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed, which generalizes the decomposition of a two-shot capture of an object from large training datasets. The degree of novel view synthesis is limited, as only a singular perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a collection of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on Neural Fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision. Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections.
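
    As a rough illustration of the volume-rendering step mentioned above, the sketch below implements the standard emission-absorption quadrature used by neural-field methods: per-sample densities and colours along a ray are composited into a pixel value. It is a generic sketch, not the thesis's decomposition into shape, reflectance, and illumination; the function name composite_along_ray and the toy inputs are assumptions.

```python
import numpy as np

def composite_along_ray(sigmas, rgbs, deltas):
    """Emission-absorption volume rendering quadrature:
    alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i}(1 - alpha_j)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # (N,)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # (N,)
    weights = trans * alphas                                         # (N,)
    color = (weights[:, None] * rgbs).sum(axis=0)                    # (3,)
    return color, weights

# toy ray with 64 samples
N = 64
sigmas = np.abs(np.random.randn(N))      # densities queried from the neural field
rgbs = np.random.rand(N, 3)              # per-sample reflectance/colour
deltas = np.full(N, 1.0 / N)             # distances between consecutive samples
print(composite_along_ray(sigmas, rgbs, deltas)[0])
```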

    Model-Free 3D Shape Control of Deformable Objects Using Novel Features Based on Modal Analysis

    Full text link
    Shape control of deformable objects is a challenging and important robotic problem. This paper proposes a model-free controller using novel 3D global deformation features based on modal analysis. Unlike most existing controllers using geometric features, our controller employs a physically-based deformation feature by decoupling 3D global deformation into low-frequency mode shapes. Although modal analysis is widely adopted in computer vision and simulation, it has not been used in robotic deformation control. We develop a new model-free framework for modal-based deformation control under robot manipulation. Physical interpretation of the mode shapes enables us to formulate an analytical deformation Jacobian matrix mapping the robot manipulation onto changes of the modal features. In the Jacobian matrix, unknown geometry and physical properties of the object are treated as low-dimensional modal parameters, which can be used to linearly parameterize the closed-loop system. Thus, an adaptive controller with proven stability can be designed to deform the object while estimating the modal parameters online. Simulations and experiments are conducted using linear, planar, and solid objects under different settings. The results not only confirm the superior performance of our controller but also demonstrate its advantages over the baseline method. Comment: Accepted by the IEEE Transactions on Robotics; IEEE copyright applies.
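
    The modal features referred to in the abstract can be illustrated with a short sketch: given a stiffness matrix K and mass matrix M for a discretized object, the low-frequency mode shapes come from the generalized eigenproblem K phi = lambda M phi, and a measured displacement field is projected onto them to obtain a low-dimensional feature vector. This is a generic illustration with assumed toy matrices, not the paper's controller or Jacobian derivation; modal_features is a hypothetical helper name.

```python
import numpy as np
from scipy.linalg import eigh

def modal_features(displacements, K, M, k=4):
    """Project a measured displacement field onto the k lowest-frequency
    mode shapes from the generalized eigenproblem K phi = lambda M phi."""
    eigvals, eigvecs = eigh(K, M)          # ascending eigenvalues, M-orthonormal modes
    modes = eigvecs[:, :k]                 # low-frequency mode shapes (n x k)
    return modes.T @ M @ displacements     # modal coordinates (k,)

# toy 1D elastic chain with n nodes (tridiagonal stiffness, lumped unit mass)
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
u = np.sin(np.linspace(0, np.pi, n))       # a smooth "deformation"
print(modal_features(u, K, M, k=4))
```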

    Applications in Monocular Computer Vision using Geometry and Learning : Map Merging, 3D Reconstruction and Detection of Geometric Primitives

    Get PDF
    As the dream of autonomous vehicles moving around in our world comes closer, the problem of robust localization and mapping is essential to solve. In this inherently structured and geometric problem we also want the agents to learn from experience in a data-driven fashion. How modern Neural Network models can be combined with Structure from Motion (SfM) is an interesting research question, and this thesis studies some related problems in 3D reconstruction, feature detection, SfM, and map merging. In Paper I we study how a Bayesian Neural Network (BNN) performs in Semantic Scene Completion, where the task is to predict a semantic 3D voxel grid for the field of view of a single RGBD image. We propose an extended task and evaluate the benefits of the BNN when encountering new classes at inference time. It is shown that the BNN outperforms the deterministic baseline. Papers II-III are about detection of points, lines and planes defining a Room Layout in an RGB image. Due to the repeated textures and homogeneous colours of indoor surfaces it is not ideal to only use point features for Structure from Motion. The idea is to complement the point features by detecting a Wireframe – a connected set of line segments – which marks the intersection of planes in the Room Layout. Paper II concerns a task for detecting a Semantic Room Wireframe and implements a Neural Network model utilizing a Graph Convolutional Network module. The experiments show that the method is more flexible than previous Room Layout Estimation methods and performs better than previous Wireframe Parsing methods. Paper III takes the task closer to Room Layout Estimation by detecting a connected set of semantic polygons in an RGB image. The end-to-end trainable model is a combination of a Wireframe Parsing model and a Heterogeneous Graph Neural Network. We show promising results by outperforming state-of-the-art models for Room Layout Estimation using synthetic Wireframe detections. However, the joint Wireframe and Polygon detector requires further research to compete with the state-of-the-art models. In Paper IV we propose minimal solvers for SfM with parallel cylinders. The problem may be reduced to estimating circles in 2D, and the paper contributes theory for the two-view relative motion and two-circle relative structure problems. Fast solvers are derived, and experiments show good performance both in simulation and on real data. Papers V-VII cover the task of map merging: given a set of individually optimized point clouds with camera poses from an SfM pipeline, how can the solutions be effectively merged without completely re-solving the Structure from Motion problem? Papers V-VI introduce an effective method for merging and show its effectiveness through experiments on real and simulated data. Paper VII considers the matching problem for point clouds and proposes minimal solvers that allow for deformation of each point cloud. Experiments show that the method robustly matches point clouds with drift in the SfM solution.
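
    The map-merging problem in Papers V-VII boils down to aligning individually optimized reconstructions. A common building block for this, sketched below under the assumption that point correspondences between the two maps are already known, is the Umeyama similarity-transform fit (scale, rotation, translation); the function name similarity_transform is an illustrative choice, and this is not the papers' full merging pipeline.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t minimizing ||dst - (s R src + t)||^2 (Umeyama, 1991).
    src, dst: (N, 3) corresponding points from the two reconstructions."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                      # enforce a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# toy check: recover a known similarity transform from noisy correspondences
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q
s_true, t_true = 1.7, np.array([0.3, -2.0, 1.0])
B = s_true * A @ R_true.T + t_true + 0.01 * rng.normal(size=A.shape)
s, R, t = similarity_transform(A, B)
print(round(s, 3))   # close to 1.7
```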

    Fast fluorescence lifetime imaging and sensing via deep learning

    Get PDF
    Error on title page – year of award is 2023. Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis and the high photon budget for precisely quantifying the fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope. Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. The calculation speed and accuracy outperform conventional methods significantly. Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems. Thirdly, a DL algorithm is developed to analyze FLIM images with only a few photons per pixel, named Few-Photon Fluorescence Lifetime Imaging (FPFLI) algorithm. FPFLI uses spatial correlation and intensity information to robustly estimate the fluorescence lifetime images, pushing this photon budget to a record-low level of only a few photons per pixel. Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows an excellent photon-counting throughput. A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled high accuracy, fast analysis speed, and low power consumption.
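
    As a point of reference for the conventional analysis the thesis compares against, the sketch below fits a mono-exponential decay to a single pixel's time-resolved histogram by least squares; the DL algorithms described in the thesis replace this kind of per-pixel fit. The function names and the 10 ns / 256-bin toy setup are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, tau):
    return a * np.exp(-t / tau)

def fit_lifetime(t, counts, tau0=2.0):
    """Least-squares mono-exponential fit to one pixel's decay histogram.
    Returns the estimated lifetime tau (same time units as t)."""
    (a, tau), _ = curve_fit(mono_exp, t, counts, p0=(counts.max(), tau0))
    return tau

# toy pixel: 256 time bins over a 10 ns window, tau = 2.5 ns, Poisson noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 256)
counts = rng.poisson(mono_exp(t, 200, 2.5)).astype(float)
print(f"estimated lifetime: {fit_lifetime(t, counts):.2f} ns")
```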

    Collected Papers (Neutrosophics and other topics), Volume XIV

    Get PDF
    This fourteenth volume of Collected Papers is an eclectic tome of 87 papers in Neutrosophics and other fields, such as mathematics, fuzzy sets, intuitionistic fuzzy sets, picture fuzzy sets, information fusion, robotics, statistics, or extenics, comprising 936 pages, published between 2008-2022 in different scientific journals or currently in press, by the author alone or in collaboration with the following 99 co-authors (alphabetically ordered) from 26 countries: Ahmed B. Al-Nafee, Adesina Abdul Akeem Agboola, Akbar Rezaei, Shariful Alam, Marina Alonso, Fran Andujar, Toshinori Asai, Assia Bakali, Azmat Hussain, Daniela Baran, Bijan Davvaz, Bilal Hadjadji, Carlos DĂ­az Bohorquez, Robert N. Boyd, M. Caldas, Cenap Özel, Pankaj Chauhan, Victor Christianto, Salvador Coll, Shyamal Dalapati, Irfan Deli, Balasubramanian Elavarasan, Fahad Alsharari, Yonfei Feng, Daniela GĂźfu, Rafael Rojas GualdrĂłn, Haipeng Wang, Hemant Kumar Gianey, Noel Batista HernĂĄndez, Abdel-Nasser Hussein, Ibrahim M. Hezam, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Muthusamy Karthika, Nour Eldeen M. Khalifa, Madad Khan, Kifayat Ullah, Valeri Kroumov, Tapan Kumar Roy, Deepesh Kunwar, Le Thi Nhung, Pedro LĂłpez, Mai Mohamed, Manh Van Vu, Miguel A. Quiroz-MartĂ­nez, Marcel Migdalovici, Kritika Mishra, Mohamed Abdel-Basset, Mohamed Talea, Mohammad Hamidi, Mohammed Alshumrani, Mohamed Loey, Muhammad Akram, Muhammad Shabir, Mumtaz Ali, Nassim Abbas, Munazza Naz, Ngan Thi Roan, Nguyen Xuan Thao, Rishwanth Mani Parimala, Ion Pătrașcu, Surapati Pramanik, Quek Shio Gai, Qiang Guo, Rajab Ali Borzooei, Nimitha Rajesh, JesĂșs Estupiñan Ricardo, Juan Miguel MartĂ­nez Rubio, Saeed Mirvakili, Arsham Borumand Saeid, Saeid Jafari, Said Broumi, Ahmed A. Salama, Nirmala Sawan, Gheorghe Săvoiu, Ganeshsree Selvachandran, Seok-Zun Song, Shahzaib Ashraf, Jayant Singh, Rajesh Singh, Son Hoang Le, Tahir Mahmood, Kenta Takaya, Mirela Teodorescu, Ramalingam Udhayakumar, Maikel Y. Leyva VĂĄzquez, V. Venkateswara Rao, Luige Vlădăreanu, Victor Vlădăreanu, Gabriela Vlădeanu, Michael Voskoglou, Yaser Saber, Yong Deng, You He, Youcef Chibani, Young Bae Jun, Wadei F. Al-Omeri, Hongbo Wang, Zayen Azzouz Omar

    Self-supervised Learning of Monocular Depth from Video

    Get PDF
    Image-based depth estimation, as a fundamental problem in computer vision, allows for understanding scene geometry using only cameras. This thesis addresses the specific problem of monocular depth estimation via self-supervised learning from RGB-only videos. Although existing work has shown excellent results on some benchmark datasets, several vital challenges remain that limit the use of these algorithms in general scenarios. To summarize, my identified challenges and contributions include: (i) Previous methods predict inconsistent depths over a video, which limits their use in visual localization and mapping. To this end, I propose a geometry consistency loss that penalizes multi-view depth misalignment in training, which enables scale-consistent depth estimation at inference time; (ii) Previous methods often diverge or show low-accuracy results when training on handheld-camera-captured videos. To address the challenge, I analyze the effect of camera motion on depth network gradients, and I propose an auto-rectify network to remove the relative rotation in training image pairs for robust learning; (iii) Previous methods fail to learn reasonable depths from highly dynamic scenes due to the non-rigidity. In this scenario, I propose a novel method which constrains dynamic regions using an external well-trained depth estimation network and supervises static regions via multi-view losses. Comprehensive quantitative results and rich qualitative results are provided to demonstrate the advantages of the proposed methods over existing alternatives. The code and pre-trained models have been released at https://github.com/JiawangBian. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
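
    The geometry consistency idea in contribution (i) can be sketched in a few lines: depth from one view is projected into a second view using the relative pose, and the normalized difference to the second view's depth is penalized. The NumPy sketch below uses nearest-neighbour sampling for brevity; the author's released implementation differs in detail (differentiable sampling, validity masks, batching), and the function name geometry_consistency_loss is illustrative.

```python
import numpy as np

def geometry_consistency_loss(D1, D2, K, R, t):
    """Project view-1 depth into view 2 with relative pose (R, t) and penalize
    the normalized depth difference |d' - d2| / (d' + d2) over valid pixels."""
    h, w = D1.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1).astype(float)
    # backproject to view-1 camera coordinates, then rigid-transform to view 2
    pts = np.linalg.inv(K) @ pix * D1.reshape(1, -1)
    pts2 = R @ pts + t[:, None]
    d_proj = pts2[2]                                  # depth of warped points in view 2
    uv2 = (K @ pts2)[:2] / np.clip(d_proj, 1e-6, None)
    u2, v2 = np.round(uv2[0]).astype(int), np.round(uv2[1]).astype(int)
    valid = (d_proj > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    d2 = D2[v2[valid], u2[valid]]
    diff = np.abs(d_proj[valid] - d2) / (d_proj[valid] + d2)
    return diff.mean()

# toy usage: identical fronto-parallel depth and identity motion -> zero loss
K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
D = np.full((48, 64), 5.0)
print(geometry_consistency_loss(D, D, K, np.eye(3), np.zeros(3)))
```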

    Depth Estimation Using 2D RGB Images

    Get PDF
    Single-image depth estimation is an ill-posed problem: it is not mathematically possible to uniquely estimate the third dimension (depth) from a single 2D image. Hence, additional constraints need to be incorporated in order to restrict the solution space. As a result, the first part of this dissertation explores the idea of constraining the model for more accurate depth estimation by taking advantage of the similarity between the RGB image and the corresponding depth map at the geometric edges of the 3D scene. Although deep-learning-based methods are very successful in computer vision and handle noise very well, they suffer from poor generalization when the test and train distributions are not close. Geometric methods, in contrast, do not have this generalization problem, since they benefit from temporal information in an unsupervised manner; they are, however, sensitive to noise. At the same time, explicitly modeling dynamic scenes as well as flexible objects is a big challenge for traditional computer vision methods. Considering the advantages and disadvantages of each approach, a hybrid method that benefits from both is proposed here by extending traditional geometric models' ability to handle flexible and dynamic objects in the scene. This is made possible by relaxing the geometric computer-vision rules from one motion model for some areas of the scene to one motion model for every pixel in the scene. This enables the model to detect even small, flexible, floating debris in a dynamic scene; however, it makes the optimization under-constrained. To change the optimization from under-constrained to over-constrained while maintaining the model's flexibility, a "moving object detection loss" and a "synchrony loss" are designed. The algorithm is trained in an unsupervised fashion. The preliminary results are not yet comparable to the current state of the art: the training process is slow, which makes a thorough comparison difficult; the algorithm lacks stability; and the optical flow model is noisy and naive. Finally, some solutions are suggested to address these issues.
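
    The first idea in the abstract, coupling depth to the RGB image at geometric edges, is commonly realized as an edge-aware smoothness term: depth gradients are penalized everywhere except where the image itself has strong gradients. The sketch below shows that generic term; it is not the dissertation's specific "moving object detection" or "synchrony" losses, and the weighting constant alpha is an arbitrary illustrative value.

```python
import numpy as np

def edge_aware_smoothness(depth, image, alpha=10.0):
    """Penalize depth gradients except where the RGB image itself has strong
    gradients, encouraging depth discontinuities to align with image edges."""
    dzdx = np.abs(np.diff(depth, axis=1))
    dzdy = np.abs(np.diff(depth, axis=0))
    didx = np.abs(np.diff(image, axis=1)).mean(axis=2)   # mean over RGB channels
    didy = np.abs(np.diff(image, axis=0)).mean(axis=2)
    wx = np.exp(-alpha * didx)     # small weight at image edges
    wy = np.exp(-alpha * didy)
    return (wx * dzdx).mean() + (wy * dzdy).mean()

# toy usage with random image and depth
rng = np.random.default_rng(0)
img = rng.random((48, 64, 3))
depth = rng.random((48, 64))
print(edge_aware_smoothness(depth, img))
```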

    Discontinuity-Aware Base-Mesh Modeling of Depth for Scalable Multiview Image Synthesis and Compression

    Full text link
    This thesis is concerned with the challenge of deriving disparity from sparsely communicated depth for performing disparity-compensated view synthesis for compression and rendering of multiview images. The modeling of depth is essential for deducing disparity at view locations where depth is not available and is also critical for visibility reasoning and occlusion handling. This thesis first explores disparity derivation methods and disparity-compensated view synthesis approaches. Investigations reveal the merits of adopting a piece-wise continuous mesh description of depth for deriving disparity at target view locations to enable disparity-compensated backward warping of texture. Visibility information can be reasoned due to the correspondence relationship between views that a mesh model provides, while the connectivity of a mesh model assists in resolving depth occlusion. The recent JPEG 2000 Part-17 extension defines tools for scalable coding of discontinuous media using breakpoint-dependent DWT, where breakpoints describe discontinuity boundary geometry. This thesis proposes a method to efficiently reconstruct depth coded using JPEG 2000 Part-17 as a piece-wise continuous mesh, where discontinuities are driven by the encoded breakpoints. Results show that the proposed mesh can accurately represent decoded depth while its complexity scales along with decoded depth quality. The piece-wise continuous mesh model anchored at a single viewpoint or base-view can be augmented to form a multi-layered structure where the underlying layers carry depth information of regions that are occluded at the base-view. Such a consolidated mesh representation is termed a base-mesh model and can be projected to many viewpoints, to deduce complete disparity fields between any pair of views that are inherently consistent. Experimental results demonstrate the superior performance of the base-mesh model in multiview synthesis and compression compared to other state-of-the-art methods, including the JPEG Pleno light field codec. The proposed base-mesh model departs greatly from conventional pixel-wise or block-wise depth models and their forward depth mapping for deriving disparity ingrained in existing multiview processing systems. When performing disparity-compensated view synthesis, there can be regions for which reference texture is unavailable, and inpainting is required. A new depth-guided texture inpainting algorithm is proposed to restore occluded texture in regions where depth information is either available or can be inferred using the base-mesh model
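
    The core relationship the thesis builds on, deriving disparity from depth and backward-warping texture, can be illustrated in the simplest rectified two-view setting, where disparity = focal_length * baseline / depth and each target pixel samples the source image at a horizontally shifted location. The sketch below uses nearest-neighbour sampling and ignores the occlusion and visibility reasoning that the base-mesh model is designed to handle; the function name backward_warp_from_depth is an illustrative assumption.

```python
import numpy as np

def backward_warp_from_depth(src_tex, tgt_depth, focal, baseline):
    """Disparity-compensated backward warping for a rectified horizontal pair:
    disparity = focal * baseline / depth, and each target pixel samples the
    source texture at (x - disparity). Nearest-neighbour sampling for brevity."""
    h, w = tgt_depth.shape
    disparity = focal * baseline / np.clip(tgt_depth, 1e-6, None)
    xs = np.arange(w)[None, :] - disparity               # source x-coordinate per target pixel
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]
    return src_tex[rows, xs]

# toy usage: a constant-depth plane gives a pure horizontal shift of the texture
src = np.tile(np.arange(64, dtype=float), (48, 1))
depth = np.full((48, 64), 4.0)
warped = backward_warp_from_depth(src, depth, focal=100.0, baseline=0.2)
print(warped[0, 10:16])   # shifted by focal * baseline / depth = 5 pixels
```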

    Does Monocular Depth Estimation Provide Better Pre-training than Classification for Semantic Segmentation?

    Full text link
    Training a deep neural network for semantic segmentation is labor-intensive, so it is common to pre-train it for a different task, and then fine-tune it with a small annotated dataset. State-of-the-art methods use image classification for pre-training, which introduces uncontrolled biases. We test the hypothesis that depth estimation from unlabeled videos may provide better pre-training. Despite the absence of any semantic information, we argue that estimating scene geometry is closer to the task of semantic segmentation than classifying whole images into semantic classes. Since analytical validation is intractable, we test the hypothesis empirically by introducing a pre-training scheme that yields an improvement of 5.7% mIoU and 4.1% pixel accuracy over classification-based pre-training. While annotation is not needed for pre-training, it is needed for testing the hypothesis. We use the KITTI (outdoor) and NYU-V2 (indoor) benchmarks to that end, and provide an extensive discussion of the benefits and limitations of the proposed scheme in relation to existing unsupervised, self-supervised, and semi-supervised pre-training protocols
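
    At the implementation level, the pre-training scheme described above amounts to training an encoder with a depth objective and then reusing its weights as the backbone of a segmentation network with a fresh task head. The PyTorch sketch below shows that wiring only; the layer sizes, class names, and the checkpoint path "depth_pretrained.pt" are placeholders, not the paper's actual architecture or training setup.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder; in the paper's setting its weights would come
    from self-supervised depth estimation on unlabeled videos."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class SegmentationModel(nn.Module):
    def __init__(self, encoder, num_classes):
        super().__init__()
        self.encoder = encoder                       # weights from depth pre-training
        self.head = nn.Conv2d(64, num_classes, 1)    # fresh task-specific head
    def forward(self, x):
        logits = self.head(self.encoder(x))
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

encoder = Encoder()
# encoder.load_state_dict(torch.load("depth_pretrained.pt"))  # assumed checkpoint
model = SegmentationModel(encoder, num_classes=19)
print(model(torch.randn(1, 3, 64, 64)).shape)        # -> torch.Size([1, 19, 64, 64])
```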