
    RANSAC for Robotic Applications: A Survey

    Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust method for estimating the parameters of a model from data contaminated by a sizable percentage of outliers. In its simplest form, the process starts with the sampling of the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps to improve computing time or the quality of the parameter estimates. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC-family methods, with a special interest in applications in robotics. This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
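    The workflow described above (draw a minimal sample, fit a model, score it against all the data, repeat until a stopping criterion) can be illustrated with a minimal sketch for 2D line fitting; the function name, tolerance, and iteration count below are illustrative choices, not taken from the survey:

    ```python
    import random

    def ransac_line(points, iters=1000, tol=0.5, min_inliers=10):
        """Minimal RANSAC sketch: repeatedly fit a line y = a*x + b to a
        random minimal sample (2 points) and keep the model that gathers
        the most inliers within `tol` of the line."""
        best_model, best_inliers = None, []
        for _ in range(iters):
            (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal sample
            if x1 == x2:
                continue  # vertical pair; cannot express as y = a*x + b
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            # evaluate adequacy: count points consistent with the model
            inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b), inliers
        if len(best_inliers) < min_inliers:
            return None, []
        return best_model, best_inliers
    ```

    Variants in the RANSAC family typically modify one of these steps, e.g. biasing the sampling, changing the scoring function, or adapting the stopping criterion.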

    DEVELOPMENT OF AN AUTONOMOUS NAVIGATION SYSTEM FOR THE SHUTTLE CAR IN UNDERGROUND ROOM & PILLAR COAL MINES

    In recent years, autonomous solutions in the multi-disciplinary field of mining engineering have been an extremely popular applied research topic. The growing demand for mineral supplies combined with the steady decline in the available surface reserves has driven the mining industry to mine deeper underground deposits. These deposits are difficult to access, and the environment may be hazardous to mine personnel (e.g., increased heat and difficult ventilation conditions). Moreover, current mining methods expose the miners to numerous occupational hazards such as working in the proximity of heavy mining equipment, possible roof falls, as well as noise and dust. As a result, the mining industry, in its efforts to modernize and advance its methods and techniques, is one of the many industries that has turned to autonomous systems. Vehicle automation in such complex working environments can play a critical role in improving worker safety and mine productivity. One of the most time-consuming tasks of the mining cycle is the transportation of the extracted ore from the face to the main haulage facility or to surface processing facilities. Although conveyor belts have long been the autonomous transportation means of choice, there are still many cases where a discrete transportation system is needed to transport materials from the face to the main haulage system. The current dissertation presents the development of a navigation system for an autonomous shuttle car (ASC) in underground room and pillar coal mines. By introducing autonomous shuttle cars, the operator can be relocated from the dusty, noisy, and potentially dangerous environment of the underground mine to the safer location of a control room. This dissertation focuses on the development and testing of an autonomous navigation system for an underground room and pillar coal mine. 
A simplified relative localization system, which determines the location of the vehicle relative to salient features derived from on-board 2D LiDAR scans, was developed for a semi-autonomous laboratory-scale shuttle car prototype. This simplified relative localization system is heavily dependent on, and at the same time leverages, the room and pillar geometry. Instead of keeping track of a global position of the vehicle relative to a fixed coordinate frame, the proposed custom localization technique requires information regarding only the immediate surroundings. This approach enables the prototype to navigate around the pillars in real time using a deterministic finite-state machine which models the behavior of the vehicle in the room and pillar mine with only a few states. Also, a user-centered GUI has been developed that allows a human operator to control and monitor the autonomous vehicle by implementing the proposed navigation system. Experimental tests have been conducted in a mock mine in order to evaluate the performance of the developed system. A number of different scenarios simulating common missions that a shuttle car undertakes in a room and pillar mine were tested. The results show a minimum success ratio of 70%.
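    The few-state navigation machine can be sketched as follows; the dissertation does not enumerate its actual states here, so the state names and LiDAR-derived flags below are hypothetical stand-ins that only illustrate the pattern of reacting to the immediate surroundings rather than a global map:

    ```python
    from enum import Enum, auto

    class State(Enum):
        FOLLOW_ENTRY = auto()       # drive straight along the entry
        TURN_AT_CROSSCUT = auto()   # rotate toward a detected crosscut opening
        TRAVERSE_CROSSCUT = auto()  # drive through the crosscut
        STOPPED = auto()            # safety stop

    def step(state, lidar):
        """One transition of a hypothetical pillar-navigation FSM.
        `lidar` is a dict of booleans derived from 2D scans of the
        immediate surroundings; no global localization is needed."""
        if lidar.get("obstacle_ahead"):
            return State.STOPPED
        if state is State.FOLLOW_ENTRY and lidar.get("crosscut_opening"):
            return State.TURN_AT_CROSSCUT
        if state is State.TURN_AT_CROSSCUT and lidar.get("aligned_with_crosscut"):
            return State.TRAVERSE_CROSSCUT
        if state is State.TRAVERSE_CROSSCUT and lidar.get("entry_opening"):
            return State.FOLLOW_ENTRY
        return state  # no triggering feature: keep current behavior
    ```

    Because each transition depends only on features of the nearest pillars and openings, the machine stays deterministic and small, which is the property the localization scheme exploits.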

    Visionary Ophthalmics: Confluence of Computer Vision and Deep Learning for Ophthalmology

    Ophthalmology is a medical field ripe with opportunities for meaningful application of computer vision algorithms. The field utilizes data from multiple disparate imaging techniques, ranging from conventional cameras to tomography, comprising a diverse set of computer vision challenges. Computer vision has a rich history of techniques that can adequately meet many of these challenges. However, the field has undergone something of a revolution in recent times as deep learning techniques have sprung into the forefront following advances in GPU hardware. This development raises important questions regarding how to best leverage insights from both modern deep learning approaches and more classical computer vision approaches for a given problem. In this dissertation, we tackle challenging computer vision problems in ophthalmology using methods all across this spectrum. Perhaps our most significant work is a highly successful iris registration algorithm for use in laser eye surgery. This algorithm relies on matching features extracted from the structure tensor and a Gabor wavelet – a classically driven approach that does not utilize modern machine learning. However, drawing on insight from the deep learning revolution, we demonstrate successful application of backpropagation to optimize the registration significantly faster than the alternative of relying on finite differences. Towards the other end of the spectrum, we also present a novel framework for improving RANSAC segmentation algorithms by utilizing a convolutional neural network (CNN) trained on a RANSAC-based loss function. Finally, we apply state-of-the-art deep learning methods to solve the problem of pathological fluid detection in optical coherence tomography images of the human retina, using a novel retina-specific data augmentation technique to greatly expand the data set. 
Altogether, our work demonstrates the benefits of taking a holistic view of computer vision, which leverages deep learning and its associated insights without neglecting techniques and insights from the previous era.
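    The speed advantage of backpropagation-style analytic gradients over finite differences, mentioned for the iris registration work, comes from the evaluation count: finite differences need one extra objective evaluation per parameter, while an analytic (reverse-mode) gradient costs roughly one pass regardless of parameter count. The dissertation's actual registration objective is not given here, so a toy least-squares objective stands in for it:

    ```python
    import numpy as np

    def loss(p, x, y):
        # toy registration-style objective: squared error of a linear model
        a, b = p
        return np.sum((a * x + b - y) ** 2)

    def grad_analytic(p, x, y):
        """Closed-form gradient (what backpropagation computes in one pass)."""
        a, b = p
        r = a * x + b - y
        return np.array([2 * np.sum(r * x), 2 * np.sum(r)])

    def grad_fd(p, x, y, eps=1e-6):
        """Finite differences: one extra loss evaluation per parameter,
        so the cost grows linearly with the number of parameters."""
        g = np.zeros_like(p, dtype=float)
        base = loss(p, x, y)
        for i in range(len(p)):
            q = p.astype(float).copy()
            q[i] += eps
            g[i] = (loss(q, x, y) - base) / eps
        return g
    ```

    Both routines agree to within finite-difference truncation error; the analytic form is what makes gradient-based refinement of the registration fast.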

    On the sample consensus robust estimation paradigm: comprehensive survey and novel algorithms with applications.

    Master of Science in Statistics and Computer Science. University of KwaZulu-Natal, Durban, 2016. This study begins with a comprehensive survey of existing variants of the Random Sample Consensus (RANSAC) algorithm; five new ones are then contributed. RANSAC, arguably the most popular robust estimation algorithm in computer vision, has limitations in accuracy, efficiency and repeatability. Research into techniques for overcoming these drawbacks has been active for about two decades. In the last decade and a half, nearly every year has seen at least one variant published: more than ten in the last two years. However, many existing variants compromise two attractive properties of the original RANSAC: simplicity and generality. Some introduce new operations, resulting in loss of simplicity, while many of those that do not introduce new operations require problem-specific priors. In this way, they trade off generality and introduce some complexity, as well as dependence on other steps of the application workflow. Noting that these observations may explain the persisting trend of finding only the older, simpler variants in ‘mainstream’ computer vision software libraries, this work adopts an approach that preserves the two mentioned properties. Modification of the original algorithm is restricted to replacement of the search strategy, since many drawbacks of RANSAC are consequences of the search strategy it adopts. A second constraint, serving to preserve generality, is that this ‘ideal’ strategy must require no problem-specific priors. Such a strategy is developed and reported in this dissertation. Another limitation, yet to be overcome in the literature but successfully addressed in this study, is the inherent variability of RANSAC. A few theoretical discoveries are presented, providing insights on the generic robust estimation problem. 
Notably, a theorem proposed as an original contribution of this research reveals insights that are foundational to the newly proposed algorithms. Experiments on both generic and computer-vision-specific data show that all proposed algorithms are generally more accurate and more consistent than RANSAC. Moreover, they are simpler in the sense that they do not require some of the input parameters of RANSAC. Interestingly, although non-exhaustive in search like typical RANSAC-like algorithms, three of these new algorithms exhibit absolute non-randomness, a property not claimed by any existing variant. One of the proposed algorithms is fully automatic, eliminating all requirements for user-supplied input parameters. Two of the proposed algorithms are implemented as contributed alternatives to the homography estimation function provided in MATLAB’s computer vision toolbox, after being shown to improve on the performance of M-estimator Sample Consensus (MSAC). MSAC has been the choice in all releases of the toolbox, including the latest, 2015b. While this research is motivated by computer vision applications, the proposed algorithms, being generic, can be applied to any model-fitting problem from other scientific fields.
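    The search strategy that such variants replace is, in the standard formulation, governed by the stopping criterion N = ⌈log(1 − p) / log(1 − wᵐ)⌉: the number of random minimal samples needed so that, with confidence p, at least one sample is outlier-free, given inlier ratio w and minimal sample size m. A small sketch of this standard formula (not of the dissertation's new strategies, which are not detailed here):

    ```python
    import math

    def ransac_iterations(p, w, m):
        """Standard RANSAC stopping criterion: number of random minimal
        samples needed so that, with confidence p, at least one sample
        is all-inlier, given inlier ratio w and sample size m."""
        return math.ceil(math.log(1 - p) / math.log(1 - w ** m))
    ```

    For homography estimation (m = 4) with half the matches being inliers (w = 0.5) and 99% confidence, this gives 72 iterations; note how the count explodes as w drops, which is one reason search-strategy replacement is an attractive target for improvement.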

    Object Tracking

    Object tracking consists of estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: changes in the multiple parameters representing the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, have to be considered. This monograph presents the development of object tracking algorithms, methods and systems. Both the state of the art of object tracking methods and the new trends in research are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The intention of the editor was to follow up the very quick progress in the development of methods as well as the extension of their applications.

    Fault-Tolerant Vision for Vehicle Guidance in Agriculture


    3D Reconstruction for Post-Disaster Analysis of Civil Infrastructure

    Following a natural disaster, there is a need to rapidly assess the safety of civil infrastructure. This job is typically undertaken by local governments with the help of volunteer civil engineers with structural expertise, as well as organizations such as Cal EMA and ASCE. However, the inspection process is labor-intensive and tedious, and results are prone to error as they tend to rely on the individual expertise of the inspectors. 3D reconstruction stands to become a useful tool in the safety evaluation process, as well as providing valuable opportunities for forensic engineering. This research explores the application of a low-cost, rapidly deployable 3D reconstruction system for post-natural-disaster documentation and analysis of civil infrastructure. A review of the process of 3D reconstruction was conducted. Likewise, a review of existing technology used for disaster scene analysis was performed. Two potentially useful 3D reconstruction toolkits were examined, FIT3D and Autodesk 123D Catch, of which the latter was determined to be best suited for the task at hand. Experiments were conducted to determine the best methodology for producing accurate 3D models as well as calculating the inherent error in the model. It was found that measurements obtained from the 3D model were accurate within approximately 0.3 inches.

    Ultrasound shear wave imaging for diagnosis of nonalcoholic fatty liver disease

    For the diagnosis and staging of liver fibrosis, liver stiffness is a quantitative biomarker estimated by elastography methods. Ultrasound shear wave (SW) elastography utilizes noninvasive medical ultrasound to assess the mechanical properties of the liver based on the monitoring of SW propagation. SW speed (SWS) and SW attenuation (SWA) can provide an estimation of tissue viscoelasticity. Biological tissues are inherently viscoelastic, and a complex mathematical model is usually required to compute viscoelasticity in SW imaging. Accurate computation of attenuation is critical, especially for accurate estimation of the loss modulus and viscosity. Recent studies have attempted to increase the precision of SWA estimation, but they still face some limitations. As the first objective of this thesis, a revisited frequency-shift method was developed to improve the estimates provided by the original implementation of the frequency-shift method [Bernard et al 2017]. In the new method, the assumption of a constant shape parameter of the gamma function describing the SW magnitude spectrum has been dropped for all lateral locations, allowing a better gamma fitting. Secondly, an adaptive random sample consensus algorithm (A-RANSAC) was implemented to estimate the slope of the varying rate parameter of the gamma distribution to improve the accuracy of the method. To validate these algorithmic changes, the proposed method was compared with three recent methods proposed to estimate SWA (the frequency-shift, two-point frequency-shift and AMUSE methods) using simulation data or numerical phantoms. In addition, in vitro homogeneous gel phantoms and in vivo animal (duck) liver data were processed. 
    As a second objective, this thesis also aimed at improving the early diagnosis of nonalcoholic fatty liver disease (NAFLD), which is necessary to prevent its progression and decrease overall mortality. For this purpose, the revisited frequency-shift method was tested on in vivo human livers. The new method's diagnostic performance was investigated on healthy and NAFLD human livers. To minimize sources of variability, an automated analysis method averaging measurements from several angles was developed. The results of this method were compared to the magnetic resonance imaging proton density fat fraction (MRI-PDFF) and to liver biopsy. SWA imaging was also used for grading steatosis, and cut-off decision thresholds were established for dichotomization of the different steatosis grades. As a third objective, this thesis proposes a reproducibility study of six SW-based parameters (speed, attenuation, dispersion, Young’s modulus, viscosity and shear modulus). The assessment was performed in healthy volunteers and NAFLD patients using data acquired at two separate visits. In conclusion, a robust method for computing the liver’s SWA was developed and validated to provide a diagnostic method for NAFLD.
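    The role A-RANSAC plays in the method above, robustly estimating the slope of the gamma rate parameter across lateral positions despite noisy spectral fits, can be illustrated with a generic RANSAC slope estimator. The thesis's actual adaptive algorithm is not specified here; the function below is a plain two-point sample-consensus sketch with an illustrative inlier tolerance:

    ```python
    import random

    def ransac_slope(xs, ys, iters=500, tol=0.05):
        """Robust slope of y vs. x by sample consensus: a generic
        stand-in for estimating the lateral slope of the gamma rate
        parameter, from which SWA is derived in frequency-shift methods."""
        n = len(xs)
        best_slope, best_count = 0.0, -1
        for _ in range(iters):
            i, j = random.sample(range(n), 2)   # minimal sample: 2 points
            if xs[i] == xs[j]:
                continue
            s = (ys[j] - ys[i]) / (xs[j] - xs[i])
            b = ys[i] - s * xs[i]
            # count lateral positions whose rate estimate fits this line
            count = sum(abs(ys[k] - (s * xs[k] + b)) < tol for k in range(n))
            if count > best_count:
                best_slope, best_count = s, count
        return best_slope
    ```

    An ordinary least-squares slope would be dragged off by a few corrupted spectral fits, whereas the consensus slope simply excludes them, which is the motivation for using a RANSAC-type estimator at this step.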