100 research outputs found
USE OF QUANTILE REGRESSION AND RANSAC ALGORITHM IN FITTING VOLUME EQUATIONS UNDER THE INFLUENCE OF DISCREPANT DATA
The objective of this study was to evaluate three estimation methods for fitting volume equations in the presence of influential or leverage data. To do so, data from the forest inventory carried out by the Centro Tecnológico de Minas Gerais Foundation were used to fit the Schumacher and Hall (1933) model in its nonlinear form for Cerradão forest, considering quantile regression (QR), the RANSAC algorithm and the nonlinear Ordinary Least Squares (OLS) method. The correlation coefficient between the observed and estimated volumes, the root-mean-square error (RMSE), and graphical analysis of the dispersion and distribution of the residuals were used as criteria to evaluate the performance of the methods. The nonlinear least squares method presented slightly better goodness-of-fit statistics; however, it altered the expected trend of the fitted curve due to the presence of influential data. This did not happen with QR or the RANSAC algorithm, which were more robust in the presence of discrepant data.
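As a rough illustration of the baseline OLS approach (not the study's actual data or code), the Schumacher and Hall model V = b0·D^b1·H^b2 can be fitted in log-linear form by ordinary least squares. The data below are synthetic, generated from an assumed power law purely for demonstration:

```python
import math

# Synthetic plot data (diameter in cm, height in m), generated from an
# assumed power law V = 7e-5 * D^1.8 * H -- illustrative only.
data = [(d, h, 7e-5 * d ** 1.8 * h)
        for d, h in [(10, 8), (15, 10), (20, 12), (25, 14), (30, 15), (35, 17)]]

def fit_schumacher_hall(data):
    """Fit ln(V) = b0 + b1*ln(D) + b2*ln(H) by ordinary least squares."""
    rows = [(1.0, math.log(d), math.log(h)) for d, h, _ in data]
    ys = [math.log(v) for _, _, v in data]
    # Normal equations X'X b = X'y for the 3 coefficients.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(i, 3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    coef = [0.0] * 3
    for i in range(2, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # b0 (log scale), b1, b2

b0, b1, b2 = fit_schumacher_hall(data)
predict = lambda d, h: math.exp(b0) * d ** b1 * h ** b2
```

On noise-free synthetic data the log-linear fit recovers the generating exponents exactly; with real inventory data containing discrepant observations, this is precisely the estimator that the study found to be distorted by influential points.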
RANSAC for Robotic Applications: A Survey
Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust estimation method for the parameters of a model contaminated by a sizable percentage of outliers. In its simplest form, the process starts by sampling the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps to improve computing time or the quality of the parameter estimates. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC-family methods, with a special interest in applications in robotics.
This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737
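The sample-evaluate-repeat loop described above can be sketched in a few lines. The line-fitting setting, point counts, threshold and iteration budget below are illustrative assumptions, not taken from the survey:

```python
import random

def ransac_line(points, n_iters=200, threshold=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit a line y = a*x + b to a random
    minimal sample (2 points) and keep the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample, resample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus set: points whose residual is below the threshold.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points exactly on y = 2x + 1, plus 5 gross outliers.
inlier_pts = [(x, 2 * x + 1) for x in range(20)]
outlier_pts = [(3, 40.0), (7, -25.0), (12, 90.0), (15, -60.0), (18, 70.0)]
model, inliers = ransac_line(inlier_pts + outlier_pts)
```

A fixed iteration budget is used here for simplicity; the variants surveyed in the paper typically adapt the stopping criterion to the estimated inlier ratio instead.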
Reconstruction and recognition of confusable models using three-dimensional perception
Perception is one of the key topics in robotics research. It concerns the processing of external sensor data and its interpretation. The need for fully autonomous robots makes it crucial to help them perform tasks more reliably, flexibly, and efficiently. As these platforms obtain more refined manipulation capabilities, they also require expressive and comprehensive environment models: for manipulation and affordance purposes, their models have to include every object present in the world, together with its location, pose, shape and other aspects.
The aim of this dissertation is to provide a solution to several of the challenges that arise in the object grasping problem, with the goal of improving the autonomy of the mobile manipulator robot MANFRED-2. Through the analysis and interpretation of 3D perception, this thesis first covers the localization of supporting planes in the scenario. As the environment will contain many other things apart from the planar surface, the problem within cluttered scenarios has been solved by means of Differential Evolution, a particle-based evolutionary algorithm that evolves over time towards the solution that yields the lowest value of the cost function.
Since the final purpose of this thesis is to provide valuable information for grasping applications, a complete model reconstructor has been developed. The proposed method offers many features, such as robustness against abrupt rotations, multi-dimensional optimization, feature extensibility, compatibility with other scan matching techniques, management of uncertain information and an initialization process to reduce convergence times. It has been designed using an evolutionary scan matching optimizer that takes into account surface features of the object, its global form, and texture and color information.
The last challenge tackled concerns the recognition problem. In order to provide the robot with useful information about the environment, a meta-classifier that efficiently discerns the observed objects has been implemented. It is capable of distinguishing between confusable objects, such as mugs or dishes with similar shapes but different size or color.
The contributions presented in this thesis have been fully implemented and empirically evaluated on the platform. A continuous grasping pipeline covering from perception to grasp planning, including visual object recognition for confusable objects, has been developed. For that purpose, an indoor environment with several objects on a table is presented in the vicinity of the robot. Items are recognized from a database and, if one is chosen, the robot calculates how to grasp it, taking into account the kinematic restrictions associated with the anthropomorphic hand and the 3D model of this particular object.
A coordinated UAV deployment based on stereovision reconnaissance for low risk water assessment
Biologists and management authorities such as the World Health Organisation require monitoring of water pollution for adequate management of aquatic ecosystems. Current water sampling techniques based on human samplers are time-consuming, slow and restrictive. This thesis takes advantage of the recent affordability and higher flexibility of Unmanned Aerial Vehicles (UAVs) to provide innovative solutions to the problem. The proposed solution involves having one UAV, "the leader", equipped with sensors capable of accurately estimating the wave height in an aquatic environment; if the region identified by the leader is characterised as having a low wave height, the area is deemed suitable for landing. A second UAV, "the follower", equipped with a payload such as an Autonomous Underwater Vehicle (AUV), can proceed to the location identified by the leader, land, and deploy the AUV into the water body for the purposes of water sampling. The thesis acknowledges that there are two main challenges to overcome in order to develop the proposed framework: firstly, developing a sensor to accurately measure the height of a wave, and secondly, achieving cooperative control of two UAVs. Two identical cameras using a stereovision approach were developed for capturing three-dimensional information about the wave distribution in a non-invasive manner. As with most innovations, laboratory-based testing was necessary before a full-scale implementation could be attempted. Preliminary results indicate that, provided a suitable stereo matching algorithm is applied, one can generate a dense 3D reconstruction of the surface to allow estimation of the wave height parameters. Stereo measurements show good agreement with the results obtained from a wave probe in both the time and frequency domains. The mean absolute error for the average wave height and the significant wave height is less than 1 cm for the acquired time series data set.
A formation-flying algorithm was developed to allow cooperative control of the two UAVs. Results show that the follower was able to successfully track the leader's trajectory and, in addition, maintain the given separation distance from the leader to within a 1 m tolerance throughout the experiments, despite windy conditions, a low sampling rate and the poor accuracy of the GPS sensors. In the closing section of the thesis, near real-time dense 3D reconstruction and wave height estimation from the reconstructed 3D points are demonstrated for an aquatic body using the leader UAV. Results show that for a pair of images taken at a resolution of 320 by 240 pixels, up to 21,000 3D points can be generated to provide a dense 3D reconstruction of the water surface within the field of view of the cameras.
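The stereovision wave-height idea rests on the standard pinhole-stereo relation Z = f·B/d between depth, focal length, baseline and disparity. A minimal sketch with purely illustrative numbers (not the thesis rig's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth Z = f * B / d, where d is the disparity
    of a matched point between the left and right images."""
    return focal_px * baseline_m / disparity_px

# Assumed example calibration: 700 px focal length, 0.12 m baseline.
f, B = 700.0, 0.12

# A point on a wave crest versus one in a trough: larger disparity
# means the surface is closer to the downward-looking cameras.
z_crest = depth_from_disparity(60.0, f, B)   # 84 / 60 = 1.4 m
z_trough = depth_from_disparity(56.0, f, B)  # 84 / 56 = 1.5 m
wave_height = z_trough - z_crest             # ~0.1 m peak-to-trough
```

Dense stereo matching repeats this per pixel, which is how a 320x240 image pair can yield the thousands of 3D surface points reported above.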
Robust and large-scale quasiconvex programming in structure-from-motion
Structure-from-Motion (SfM) is a cornerstone of computer vision. Briefly speaking,
SfM is the task of simultaneously estimating the poses of the cameras behind a set of images of a
scene, and the 3D coordinates of the points in the scene.
Often, the optimisation problems that underpin SfM do not have closed-form solutions, and finding
solutions via numerical schemes is necessary. An objective function, which measures the discrepancy
of a geometric object (e.g., camera poses, rotations, 3D coordinates) with a set of image
measurements, is to be minimised. Each image measurement gives rise to an error function. For
example, the reprojection error, which measures the distance between an observed image point and
the projection of a 3D point onto the image, is a commonly used error function.
An influential optimisation paradigm in SfM is the ℓ∞ paradigm, where the objective function takes
the form of the maximum of all individual error functions (e.g., the individual reprojection errors of
scene points). The benefit of the ℓ∞ paradigm is that the objective functions of many SfM
optimisation problems become quasiconvex, hence there is a unique minimum in the objective
function. The task of formulating and minimising quasiconvex objective functions is called
quasiconvex programming.
Although tremendous progress in SfM techniques under the ℓ∞ paradigm has been made, there are still
unsatisfactorily solved problems, specifically, problems associated with large-scale input data and
outliers in the data. This thesis describes novel techniques to
tackle these problems.
A major weakness of the ℓ∞ paradigm is its susceptibility to outliers. This thesis improves the
robustness of ℓ∞ solutions against outliers by employing the least median of squares (LMS)
criterion, which amounts to minimising the median error. In the context of triangulation, this
thesis proposes a locally convergent robust algorithm underpinned by a novel quasiconvex plane
sweep technique. Imposing the LMS criterion achieves significant outlier tolerance, and, at the
same time, some properties of quasiconvexity greatly simplify the process of solving the LMS
problem.
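The contrast between the maximum-error (ℓ∞) objective and the LMS criterion can be seen on a toy residual set; the numbers below are illustrative only, not from the thesis:

```python
# Residuals of a candidate model against seven image measurements;
# the last two are gross outliers (illustrative values).
residuals = [0.4, 0.6, 0.5, 0.3, 0.7, 9.0, 12.0]

# l_inf objective: the maximum error, dominated entirely by the worst outlier.
linf_cost = max(residuals)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

# LMS criterion: the median error, unaffected by up to ~half the data
# being arbitrarily bad -- the robustness property exploited above.
lms_cost = median(residuals)
```

Here the ℓ∞ cost is 12.0 while the LMS cost is 0.6: the same two outliers that dictate the ℓ∞ solution leave the median untouched.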
Approximation is a commonly used technique to tackle large-scale input data. This thesis introduces
the coreset technique to quasiconvex programming problems. The coreset technique aims to find a
representative subset of the input data, such that solving the same problem on the subset yields a
solution that is within a known bound of the optimal solution on the complete input set. In
particular, this thesis develops a coreset approximation algorithm to handle large-scale
triangulation tasks.
Another technique to handle large-scale input data is to break the optimisation into multiple
smaller sub-problems. Such a decomposition usually speeds up the overall optimisation process,
and alleviates the limitation on memory. This thesis develops a large-scale optimisation algorithm
for the known rotation problem (KRot). The proposed method decomposes the original quasiconvex
programming problem with potentially hundreds of thousands of parameters into multiple sub-problems
with only three parameters each. An efficient solver based on a novel minimum enclosing ball
technique is proposed to solve the sub-problems.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Computer Science, 201
Face pose estimation with automatic 3D model creation for a driver inattention monitoring application
Text in English, with abstracts in English and Spanish. Recent studies have identified inattention (including distraction and drowsiness) as the main cause of accidents, being responsible for at least 25% of them. Driving distraction has been less studied, since it is more diverse and exhibits a higher risk factor than fatigue. In addition, it is present in over half of the inattention-involved crashes. The increased presence of In-Vehicle Information Systems (IVIS) adds to the potential distraction risk and modifies driving behaviour, and thus research on this issue is of vital importance. Many researchers have been working on different approaches to deal with distraction during driving. Among them, Computer Vision is one of the most common, because it allows for cost-effective and non-invasive driver monitoring and sensing. Using Computer Vision techniques it is possible to evaluate some facial movements that characterise the state of attention of a driver. This thesis presents methods to estimate the face pose and gaze direction of a person in real time, using a stereo camera as a basis for assessing driver distraction. The methods are completely automatic and user-independent. A set of features on the face is identified at initialisation and used to create a sparse 3D model of the face. These features are tracked from frame to frame, and the model is augmented to cover parts of the face that may have been occluded before. The algorithm is designed to work in a naturalistic driving simulator, which presents challenging low-light conditions. We evaluate several techniques to detect features on the face that can be matched between cameras and tracked with success. Well-known methods such as SURF do not return good results, due to the lack of salient points on the face, as well as the low illumination of the images. We introduce a novel multisize technique, based on the Harris corner detector and patch correlation.
This technique benefits from the better performance of small patches under rotations and illumination changes, and the more robust correlation of the bigger patches under motion blur. The head rotates in a range of ±90º in the yaw angle, and the appearance of the features changes noticeably. To deal with these changes, we implement a new re-registering technique that captures new textures of the features as the face rotates. These new textures are incorporated into the model, which mixes the views of both cameras. The captures are taken at regular angle intervals for rotations in yaw, so that each texture is only used in a range of ±7.5º around the capture angle. Rotations in pitch and roll are handled using affine patch warping. The 3D model created at initialisation can only take features from the frontal part of the face, and some of these may be occluded during rotations. The accuracy and robustness of the face tracking depend on the number of visible points, so new points are added to the 3D model when new parts of the face are visible from both cameras. Bundle adjustment is used to reduce the accumulated drift of the 3D reconstruction. We estimate the pose from the position of the features in the images and the 3D model using POSIT or Levenberg-Marquardt. A RANSAC process detects incorrectly tracked points, which are not considered for pose estimation. POSIT is faster, while LM obtains more accurate results. Using the model extension and the re-registering technique, we can accurately estimate the pose in the full head rotation range, with error levels that improve the state of the art. A coarse eye direction is combined with the face pose estimation to obtain the gaze and the driver's fixation area, a parameter which gives much information about the distraction pattern of the driver. The resulting gaze estimation algorithm proposed in this thesis has been tested on a set of driving experiments directed by a team of psychologists in a naturalistic driving simulator.
This simulator mimics conditions present in real driving, including weather changes, manoeuvring and distractions due to IVIS. Professional drivers participated in the tests. The driver's fixation statistics obtained with the proposed system show how the utilisation of IVIS influences the distraction pattern of the drivers, increasing reaction times and affecting the fixation of attention on the road and the surroundings
Structured Indoor Modeling
In this dissertation, we propose data-driven approaches to reconstruct 3D models of indoor scenes which are represented in a structured way (e.g., a wall is represented by a planar surface and two rooms are connected via the wall). The structured representation of models is more application-ready than dense representations (e.g., a point cloud), but poses additional challenges for reconstruction, since extracting structures requires high-level understanding of geometry. To address this challenging problem, we explore two common structural regularities of indoor scenes: 1) most indoor structures consist of planar surfaces (planarity), and 2) structural surfaces (e.g., walls and floor) can be represented by a 2D floorplan as a top-down view projection (orthogonality). With breakthroughs in data capturing techniques, we develop automated systems to tackle structured modeling problems, namely piece-wise planar reconstruction and floorplan reconstruction, by learning shape priors (i.e., planarity and orthogonality) from data. With structured representations and production-level quality, the reconstructed models have an immediate impact on many industrial applications
EG-ICE 2021 Workshop on Intelligent Computing in Engineering
The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, search in multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways
- …