1,110 research outputs found

    Enhancing smart environments with mobile robots

    Sensor networks are becoming popular in the development of smart environments. However, by relying heavily on static sensors and actuators, such environments usually lack versatility in the services and interaction capabilities they provide. Here we present a framework for smart environments in which a service robot is included within the sensor network, acting as a mobile sensor and/or actuator. Our framework integrates off-the-shelf technologies to ensure its adaptability to a variety of sensor technologies and robotic software. Two pilot cases are presented as an evaluation of our proposal. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    Accurate Stereo Visual Odometry with Gamma Distributions

    Point-based stereo visual odometry systems typically estimate the camera motion by minimizing a cost function of the projection residuals between consecutive frames. Under some mild assumptions, such minimization is equivalent to maximizing the probability of the measured residuals given a certain pose change, so a suitable model of the error distribution (sensor model) is of capital importance in order to obtain accurate results. This paper proposes a robust probabilistic model for projection errors, based on real-world data. Specifically, we argue that projection distances follow Gamma distributions and, hence, introducing these models in a probabilistic formulation of the motion estimation process increases both precision and accuracy. Our approach has been validated through a series of experiments with both synthetic and real data, revealing an improvement in accuracy while not increasing the computational burden. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Project "PROMOVE: Advances in mobile robotics for promoting independent life of elders", funded by the Spanish Government and the European Regional Development Fund (ERDF) under contract DPI2014-55826-R
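    A minimal sketch of the general idea, assuming Gamma-distributed projection residual magnitudes: the pose is refined by minimizing the corresponding negative log-likelihood. The toy pose parameterization (2D shift plus scale), the projection model and the Gamma parameters below are illustrative assumptions, not the paper's implementation.

        # Illustrative only: ML pose refinement with a Gamma sensor model for
        # projection residual magnitudes (toy projection model, synthetic data).
        import numpy as np
        from scipy.stats import gamma
        from scipy.optimize import minimize

        def residual_magnitudes(pose, pts, obs):
            tx, ty, s = pose                       # toy "pose": 2D shift + scale
            projected = s * pts + np.array([tx, ty])
            return np.linalg.norm(projected - obs, axis=1)

        def neg_log_likelihood(pose, pts, obs, shape_k, scale_theta):
            r = residual_magnitudes(pose, pts, obs)
            # The Gamma parameters would be fitted offline from real projection errors.
            return -np.sum(gamma.logpdf(r + 1e-9, a=shape_k, scale=scale_theta))

        rng = np.random.default_rng(0)
        pts = rng.uniform(-1.0, 1.0, size=(100, 2))
        obs = 1.05 * pts + np.array([0.02, -0.01]) + 0.01 * rng.standard_normal((100, 2))
        sol = minimize(neg_log_likelihood, x0=[0.0, 0.0, 1.0],
                       args=(pts, obs, 2.0, 0.01), method="Nelder-Mead")
        print("estimated [tx, ty, s]:", sol.x)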

    Analyzing interference between RGB-D cameras for human motion tracking

    Multi-camera RGB-D systems are becoming popular as sensor setups in Computer Vision applications, but they are prone to interfere with each other, compromising their accuracy. This paper extends previous work on the analysis of the noise introduced by interference with new and more realistic camera configurations and different brands of devices. As expected, the detected noise increases as distance and angle grow, and it becomes worse when interference is present. Finally, we evaluate the effectiveness of the proposed solution of using DC vibration motors to mitigate this interference. The results of this study are being used to assess the effect of interference when applying these setups to human motion tracking. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Plan Propio de Investigación de la UMA. Junta de Andalucía, project TEP2012-53
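    As an illustration of the kind of noise measurement such a study relies on (not the paper's actual pipeline), depth noise can be quantified as the per-pixel standard deviation over a stack of frames captured by a static camera; the data below are synthetic stand-ins for interference-free and interfered captures.

        # Per-pixel depth noise (standard deviation over N frames); zeros mark
        # invalid depth readings. Synthetic data, illustrative only.
        import numpy as np

        def depth_noise_map(frames):
            frames = frames.astype(np.float64)
            valid = frames > 0
            count = valid.sum(axis=0)
            mean = np.where(count > 0, frames.sum(axis=0) / np.maximum(count, 1), np.nan)
            sq_dev = np.where(valid, (frames - mean) ** 2, 0.0)
            return np.sqrt(np.where(count > 1, sq_dev.sum(axis=0) / np.maximum(count - 1, 1), np.nan))

        rng = np.random.default_rng(1)
        clean = 2.0 + 0.005 * rng.standard_normal((60, 48, 64))       # ~5 mm noise
        interfered = 2.0 + 0.020 * rng.standard_normal((60, 48, 64))  # ~20 mm noise
        print("median depth noise (no interference): %.4f m" % np.nanmedian(depth_noise_map(clean)))
        print("median depth noise (interference):    %.4f m" % np.nanmedian(depth_noise_map(interfered)))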

    Improvement of the sensory and autonomous capability of robots through olfaction: the IRO Project

    Olfaction is a valuable source of information about the environment that has not been sufficiently exploited in mobile robotics yet. Certainly, odor information can contribute to other sensing modalities, e.g. vision, to successfully accomplish high-level robot activities, such as task planning or execution in human environments. This paper describes the developments carried out in the scope of the IRO project, which aims at making progress in this direction by investigating mechanisms that exploit odor information (usually coming in the form of the type of volatile and its concentration) in problems like object recognition and scene-activity understanding. A distinctive aspect of this research is the special attention paid to the role of semantics within the robot perception and decision-making processes. The results of the IRO project have improved the robot capabilities in terms of efficiency, autonomy and usefulness. Proyecto de Excelencia Junta de Andalucía TEP2012-530. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
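    A toy illustration of how odor information can complement vision in object recognition (not the IRO project's actual method): naive Bayes fusion of vision-based class probabilities with a hypothetical volatile-detection sensor model.

        # Two visually similar objects; an ethanol volatile reading disambiguates them.
        import numpy as np

        classes = ["bottle_of_ethanol", "bottle_of_water"]       # hypothetical classes
        p_vision = np.array([0.5, 0.5])                          # vision alone is ambiguous
        p_volatile_given_class = np.array([0.9, 0.05])           # assumed odor sensor model

        posterior = p_vision * p_volatile_given_class
        posterior /= posterior.sum()
        for c, p in zip(classes, posterior):
            print(f"P({c} | image, odor) = {p:.2f}")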

    An Analytical Solution to the IMU Initialization Problem for Visual-Inertial Systems

    The fusion of visual and inertial measurements is becoming more and more popular in the robotics community, since both sources of information complement each other well. However, in order to perform this fusion, the biases of the Inertial Measurement Unit (IMU) as well as the direction of gravity must be initialized first. Additionally, in the case of a monocular camera, the metric scale is also needed. The most popular visual-inertial initialization approaches rely on accurate vision-only motion estimates to build a non-linear optimization problem that solves for these parameters in an iterative way. In this paper, we build on the previous work in [1] and propose an analytical solution to estimate the accelerometer bias, the direction of gravity and the scale factor in a maximum-likelihood framework. This formulation results in a very efficient estimation approach and, due to the non-iterative nature of the solution, avoids the intrinsic issues of previous iterative solutions. We present an extensive validation of the proposed IMU initialization approach and a performance comparison against the state-of-the-art approach described in [2] with real data from the publicly available EuRoC dataset, achieving comparable accuracy at a fraction of its computational cost and without requiring an initial guess for the scale factor. We also provide an open-source C++ reference implementation.
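    As a schematic of the shape such non-iterative initializers take (heavily simplified, with the gravity-norm constraint and the paper's exact residual definitions omitted): stack per-keyframe linear constraints on the scale, gravity and accelerometer bias, and solve a single least-squares problem. The Jacobian blocks and residuals below are synthetic placeholders.

        # Solve jointly for x = [s, g(3), b_a(3)] from stacked linear constraints.
        import numpy as np

        rng = np.random.default_rng(2)
        A_rows, b_rows = [], []
        for _ in range(10):                                  # 10 keyframe constraints
            J_s = rng.standard_normal((3, 1))                # placeholder Jacobian w.r.t. scale
            J_g = rng.standard_normal((3, 3))                # ... w.r.t. gravity
            J_b = rng.standard_normal((3, 3))                # ... w.r.t. accelerometer bias
            A_rows.append(np.hstack([J_s, J_g, J_b]))
            b_rows.append(rng.standard_normal(3))            # placeholder residual

        A = np.vstack(A_rows)                                # (30, 7)
        b = np.concatenate(b_rows)                           # (30,)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        s, g, b_a = x[0], x[1:4], x[4:7]
        print("scale:", s, "\ngravity:", g, "\naccelerometer bias:", b_a)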

    Experimental Analysis of Appearance Maps as Descriptor Manifolds Approximations

    Images of a given environment, coded by a holistic image descriptor, produce a manifold that is articulated by the camera pose in that environment. The correct articulation of such a Descriptor Manifold (DM) by the camera poses is the cornerstone of precise Appearance-based Localization (AbL), which implies knowing the corresponding descriptor for any given pose of the camera in the environment. Since such correspondences are only given at sample pairs of the DM (the appearance map), some kind of regression must be applied to predict descriptor values at unmapped locations. This is relevant for AbL because this regression process can be exploited as an observation model for the localization task. This paper analyses the influence of a number of parameters involved in the approximation of the DM from the appearance map, including the sampling density, the method employed to regress values at unvisited poses, and the impact of the image content on the DM structure. We present experimental evaluations of diverse setups and propose an image metric based on image derivatives, which allows us to build appearance maps in the form of grids of variable density. A preliminary use case is presented as an initial step for future research.
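    A minimal sketch of the regression step, assuming planar (x, y) poses and a synthetic stand-in for the holistic descriptor: given descriptors sampled at mapped poses (the appearance map), an RBF interpolator predicts the descriptor at an unvisited pose. The paper compares several regression methods; this one is only an example.

        # Regress descriptor values at unmapped poses from the appearance map.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(3)
        map_poses = rng.uniform(0.0, 10.0, size=(200, 2))    # sampled (x, y) camera poses

        def fake_descriptor(p):                              # stand-in for a holistic image descriptor
            return np.stack([np.sin(p[:, 0]), np.cos(p[:, 1]), 0.1 * p[:, 0]], axis=1)

        map_descriptors = fake_descriptor(map_poses)         # the "appearance map"
        regressor = RBFInterpolator(map_poses, map_descriptors, kernel="thin_plate_spline")

        query = np.array([[4.2, 7.7]])                       # unvisited pose
        print("predicted descriptor:", regressor(query)[0])
        print("true descriptor:     ", fake_descriptor(query)[0])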

    Dimensionality Reduction in images for Appearance-based camera Localization

    Appearance-based Localization (AL) focuses on estimating the pose of a camera from the information encoded in an image, treated holistically. However, the high dimensionality of images makes this estimation intractable, so some technique of Dimensionality Reduction (DR) must be applied. The resulting reduced image representation, though, must keep underlying information about the structure of the scene in order to be able to infer the camera pose. This work explores the problem of DR in the context of AL and evaluates four popular methods in two simple cases on a synthetic environment: two linear methods (PCA and MDS) and two non-linear ones, also known as Manifold Learning methods (LLE and Isomap). The evaluation is carried out in terms of their capability to generate lower-dimensional embeddings that maintain underlying information that is isometric to the camera poses. Plan Propio UMA, HOUNDBOT (P20 01302), funded by the Andalusian Regional Government, and ARPEGGIO (PID2020-117057GB-I00), funded by the Spanish National Research Agency. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
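    As a quick sketch of how the four methods mentioned above can be applied in practice (here with scikit-learn and synthetic high-dimensional vectors standing in for images):

        # Embed 256-dimensional "image" vectors into 2D with PCA, MDS, LLE and Isomap.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.manifold import MDS, LocallyLinearEmbedding, Isomap

        rng = np.random.default_rng(4)
        X = rng.standard_normal((300, 256))                  # 300 synthetic image descriptors

        embeddings = {
            "PCA":    PCA(n_components=2).fit_transform(X),
            "MDS":    MDS(n_components=2).fit_transform(X),
            "LLE":    LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit_transform(X),
            "Isomap": Isomap(n_components=2, n_neighbors=10).fit_transform(X),
        }
        for name, Y in embeddings.items():
            print(f"{name:6s} -> embedding shape {Y.shape}")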

    Multimethod approach for the detection and characterisation of food-grade synthetic amorphous silica nanoparticles

    Synthetic amorphous silica (SAS) has been used as a food additive under the code E551 for decades, and the agrifood sector is considered a main exposure vector for humans and the environment. However, there is still a lack of detailed methodologies for the determination of SAS particle size and concentration. This work presents the detection and characterization of NPs in eleven different food-grade SAS samples, following a reasoned and detailed sequential methodology. Dynamic Light Scattering (DLS), Multiangle Light Scattering (MALS), Asymmetric Flow Field-Flow Fractionation (AF4), Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Transmission Electron Microscopy (TEM) were used. The suitability and limitations of each type of analytical technique, the information derived from it, and the implications related to the current EC Regulation 1169/2011 on the provision of food information to consumers are discussed in depth. In general, the z-average, AF4 hydrodynamic diameters and root mean square (rms) radii measured were in good agreement. AF4-ICPMS coupling and pre-channel calibration with silica NP standards allowed the reliable detection of NPs below 100 nm in ten of the eleven samples (AF4 diameters between 20.6 and 39.8 nm) and the quantification of the mass concentration in seven different samples (at the mg L−1 concentration level). TEM characterisation included the determination of the minimum detectable size and the subsequent measurement of the equivalent circle diameter (ECD) of primary particles and small aggregates, which were between 10.3 and 20.3 nm. Because the dynamic size application range is limited by the minimum detectable size, all the techniques in this work can be used only as positive tests
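    A short worked example of the equivalent circle diameter (ECD) used above, i.e. the diameter of a circle with the same area as the projected particle; the areas below are made-up values chosen so that the resulting diameters fall within the reported 10.3-20.3 nm range.

        # ECD = 2 * sqrt(A / pi), with A the projected particle area from TEM.
        import math

        def equivalent_circle_diameter(area_nm2):
            return 2.0 * math.sqrt(area_nm2 / math.pi)

        for area in (83.3, 201.1, 323.7):                    # hypothetical areas in nm^2
            print(f"area = {area:6.1f} nm^2  ->  ECD = {equivalent_circle_diameter(area):5.1f} nm")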

    Extrinsic calibration of a set of RGB-D cameras on a mobile robot

    The emergence of RGB-D cameras as low-cost robotic sensors has led to the routine inclusion of several of these devices in a growing number of vehicles and robots. In these cases, the precise calibration of the spatial transformations between the cameras of the same robot is of capital importance in order to obtain reliable measurements of the environment. This paper evaluates the closed-form calibration method described in [7] and extends it with an alternative proposal based on an iterative method, as well as a robust extension of the latter, in two scenarios: i) a simulated environment with changes in the observation noise level, the number of observations obtained, the proportion of outliers and the relative positions of the cameras, and ii) a particular configuration of 3 RGB-D cameras on a real robot. The evaluation results show higher accuracy for our robust iterative proposal in all the analysed scenarios. The C++ source code of the implementation of these methods is provided for public use. Project PROMOVE: DPI2014-55826-R (MINECO). Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
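    An illustrative sketch of an iterative, robust refinement of the relative pose between two cameras from corresponding 3D points, in the spirit of the robust iterative proposal described above (Python rather than the C++ implementation, with a Huber loss and synthetic data including simulated outliers):

        # Estimate the rigid transform (rotation vector + translation) that maps
        # points seen by camera A onto the corresponding points seen by camera B.
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation as R

        def residuals(x, pts_a, pts_b):
            rot, t = R.from_rotvec(x[:3]), x[3:]
            return (rot.apply(pts_a) + t - pts_b).ravel()

        rng = np.random.default_rng(5)
        pts_a = rng.uniform(-1.0, 1.0, size=(200, 3))
        true_rot = R.from_euler("xyz", [5.0, -3.0, 10.0], degrees=True)
        true_t = np.array([0.20, -0.05, 0.10])
        pts_b = true_rot.apply(pts_a) + true_t + 0.005 * rng.standard_normal((200, 3))
        pts_b[:20] += rng.uniform(-0.5, 0.5, size=(20, 3))   # simulated outlier correspondences

        sol = least_squares(residuals, x0=np.zeros(6), args=(pts_a, pts_b),
                            loss="huber", f_scale=0.01)
        print("rotation (deg):", R.from_rotvec(sol.x[:3]).as_euler("xyz", degrees=True))
        print("translation:   ", sol.x[3:])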