237 research outputs found
Filtering graphs to check isomorphism and extracting mapping by using the Conductance Electrical Model
© 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a new method of filtering graphs to check exact graph isomorphism and extracting their mapping. Each graph is modeled by a resistive electrical circuit using the Conductance Electrical Model (CEM). Under this model, a necessary condition for two graphs to be isomorphic is that their equivalent resistances have the same values; this is not sufficient, however, and we have to look for their mapping to establish the sufficient condition. We can compute the isomorphism between two graphs in O(N³), where N is the order of the graph, if their star resistance values are all different; otherwise the computational time is exponential, but only with respect to the number of repeated star resistance values, which is usually very small. This technique can be used to filter out graphs that are not isomorphic and, when they are, to obtain their node mapping. A distinguishing feature over other methods is that, even if repeated star resistance values exist, we can extract a partial node mapping (of all the nodes except the repeated ones and their neighbors) in O(N³). The paper presents the method and its application to detecting isomorphic graphs in two well-known graph databases, where some graphs have more than 600 nodes. © 2016 Elsevier Ltd. All rights reserved. Postprint (author's draft).
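As a minimal illustration of the necessary-condition filter described above (a sketch of the general idea, not the paper's implementation), the pairwise equivalent resistances of a unit-conductance circuit can be computed from the pseudoinverse of the graph Laplacian and compared as sorted multisets; the function names are our own:

```python
import numpy as np

def effective_resistances(adj):
    """Effective resistance between every node pair, treating each edge
    as a 1-ohm resistor: R(i,j) = L+[i,i] + L+[j,j] - 2*L+[i,j],
    where L+ is the pseudoinverse of the graph Laplacian."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

def maybe_isomorphic(adj_a, adj_b, tol=1e-9):
    """Necessary-condition filter: isomorphic graphs must share the same
    multiset of pairwise effective resistances (not sufficient on its own)."""
    if adj_a.shape != adj_b.shape:
        return False
    ra = np.sort(effective_resistances(adj_a), axis=None)
    rb = np.sort(effective_resistances(adj_b), axis=None)
    return bool(np.allclose(ra, rb, atol=tol))
```

A relabeled copy of a graph passes the filter, while a graph with a different resistance spectrum is rejected immediately, which is what makes this usable as a cheap pre-check before attempting a full mapping.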
URUS: Ubiquitous networking robotics for urban settings
Presentation of the progress of the European project URUS: Ubiquitous Networking Robotics for Urban Settings. Peer Reviewed.
Learning the hidden human knowledge of UAV pilots when navigating in a cluttered environment for improving path planning
© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
In this work we propose a new model of how the hidden human knowledge (HHK) of UAV pilots can be incorporated into UAV path planning. We intuitively know that human pilots rarely manage, or even attempt, to drive the UAV through a path that is optimal with respect to some criterion, as an optimal planner would suggest. Although human pilots may get close to, but not reach, the optimal path proposed by a planner that optimizes over time or distance, the final effect of this deviation can be not only surprisingly better, but also desirable. In the best scenario for optimality, the path that human pilots generate deviates from the optimal path exactly as much as the hidden knowledge they perceive is injected into the path. The aim of our work is to use real human pilot paths to learn this hidden knowledge using repulsion fields, and to incorporate it afterwards into the environment obstacles as the cause of the deviation from optimality. We present a strategy for learning this knowledge based on attractors and repulsors, the learning method itself, and a modified RRT* that can use this knowledge for path planning. Finally, we run real-life tests and compare the resulting paths with and without this knowledge. Accepted version.
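The idea of explaining the pilot's deviation from an optimal path through obstacle repulsion fields can be sketched as follows. This is a hypothetical one-repulsor, inverse-distance field fitted by grid search over candidate gains; the field shape, function names, and parameters are our assumptions, not the paper's model:

```python
import numpy as np

def repulsion_offset(points, obstacle, gain, radius):
    """Displacement each path point receives from a single repulsor
    (hypothetical inverse-distance field, zero beyond `radius`)."""
    d = points - obstacle
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    mag = np.where(dist < radius, gain * (1.0 / dist - 1.0 / radius), 0.0)
    return mag * d / dist

def fit_gain(optimal_path, pilot_path, obstacle, radius, gains):
    """Pick the repulsor gain whose deformation of the optimal path best
    matches the recorded pilot path (least squares over candidate gains)."""
    errs = [np.sum((optimal_path + repulsion_offset(optimal_path, obstacle, g, radius)
                    - pilot_path) ** 2) for g in gains]
    return gains[int(np.argmin(errs))]
```

Once a gain is learned per obstacle, the same field can bias a sampling-based planner (e.g. the modified RRT* in the paper) toward pilot-like paths.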
Hallucinating dense optical flow from sparse lidar for autonomous vehicles
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
In this paper we propose a novel approach to estimate dense optical flow from sparse lidar data acquired on an autonomous vehicle. It is intended as a drop-in replacement for any image-based optical flow system when images are unreliable, e.g. in adverse weather conditions or at night. In order to infer high-resolution 2D flows from discrete range data, we devise a three-block architecture of multiscale filters that combines multiple intermediate objectives in both the lidar and image domains. To train this network we introduce a dataset with approximately 20K lidar samples from the KITTI dataset, which we have augmented with pseudo ground-truth image-based optical flow computed using FlowNet2. We demonstrate the effectiveness of our approach on KITTI, and show that despite using the low-resolution and sparse measurements of the lidar, we can regress dense optical flow maps on par with those estimated by image-based methods. Peer Reviewed. Postprint (author's final draft).
Precise localization for aerial inspection using augmented reality markers
The final publication is available at link.springer.com.
This chapter is devoted to explaining a method for precise localization using augmented reality markers. The method can achieve a position precision of less than 5 mm at a distance of 0.7 m, using a visual marker of 17 mm × 17 mm, and it can be used by the controller while the aerial robot is performing a manipulation task. The localization method is based on optimizing the alignment of deformable contours from textureless images, working from the raw vertexes of the observed contour. The algorithm optimizes the alignment of the XOR area computed by means of computer-graphics clipping techniques. The method can run at 25 frames per second. Peer Reviewed. Postprint (author's final draft).
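The XOR-area criterion that the alignment minimizes can be approximated numerically. Below is a rough rasterization-based sketch (the chapter uses clipping techniques instead; the grid resolution and helper names are our own assumptions):

```python
import numpy as np

def raster_mask(poly, xs, ys):
    """Even-odd rasterization of a polygon onto grid points (ray casting)."""
    X, Y = np.meshgrid(xs, ys)
    inside = np.zeros(X.shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        crosses = (y0 <= Y) != (y1 <= Y)          # edge spans this scanline
        xint = x0 + (Y - y0) * (x1 - x0) / (y1 - y0 + 1e-12)
        inside ^= crosses & (X < xint)            # toggle parity per crossing
    return inside

def xor_area(poly_a, poly_b, res=400):
    """Approximate symmetric-difference (XOR) area of two contours on a grid,
    i.e. the quantity the contour alignment drives toward zero."""
    pts = np.vstack([poly_a, poly_b])
    lo, hi = pts.min(axis=0) - 0.1, pts.max(axis=0) + 0.1
    xs = np.linspace(lo[0], hi[0], res)
    ys = np.linspace(lo[1], hi[1], res)
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    diff = raster_mask(poly_a, xs, ys) ^ raster_mask(poly_b, xs, ys)
    return np.count_nonzero(diff) * cell
```

A pose optimizer would perturb the projected contour's pose parameters and keep the pose with the smallest XOR area.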
Cooperative robots in people guidance mission: DTM model validation and local optimization motion
This work presents a novel approach for locally optimizing the work of cooperative robots and obtaining the minimum displacement of humans in a people-guiding mission. The problem is addressed by introducing a "Discrete Time Motion" (DTM) model and a new cost function that minimizes the work required by the robots for leading and regrouping people. Furthermore, an analysis of the forces acting among robots and humans is presented through simulations of different robot and human configurations and behaviors. Finally, we describe the modeling and validation-by-simulation process used to explore the new possibilities of interaction when humans are guided by teams of robots working cooperatively in urban areas. Peer Reviewed. Postprint (published version).
Online learning and detection of faces with low human supervision
The final publication is available at link.springer.com.
We present an efficient, online, and interactive approach for computing a classifier, called Wild Lady Ferns (WiLFs), for face learning and detection with little human supervision. More precisely, on the one hand, WiLFs combine online boosting and extremely randomized trees (Random Ferns) to progressively compute an efficient and discriminative classifier. On the other hand, WiLFs use an interactive human-machine approach that combines two complementary learning strategies to considerably reduce the degree of human supervision during learning. While the first strategy corresponds to query-by-boosting active learning, which requests human assistance on difficult samples as a function of the classifier confidence, the second strategy is a memory-based learning scheme that uses Exemplar-based Nearest Neighbors (ENN) to assist the classifier automatically. A pre-trained Convolutional Neural Network (CNN) is used to perform ENN with high-level feature descriptors. The proposed approach is therefore fast (WiLFs run at 1 FPS using code that is not fully optimized), accurate (we obtain detection rates over 82% on complex datasets), and labor-saving (human assistance percentages of less than 20%).
As a byproduct, we demonstrate that WiLFs also perform semi-automatic annotation during learning: while the classifier is being computed, WiLFs discover face instances in input images, which are subsequently used for training the classifier online. The advantages of our approach are demonstrated on synthetic and publicly available databases, showing detection rates comparable to offline approaches that require larger amounts of handmade training data. Peer Reviewed. Postprint (author's final draft).
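The interplay between the two supervision-reducing strategies can be illustrated with a toy decision rule. This is only a sketch of the general pattern; the confidence thresholds, the plain k-nearest-neighbors stand-in for the paper's exemplar-based scheme, and the function names are all our assumptions:

```python
import numpy as np

def knn_label(memory_x, memory_y, x, k=3):
    """Memory-based assistant: label a sample by majority of its k nearest
    stored exemplars (a stand-in for exemplar-based nearest neighbors)."""
    d = np.linalg.norm(memory_x - x, axis=1)
    nearest = memory_y[np.argsort(d)[:k]]
    return int(np.round(nearest.mean()))

def acquire_label(conf, x, memory_x, memory_y, ask_human, low=0.4, high=0.6):
    """If the online classifier is confident, trust it; in the ambiguous
    band, let the exemplar memory assist; fall back to the human only when
    the memory is too small (query-by-boosting-style active learning)."""
    if conf >= high:
        return 1, 'classifier'
    if conf <= low:
        return 0, 'classifier'
    if len(memory_y) >= 3:
        return knn_label(memory_x, memory_y, x), 'memory'
    return ask_human(x), 'human'
```

The point of the design is that the human is only queried for samples that are both uncertain to the classifier and unsupported by the memory, which is what keeps the assistance percentage low.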
Anticipatory kinodynamic motion planner for computing the best path and velocity trajectory in autonomous driving
This paper presents an approach, using an anticipatory kinodynamic motion planner, for obtaining the best trajectory and velocity profile for autonomous driving in dynamic complex environments, such as urban scenarios. The planner discretizes the road search space and looks for the best vehicle path and velocity profile at each control period, assuming that the static and dynamic objects have been detected. The main contributions of the work are the anticipatory kinodynamic motion planner, a fast method for obtaining the splines used for path generation, and a method to compute and select, for each candidate path, the best velocity profile that fulfills the vehicle's kinodynamic constraints while taking passenger comfort into account. The method has been developed and tested in MATLAB through a set of simulations in different representative scenarios involving fixed obstacles and moving vehicles. The outcome of the simulations shows that the anticipatory kinodynamic planner performs correctly in diverse dynamic scenarios, maintaining smooth accelerations for passenger comfort. Peer Reviewed. Postprint (author's final draft).
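A comfort-bounded velocity profile of the kind selected per candidate path can be sketched with a simple trapezoidal profile, acceleration-limited on both ramps. This is a generic stand-in for the paper's profile search, not its actual method; the parameter names are ours:

```python
import numpy as np

def velocity_profile(length, v_max, a_max, dt=0.01):
    """Trapezoidal profile: accelerate at a_max, cruise at v_max, brake at
    a_max, covering `length` meters; falls back to a triangular profile
    when the path is too short to reach v_max."""
    d_ramp = v_max ** 2 / (2 * a_max)            # distance to reach/lose v_max
    if 2 * d_ramp > length:                      # triangular profile
        v_peak = np.sqrt(length * a_max)
        t_ramp = v_peak / a_max
        t = np.arange(0, 2 * t_ramp, dt)
        v = np.minimum(a_max * t, a_max * (2 * t_ramp - t))
    else:                                        # trapezoidal profile
        t_ramp = v_max / a_max
        t_total = 2 * t_ramp + (length - 2 * d_ramp) / v_max
        t = np.arange(0, t_total, dt)
        v = np.minimum.reduce([a_max * t, np.full_like(t, v_max),
                               a_max * (t_total - t)])
    return t, np.maximum(v, 0.0)
```

A planner would generate one such profile per candidate path and keep the fastest one whose accelerations stay inside the comfort bounds.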
Bayesian human motion intentionality prediction in urban environments
Human motion prediction in indoor and outdoor scenarios is a key issue for human-robot interaction and intelligent robot navigation in general. In the present work, we propose a new human motion intentionality indicator, called the Bayesian Human Motion Intentionality Prediction (BHMIP), which is a geometric-based long-term predictor. Two variants of the Bayesian approach are proposed: the Sliding Window BHMIP and the Time Decay BHMIP. The main advantages of the proposed methods are a simple formulation, easy scalability, portability to unknown environments with little learning effort, and low computational complexity, and they outperform other state-of-the-art approaches. The system only requires training to obtain the set of destinations, the salient positions people normally walk to, that configure a scene. A comparison of the BHMIP with other well-known methods for long-term prediction is carried out using the Edinburgh Informatics Forum pedestrian database and the Freiburg People Tracker database. © 2013 Elsevier B.V. All rights reserved. Peer Reviewed. Postprint (published version).
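The geometric intuition behind a destination-set predictor of this kind can be sketched as follows: at each step, score each known destination by how well the observed walking direction points at it, and accumulate the evidence over a window. This is a minimal sketch of the idea, not the paper's exact BHMIP formulation; the Gaussian angular likelihood and its width are our assumptions:

```python
import numpy as np

def destination_posterior(trajectory, destinations, sigma=0.5):
    """Posterior over destinations given an observed trajectory: each step's
    heading is compared against the bearing to each destination, and a
    Gaussian log-likelihood over the wrapped angle error is accumulated."""
    log_post = np.zeros(len(destinations))
    for p0, p1 in zip(trajectory[:-1], trajectory[1:]):
        step = p1 - p0
        heading = np.arctan2(step[1], step[0])
        for k, dest in enumerate(destinations):
            to_dest = dest - p0
            bearing = np.arctan2(to_dest[1], to_dest[0])
            err = np.angle(np.exp(1j * (heading - bearing)))  # wrap to [-pi, pi]
            log_post[k] += -0.5 * (err / sigma) ** 2
    post = np.exp(log_post - log_post.max())                  # normalize safely
    return post / post.sum()
```

A sliding-window variant would only accumulate over the last W steps, and a time-decay variant would weight older steps less.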
Planar PØP: feature-less pose estimation with applications in UAV localization
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinguished features such as corners or intersecting edges. Instead of using n correspondences (e.g. extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this method requires neither n point correspondences nor a priori knowledge of the object model (except the scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information of the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided. Peer Reviewed. Postprint (author's final draft).