7 research outputs found

    A New Three Object Triangulation Algorithm for Mobile Robot Positioning

    Positioning is a fundamental issue in mobile robot applications. It can be achieved in many ways. Among them, triangulation based on angles measured with the help of beacons is a proven technique. Most of the many triangulation algorithms proposed so far have major limitations. For example, some need a particular beacon ordering, have blind spots, or only work within the triangle defined by the three beacons. More reliable methods exist; however, they are more complex or require handling certain spatial arrangements separately. In this paper, we present a new and simple three-object triangulation algorithm, named ToTal, that natively works in the whole plane and for any beacon ordering. We also provide a comprehensive comparison between many algorithms, and show that our algorithm is faster and simpler than comparable algorithms. In addition to its inherent efficiency, our algorithm provides a very useful and unique reliability measure, assessable anywhere in the plane, which can be used to identify pathological cases or as a validation gate in Kalman filters.
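
    The principle behind this kind of angle-based (bearing-only) triangulation can be illustrated with a short, self-contained sketch. It is not the ToTal algorithm from the paper, only a generic baseline built on the same inscribed-angle idea: each pair of beacons, together with the difference between their measured bearings, constrains the robot to a circle, and the robot sits at the second intersection of two such circles. Beacon positions and bearings below are made-up example data, and degenerate configurations are not handled.

```python
import math

def inscribed_circle_center(p, q, phi):
    """Center of the circle through p and q on which the chord pq
    subtends the signed inscribed angle phi (seen from the robot).
    Degenerate when phi is a multiple of pi (robot collinear with p, q)."""
    mx, my = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    dx, dy = q[0] - p[0], q[1] - p[1]
    cot = math.cos(phi) / math.sin(phi)
    # Offset the chord midpoint along the chord normal by (cot(phi)/2)*|pq|.
    return (mx - cot * dy / 2.0, my + cot * dx / 2.0)

def triangulate(b1, b2, b3, a1, a2, a3):
    """Bearing-only position fix from three beacons.

    b1, b2, b3: known beacon positions (x, y).
    a1, a2, a3: bearings to the beacons measured from the robot; only the
    differences matter, so the (unknown) robot heading cancels out.
    """
    o_a = inscribed_circle_center(b1, b2, a2 - a1)   # circle through b1, b2
    o_b = inscribed_circle_center(b2, b3, a3 - a2)   # circle through b2, b3
    # Both circles pass through b2; the robot is the other intersection,
    # i.e. the reflection of b2 across the line joining the two centers.
    ux, uy = o_b[0] - o_a[0], o_b[1] - o_a[1]
    norm = math.hypot(ux, uy)
    ux, uy = ux / norm, uy / norm
    t = (b2[0] - o_a[0]) * ux + (b2[1] - o_a[1]) * uy
    foot = (o_a[0] + t * ux, o_a[1] + t * uy)
    return (2 * foot[0] - b2[0], 2 * foot[1] - b2[1])

# Example (made-up geometry): robot at (4, 3), heading 0.3 rad.
beacons = [(0.0, 0.0), (10.0, 0.0), (5.0, 10.0)]
robot, heading = (4.0, 3.0), 0.3
bearings = [math.atan2(by - robot[1], bx - robot[0]) - heading for bx, by in beacons]
print(triangulate(*beacons, *bearings))   # ~(4.0, 3.0)
```

    Unlike the closed-form algorithm presented in the paper, this sketch offers no reliability measure and breaks down when the two circle centers coincide or an angle difference is a multiple of pi.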

    Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    In egomotion image navigation, errors are common, especially when traversing areas with few landmarks. Since image navigation is often used as a passive navigation technique in Global Positioning System (GPS)-denied environments, egomotion accuracy is important for precise navigation in these challenging environments. One cause of egomotion errors is inaccurate landmark distance measurements due to, for example, sensor noise. This research determines a landmark location egomotion error model that quantifies the effects of landmark locations on egomotion value uncertainty and errors. The error model accounts for increases in landmark uncertainty due to landmark distance and image centrality. A robot then uses the error model to actively orient itself so that landmarks fall in the image positions that give the least egomotion calculation uncertainty. Two action aiding solutions are proposed: (1) qualitative, non-evaluative action aiding, and (2) quantitative, evaluative action aiding with landmark tracking. Simulation results show that both action aiding techniques reduce position uncertainty compared to no action aiding, and physical testing substantiates the simulation results. Compared to no action aiding, non-evaluative action aiding reduced egomotion position errors by an average of 31.5%, while evaluative action aiding reduced them by an average of 72.5%. Physical testing also showed that evaluative action aiding enables egomotion to work reliably in areas with few features, achieving a 76% reduction in egomotion position error compared to no aiding.
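
    The abstract does not give the form of the error model, so the following is only a toy illustration of the action aiding idea: score candidate camera orientations with a hypothetical per-landmark uncertainty that grows with range and with distance from the image center, and pick the orientation that minimizes the total. The weights `A` and `B`, the field of view, and the landmark positions are assumptions for the example, not values from the thesis.

```python
import numpy as np

# Hypothetical uncertainty weights (not from the thesis).
A, B = 0.02, 1.5          # range term, image-centrality term
FOV = np.deg2rad(90.0)    # assumed horizontal field of view
HALF_W = 1.0              # normalized half image width

def landmark_uncertainty(rel_xy):
    """Toy per-landmark uncertainty: grows with range and with how far
    the landmark projects from the image center."""
    rng = np.hypot(*rel_xy)
    bearing = np.arctan2(rel_xy[1], rel_xy[0])
    if abs(bearing) > FOV / 2:          # outside the field of view
        return np.inf
    u = HALF_W * np.tan(bearing) / np.tan(FOV / 2)   # normalized image x
    return A * rng**2 + B * u**2

def best_yaw(landmarks_xy, yaw_candidates):
    """Non-evaluative action aiding sketch: rotate the camera so the visible
    landmarks give the smallest summed uncertainty."""
    def cost(yaw):
        rot = np.array([[np.cos(-yaw), -np.sin(-yaw)],
                        [np.sin(-yaw),  np.cos(-yaw)]])
        costs = [landmark_uncertainty(rot @ lm) for lm in landmarks_xy]
        finite = [c for c in costs if np.isfinite(c)]
        return sum(finite) if finite else np.inf
    return min(yaw_candidates, key=cost)

landmarks = np.array([[6.0, 2.0], [4.0, -3.0], [9.0, 0.5]])   # made-up positions
print(best_yaw(landmarks, np.linspace(-np.pi, np.pi, 73)))
```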

    Vision-based object detection and tracking using region-based active contour segmentation

    Object segmentation and tracking is a challenging and active area of research in computer vision. One important application lies in robotics, where the ability to accurately segment an object of interest from the image background is crucial, particularly in images acquired on board while the robot is moving. Object segmentation consists in separating the object region from the image background according to a predefined criterion; object tracking consists in localizing the object region over time in a video sequence. Several techniques can be used for these operations. In this thesis, we segment and track objects using the active contour method because of its robustness and its ability to segment and track non-rigid objects. The method evolves a curve from an initial position surrounding the object to be detected towards the boundary of that object, according to a predefined criterion. We use criteria that depend on image regions, which may impose constraints on the characteristics of these regions, such as a homogeneity assumption. This assumption cannot always be satisfied because of the heterogeneity often present in images. To handle heterogeneity that appears either on the object of interest or on the background, in noisy images and with an inadequate initialization of the active contour, we propose a technique that combines local and global statistics to define the segmentation criterion. Using a radius of fixed size, a half-disk is superposed on each point of the active contour to define the local extraction regions. When heterogeneity appears on both the object of interest and the background, we develop a technique based on a flexible radius that defines two half-disks with two different radius values to extract the local information. The two radius values are chosen by taking into account the size of the object to be segmented and the distance separating the object of interest from its neighbors. Finally, to track a moving object in a video sequence with the active contour method, we develop a hybrid tracking approach based on region characteristics and on the motion vectors of interest points extracted from the object region. With this approach, the initial active contour in each frame is adjusted so that it lies as close as possible to the actual boundary of the object of interest, so that the region-based evolution of the active contour is not trapped by false contours. Simulation results on synthetic and real images validate the effectiveness of the proposed approaches.
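
    As a rough illustration of the kind of criterion described above (not the thesis's formulation), the sketch below evaluates a region-based speed at one contour point by blending global foreground/background means with local means computed in half-disks centred on the point and oriented by the contour normal. The blending weight `alpha`, the radius, and the toy image are all assumptions made for the example.

```python
import numpy as np

def half_disk_mask(shape, center, normal, radius):
    """Boolean mask of the half-disk of given radius around `center`,
    on the side pointed to by `normal`."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (yy - center[0])**2 + (xx - center[1])**2
    side = (yy - center[0]) * normal[0] + (xx - center[1]) * normal[1]
    return (d2 <= radius**2) & (side > 0)

def contour_speed(image, point, normal, c_in, c_out, radius=8, alpha=0.5):
    """Mixed local/global region criterion at one contour point.
    c_in, c_out are the global means inside/outside the current contour;
    local means are taken in half-disks on either side of the point."""
    m_in = half_disk_mask(image.shape, point, normal, radius)        # inner side
    m_out = half_disk_mask(image.shape, point, -np.asarray(normal), radius)
    loc_in = image[m_in].mean() if m_in.any() else c_in
    loc_out = image[m_out].mean() if m_out.any() else c_out
    mu_in = alpha * loc_in + (1 - alpha) * c_in       # blended statistics
    mu_out = alpha * loc_out + (1 - alpha) * c_out
    intensity = image[point]
    # Positive speed pushes the contour outward (pixel looks like foreground).
    return (intensity - mu_out)**2 - (intensity - mu_in)**2

# Toy example: bright square on a noisy dark background (made-up data).
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:44, 20:44] += 0.6
print(contour_speed(img, (32, 18), normal=(0, 1),
                    c_in=img[20:44, 20:44].mean(), c_out=0.2, radius=6))
```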

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are manifold: significantly higher degrees of situational awareness and understanding of the environment can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, or the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Subterranean (SubT) Challenge. This thesis delves into the broad topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for achieving robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner. I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations, and I study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new formation control algorithms that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new cooperative path planning approach and UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
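
    The thesis's own formation control algorithms are not detailed in the abstract; as a generic point of reference for communication-free formation control, a standard displacement-based consensus law drives each robot toward its desired offset from sensed neighbors using only locally observed relative positions. The gains, sensing graph and formation offsets below are arbitrary example values, not the thesis's.

```python
import numpy as np

def formation_step(positions, offsets, neighbors, gain=0.5, dt=0.1):
    """One step of a displacement-based consensus formation controller.

    positions: (N, 2) current robot positions (each robot is assumed to sense
    the relative position of its neighbors, with no communication).
    offsets: (N, 2) desired positions in the formation frame.
    neighbors: list of neighbor index lists (sensing graph).
    """
    positions = np.asarray(positions, dtype=float)
    velocities = np.zeros_like(positions)
    for i, neigh in enumerate(neighbors):
        for j in neigh:
            # Error between the sensed relative position and the desired one.
            desired = offsets[j] - offsets[i]
            actual = positions[j] - positions[i]
            velocities[i] += gain * (actual - desired)
    return positions + dt * velocities

# Example: four robots converging to a 2 m square (made-up values).
offsets = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]            # ring sensing graph
pos = np.random.default_rng(1).uniform(-3, 3, (4, 2))   # random start
for _ in range(200):
    pos = formation_step(pos, offsets, neighbors, gain=0.8, dt=0.05)
print(pos - pos[0])   # relative shape approaches the desired square offsets
```

    With a connected sensing graph this law converges to the desired formation up to a common translation, which is why the printed positions are expressed relative to the first robot.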

    Dynamic vision for mobile robot navigation

    The work presented in this thesis concerns the study of visual functions on dynamic scenes and their applications to mobile robotics. These visual functions deal more precisely with the visual tracking of objects in image sequences. Four visual tracking methods were studied, three of which were developed specifically within the scope of this thesis. These methods are: (1) contour tracking with a snake, with two variants allowing its application to color image sequences or taking into account constraints on the shape of the tracked object, (2) region tracking by template differences, (3) contour tracking by 1D correlation, and (4) a method for tracking a set of points, based on the Hausdorff distance, developed in a previous thesis. These methods were analyzed for different tasks related to mobile robot navigation; a comparison in different contexts was carried out, leading to a characterization of the targets and conditions for which each method gives good results. The results of this analysis are taken into account in a perceptual planning module, which determines which objects (planar landmarks) the robot must track to guide itself along a trajectory. In order to control the execution of such a perceptual plan, several protocols for collaboration or chaining between visual tracking methods were proposed. Finally, these methods, together with a control module for an active camera (pan, tilt, zoom), were integrated on a robot. Three experiments were carried out: a) road tracking in outdoor environments, b) tracking of primitives for visual navigation in indoor environments, and c) tracking of planar landmarks for navigation based on explicit localization of the robot.
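
    Of the four trackers listed, the Hausdorff-distance point-set tracker is the easiest to illustrate in isolation. The sketch below implements the generic symmetric Hausdorff distance between two point sets and uses it to pick, among candidate translations, the one that best aligns a model point set with the points detected in the current frame. It is a simplified illustration rather than the tracker used in the thesis, and all point data are invented.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between 2D point sets a (N,2) and b (M,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)   # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def best_translation(model, frame_points, candidates):
    """Choose the candidate translation of `model` closest (in Hausdorff
    distance) to the points extracted from the current frame."""
    return min(candidates, key=lambda t: hausdorff(model + t, frame_points))

# Invented example: the tracked point set moved by roughly (5, -2) pixels.
rng = np.random.default_rng(3)
model = rng.uniform(0, 50, (40, 2))
frame_points = model + np.array([5.0, -2.0]) + rng.normal(0, 0.3, (40, 2))
candidates = [np.array([tx, ty]) for tx in range(-8, 9) for ty in range(-8, 9)]
print(best_translation(model, frame_points, candidates))   # ~[5, -2]
```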

    Information-theoretic environment modeling for mobile robot localization

    To enhance robotic computational efficiency without degrading accuracy, it is imperative to fit the right and exact amount of information, in its simplest form, to the investigated task. This thesis follows this reasoning in environment model building and robot localization. It puts forth an approach to building maps and localizing a mobile robot efficiently in unknown, unstructured and moderately dynamic environments. For this, the environment is modeled on an information-theoretic basis, more specifically in terms of its transmission property. The presented environment model, which does not adhere to classical geometric modeling, thereby succeeds in effectively disambiguating the environment. The proposed solution lays out a two-level hierarchical structure for localization. The structure makes use of extracted features, which are stored at two different resolutions in a single hybrid feature map, enabling dual coarse-topological and fine-geometric localization modalities. The first level in the hierarchy describes the environment topologically, where a defined set of places is described by a probabilistic feature representation. A conditional entropy-based criterion is proposed to quantify the transinformation between the feature and the place domains. This criterion provides a double benefit: it prunes the large-dimensional feature space and, at the same time, selects the most discriminative features, which overcomes environment aliasing problems. Features with the highest transinformation are filtered and compressed to form a coarse-resolution feature map (codebook). Localization at this level is conducted through place matching. In the second level of the hierarchy, the map is viewed at high resolution, as consisting of non-compressed, entropy-processed features. These features are additionally tagged with their position information. Given the topological place identified by the first level, fine localization at the second level is executed using feature triangulation. To enhance the triangulation accuracy, redundant features are used and two metric evaluation criteria are employed: one for detecting dynamic features and mismatches, and another for feature selection. The proposed approach and methods have been tested in realistic indoor environments using a vision sensor and the Scale-Invariant Feature Transform (SIFT) for local feature extraction. The experiments demonstrate that an information-theoretic modeling approach is highly efficient in attaining combined accuracy and computational efficiency for localization. They also show that the approach is capable of modeling environments with a high degree of unstructuredness, perceptual aliasing and dynamic variation (illumination conditions; scene dynamics). The merit of this type of modeling is that environment features are evaluated quantitatively while, at the same time, qualitative conclusions are drawn about feature selection and performance in a robot localization task. In this way, the accuracy of localization can be adapted to the available resources. The experimental results also show that the hybrid topological-metric map provides sufficient information to localize a mobile robot on two scales, independent of the robot motion model. The codebook exhibits fast and accurate topological localization at significant compression ratios, and the hierarchical localization framework demonstrates robustness and optimized space and time complexity, providing scalability to large environments and suitability for real-time use.
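
    The transinformation criterion described above amounts to scoring each feature by its mutual information with the place variable and keeping the highest-scoring ones for the codebook. The sketch below shows this selection step on a made-up binary feature-occurrence table; the counts and the number of retained features are illustrative assumptions, not the thesis's data or thresholds.

```python
import numpy as np

def transinformation(feature_counts, place_counts):
    """Mutual information I(feature; place) for one binary feature.

    feature_counts[p] = observations in place p where the feature was present;
    place_counts[p]   = total observations in place p.
    """
    total = place_counts.sum()
    p_place = place_counts / total
    p_f1 = feature_counts.sum() / total                  # P(feature present)
    mi = 0.0
    for present, counts in ((1, feature_counts), (0, place_counts - feature_counts)):
        p_f = p_f1 if present else 1.0 - p_f1
        for p in range(len(place_counts)):
            p_joint = counts[p] / total
            if p_joint > 0 and p_f > 0:
                mi += p_joint * np.log2(p_joint / (p_f * p_place[p]))
    return mi

def build_codebook(occurrence, place_counts, k):
    """Keep the k features with the highest transinformation."""
    scores = np.array([transinformation(row, place_counts) for row in occurrence])
    return np.argsort(scores)[::-1][:k], scores

# Made-up data: 5 features observed across 3 places.
place_counts = np.array([100, 100, 100])
occurrence = np.array([[90,  5,  5],    # very discriminative for place 0
                       [30, 35, 30],    # nearly uniform -> low information
                       [ 5, 80, 10],
                       [50, 50, 50],
                       [ 2,  3, 95]])
keep, scores = build_codebook(occurrence, place_counts, k=3)
print(keep, np.round(scores, 3))   # indices 0, 2, 4 should rank highest
```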