28 research outputs found

    MRSL: AUTONOMOUS NEURAL NETWORK-BASED SELF-STABILIZING SYSTEM

    Stabilizing and localizing positioning systems autonomously in areas without GPS access is a difficult task. This thesis describes a methodology called Most Reliable Straight Line (MRSL) for stabilizing and positioning camera-based objects in 3-D space. Camera-captured images are used to identify easy-to-track points ("interesting points") and track them across two consecutive images. The distances between corresponding interesting points on the two consecutive images are compared, and the pair with the maximum length is assigned as the MRSL, which indicates the deviation from the original position. To correct this, a trained algorithm issues commands that reduce the deviation; this action is repeated until the MRSL converges to zero. To test accuracy and robustness, the algorithm was deployed to control the positioning of a quadcopter. It was demonstrated that the quadcopter (a) was highly robust to external forces, (b) could fly even after losing an engine, and (c) could fly smoothly and position itself at a desired location.
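The MRSL selection step described above can be sketched as follows; the function name and point representation are illustrative assumptions, not the thesis's actual code:

```python
import math

def mrsl(points_prev, points_curr):
    """Return the Most Reliable Straight Line: the pair of corresponding
    tracked points with the largest displacement between two consecutive
    frames, plus that displacement's length (the deviation estimate)."""
    best, best_len = None, -1.0
    for (x0, y0), (x1, y1) in zip(points_prev, points_curr):
        d = math.hypot(x1 - x0, y1 - y0)
        if d > best_len:
            best_len = d
            best = ((x0, y0), (x1, y1))
    return best, best_len

# A control loop would issue corrective commands and recompute the MRSL
# until best_len converges to zero (deviation eliminated).
```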

    Efficient exploration of unknown indoor environments using a team of mobile robots

    Whenever multiple robots have to solve a common task, they need to coordinate their actions to carry out the task efficiently and to avoid interferences between individual robots. This is especially the case when considering the problem of exploring an unknown environment with a team of mobile robots. To achieve efficient terrain coverage with the sensors of the robots, one first needs to identify unknown areas in the environment. Second, one has to assign target locations to the individual robots so that they gather new and relevant information about the environment with their sensors. This assignment should lead to a distribution of the robots over the environment such that they avoid redundant work and do not interfere with each other by, for example, blocking each other's paths. In this paper, we address the problem of efficiently coordinating a large team of mobile robots. To better distribute the robots over the environment and to avoid redundant work, we take into account the type of place a potential target is located in (e.g., a corridor or a room). This knowledge allows us to improve the distribution of robots over the environment compared to approaches lacking this capability. To autonomously determine the type of a place, we apply a classifier learned using the AdaBoost algorithm. The resulting classifier takes laser range data as input and is able to classify the current location with high accuracy. We additionally use a hidden Markov model to consider the spatial dependencies between nearby locations. Our approach to incorporating the information about the type of places in the assignment process has been implemented and tested in different environments. The experiments illustrate that our system effectively distributes the robots over the environment and allows them to accomplish their mission faster compared to approaches that ignore the place labels.
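A minimal sketch of place-aware target assignment in the spirit of the approach above; the greedy scheme, the data layout, and the corridor discount factor are assumptions for illustration, not the paper's exact formulation:

```python
def assign_targets(costs, place, corridor_discount=0.7):
    """Greedy place-aware assignment.

    costs: {(robot, target): travel_cost}
    place: {target: 'corridor' | 'room'}

    Corridor targets are discounted so they are chosen earlier, reflecting
    the intuition that corridors lead to more unexplored branches.
    """
    assigned = {}
    robots = {r for r, _ in costs}
    targets = {t for _, t in costs}
    while robots and targets:
        r, t = min(
            ((r, t) for (r, t) in costs if r in robots and t in targets),
            key=lambda rt: costs[rt] * (corridor_discount
                                        if place[rt[1]] == 'corridor' else 1.0),
        )
        assigned[r] = t
        robots.discard(r)
        targets.discard(t)
    return assigned
```

With equal raw costs, the corridor target is taken first even when a room target is slightly cheaper.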

    Optimization in 3D Scene Acquisition with Two Mobile Robots.

    This article presents a method for cooperative reconstruction of three-dimensional scenes using multiple views. Although multiple-view techniques are among the most widely applied in 3D scene interpretation, their use with a single mobile robot does not guarantee accurate localization of the environment, due to odometry errors. Moreover, a single robot, which takes its views at different instants, cannot reconstruct dynamic environments. This work proposes a system formed by multiple robots to obtain a cooperative reconstruction of the scene. The robots decide the best strategy for acquiring the images, using a defined objective function that mainly considers the uncertainty of the reconstruction and the viewpoints. The result of this optimization is the next position of the robots.
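The optimization step above amounts to scoring candidate robot positions with an objective function; the toy objective below (distance to the scene as an uncertainty proxy, baseline between robots as a triangulation-quality proxy) is an illustrative assumption, not the paper's actual function:

```python
import math

def choose_next_positions(candidates, objective):
    """Pick the candidate pair of robot positions minimizing the objective."""
    return min(candidates, key=objective)

def example_objective(pair):
    # Toy objective: distance of each robot to the scene origin (proxy
    # for reconstruction uncertainty) minus the baseline between the two
    # robots (wider baselines triangulate better). Purely illustrative.
    (x1, y1), (x2, y2) = pair
    uncertainty = math.hypot(x1, y1) + math.hypot(x2, y2)
    baseline = math.hypot(x1 - x2, y1 - y2)
    return uncertainty - baseline
```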

    Towards a communication free coordination for multi-robot exploration

    Frontier-based exploration methods are known to be efficient for multi-robot exploration systems. In this paper, we propose a novel, computationally inexpensive frontier-allocation method that favors a well-balanced spatial distribution of robots in the environment. It is based on the computation of a position criterion: the position of a robot with respect to a frontier is defined by the number of robots closer to that frontier. Distances to frontiers are computed using a wavefront propagation from each frontier. The local-minimum-free potential fields thus created are also used for navigation, achieved by descending the potential field gradient. Comparisons with existing approaches in simulation demonstrated the efficiency of our algorithm, and experiments on robots validated the navigation method.
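The wavefront propagation and position criterion above can be sketched as a breadth-first distance transform on an occupancy grid; the grid encoding (0 = free, 1 = obstacle) and helper names are assumptions for illustration:

```python
from collections import deque

def wavefront(grid, frontier):
    """BFS wavefront from a frontier cell over a 4-connected grid.
    The resulting distance field has no local minima except at the
    frontier, so it serves both for ranking robots by closeness and
    for gradient-descent navigation."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    dist[frontier[0]][frontier[1]] = 0
    q = deque([frontier])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def position_criterion(dist, robots, robot):
    """Number of robots strictly closer to the frontier than `robot`."""
    d = dist[robot[0]][robot[1]]
    return sum(1 for r in robots if r != robot and dist[r[0]][r[1]] < d)
```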

    SyWaP: Synchronized Wavefront Propagation for multi-robot assignment of spatially-situated tasks

    Task assignment is paramount to the efficiency of multi-robot missions. In the context of multi-robot exploration of unknown environments, tasks can be identified as targets to reach in order to expand knowledge of the environment. These targets can be frontiers between known reachable areas and unknown areas, or a best-view configuration from which to observe each frontier. Once these tasks are identified, they are assigned to robots. Standard approaches work in a two-step process, where the costs of reaching targets are computed before running a task-allocation algorithm. In this article, we combine these two steps to propose an efficient task-allocation algorithm. This algorithm can take into account different assignment criteria, such as the rank of a robot in a list of robots ordered by cost.
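The rank criterion mentioned above can be illustrated on a precomputed cost table; note this sketch keeps the two steps separate for clarity, whereas the paper merges cost computation and allocation into one synchronized wavefront pass:

```python
def allocate_by_rank(costs, robots, targets):
    """costs[(r, t)] = travel cost. The rank of robot r for target t is
    the number of robots cheaper than r for t. Robot-target pairs are
    assigned greedily by (rank, cost), one target per robot."""
    rank = {(r, t): sum(costs[(o, t)] < costs[(r, t)] for o in robots)
            for r in robots for t in targets}
    assigned, used_r, used_t = {}, set(), set()
    for _, _, (r, t) in sorted((rank[p], costs[p], p) for p in rank):
        if r not in used_r and t not in used_t:
            assigned[r] = t
            used_r.add(r)
            used_t.add(t)
    return assigned
```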

    Efficient Belief Propagation for Perception and Manipulation in Clutter

    Autonomous service robots are required to perform tasks in common human indoor environments. To achieve the goals associated with these tasks, the robot should continually perceive and reason about its environment, and plan to manipulate objects, which we term goal-directed manipulation. Perception remains the most challenging aspect of all stages, as common indoor environments typically pose problems in recognizing objects under the inherent occlusions that arise from physical interactions among them. Despite recent progress in the field of robot perception, accommodating perceptual uncertainty due to partial observations remains challenging and needs to be addressed to achieve the desired autonomy. In this dissertation, we address the problem of perception under uncertainty for robot manipulation in cluttered environments using generative inference methods. Specifically, we aim to enable robots to perceive partially observable environments by maintaining an approximate probability distribution as a belief over possible scene hypotheses. This belief representation captures uncertainty resulting from inter-object occlusions and physical interactions, which are inherently present in cluttered indoor environments. The research efforts presented in this thesis are directed towards developing appropriate state representations and inference techniques to generate and maintain such a belief over contextually plausible scene states. We focus on providing the following features to generative inference while addressing the challenges due to occlusions: 1) generating and maintaining plausible scene hypotheses, 2) reducing the inference search space, which typically grows exponentially with the number of objects in a scene, and 3) preserving scene hypotheses over continual observations. To generate and maintain plausible scene hypotheses, we propose physics-informed scene estimation methods that combine a Newtonian physics engine with a particle-based generative inference framework.
The proposed variants of our method, with and without a Monte Carlo step, showed promising results in generating and maintaining plausible hypotheses under complete occlusions. We show that estimating such scenarios would not be possible with commonly adopted 3D registration methods, which lack the notion of physical context that our method provides. To scale up context-informed inference to accommodate a larger number of objects, we describe a factorization of the scene state into objects and object parts to perform collaborative particle-based inference. This resulted in the Pull Message Passing for Nonparametric Belief Propagation (PMPNBP) algorithm, which caters to the high-dimensional, multimodal nature of cluttered scenes while remaining computationally tractable. We demonstrate that PMPNBP is orders of magnitude faster than the state-of-the-art Nonparametric Belief Propagation method. Additionally, we show that PMPNBP successfully estimates the poses of articulated objects under various simulated occlusion scenarios. To extend our PMPNBP algorithm to tracking object states over continuous observations, we explore ways to propose and preserve hypotheses effectively over time. This resulted in an augmentation-selection method, where hypotheses are drawn from various proposals, followed by the selection of a subset using PMPNBP that best explains the current state of the objects. We discuss and analyze our augmentation-selection method alongside its counterparts in the belief propagation literature. Furthermore, we develop an inference pipeline for pose estimation and tracking of articulated objects in clutter. In this pipeline, the message passing module with the augmentation-selection method is informed by segmentation heatmaps from a trained neural network.
In our experiments, we show that our proposed pipeline can effectively maintain a belief over, and track, articulated objects across a sequence of observations under occlusion. (PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies: http://deepblue.lib.umich.edu/bitstream/2027.42/163159/1/kdesingh_1.pd)
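The "pull" idea behind PMPNBP can be sketched in one dimension: each outgoing message particle pulls its weight from a sample of the neighbor's belief particles, rather than every incoming particle pushing weight outward. The function names and the Gaussian compatibility are illustrative assumptions, not the dissertation's implementation:

```python
import math

def pull_message(outgoing, incoming, pairwise):
    """Weight each outgoing message particle by its average pairwise
    compatibility with the neighbor's belief particles, then normalize.
    This per-particle 'pull' is what keeps the update tractable."""
    weights = [sum(pairwise(x, y) for y in incoming) / len(incoming)
               for x in outgoing]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def gaussian_pairwise(x, y, sigma=1.0):
    # Toy compatibility favoring nearby 1-D states.
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))
```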

    Tracking a spherical object through multi-robot cooperation

    In mobile robotics, where visual tracking is an extremely important subproblem, one of the solutions found to reduce the problems arising from partial and total occlusion is the use of multiple robots. A system was implemented to investigate the impact of data fusion on multi-robot visual tracking in three-dimensional space under partial and total occlusion. The system has two approaches to visual tracking of a spherical object. The first is non-cooperative visual tracking, where only the data from a single robot is used. The second is cooperative visual tracking, where the data from a team of robots is fused using a particle filter. To evaluate both approaches, a visual tracking environment with partial and total occlusions was simulated, with the tracking performed by a team of robots. The results show that the non-cooperative approach had a lower computational cost than the cooperative approach, but its inferred trajectory was impaired by the occlusions, which did not occur in the cooperative approach thanks to the data fusion.
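The cooperative fusion step above can be sketched as a particle filter update that multiplies the likelihoods of every robot's observation, so a target occluded from one robot is still constrained by the others; the state representation and observation model are assumptions for illustration:

```python
import random

def fuse_and_resample(particles, observations, likelihood, rng=random):
    """One cooperative update: weight each particle by the product of its
    likelihood under every robot's observation, then resample."""
    weights = []
    for p in particles:
        w = 1.0
        for obs in observations:
            w *= likelihood(p, obs)
        weights.append(w)
    total = sum(weights)
    if total == 0:
        return particles  # all observations uninformative; keep the prior
    weights = [w / total for w in weights]
    return rng.choices(particles, weights=weights, k=len(particles))
```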

    A cooperative navigation system with distributed architecture for multiple unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have been widely used in many applications due to, among other features, their versatility, reduced operating cost, and small size. These applications increasingly demand features related to autonomous navigation, such as mapping. However, limited resources such as battery and hardware (memory and processing units) can hinder the development of these applications on UAVs. The collaborative use of multiple UAVs for mapping can serve as an alternative, through a cooperative navigation system. Such a system requires that individual local maps be transmitted and merged into a global map in a distributed manner. In this scenario, there are two main problems to be addressed: the transmission of maps among the UAVs and the merging of the local maps in each UAV. In this context, this work describes the design, development, and evaluation of a cooperative navigation system with a distributed architecture for use by multiple UAVs. The system uses proposed structures to store 3D occupancy grid maps. Furthermore, maps are compressed and transmitted between UAVs using algorithms proposed specifically for these purposes. The local 3D maps are then merged in each UAV. In this map-merging system, maps are preprocessed and merged in pairs using algorithms suited to 3D occupancy grid map data. In addition, keypoint orientation properties are obtained from potential field gradients, and proposed filters are used to improve the estimates of the transformation parameters between maps. To validate the proposed solution, simulations were performed in six different environments, outdoor and indoor, with different layout characteristics. The obtained results demonstrate the effectiveness of the system in the construction, sharing, and merging of maps, while also highlighting the considerable complexity of map-merging systems.
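Occupancy grids contain long homogeneous runs, which is what makes compression before transmission worthwhile; the run-length encoding and the conservative cell-merge rule below are generic sketches, not the thesis's exact algorithms:

```python
def rle_compress(voxels):
    """Run-length encode a flattened occupancy grid
    (0 = free, 1 = occupied, -1 = unknown)."""
    runs = []
    for v in voxels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decompress(runs):
    return [v for v, n in runs for _ in range(n)]

def merge_cells(a, b):
    """Merge two aligned cells from different UAVs' maps: known
    information (0/1) wins over unknown (-1); on conflict, keep the
    occupied value (a conservative choice assumed here)."""
    if a == -1:
        return b
    if b == -1:
        return a
    return max(a, b)
```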