
    Towards Object-Centric Scene Understanding

    Visual perception for autonomous agents continues to attract community attention due to the disruptive technologies and wide applicability of such solutions. Autonomous Driving (AD), a major application in this domain, promises to revolutionize our approach to mobility while bringing critical advantages in limiting accident fatalities. Fueled by recent advances in Deep Learning (DL), more computer vision tasks are being addressed using a learning paradigm. Deep Neural Networks (DNNs) have consistently pushed performance to unprecedented levels and demonstrated the ability of such approaches to generalize to an increasing number of difficult problems, such as 3D vision tasks. In this thesis, we address two main challenges arising from current approaches: the computational complexity of multi-task pipelines and the increasing need for manual annotations. On the one hand, AD systems need to perceive the surrounding environment at different levels of detail and subsequently take timely actions. This multitasking further limits the time available for each perception task. On the other hand, the need for such systems to generalize universally to massively diverse situations requires large-scale datasets covering long-tailed cases. This requirement renders traditional supervised approaches, despite the data readily available in the AD domain, unsustainable in terms of annotation costs, especially for 3D tasks. Driven by the nature of the AD environment, whose complexity (unlike indoor scenes) is dominated by the presence of other scene elements (mainly cars and pedestrians), we focus on the above-mentioned challenges in object-centric tasks. We then situate our contributions within the fast-paced literature and support our claims with extensive experimental analysis leveraging up-to-date state-of-the-art results and community-adopted benchmarks.

    Shaped-based IMU/Camera Tightly Coupled Object-level SLAM using Rao-Blackwellized Particle Filtering

    Simultaneous Localization and Mapping (SLAM) is a decades-old problem. The classical solution to this problem relies on entities such as feature points that cannot facilitate interactions between a robot and its environment (e.g., grabbing objects). Recent advances in deep learning have paved the way to accurately detect objects in images under various illumination conditions and occlusions. This led to the emergence of object-level solutions to the SLAM problem. Current object-level methods depend on an initial solution obtained with classical approaches and assume that errors are Gaussian. This research develops a standalone solution to object-level SLAM that integrates data from a monocular camera and an IMU (available in low-end devices) using a Rao-Blackwellized Particle Filter (RBPF). The RBPF does not assume a Gaussian error distribution; thus, it can handle a variety of scenarios (such as when a symmetrical object with pose ambiguities is encountered). The developed method utilizes shape instead of texture; therefore, texture-less objects can be incorporated into the solution. In the particle weighting process, a new method is developed that utilizes the Intersection over Union (IoU) of the observed and projected boundaries of the object and does not require point-to-point correspondence. Thus, it is not prone to false data correspondences. Landmark initialization is another important challenge for object-level SLAM. In state-of-the-art delayed initialization, trajectory estimation relies only on the motion model provided by IMU mechanization during initialization, leading to large errors. In this thesis, two novel undelayed initializations are developed: one relies only on a monocular camera and an IMU, and the other utilizes an ultrasonic rangefinder as well. The developed object-level SLAM is tested using wheeled robots and handheld devices, and a position error of 4.1 to 13.1 cm (0.005 to 0.028 of the total path length) has been obtained through extensive experiments using only a single object. These experiments were conducted in different indoor environments under different conditions (e.g., illumination). Further, it is shown that undelayed initialization using an ultrasonic sensor can reduce the algorithm's runtime by half.
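    The IoU-based particle weighting described above can be illustrated with a minimal sketch. This is not the thesis's implementation: it assumes axis-aligned 2D bounding boxes rather than full object boundaries, and the helper `project_object_box`, which renders the object's boundary box under a particle's pose hypothesis, is hypothetical.

```python
import numpy as np

def bbox_iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def weight_particles(particles, observed_box, project_object_box):
    """Assign each particle a weight from the IoU between the detected box and the
    object's boundary projected under that particle's pose hypothesis (hypothetical helper)."""
    weights = np.array([bbox_iou(observed_box, project_object_box(p)) for p in particles])
    total = weights.sum()
    # Fall back to uniform weights if every hypothesis misses the detection entirely.
    return weights / total if total > 0 else np.full(len(particles), 1.0 / len(particles))
```

    A resampling step would then draw particles in proportion to these weights, which is how the IoU score replaces point-to-point correspondences in the filter update.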

    Contributions to autonomous robust navigation of mobile robots in industrial applications

    One aspect in which current mobile platforms lag behind the level already reached in industry is precision. The fourth industrial revolution brought with it the deployment of machinery in most industrial processes, and one of its strengths is repeatability. Autonomous mobile robots, which are the ones offering the greatest flexibility, lack this capability, mainly because of the noise inherent in sensor readings and the dynamism present in most environments. For this reason, a large part of this work focuses on quantifying the error committed by the main mapping and localization methods for mobile robots, offering different alternatives to improve positioning. Likewise, the main sources of information with which mobile robots carry out the functions described are exteroceptive sensors, which measure the environment rather than the state of the robot itself. For this same reason, some methods are highly dependent on the scenario in which they were developed and do not obtain the same results when it changes. Most mobile platforms generate a map representing their surroundings and base many of the computations needed for actions such as navigation on it. Generating this map is a process that requires human intervention in most cases and has a strong impact on the robot's subsequent operation. The last part of this work proposes a method that aims to optimize this step so as to produce a richer model of the environment without requiring additional time.
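    As a concrete example of quantifying positioning error, a common metric is the root-mean-square absolute trajectory error (ATE) between time-aligned estimated and ground-truth positions. The sketch below illustrates only that generic metric, under the assumption of already-aligned trajectories; it is not the specific evaluation protocol used in this work.

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between time-aligned estimated
    and ground-truth positions, each an array of shape (N, 2) or (N, 3)."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Example: a constant 10 cm offset along x yields an ATE RMSE of ~0.1 m.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = gt + np.array([0.1, 0.0])
print(ate_rmse(est, gt))  # ~0.1
```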

    EFFECTIVE NAVIGATION AND MAPPING OF A CLUTTERED ENVIRONMENT USING A MOBILE ROBOT

    Today, the as-is three-dimensional point cloud acquisition process for understanding scenes of interest, monitoring construction progress, and detecting safety hazards uses laser scanning systems mounted on mobile robots, which makes it faster and more automated, but there is still room for improvement. The main disadvantage of data collection using laser scanners is that point cloud data are only collected within the scanner's line of sight, so regions in three-dimensional space that are occluded by objects are not observable. To solve this problem and obtain a complete reconstruction of sites without information loss, scans must be taken from multiple viewpoints. This thesis describes how such a solution can be integrated into a fully autonomous mobile robot capable of generating a high-resolution three-dimensional point cloud of a cluttered and unknown environment without a prior map. First, the mobile platform estimates the unevenness of the terrain and the surrounding environment. Second, it finds the occluded regions in the currently built map and determines an effective next scan location. Then, it moves to that location using a grid-based path planner and the unevenness estimation results. Finally, it performs a high-resolution scan of that area to fill out the point cloud map. This process repeats until the designated scan region is filled with scanned point clouds. The mobile platform also keeps scanning for navigation and obstacle avoidance purposes, calculates its relative location, and builds the surrounding map while moving and scanning, a process known as simultaneous localization and mapping. The proposed approaches and the system were tested and validated in an outdoor construction site and a simulated disaster environment with promising results.
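    The next-scan-location step can be sketched as a simple occupancy-grid heuristic: among the free (already observed) cells, pick the one surrounded by the most unknown cells. This is a 2D simplification for illustration, not the thesis's planner; the grid encoding and window size are assumptions.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def next_scan_location(grid):
    """Pick the next scan cell in a 2D occupancy grid: the free cell with the most
    unknown (occluded/unscanned) cells inside a small surrounding window."""
    rows, cols = grid.shape
    best_cell, best_score = None, -1
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            window = grid[max(0, r - 2):r + 3, max(0, c - 2):c + 3]
            score = int(np.count_nonzero(window == UNKNOWN))
            if score > best_score:
                best_cell, best_score = (r, c), score
    return best_cell  # None if the grid contains no free cells

grid = np.full((20, 20), UNKNOWN)
grid[5:15, 5:15] = FREE          # already-scanned region
grid[8:10, 8:10] = OCCUPIED      # an obstacle casting occlusions
print(next_scan_location(grid))  # a free cell on the border of the unknown area
```

    In the full pipeline, the robot would then plan a grid-based path to that cell, take a high-resolution scan, update the map, and repeat.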

    A Probabilistic Treatment To Point Cloud Matching And Motion Estimation

    Probabilistic and efficient motion estimation is paramount in many robotic applications, including state estimation and position tracking. Iterative closest point (ICP) is a popular algorithm that provides ego-motion estimates for mobile robots by matching point cloud pairs. Estimating motion efficiently using ICP is challenging due to the large size of point clouds. Further, sensor noise and environmental uncertainties result in uncertain motion and state estimates. Probabilistic inference is a principled approach to quantify uncertainty but is computationally expensive and thus challenging to use in complex real-time robotics tasks. In this thesis, we address these challenges by leveraging recent advances in optimization and probabilistic inference and present four core contributions. The first is SGD-ICP, which employs stochastic gradient descent (SGD) to align two point clouds efficiently. The second is Bayesian-ICP, which combines SGD-ICP with stochastic gradient Langevin dynamics to obtain distributions over transformations efficiently. We also propose an adaptive motion model that employs Bayesian-ICP to produce environment-aware proposal distributions for state estimation. The third is Stein-ICP, a probabilistic ICP technique that exploits GPU parallelism for speed gains. Stein-ICP exploits the Stein variational gradient descent framework to provide non-parametric estimates of the transformation and can model complex multi-modal distributions. The fourth contribution is the Stein particle filter, capable of filtering non-Gaussian, high-dimensional dynamical systems. This method can be seen as a deterministic flow of particles from an initial to the desired state. This transport of particles is embedded in a reproducing kernel Hilbert space, where particles interact with each other through a repulsive force that brings diversity among the particles.
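    To illustrate the SGD-ICP idea, the sketch below aligns two point clouds by estimating a translation only: each iteration samples a mini-batch of source points, matches them to their nearest target points, and takes a gradient step on the translation. SGD-ICP itself also estimates rotation; this translation-only version, its parameters, and the use of SciPy's KD-tree are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def sgd_icp_translation(source, target, lr=0.5, batch_size=64, iters=200, seed=0):
    """Translation-only sketch of an SGD-style ICP: per iteration, sample a mini-batch
    of source points, match them to nearest target points, and take a gradient step on
    the translation that minimizes the mean squared point-to-point distance."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(target)
    t = np.zeros(source.shape[1])
    for _ in range(iters):
        idx = rng.choice(len(source), size=min(batch_size, len(source)), replace=False)
        moved = source[idx] + t
        _, nn = tree.query(moved)
        residuals = moved - target[nn]   # gradient of 0.5 * mean ||residual||^2 w.r.t. t
        t -= lr * residuals.mean(axis=0)
    return t

# Example: recover a known 3D offset between two copies of the same cloud.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 3))
print(sgd_icp_translation(cloud, cloud + np.array([0.3, -0.2, 0.1])))  # roughly [0.3, -0.2, 0.1]
```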

    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings volume presents a good overview of the current research landscape in assembly, handling, and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and the management level. The colloquium therefore focuses on high-level academic exchange in order to disseminate the obtained research results, identify synergy effects and trends, connect the actors in person, and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the opportunity to become acquainted with the organizing institute. The primary audience is formed by members of the scientific society for assembly, handling and industrial robotics (WGMHI).

    Collected Papers (Neutrosophics and other topics), Volume XIV

    This fourteenth volume of Collected Papers is an eclectic tome of 87 papers in Neutrosophics and other fields, such as mathematics, fuzzy sets, intuitionistic fuzzy sets, picture fuzzy sets, information fusion, robotics, statistics, or extenics, comprising 936 pages, published between 2008-2022 in different scientific journals or currently in press, by the author alone or in collaboration with the following 99 co-authors (alphabetically ordered) from 26 countries: Ahmed B. Al-Nafee, Adesina Abdul Akeem Agboola, Akbar Rezaei, Shariful Alam, Marina Alonso, Fran Andujar, Toshinori Asai, Assia Bakali, Azmat Hussain, Daniela Baran, Bijan Davvaz, Bilal Hadjadji, Carlos Díaz Bohorquez, Robert N. Boyd, M. Caldas, Cenap Özel, Pankaj Chauhan, Victor Christianto, Salvador Coll, Shyamal Dalapati, Irfan Deli, Balasubramanian Elavarasan, Fahad Alsharari, Yonfei Feng, Daniela Gîfu, Rafael Rojas Gualdrón, Haipeng Wang, Hemant Kumar Gianey, Noel Batista Hernández, Abdel-Nasser Hussein, Ibrahim M. Hezam, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Muthusamy Karthika, Nour Eldeen M. Khalifa, Madad Khan, Kifayat Ullah, Valeri Kroumov, Tapan Kumar Roy, Deepesh Kunwar, Le Thi Nhung, Pedro López, Mai Mohamed, Manh Van Vu, Miguel A. Quiroz-Martínez, Marcel Migdalovici, Kritika Mishra, Mohamed Abdel-Basset, Mohamed Talea, Mohammad Hamidi, Mohammed Alshumrani, Mohamed Loey, Muhammad Akram, Muhammad Shabir, Mumtaz Ali, Nassim Abbas, Munazza Naz, Ngan Thi Roan, Nguyen Xuan Thao, Rishwanth Mani Parimala, Ion Pătrașcu, Surapati Pramanik, Quek Shio Gai, Qiang Guo, Rajab Ali Borzooei, Nimitha Rajesh, Jesús Estupiñan Ricardo, Juan Miguel Martínez Rubio, Saeed Mirvakili, Arsham Borumand Saeid, Saeid Jafari, Said Broumi, Ahmed A. Salama, Nirmala Sawan, Gheorghe Săvoiu, Ganeshsree Selvachandran, Seok-Zun Song, Shahzaib Ashraf, Jayant Singh, Rajesh Singh, Son Hoang Le, Tahir Mahmood, Kenta Takaya, Mirela Teodorescu, Ramalingam Udhayakumar, Maikel Y. Leyva Vázquez, V. Venkateswara Rao, Luige Vlădăreanu, Victor Vlădăreanu, Gabriela Vlădeanu, Michael Voskoglou, Yaser Saber, Yong Deng, You He, Youcef Chibani, Young Bae Jun, Wadei F. Al-Omeri, Hongbo Wang, Zayen Azzouz Omar

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    A cooperative navigation system with distributed architecture for multiple unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have been widely used in many applications due to, among other features, their versatility, reduced operating cost, and small size. These applications increasingly demand features related to autonomous navigation, such as mapping. However, limited resources such as battery and hardware (memory and processing units) can hinder the development of these applications on UAVs. Thus, the collaborative use of multiple UAVs for mapping can be an alternative to solve this problem, through a cooperative navigation system. Such a system requires that individual local maps be transmitted and merged into a global map in a distributed manner. In this scenario, there are two main problems to be addressed: the transmission of maps among the UAVs and the merging of the local maps in each UAV. In this context, this work describes the design, development, and evaluation of a cooperative navigation system with a distributed architecture to be used by multiple UAVs. This system uses proposed structures to store 3D occupancy grid maps. Furthermore, maps are compressed and transmitted between UAVs using algorithms specifically proposed for these purposes. Then, the local 3D maps are merged in each UAV. In this map merging system, maps are pre-processed and merged in pairs using suitable algorithms to make them compatible with the 3D occupancy grid map data. In addition, keypoint orientation properties are obtained from potential field gradients. Several proposed filters are used to improve the estimates of the transformation parameters between maps. To validate the proposed solution, simulations were performed in six different environments, outdoors and indoors, and with different layout characteristics. The obtained results demonstrate the effectiveness of the system in the construction, sharing, and merging of maps. The results also highlight the extreme complexity of map merging systems.
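    The pairwise map-merging step can be illustrated with a minimal sketch that fuses two sparse 3D occupancy grid maps once a rigid transform between their frames is known. It omits the thesis's compression, keypoint extraction, and transform estimation; the map representation (a dict from voxel index to log-odds occupancy) and the function name are assumptions for illustration.

```python
import numpy as np

def merge_occupancy_maps(map_a, map_b, rotation, translation, voxel_size=0.1):
    """Fuse two sparse 3D occupancy grid maps (dicts of voxel index -> log-odds), given a
    rigid transform from map_b's frame to map_a's frame, by transforming each voxel centre
    of map_b and adding its log-odds to the corresponding voxel of map_a."""
    merged = dict(map_a)
    for index, log_odds in map_b.items():
        centre = (np.array(index, dtype=float) + 0.5) * voxel_size
        moved = rotation @ centre + translation
        key = tuple(np.floor(moved / voxel_size).astype(int))
        merged[key] = merged.get(key, 0.0) + log_odds
    return merged

# Example: map_b is offset by one voxel along x, so its voxel lands on (3, 2, 1) in map_a's frame.
map_a = {(2, 2, 1): 1.5}
map_b = {(2, 2, 1): 0.8}
merged = merge_occupancy_maps(map_a, map_b, np.eye(3), np.array([0.1, 0.0, 0.0]))
print(merged)  # {(2, 2, 1): 1.5, (3, 2, 1): 0.8}
```

    Adding log-odds is the standard way to combine independent occupancy evidence; the harder part, which the thesis addresses, is estimating the transform between the maps in the first place.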

    4º congresso português de ‘Building Information Modelling’ vol. 2 - ptBIM

    Proceedings of the ptBIM 2022 congress, which promotes a Portuguese-language forum for technical and scientific discussion of Building Information Modelling (BIM) methodologies, with the active participation of the professional and academic communities in Architecture and Engineering. The aim is to highlight the problems and implementation efforts in the construction industry and to strengthen the networks of professionals who incorporate BIM practices into their activities. https://ptbim.org