
    Robotic Wireless Sensor Networks

    In this chapter, we present a literature survey of an emerging, multi-disciplinary field of research at the intersection of Robotics and Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system that aims to achieve certain sensing goals while meeting and maintaining certain communication performance requirements through cooperative control, learning, and adaptation. While both component areas, i.e., Robotics and WSN, are well known and well explored, there exists a whole set of new opportunities and research directions at their intersection that are relatively, or even completely, unexplored. One example is the use of a set of robotic routers to set up a temporary communication path between a sender and a receiver, exploiting controlled mobility to the advantage of packet routing. We find that only a limited number of articles can be directly categorized as RWSN-related work, whereas a range of articles in the robotics and WSN literature are also relevant to this new field of research. To connect the dots, we first identify the core problems and research trends related to RWSN, such as connectivity, localization, routing, and robust information flow. Next, we classify the existing research on RWSN, as well as the relevant state of the art from the robotics and WSN communities, according to the problems and trends identified in the first step. Lastly, we analyze what is missing in the existing literature and identify topics that require more research attention in the future.
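    The robotic-router example mentioned above can be sketched as a simple geometric placement: assuming a planar, obstacle-free workspace and a fixed radio range (both simplifying assumptions not made by the survey), relay robots spaced evenly along the sender-receiver segment keep every hop within range.

```python
import math

def place_routers(sender, receiver, radio_range):
    """Return the minimum number of relay positions, evenly spaced on the
    sender-receiver segment, so that no hop exceeds radio_range.
    Illustrative helper only; a real RWSN router-placement scheme must
    also handle obstacles, interference, and motion constraints."""
    dist = math.dist(sender, receiver)
    # k relays split the segment into k + 1 hops of length dist / (k + 1)
    k = max(0, math.ceil(dist / radio_range) - 1)
    step = 1.0 / (k + 1)
    return [
        (sender[0] + i * step * (receiver[0] - sender[0]),
         sender[1] + i * step * (receiver[1] - sender[1]))
        for i in range(1, k + 1)
    ]
```

    For a 10 m gap and a 3 m radio range, this yields three relays, giving four hops of 2.5 m each.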

    World Models for Robust Robotic Systems


    Navigating Mobile Robots In Wireless Sensor Networks

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2009. In this study, the navigation and localization of a mobile robot in a wireless sensor network built on ZigBee, an IEEE-standard communication system, is modelled. The goal is for a mobile robot (or AGV), starting from a random point, to find and reach a destination node by following paths created by sensors randomly scattered in an unknown area where an ad-hoc wireless sensor network has been deployed, using a hop-count numbering method. A new protocol was developed as the communication platform for the mobile robot and the sensor nodes. The Received Signal Strength Indicator (RSSI) method was used to measure distances. The sensors in the network determine the position of the target node from a specific beacon signal that the target broadcasts from an unknown location, and provide path information toward the destination to the mobile robot, which moves toward the target using the signals sent by the sensors. A simulation of this model, a software tool named SOLAN, was also implemented and used to observe the error margins. In addition, a Path Quality Value (PKD, Patika Kalite Değeri) is proposed, enabling the mobile robot to choose the shorter of multiple possible directions.
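    RSSI-based distance measurement of the kind described above is commonly built on the log-distance path-loss model; a minimal sketch follows (the reference power at 1 m and the path-loss exponent are assumed illustration values, not parameters from the thesis, and would need per-deployment calibration):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance in metres from an RSSI reading using the
    log-distance path-loss model:
        RSSI = tx_power - 10 * n * log10(d)
    tx_power_dbm is the RSSI expected at 1 m; path_loss_exp (n) is 2.0
    in free space and higher indoors. Both defaults are assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

    With these defaults, a reading equal to the 1 m reference power maps to 1 m, and every 20 dB of additional attenuation multiplies the estimated distance by ten.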

    Integration of a minimalistic set of sensors for mapping and localization of agricultural robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping, and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization, and navigation. The goal of this thesis is to develop a new approach to solving the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, sparse-feature areas, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes, and traffic signs abound. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this limits the robot's activities to areas with accessible GNSS signals and fails indoors. In such cases, exteroceptive sensors, such as (RGB, depth, thermal) cameras, laser scanners, and Light Detection and Ranging (LiDAR), and proprioceptive sensors, such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability.
    For agricultural robots, being robust for long-term operation is as important as being cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR, and an IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods designed for urban scenarios to cope with agricultural environments, where slopes, vegetation, and trees cause traditional approaches to fail. Our mapping method substantially reduces the memory footprint of map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimal requirements for autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments, and all approaches have been published in, or submitted to, peer-reviewed conferences and journals.
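    The proprioceptive/exteroceptive fusion idea described in this abstract can be illustrated with a generic complementary filter, here blending an integrated gyro yaw rate with an occasional absolute heading fix (e.g., from scan matching). This is a textbook sketch with an assumed blending constant, not the thesis's fusion algorithm:

```python
def complementary_fuse(heading, gyro_rate, dt, abs_heading=None, alpha=0.98):
    """One update step of a complementary filter: dead-reckon by
    integrating the high-rate gyro, then blend toward a low-rate absolute
    heading fix when one is available. alpha is an assumed smoothing
    constant; larger values trust the gyro more."""
    heading += gyro_rate * dt          # proprioceptive prediction
    if abs_heading is not None:        # exteroceptive correction
        heading = alpha * heading + (1 - alpha) * abs_heading
    return heading
```

    The design choice this illustrates is the one the abstract argues for: each sensor covers the other's weakness, with the gyro supplying smooth short-term motion and the absolute fix bounding long-term drift.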

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed across domains, managing multiple robots and their interactions is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities; many robots today are deployed as part of larger fleets or teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are manifold: significantly higher degrees of situational awareness and environmental understanding can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, and the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA SubT competition. This thesis delves into the wide topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for robust collaboration: solving collaborative decision-making problems; securing operation, management, and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology to secure the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner.
    I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new formation control algorithms that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new cooperative path planning approach with UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
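    UWB ranging of the kind studied here typically derives distance from a two-way time-of-flight exchange. A minimal single-sided sketch is shown below; real UWB stacks additionally correct for clock drift between the two radios, which this sketch omits:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_distance(t_round, t_reply):
    """Single-sided two-way ranging: the initiator measures the
    round-trip time t_round (seconds); the responder reports its own
    processing delay t_reply. The one-way time of flight is half the
    difference, and distance is time of flight times the speed of light."""
    tof = (t_round - t_reply) / 2.0
    return tof * SPEED_OF_LIGHT
```

    Because light covers one metre in about 3.3 ns, sub-metre ranging demands nanosecond-accurate timestamps, which is precisely what UWB hardware provides and narrowband radios do not.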

    2020 NASA Technology Taxonomy

    This document is an update (new photos used) of the PDF version of the 2020 NASA Technology Taxonomy that will be available to download on the OCT public website. The updated 2020 NASA Technology Taxonomy, or "technology dictionary", uses a technology-discipline-based approach that realigns like technologies independent of their application within the NASA mission portfolio. This tool is meant to serve as a common, technology-discipline-based communication tool across the agency and with its partners in other government agencies, academia, industry, and across the world.

    Coordinated Sensor-Based Area Coverage and Cooperative Localization of a Heterogeneous Fleet of Autonomous Surface Vessels (ASVs)

    Sensor coverage with fleets of robots is a complex task requiring solutions to localization, communication, navigation, and basic sensor coverage. Sensor coverage of large areas is a problem that occurs in a variety of environments, from terrestrial to aerial to aquatic. In this thesis we consider the aquatic version of the problem: given a known aquatic environment and a collection of aquatic surface vehicles with known kinematic and dynamic constraints, how can a fleet of vehicles be deployed to provide sensor coverage of the surface of a body of water? Rather than considering this problem in general, in this work we consider a specific fleet consisting of one very well-equipped robot aided by a number of smaller, less well-equipped devices that must operate in close proximity to the main robot. A boustrophedon decomposition algorithm is developed that incorporates the motion, sensing, and communication constraints imposed by the autonomous fleet. Solving the coverage problem leads to a localization/communication problem: a critical task for a group of autonomous vehicles is ensuring that the collection operates within a common reference frame. Here we consider the problem of localizing a heterogeneous collection of aquatic surface vessels within a global reference frame. We assume that one vessel -- the mother robot -- has access to global position data of high accuracy, while the other vessels -- the child robots -- utilize limited onboard sensors and sophisticated sensors on board the mother robot to localize themselves. This thesis provides details of the design of the elements of the heterogeneous fleet, including the sensors and sensing algorithms, along with the communication strategy used to localize all elements of the fleet within a global reference frame. Details of the robot platforms used to implement the solution are also described. Simulation is used to demonstrate the effectiveness of the approach, and the algorithm and its components are evaluated using a fleet of ASVs.
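    The boustrophedon ("lawnmower") pattern at the core of such coverage algorithms can be sketched as waypoint generation over a single rectangular cell. The swath width stands in for the vessel's sensor footprint; the fleet's kinematic and communication constraints handled by the thesis are omitted from this sketch:

```python
def boustrophedon_waypoints(width, height, swath):
    """Generate back-and-forth sweep waypoints covering a width x height
    rectangle with parallel tracks spaced one sensor swath apart, so
    adjacent passes tile the area without gaps."""
    waypoints = []
    x, going_up = swath / 2.0, True    # first track centred in its swath
    while x < width:
        ys = (0.0, height) if going_up else (height, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += swath
        going_up = not going_up        # alternate sweep direction
    return waypoints
```

    Alternating the sweep direction is what makes the path efficient: each track starts where the previous one ended, so transit between tracks is a single short crossing rather than a return leg.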

    Adaptive Robot Framework: Providing Versatility and Autonomy to Manufacturing Robots Through FSM, Skills and Agents

    207 p. The main conclusions that can be extracted from an analysis of the current situation and future trends of industry, in particular manufacturing plants, are the following: there is a growing need to provide customization of products, a high variation of production volumes, and a downward trend in the availability of skilled operators due to the ageing of the population. Adapting to this new scenario is a challenge for companies, especially small and medium-sized enterprises (SMEs), which are suffering first-hand how their specialization is turning against them. The objective of this work is to provide a tool that can serve as a basis for facing these challenges effectively. The presented framework, thanks to its modular architecture, allows focusing on the different needs of each particular company and offers the possibility of scaling the system for future requirements. The platform is divided into three layers, namely: the interface with robot systems, the execution engine, and the application development layer. Taking advantage of the ecosystem provided by this framework, different modules have been developed to face the mentioned challenges of the industry. On the one hand, to address the need for product customization, the integration of tools that increase the versatility of the cell is proposed. An example of such a tool is skill-based programming: by applying this technique, a process can be intuitively adapted to the variations or customizations that each product requires, and the use of skills favours the reuse and generalization of developed robot programs. Regarding the variation of production volumes, a system that permits greater mobility and faster reconfiguration is necessary: if a line has a production peak, mechanisms for balancing the load at a reasonable cost are required. In this respect, the architecture allows easy integration of different robotic systems, actuators, sensors, etc.
    In addition, thanks to the developed calibration and set-up techniques, the system can be adapted to new workspaces at an effective time/cost. With respect to the third topic, an agent-based monitoring system is proposed. This module opens up a multitude of possibilities for integrating auxiliary modules for protection and safety in collaboration and interaction between people and robots, something that will be necessary in the not-so-distant future. To demonstrate the advantages and improved adaptability of the developed framework, a series of real use cases are presented; in each of them, a different problem is solved using the developed skills, demonstrating how easily they adapt to different circumstances.
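    The skill-based programming idea can be illustrated with a minimal registry of reusable, parameterized skills composed into product-specific sequences. All names here are invented for illustration; the framework's actual API is not described in the abstract:

```python
SKILLS = {}

def skill(name):
    """Decorator registering a reusable, parameterized robot skill."""
    def wrap(fn):
        SKILLS[name] = fn
        return fn
    return wrap

@skill("pick")
def pick(part):          # hypothetical skill: grasp a named part
    return f"picked {part}"

@skill("place")
def place(slot):         # hypothetical skill: deposit at a named slot
    return f"placed at {slot}"

def run_sequence(steps):
    """Execute a product-specific sequence of (skill, argument) pairs.
    Customizing a product means editing this data, not the skill code,
    which is what makes skills reusable across product variants."""
    return [SKILLS[name](arg) for name, arg in steps]
```

    A new product variant then becomes a new step list, e.g. `run_sequence([("pick", "bolt"), ("place", "A1")])`, while the underlying skills stay untouched.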

    Advances in Robot Navigation

    Robot navigation includes different interrelated activities: perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot in selecting the next direction to go; mapping - the construction of a spatial representation from the sensory information perceived; localization - the strategy for estimating the robot's position within the spatial map; path planning - the strategy for finding a path toward a goal location, optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the above-mentioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation are inspired by nature, and diverse applications are described in the context of an important field of study: social robotics.

    Autonomous Navigation for Unmanned Aerial Systems - Visual Perception and Motion Planning

    The abstract is in the attachment.