817 research outputs found

    A minimalistic approach to appearance-based visual SLAM

    This paper presents a vision-based approach to SLAM in indoor/outdoor environments with minimalistic sensing and computational requirements. The approach is based on a graph representation of robot poses, using a relaxation algorithm to obtain a globally consistent map. Each link corresponds to a relative measurement of the spatial relation between the two nodes it connects, and describes the likelihood distribution of the relative pose as a Gaussian. To estimate the covariance matrix for links obtained from an omni-directional vision sensor, a novel method is introduced based on the relative similarity of neighbouring images. This method does not require determining distances to image features using, for example, multiple-view geometry. Combined indoor and outdoor experiments demonstrate that the approach can handle qualitatively different environments without modification of the parameters, that it can cope with violations of the "flat floor assumption" to some degree, and that it scales well with increasing size of the environment, producing topologically correct and geometrically accurate maps at low computational cost. Further experiments demonstrate that the approach is also suitable for combining multiple overlapping maps, e.g. for solving the multi-robot SLAM problem with unknown initial poses.
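    The relaxation idea above can be illustrated in a few lines: each pose is repeatedly re-estimated as the covariance-weighted mean of the estimates implied by its Gaussian links. The sketch below is a minimal 1-D toy under assumed link values and variances, not the paper's actual multi-dimensional implementation.

```python
import numpy as np

def relax(num_nodes, links, iters=100):
    """Gauss-Seidel-style relaxation of a 1-D pose graph.

    links: list of (i, j, z, var) where z is the measured offset
    x_j - x_i with Gaussian variance var. Node 0 anchors the map.
    """
    x = np.zeros(num_nodes)
    for _ in range(iters):
        for k in range(1, num_nodes):          # node 0 stays fixed
            est, w = 0.0, 0.0
            for i, j, z, var in links:
                if j == k:                     # neighbour i predicts x_k
                    est += (x[i] + z) / var
                    w += 1.0 / var
                elif i == k:                   # neighbour j predicts x_k
                    est += (x[j] - z) / var
                    w += 1.0 / var
            if w > 0.0:
                x[k] = est / w                 # weighted mean of predictions
    return x

# Three poses in a line plus a slightly inconsistent loop closure 0 -> 2:
# relaxation spreads the 0.2 m residual according to the link variances.
links = [(0, 1, 1.0, 0.1), (1, 2, 1.0, 0.1), (0, 2, 2.2, 0.4)]
poses = relax(3, links)
```

    Confidence in a link (its variance) is exactly what the paper estimates from the similarity of neighbouring omnidirectional images; in this toy the variances are simply given.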

    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors, such as rough terrain, high speeds and hardware limitations, can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under/over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been a growth of interest in lifelong autonomy for robots, which brings with it the challenge, in outdoor environments, of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.

    Instructions on Small Screens: Analysing the Multimodality of Technical Communication Through a Design Experiment

    In this thesis, I analyse the multimodality of technical communication by means of a design experiment. In the experiment, I design and convert three short KONE Oyj installation and maintenance instructions for the small display of smart glasses. Although the experiment uses smart glasses, the display could in principle be any small screen, such as a smartphone or a smartwatch, whose up-to-date content is in theory easier to carry along than a traditional PDF instruction printed on paper. I convert the instructions using two theories: visual instructions (Gattullo et al. 2019) and the minimalism heuristics (van der Meij and Carroll, 1998). To better understand the full context of use of the instructions, I build, in collaboration with KONE Oyj, a user-testing environment that simulates a professional elevator installation and maintenance setting for testing the conversions. Although current technology enables the use of small digital displays, the purpose of instructions does not change: they must comprehensibly help the reader to complete their task. Hence, as a counterweight to the conversion and design theories, theories of multimodality (for example, Bateman, Wildfeuer and Hiippala, 2017) help to analyse the differences in the comprehensibility of the conversions systematically. I use multimodality theories to understand the effects of the conversions on the comprehensibility of the instructions: I identify the situation in which the instructions are used and the characteristics of the medium (smart glasses), and narrow my actual object of study to the semiotic modes I identify on the converted instruction screens and their effects on how the converted instructions are understood. As conclusions, I show that examining individual converted instruction screens reveals, with respect to the minimalism heuristics, no differences in comprehensibility that are significant compared with the original PDF instruction, apart from the omission of a few easily inferable details.
Overall, both conversions transfer to the smart glasses a multimodally similar instruction that relies on a two-dimensional page view, much like the original PDF. Because the second theory I study, visual instructions, is based on replacing verbs with symbols, the comprehensibility of the symbols stands out as a significant difference in the usability of visual instructions. The conclusions are qualified by the fact that I do not exploit all the expressive means of the smart glasses, such as moving images and sound, because the experiment takes into account a cost-effective production process for industrial instructions. Finally, particularly within the framework of technical communication, I propose as topics for further research the study and exploitation of all the features of new digital media and their multimodal contexts of use, research on and development of standardisation for small-screen content production, and research on the comprehensibility of symbols.

    Low-Resolution Vision for Autonomous Mobile Robots

    The goal of this research is to develop algorithms using low-resolution images to perceive and understand a typical indoor environment and thereby enable a mobile robot to autonomously navigate such an environment. We present techniques for three problems: autonomous exploration, corridor classification, and minimalistic geometric representation of an indoor environment for navigation. First, we present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32x24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even using only a small fraction (0.02%) of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor and the distance to the end of the corridor. The orientation is determined by combining the results of five complementary measures, while the estimated distance to the end combines the results of three complementary measures. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in corridors of several unknown buildings exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage (99.98%) of the information both spatially and temporally, processing occurs at an average of 1000 frames per second, or, equivalently, takes only a small fraction of the CPU. Second, we present an algorithm that uses image entropy to detect and classify corridor junctions from low-resolution images. Because entropy can be used to perceive depth, it can be used to detect an open corridor in a set of images recorded by turning a robot at a junction through 360 degrees.
Our algorithm involves detecting peaks in the continuously measured entropy values and determining the angular distance between the detected peaks to determine the type of junction that was recorded (middle, L-junction, T-junction, dead-end, or cross junction). We show that the same algorithm can be used to detect open corridors in both monocular and omnidirectional images. Third, we propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). The representation is extracted from low-resolution images using a novel combination of information-theoretic measures and gradient cues. Our study investigates the impact of image resolution on the accuracy of extracting such a geometry, showing that the centerline and wall-floor boundaries can be estimated with reasonable accuracy even in texture-poor environments with low-resolution images. In a database of 7 unique corridor sequences, less than 2% additional error in orientation measurements was observed as the resolution of the image decreased by 99.9%.
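    As a rough sketch of the entropy-based junction classification described above (the entropy threshold, peak picking, and angular cut-offs are illustrative assumptions, not the exact values used in this work):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_peaks(profile, angles, threshold):
    """Local maxima above threshold in a circular entropy profile."""
    n = len(profile)
    return [angles[i] for i in range(n)
            if profile[i] > threshold
            and profile[i] >= profile[(i - 1) % n]
            and profile[i] >= profile[(i + 1) % n]]

def classify_junction(peaks):
    """Map peak count and spacing (bearings in degrees) to a junction type."""
    if len(peaks) == 1:
        return "dead-end"
    if len(peaks) == 2:
        gap = abs(peaks[1] - peaks[0]) % 360
        gap = min(gap, 360 - gap)
        return "middle" if gap > 135 else "L-junction"
    return "T-junction" if len(peaks) == 3 else "cross"

# Toy 360-degree sweep: entropy rises when looking down an open corridor.
angles = [0, 60, 120, 180, 240, 300]
profile = [1.0, 3.2, 1.1, 0.9, 3.0, 1.2]
peaks = entropy_peaks(profile, angles, threshold=2.0)
junction = classify_junction(peaks)   # two peaks ~180 degrees apart
```

    On the toy profile the two peaks sit 180 degrees apart, which the sketch labels a corridor middle; a real profile would be built by applying image_entropy to each frame of the recorded sweep.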

    2D Visual Place Recognition for Domestic Service Robots at Night

    Domestic service robots such as lawn-mowing and vacuum-cleaning robots are the most numerous consumer robots in existence today. While early versions employed random exploration, recent systems fielded by most of the major manufacturers have utilized range-based and visual sensors and user-placed beacons to enable robots to map and localize. However, active range and visual sensing solutions have the disadvantages of being intrusive, expensive, or only providing a 1D scan of the environment, while the requirement for beacon placement imposes other practical limitations. In this paper we present a passive and potentially cheap vision-based solution to 2D localization at night that combines easily obtainable day-time maps with low-resolution contrast-normalized image matching algorithms, image sequence-based matching in two dimensions, place match interpolation and recent advances in conventional low-light camera technology. In a range of experiments over a domestic lawn and in a lounge room, we demonstrate that the proposed approach enables 2D localization at night, and analyse the effect on performance of varying odometry noise levels, place match interpolation and sequence matching length. Finally we benchmark the new low-light camera technology and show how it can enable robust place recognition even in an environment lit only by a moonless sky, raising the tantalizing possibility of being able to apply all conventional vision algorithms, even in the darkest of nights.
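    The combination of contrast-normalised low-resolution matching with sequence-based search can be sketched generically. This is a SeqSLAM-style toy on synthetic data, not the paper's implementation:

```python
import numpy as np

def normalise(img):
    """Contrast-normalise a low-resolution grayscale image so that
    global intensity shifts (e.g. day vs night exposure) cancel out."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-9)

def frame_distance(a, b):
    """Mean absolute difference between two normalised frames."""
    return float(np.abs(normalise(a) - normalise(b)).mean())

def sequence_match(query_seq, db, seq_len):
    """Return the database index whose length-seq_len window best
    matches the whole query sequence, not just a single frame."""
    best_i, best_d = -1, float("inf")
    for i in range(len(db) - seq_len + 1):
        d = sum(frame_distance(q, db[i + k]) for k, q in enumerate(query_seq))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (8, 8)) for _ in range(10)]   # day-time "map"
# Night-time query: the scenes at indices 3..5, globally darkened.
query = [(0.3 * db[3 + k]).astype(int) for k in range(3)]
match = sequence_match(query, db, seq_len=3)             # expected: 3
```

    Matching over a short sequence rather than single frames is what makes the approach robust to individual poor-quality night images.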

    Integration of a minimalistic set of sensors for mapping and localization of agricultural robots

    Robots have recently become ubiquitous in many aspects of daily life. For in-house applications there are vacuuming, mopping and lawn-mowing robots. Swarms of robots have been used in Amazon warehouses for several years. Autonomous cars, despite being set back by several safety issues, are undeniably becoming the standard of the automobile industry. Beyond commercial applications, robots can perform various tasks, such as inspecting hazardous sites and taking part in search-and-rescue missions. Regardless of the end-user application, autonomy plays a crucial role in modern robots. The essential capabilities required for autonomous operation are mapping, localization and navigation. The goal of this thesis is to develop a new approach to solving the problems of mapping, localization, and navigation for autonomous robots in agriculture. This type of environment poses unique challenges, such as repetitive patterns and large-scale, feature-sparse scenes, in comparison to urban scenarios, where good features such as pavements, buildings, road lanes and traffic signs abound. In outdoor agricultural environments, a robot can rely on a Global Navigation Satellite System (GNSS) to determine its whereabouts, but this confines the robot's activities to areas with accessible GNSS signals and fails in indoor environments. In such cases, exteroceptive sensors such as (RGB, depth, thermal) cameras, laser scanners and Light Detection and Ranging (LiDAR), and proprioceptive sensors such as an Inertial Measurement Unit (IMU) and wheel encoders, can be fused to better estimate the robot's state. Generic approaches that combine several different sensors often yield superior estimation results, but they are not always optimal in terms of cost-effectiveness, modularity, reusability, and interchangeability.
For agricultural robots, being robust for long-term operation is as important as being cost-effective for mass production. We tackle this challenge by exploring and selectively using a handful of sensors, such as RGB-D cameras, LiDAR and an IMU, for representative agricultural environments. The sensor fusion algorithms provide high precision and robustness for mapping and localization while assuring cost-effectiveness by employing only the sensors necessary for the task at hand. In this thesis, we extend LiDAR mapping and localization methods developed for urban scenarios to cope with agricultural environments, where slopes, vegetation and trees cause traditional approaches to fail. Our mapping method substantially reduces the memory footprint of map storage, which is important for large-scale farms. We show how to handle the localization problem in dynamically growing strawberry polytunnels by using only a stereo visual-inertial (VI) and depth sensor to extract and track only invariant features, which eliminates the need for remapping to deal with dynamic scenes. As a demonstration of the minimalistic requirements for autonomous agricultural robots, we show the ability to autonomously traverse between rows in a difficult, zigzag-like polytunnel environment using only a laser scanner. Furthermore, we present an autonomous navigation capability using only a camera, without explicitly performing mapping or localization. Finally, our mapping and localization methods are generic and platform-agnostic and can be applied to different types of agricultural robots. All contributions presented in this thesis have been tested and validated on real robots in real agricultural environments, and all approaches have been published in, or submitted to, peer-reviewed conference papers and journal articles.
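    The IMU/wheel-encoder fusion mentioned above can be illustrated with the classic complementary filter for heading: integrate the gyro for smooth short-term motion and pull the estimate toward the encoder-derived heading to suppress drift. This is a generic textbook sketch on made-up data, not the estimator used in the thesis.

```python
def complementary_filter(gyro_rates, enc_headings, dt, alpha=0.98, theta0=0.0):
    """Fuse gyro angular rate (smooth, but drifts when biased) with a
    wheel-encoder heading (noisy, but drift-free) into one estimate."""
    theta = theta0
    fused = []
    for w, h in zip(gyro_rates, enc_headings):
        # Mostly trust the integrated gyro, nudge toward the encoder.
        theta = alpha * (theta + w * dt) + (1.0 - alpha) * h
        fused.append(theta)
    return fused

dt = 0.1
rates = [1.0] * 50                           # gyro: steady 1 rad/s turn
truth = [dt * (k + 1) for k in range(50)]    # encoder heading after each step
fused = complementary_filter(rates, truth, dt)
```

    With consistent inputs the estimate tracks the true heading exactly; the benefit appears when one of the two sources is biased or noisy, which the blending coefficient alpha trades off.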

    Digital Twins Are Not Monozygotic – Cross-Replicating ADAS Testing in Two Industry-Grade Automotive Simulators

    The increasing levels of software- and data-intensive driving automation call for an evolution of automotive software testing. As a recommended practice of the Verification and Validation (V&V) process of ISO/PAS 21448, a candidate standard for safety of the intended functionality for road vehicles, simulation-based testing has the potential to reduce both risks and costs. There is a growing body of research on devising test automation techniques using simulators for Advanced Driver-Assistance Systems (ADAS). However, how similar are the results if the same test scenarios are executed in different simulators? We conduct a replication study of applying a Search-Based Software Testing (SBST) solution to a real-world ADAS (PeVi, a pedestrian vision detection system) using two different commercial simulators, namely TASS/Siemens PreScan and ESI Pro-SiVIC. Based on a minimalistic scene, we compare critical test scenarios generated using our SBST solution in these two simulators. We show that SBST can be used to effectively and efficiently generate critical test scenarios in both simulators, and that the test results obtained from the two simulators can reveal several weaknesses of the ADAS under test. However, executing the same test scenarios in the two simulators leads to notable differences in the details of the test outputs, in particular related to (1) safety violations revealed by tests, and (2) the dynamics of cars and pedestrians. Based on our findings, we recommend that future V&V plans include multiple simulators to support robust simulation-based testing and base test objectives on measures that are less dependent on the internals of the simulators. Comment: To appear in the Proc. of the IEEE International Conference on Software Testing, Verification and Validation (ICST) 202
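    The search-based core of such a setup can be sketched generically: a fitness function scores how critical a scenario is, and the search perturbs scenario parameters to minimise it. The fitness below is a made-up surrogate; in the actual study each evaluation would execute the scenario in PreScan or Pro-SiVIC and measure, for example, the minimum car-pedestrian distance.

```python
import random

def fitness(scenario):
    """Toy surrogate for one simulator run: a stand-in for the minimum
    car-pedestrian distance of a (car_speed, crossing_offset) scenario.
    Smaller values mean a more critical (nearer-collision) scenario."""
    speed, offset = scenario
    return abs(offset - 0.02 * speed)

def hill_climb(seed_scenario, steps, rng):
    """(1+1)-style local search: keep a perturbed scenario only if it
    is more critical than the current best."""
    best = list(seed_scenario)
    for _ in range(steps):
        cand = [p + rng.uniform(-0.5, 0.5) for p in best]
        if fitness(cand) < fitness(best):
            best = cand
    return best

rng = random.Random(0)
start = [10.0, 5.0]                  # 10 m/s car, 5 m crossing offset
critical = hill_climb(start, steps=200, rng=rng)
```

    Replacing the surrogate with real simulator executions is exactly where cross-simulator replication bites: the same search can converge to different critical scenarios in each simulator.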

    When less is more: Robot swarms adapt better to changes with constrained communication

    To effectively perform collective monitoring of dynamic environments, a robot swarm needs to adapt to changes by processing the latest information and discarding outdated beliefs. We show that in a swarm composed of robots relying on local sensing, adaptation is better achieved if the robots have a shorter rather than a longer communication range. This result is in contrast with the widespread belief that more communication links always improve the information exchange on a network. We tasked robots with reaching agreement on the best option currently available in their operating environment. We propose a variety of behaviors composed of reactive rules to process environmental and social information. Our study focuses on simple behaviors based on the voter model—a well-known minimal protocol to regulate social interactions—that can be implemented in minimalistic machines. Although different from each other, all behaviors confirm the general result: the ability of the swarm to adapt improves when robots have fewer communication links. The average number of links per robot decreases when the individual communication range or the robot density decreases. The analysis of the swarm dynamics via mean-field models suggests that our results generalize to other systems based on the voter model. Model predictions are confirmed by results of multiagent simulations and experiments with 50 Kilobot robots. Limiting communication to a local neighborhood is a cheap, decentralized solution that allows robot swarms to adapt to previously unknown information that is locally observed by a minority of the robots.
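    The voter-model behaviour studied above can be sketched as follows: each robot holds an opinion and, at random moments, adopts the opinion of a randomly chosen neighbour within its communication range. Positions, opinions and the range below are made up for illustration, and this is a generic toy rather than the paper's Kilobot code; the comm_range parameter is what controls the number of communication links per robot.

```python
import numpy as np

def neighbours(positions, i, comm_range):
    """Indices of robots within comm_range of robot i (excluding i)."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and dists[j] <= comm_range]

def voter_step(opinions, positions, comm_range, rng):
    """One asynchronous voter-model update: a random robot copies the
    opinion of a random neighbour, if it has any."""
    i = rng.integers(len(opinions))
    nbrs = neighbours(positions, i, comm_range)
    if nbrs:
        opinions[i] = opinions[rng.choice(nbrs)]

def run(opinions, positions, comm_range, steps, seed=0):
    rng = np.random.default_rng(seed)
    opinions = np.array(opinions)
    for _ in range(steps):
        voter_step(opinions, positions, comm_range, rng)
    return opinions

# Three robots on a line; with range 1.0 the rightmost robot is isolated.
positions = np.array([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0]])
final = run([0, 1, 1], positions, comm_range=1.0, steps=50)
```

    Shrinking comm_range prunes links (here it isolates the third robot entirely), which is the knob the paper shows trades consensus speed against adaptability.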

    The implications of embodiment for behavior and cognition: animal and robotic case studies

    In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations. Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5