146 research outputs found

    A review on technologies for localisation and navigation in autonomous railway maintenance systems

    Get PDF
    Smart maintenance is essential to achieving a safe and reliable railway, but traditional maintenance deployment is costly and heavily human-involved. Ineffective job execution or failures in preventive maintenance can lead to railway service disruption and unsafe operations. Robotic and autonomous systems have been proposed to conduct these maintenance tasks with higher accuracy and reliability. For such systems to detect rail flaws across millions of miles of track, they must register their location precisely; a high degree of positional awareness is a prerequisite for any autonomous vehicle. This paper first reviews the importance and demands of preventive maintenance in railway networks and the related techniques. It then investigates the strategies, techniques, architectures, and references that different systems use to resolve their location along the railway network, and discusses the respective advantages and applicability of on-board and infrastructure-based sensing. Finally, it analyses the uncertainties that contribute to a vehicle's position error and influence positioning accuracy and reliability, together with corresponding technical solutions. The study therefore provides an overall direction for the development of further autonomous track-based system designs and methods to deal with the challenges faced in the railway network. Funded by the European Union's Horizon 2020 research and innovation programme, Shift2Rail Joint Undertaking (JU): 88157

    Localization, Navigation and Activity Planning for Wheeled Agricultural Robots – A Survey

    Get PDF
    Source at: https://fruct.org/publications/volume-32/fruct32/
    High cost, time-intensive work, labor shortages, and inefficient strategies have raised the need to employ mobile robotics to fully automate agricultural tasks and fulfil the requirements of precision agriculture. To perform an agricultural task, a mobile robot goes through a sequence of sub-operations and an integration of hardware and software systems. Starting with localization, an agricultural robot uses sensor systems to estimate its current position and orientation in the field, and employs algorithms to find optimal paths and reach target positions. It then uses techniques and models to perform feature recognition, and finally executes the agricultural task through an end effector. This article, compiled by scrutinizing the current literature, is a step-by-step account of the strategies and ways in which these sub-operations are performed and integrated. An analysis has also been carried out of the limitations in each sub-operation, the available solutions, and the ongoing research focus.

    Data fusion in agriculture: resolving ambiguities and closing data gaps.

    Get PDF
    Abstract. Acquiring useful data from agricultural areas has always been somewhat of a challenge, as these are often expansive, remote, and vulnerable to weather events. Despite these challenges, as technologies evolve and prices drop, a surge of new data is being collected. Although a wealth of data is being collected at different scales (i.e., proximal, aerial, satellite, ancillary data), this has been geographically unequal, leaving certain areas virtually devoid of useful data to help face their specific challenges. However, even in areas with available resources and good infrastructure, data and knowledge gaps are still prevalent, because agricultural environments are mostly uncontrolled and a vast number of factors need to be taken into account and properly measured for a full characterization of a given area. As a result, data from a single sensor type are frequently unable to provide unambiguous answers, even with very effective algorithms, and even if the problem at hand is well defined and limited in scope. Fusing the information contained in different sensors and in data of different types is one possible solution that has been explored for some decades. The idea behind data fusion involves exploiting the complementarities and synergies of different kinds of data in order to extract more reliable and useful information about the areas being analyzed. While some success has been achieved, many challenges still prevent a more widespread adoption of this type of approach. This is particularly true for the highly complex environments found in agricultural areas. In this article, we provide a comprehensive overview of data fusion applied to agricultural problems; we present the main successes, highlight the main challenges that remain, and suggest possible directions for future research. Article number: 2285

    A review on challenges of autonomous mobile robot and sensor fusion methods

    Get PDF
    Autonomous mobile robots have become more prominent in recent times because of their relevance and applications in the world today. Their ability to navigate an environment without physical or electro-mechanical guidance devices has made them more promising and useful. Autonomous mobile robots are emerging in different sectors such as companies, industries, hospitals, institutions, agriculture, and homes to improve services and daily activities. Owing to technological advancement, the demand for mobile robots has increased because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, and search-and-rescue missions. Various studies have been carried out on the importance of mobile robots, their applications, and their challenges. This survey paper unravels, from the current literature, the challenges mobile robots face. A comprehensive study of the devices/sensors and prevalent sensor fusion techniques developed for tackling issues such as localization, estimation, and navigation in mobile robots is also presented, organised according to relevance, strengths, and weaknesses. The study therefore gives a good direction for further investigation into developing methods to deal with the discrepancies faced by autonomous mobile robots.

    Safe navigation and human-robot interaction in assistant robotic applications

    Get PDF
    The abstract is provided in the attachment.

    Sensor fusion-based localization methods for mobile robots: A case study for wheeled robots

    Get PDF
    Localization aims to provide the best estimate of the robot pose. It is a crucial algorithm in every robotics application, since its output directly determines the inputs of the robot to be controlled in its configuration space. In real-world engineering, measurements related to the robot dynamics are subject to both uncertainties and disturbances. These error sources yield unreliable inferences of the robot state, which inherently result in a wrong consensus about the appropriate control strategy to apply. This outcome may drive the system out of stability and damage both the physical system and its environment. The localization algorithm captures these uncertainties with probabilistic approaches: the measurement processes are modelled along with their unreliability, and the synergy of multiple information sources is formulated with the aim of calculating the most probable estimate of the robot pose. In essence, the algorithm is composed of two main parts: first, the dynamics of the system is derived and the corresponding uncertainties are predicted; next, additional sensor information is incorporated to refine the posterior estimate. This approach provides the state-of-the-art solution for deriving mobile robot poses in real applications.
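    The two-part predict/update structure this abstract describes is the backbone of the Kalman filter family of sensor fusion methods. A minimal sketch, assuming an illustrative 1-D constant-velocity model and made-up noise values (the abstract does not specify a model):

    ```python
    import numpy as np

    # Minimal 1-D constant-velocity Kalman filter: predict the pose from
    # the motion model, then refine it with a noisy position measurement.
    # All matrices and noise magnitudes here are illustrative assumptions.

    F = np.array([[1.0, 1.0],    # state transition: x' = x + v*dt (dt = 1)
                  [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])   # we measure position only
    Q = np.eye(2) * 0.01         # process noise (model uncertainty)
    R = np.array([[0.5]])        # measurement noise

    def predict(x, P):
        """Propagate the state and its covariance through the motion model."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        """Incorporate a position measurement z to refine the posterior."""
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x = np.array([[0.0], [1.0]])  # initial pose: position 0, velocity 1
    P = np.eye(2)                 # initial uncertainty
    for z in [1.1, 1.9, 3.2]:     # noisy position readings
        x, P = predict(x, P)
        x, P = update(x, P, np.array([[z]]))
    ```

    Each loop iteration mirrors the abstract's two parts: the prediction grows the covariance `P` (uncertainty accumulates through the dynamics), and the update shrinks it as sensor information is fused in.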

    State of the art in vision-based localization techniques for autonomous navigation systems

    Get PDF

    Investigations of closed source registration method of depth sensor technologies for human-robot collaboration

    Get PDF
    Productive teaming is a new form of human-robot interaction. Multimodal 3D imaging plays a key role in it, both in gaining a more comprehensive understanding of the production system and in enabling trustful collaboration within the teams. For a complete scene capture, registration of the image modalities is required. Currently, low-cost RGB-D sensors are often used; these come with a closed-source registration function. In order to have an efficient and freely available method for any sensor, we have developed a new method called Triangle-Mesh-Rasterization-Projection (TMRP). To verify its performance, we compare it with the closed-source projection function of the Azure Kinect sensor (Microsoft). The qualitative comparison showed that both methods produce almost identical results; minimal differences at the edges indicate that our TMRP interpolation is more accurate. With our method, a freely available open-source registration method is now available that can be applied to almost any multimodal 3D/2D image dataset and is not, like the Microsoft SDK, optimized for Microsoft products.
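    The registration task this abstract addresses is commonly solved by reprojecting each depth pixel into the color camera's image. A minimal sketch of that naive per-point baseline, with entirely illustrative intrinsics and extrinsics (this is not the TMRP algorithm itself, which rasterizes a triangle mesh instead of projecting isolated points, precisely to avoid the holes this baseline produces):

    ```python
    import numpy as np

    # Naive depth-to-color registration by per-pixel point projection.
    # Every valid depth pixel is back-projected to 3-D, transformed into
    # the color camera frame, and projected onto the color image grid.
    # All camera parameters below are illustrative assumptions.

    K_d = np.array([[365.0, 0, 160.0], [0, 365.0, 120.0], [0, 0, 1]])  # depth intrinsics
    K_c = np.array([[520.0, 0, 320.0], [0, 520.0, 240.0], [0, 0, 1]])  # color intrinsics
    R = np.eye(3)                       # depth-to-color rotation
    t = np.array([0.05, 0.0, 0.0])      # depth-to-color translation (metres)

    def register_depth_to_color(depth, color_shape):
        """Reproject each valid depth pixel onto the color image grid."""
        h, w = depth.shape
        registered = np.zeros(color_shape)
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.ravel()
        valid = z > 0
        pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])  # homogeneous pixels
        rays = np.linalg.inv(K_d) @ pix                           # back-project to rays
        pts = rays * z                                            # 3-D points, depth frame
        pts_c = R @ pts + t[:, None]                              # into color frame
        proj = K_c @ pts_c
        u_c = np.round(proj[0] / proj[2]).astype(int)
        v_c = np.round(proj[1] / proj[2]).astype(int)
        ok = (valid & (u_c >= 0) & (u_c < color_shape[1])
                    & (v_c >= 0) & (v_c < color_shape[0]))
        registered[v_c[ok], u_c[ok]] = z[ok]   # depth mapped onto the color grid
        return registered

    depth = np.full((240, 320), 2.0)           # synthetic flat scene at 2 m
    reg = register_depth_to_color(depth, (480, 640))
    ```

    Because the depth grid is coarser than the color grid, many target pixels are never hit and remain zero; mesh-based rasterization such as TMRP interpolates across these gaps instead of leaving holes.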

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    Get PDF
    The abstract is provided in the attachment.