126 research outputs found

    Multisensor navigation systems: a remedy for GNSS vulnerabilities?

    Space-based positioning, navigation, and timing (PNT) technologies, such as global navigation satellite systems (GNSS), provide position, velocity, and timing information to an unlimited number of users around the world. In recent years, PNT information has become increasingly critical to the security, safety, and prosperity of the world's population, and is now widely recognized as an essential element of the global information infrastructure. Due to its vulnerabilities and line-of-sight requirements, GNSS alone is unable to provide PNT with the required levels of integrity, accuracy, continuity, and reliability. A multisensor navigation approach offers an effective augmentation in GNSS-challenged environments and holds the promise of delivering robust and resilient PNT. Traditionally, sensors such as inertial measurement units (IMUs), barometers, magnetometers, odometers, and digital compasses have been used. However, recent trends have largely focused on image-based, terrain-based, and collaborative navigation to recover the user location. This paper offers a review of the technological advances that have taken place in PNT over the last two decades and discusses various hybridizations of multisensor systems, building upon the fundamental GNSS/IMU integration. The most important conclusion of this study is that, in order to meet the challenging goals of delivering continuous, accurate, and robust PNT to the ever-growing numbers of users, the hybridization of a suite of different PNT solutions is required.
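
    As a rough illustration of the GNSS/IMU integration the review builds upon, the sketch below implements a loosely coupled linear Kalman filter: the IMU acceleration drives a high-rate prediction, and occasional GNSS position fixes correct the accumulated drift. The rates, noise levels, and 2-D state layout are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of loosely coupled GNSS/IMU fusion with a linear Kalman filter.
# All numbers (rates, noise levels) are illustrative assumptions.
import numpy as np

dt = 0.01                      # IMU period: 100 Hz (assumed)
gnss_every = 100               # one GNSS fix per second (assumed)

# State x = [px, py, vx, vy]; IMU acceleration enters as a control input.
F = np.eye(4); F[0, 2] = F[1, 3] = dt
B = np.array([[0.5*dt**2, 0], [0, 0.5*dt**2], [dt, 0], [0, dt]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])          # GNSS measures position only
Q = np.eye(4) * 1e-3                                 # process noise (assumed)
R = np.eye(2) * 2.0**2                               # 2 m GNSS std dev (assumed)

x = np.zeros(4)
P = np.eye(4)

def imu_predict(x, P, accel):
    """Propagate the state with the IMU specific-force measurement (gravity removed)."""
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    return x, P

def gnss_update(x, P, z):
    """Correct the drifting inertial solution with a GNSS position fix."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for k in range(1000):
    accel = np.array([0.1, 0.0])                     # placeholder IMU reading
    x, P = imu_predict(x, P, accel)
    if k % gnss_every == 0:
        z = x[:2] + np.random.randn(2) * 2.0         # placeholder GNSS fix
        x, P = gnss_update(x, P, z)
```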

    On the Enhancement of the Localization of Autonomous Mobile Platforms

    The focus of many industrial and research entities on achieving full robotic autonomy has increased in the past few years. A fundamental problem on the way to full robotic autonomy is localization, which is the ability of a mobile platform to determine its position and orientation in the environment. In this thesis, several problems related to the localization of autonomous platforms are addressed, namely visual odometry accuracy and robustness, uncertainty estimation in odometries, and accurate multi-sensor fusion-based localization. Besides localization, the control of mobile manipulators is also tackled in this thesis. First, a generic image processing pipeline is proposed which, when integrated with a feature-based Visual Odometry (VO), can enhance robustness and accuracy and reduce the accumulation of errors (drift) in the pose estimation. Afterwards, since odometries (e.g. wheel odometry, LiDAR odometry, or VO) suffer from drift errors due to integration, and because such errors need to be quantified in order to achieve accurate localization through multi-sensor fusion schemes (e.g. extended or unscented Kalman filters), a covariance estimation algorithm is proposed, which estimates the uncertainty of odometry measurements using another sensor that does not rely on integration. Furthermore, optimization-based multi-sensor fusion techniques are known to achieve better localization results than filtering techniques, but at a higher computational cost. Consequently, an efficient and generic multi-sensor fusion scheme, based on Moving Horizon Estimation (MHE), is developed. The proposed multi-sensor fusion scheme is capable of operating with any number of sensors and accounts for different sensor measurement rates, missing measurements, and outliers. Moreover, the proposed scheme is based on a multi-threading architecture in order to reduce its computational cost, making it more feasible for practical applications. Finally, the main purpose of achieving accurate localization is navigation. Hence, the last part of this thesis focuses on developing a stabilization controller for a 10-DOF mobile manipulator based on Model Predictive Control (MPC). All of the aforementioned works are validated using numerical simulations, real data from the EU Long-term, KITTI, and TUM datasets, and/or experimental sequences using an omni-directional mobile robot. The results show the efficacy and importance of each part of the proposed work.
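
    To make the moving-horizon idea above concrete, the following sketch fuses high-rate odometry increments with occasional absolute position fixes by solving a small weighted least-squares problem over a sliding window of 1-D positions. The sensor roles, window length, and noise weights are illustrative assumptions and do not reproduce the thesis implementation (which handles multiple sensors, outliers, and multi-threading).

```python
# Minimal sketch of sliding-window (moving-horizon-style) fusion for a 1-D position,
# combining high-rate odometry increments with occasional absolute position fixes.
import numpy as np

def mhe_window(odom_increments, abs_fixes, q=0.01, r=0.5):
    """
    odom_increments: list of per-step displacement measurements (length N-1)
    abs_fixes:       dict {step_index: measured_position} from the slow absolute sensor
    Returns the least-squares estimate of the N positions in the window.
    """
    n = len(odom_increments) + 1
    rows, rhs, weights = [], [], []

    # Process (odometry) constraints: p[k+1] - p[k] = increment_k
    for k, d in enumerate(odom_increments):
        row = np.zeros(n); row[k + 1] = 1.0; row[k] = -1.0
        rows.append(row); rhs.append(d); weights.append(1.0 / np.sqrt(q))

    # Measurement constraints: p[k] = absolute fix
    for k, z in abs_fixes.items():
        row = np.zeros(n); row[k] = 1.0
        rows.append(row); rhs.append(z); weights.append(1.0 / np.sqrt(r))

    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Example: 10-step window, odometry reports +0.1 m per step, fixes at steps 0 and 9.
estimate = mhe_window([0.1] * 9, {0: 0.0, 9: 1.05})
print(estimate)
```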

    Event based localization in Ackermann steering limited resource mobile robots

    “© 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”
    This paper presents a local sensor fusion technique with an event-based global position correction to improve the localization of a mobile robot with limited computational resources. The proposed algorithms use a modified Kalman filter and a new local dynamic model of an Ackermann steering mobile robot. It has a similar performance but faster execution when compared to more complex fusion schemes, allowing its implementation inside the robot. As a global sensor, an event-based position correction is implemented using the Kalman filter error covariance and the position measurement obtained from a zenithal camera. The solution is tested during a long walk with different trajectories using a LEGO Mindstorm NXT robot.
    This work was supported by FEDER-CICYT projects with references DPI2011-28507-C02-01 and DPI2010-20814-C02-02, financed by the Ministerio de Ciencia e Innovación (Spain). This work was also supported by the University of Costa Rica.
    Marín, L.; Vallés Miquel, M.; Soriano Vigueras, Á.; Valera Fernández, Á.; Albertos Pérez, P. (2014). Event based localization in Ackermann steering limited resource mobile robots. IEEE/ASME Transactions on Mechatronics, 19(4), 1171-1182. doi:10.1109/TMECH.2013.2277271
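
    The event-based correction described above can be illustrated with a short sketch: the robot runs its local Kalman prediction every cycle and requests the zenithal-camera position only when the estimated error covariance grows past a limit. The threshold value and state layout are assumptions for illustration, not the parameters used in the paper.

```python
# Minimal sketch of the event-based idea: the global position correction is requested
# only when the filter's error covariance grows past a threshold (value assumed).
import numpy as np

POS_VAR_LIMIT = 0.05          # m^2, assumed trigger level

def needs_global_fix(P):
    """Fire the event when the position uncertainty exceeds the limit."""
    return np.trace(P[:2, :2]) > POS_VAR_LIMIT

# Inside the localization loop (structure only, names are hypothetical):
#   x, P = local_predict(x, P, encoders, gyro)      # runs every cycle on the robot
#   if needs_global_fix(P):
#       z = request_camera_position()               # zenithal camera, only on demand
#       x, P = kalman_update(x, P, z)
```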

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper aims to provide a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the autonomous navigation field evolves quickly, so writing survey papers regularly is crucial to keep the research community well aware of the current status of the field. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which is covered in this paper. Future works and research gaps are also discussed.

    Implementing and Tuning an Autonomous Racing Car Testbed

    Achieving safe autonomous driving is far from being just a vision these days, with many examples such as Uber, Google, and, most famously of all, Tesla, which have successfully deployed self-driving cars around the world. Researchers and engineers have been putting tremendous effort, and will continue to do so in the following years, into developing safe and precise control algorithms and technologies that will be included in future self-driving cars. Besides these well-known autonomous car deployments, some focus has also been put into autonomous racing competitions, for example the Roborace. The fact is that, although significant progress has been made, testing on full-size cars in real environments requires immense financial support, making it impossible for many research groups to enter the game. Consequently, interesting alternatives have appeared, such as the F1 Tenth, which challenges students, researchers, and engineers to engage in a low-cost autonomous racing competition while developing control algorithms that rely on sensors and strategies used in real-life applications. This thesis focuses on the comparison of different control algorithms, and their effectiveness, in the racing scenario of the F1 Tenth competition. In this thesis, effort was put into developing a robotic autonomous car, relying on the Robot Operating System (ROS), that not only meets the specifications of the F1 Tenth rules, but also establishes a testbed for future autonomous driving research.
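
    As a flavour of the kind of ROS node such a testbed runs, the sketch below subscribes to the laser scan and publishes Ackermann drive commands that steer toward the direction with the most free space. The topic names ('/scan', '/drive'), gains, steering limit, and speed are assumptions for illustration; the thesis compares more elaborate controllers.

```python
#!/usr/bin/env python
# Minimal sketch of an F1TENTH-style ROS node: crude "steer toward the most open
# direction" behaviour. Topics, gains, and limits are assumed, not from the thesis.
import rospy
import numpy as np
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped

class SimpleRacer(object):
    def __init__(self):
        self.pub = rospy.Publisher('/drive', AckermannDriveStamped, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.on_scan)

    def on_scan(self, scan):
        ranges = np.clip(np.nan_to_num(np.array(scan.ranges)), 0.0, scan.range_max)
        best = int(np.argmax(ranges))                      # bearing with most free space
        angle = scan.angle_min + best * scan.angle_increment
        cmd = AckermannDriveStamped()
        cmd.header.stamp = rospy.Time.now()
        cmd.drive.steering_angle = float(np.clip(0.5 * angle, -0.34, 0.34))
        cmd.drive.speed = 1.0                              # constant low speed for testing
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('simple_racer_sketch')
    SimpleRacer()
    rospy.spin()
```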

    From SLAM to Situational Awareness: Challenges and Survey

    The knowledge that an intelligent and autonomous mobile robot has, and is able to acquire, of itself and the environment, namely the situation, limits its reasoning, decision-making, and execution skills in efficiently and safely performing complex missions. Situational awareness is a basic capability of humans that has been deeply studied in fields like psychology, the military, aerospace, and education, but it has barely been considered in robotics, which has focused on ideas such as sensing, perception, sensor fusion, state estimation, localization and mapping, and spatial AI. In our research, we connected the broad multidisciplinary existing knowledge on situational awareness with its counterpart in mobile robotics. In this paper, we survey the state-of-the-art robotics algorithms, analyze the situational awareness aspects that they cover, and discuss their missing points. We found that existing robotics algorithms still miss manifold important aspects of situational awareness. As a consequence, we conclude that these missing features limit the performance of robotic situational awareness, and further research is needed to overcome this challenge. We see this as an opportunity, and provide our vision for future research on robotic situational awareness.
    Comment: 15 pages, 8 figures

    Information Aided Navigation: A Review

    The performance of inertial navigation systems is largely dependent on a stable flow of external measurements and information to guarantee continuous filter updates and to bound the drift of the inertial solution. Platforms in different operational environments may at some point be prevented from receiving external measurements, thus exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information to update the navigation filter. This paper aims to provide an extensive survey of information aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described by the notable works that implemented its concept, use cases, relevant state updates, and their corresponding measurement models. By matching the appropriate constraint to a given scenario, one will be able to improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable.
    Comment: 8 figures, 3 tables
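
    A classic instance of the aiding described above is the zero-velocity update (ZUPT): when the platform is known to be stationary, a pseudo-measurement of zero velocity updates the filter and reins in the drift. The sketch below assumes a simple position/velocity state and illustrative noise values; it is not a particular measurement model from the survey.

```python
# Minimal sketch of a zero-velocity update (ZUPT) as an information-aiding measurement.
# State layout and noise values are assumptions.
import numpy as np

def zupt_update(x, P, sigma_v=0.02):
    """
    x : state [px, py, pz, vx, vy, vz]  (assumed ordering)
    P : 6x6 error covariance
    The pseudo-measurement is z = 0 with model z = H x + noise, H selecting velocity.
    """
    H = np.hstack([np.zeros((3, 3)), np.eye(3)])
    R = np.eye(3) * sigma_v**2
    z = np.zeros(3)                              # "the platform is not moving"
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Example: a filter that has accumulated 0.3 m/s of velocity drift while standing still.
x0 = np.array([1.0, 2.0, 0.0, 0.3, -0.1, 0.0])
P0 = np.eye(6) * 0.1
x1, P1 = zupt_update(x0, P0)
print(x1)
```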

    Implementation of the autonomous functionalities on an electric vehicle platform for research and education

    Self-driving cars have recently captured the attention of researchers and the car manufacturing market. Depending on the level of autonomy, cars are made capable of traversing from one point to another autonomously. In order to achieve this, sophisticated sensors need to be utilized, and a complex set of algorithms is required to use the sensor data to navigate the vehicle along the desired trajectory. Polaris is an electric vehicle platform provided for research and education purposes at Aalto University. The primary focus of the thesis was to utilize all the sensors provided on Polaris to their full potential, so that essential data from each sensor is made available to be further utilized either by a specific automation algorithm or by some mapping routine. For any autonomous robotic system, the first step towards automation is localization, that is, determining the current position of the robot in a given environment. Different sensors mounted on the platform provide such measurements in different frames of reference. The thesis utilizes a GPS-based localization solution combined with LiDAR data and wheel odometry to perform autonomous tasks. The Robot Operating System is used as the software development tool in the thesis work. Autonomous tasks include the determination of global as well as local trajectories. The endpoints of the global trajectories are dictated by a set of predefined GPS waypoints; this is called target-point navigation. A path then needs to be planned that avoids all obstacles. Based on the planned path, a set of velocity commands is issued by the embedded controller. The velocity commands are then fed to the actuators to move the vehicle along the planned trajectory.
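
    The target-point navigation loop described above can be sketched as follows: project the next GPS waypoint into a local metric frame and compute simple linear and angular velocity commands toward it. The equirectangular projection, gains, tolerance, and example coordinates are illustrative assumptions rather than the actual planner and controller used on Polaris.

```python
# Minimal sketch of target-point navigation from GPS waypoints: project the next
# waypoint into a local metric frame and steer toward it. Gains and tolerances assumed.
import math

EARTH_RADIUS = 6378137.0  # m

def gps_to_local(lat, lon, lat0, lon0):
    """Equirectangular projection of a GPS fix relative to a reference origin."""
    x = math.radians(lon - lon0) * EARTH_RADIUS * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_RADIUS
    return x, y

def velocity_command(pose, waypoint, k_lin=0.5, k_ang=1.5):
    """pose = (x, y, yaw) in the local frame; returns (linear, angular) command."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - pose[2]
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    if distance < 0.5:                       # waypoint reached (assumed tolerance)
        return 0.0, 0.0
    return min(k_lin * distance, 1.0), k_ang * heading_error

# Example with arbitrary coordinates: robot at the origin heading east,
# next waypoint a few tens of metres to the north-east.
wp = gps_to_local(60.18710, 24.82220, 60.18700, 24.82200)
print(velocity_command((0.0, 0.0, 0.0), wp))
```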

    Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots

    This paper presents a sensor fusion framework that improves the localization of mobile robots with limited computational resources. It employs an event-based Kalman filter to combine the measurements of a global sensor and an inertial measurement unit (IMU) on an event-based schedule, using fewer resources (execution time and bandwidth) but with similar performance when compared to traditional methods. The event is defined to reflect the necessity of the global information, i.e. when the estimation error covariance exceeds a predefined limit. The proposed experimental platforms are based on the LEGO Mindstorm NXT and consist of a differential wheel mobile robot navigating indoors with a zenithal camera as the global sensor, and an Ackermann steering mobile robot navigating outdoors with an SBG Systems GPS accessed through an IGEP board that also serves as a datalogger. The IMU in both robots is built using the NXT motor encoders along with one gyroscope, one compass, and two accelerometers from HiTechnic, placed according to a particle-based dynamic model of the robots. The tests performed reflect the correct performance and low execution time of the proposed framework. The robustness and stability are observed during a long walk test in both indoor and outdoor environments.
    This work has been partially funded by FEDER-CICYT projects with references DPI2011-28507-C02-01 and DPI2010-20814-C02-02, financed by the Ministerio de Ciencia e Innovación (Spain). The financial support from the University of Costa Rica is also greatly appreciated.
    Marín, L.; Vallés Miquel, M.; Soriano Vigueras, Á.; Valera Fernández, Á.; Albertos Pérez, P. (2013). Multi sensor fusion framework for indoor-outdoor localization of limited resource mobile robots. Sensors, 13(10), 14133-14160. doi:10.3390/s131014133
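
    As an illustration of the low-cost dead reckoning such a platform can build from the sensors listed above, the sketch below combines wheel-encoder displacement with a complementary filter that blends the gyroscope rate and the compass heading. The wheel radius, filter weight, and example readings are assumptions for illustration, not calibrated values from the paper.

```python
# Minimal sketch of encoder + gyro + compass dead reckoning with a complementary
# filter on the heading. Geometry and gains are assumed.
import math

WHEEL_RADIUS = 0.028   # m, LEGO NXT wheel (assumed)
ALPHA        = 0.98    # complementary-filter weight on the gyro (assumed)

def dead_reckon(state, d_left, d_right, gyro_rate, compass_heading, dt):
    """state = (x, y, theta); encoder deltas are wheel rotations in radians."""
    x, y, theta = state
    dl, dr = d_left * WHEEL_RADIUS, d_right * WHEEL_RADIUS
    ds = 0.5 * (dl + dr)                                   # forward displacement
    # Blend fast-but-drifting gyro integration with slow-but-absolute compass heading.
    theta_gyro = theta + gyro_rate * dt
    theta = ALPHA * theta_gyro + (1.0 - ALPHA) * compass_heading
    return (x + ds * math.cos(theta), y + ds * math.sin(theta), theta)

# Example: both wheels turn a quarter revolution while the gyro reads 0.1 rad/s.
print(dead_reckon((0.0, 0.0, 0.0), math.pi / 2, math.pi / 2, 0.1, 0.05, 0.1))
```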