5 research outputs found

    ENHANCED UAV NAVIGATION USING HALL-MAGNETIC AND AIR-MASS FLOW SENSORS IN INDOOR ENVIRONMENT

    Get PDF
    The use of Unmanned Aerial Vehicles (UAVs) in many commercial and emergency applications has the potential to dramatically alter several industries and, in the process, change our attitudes regarding their impact on our daily lives. The navigation system of these UAVs mainly depends on the integration of Global Navigation Satellite Systems (GNSS) with an Inertial Navigation System (INS) to estimate the positions, velocities, and attitudes (PVA) of the UAVs. However, GNSS signals are not available everywhere, and during GNSS signal outages the navigation system performance deteriorates rapidly, especially when a low-cost INS is used. Additional aiding sensors are therefore required during GNSS signal outages to bound the INS errors and enhance the navigation system performance. This paper proposes the utilization of two sensors (Hall-magnetic and air-mass flow sensors) to act as a flying odometer by estimating the UAV forward velocity. The estimated velocity is then integrated with the INS through an Extended Kalman Filter (EKF) to enhance the navigation solution, as sketched below. A real experiment was carried out with a 3DR quadcopter, with the proposed system attached to the top of the vehicle. The results showed a great enhancement in navigation system performance, with more than 98% improvement compared to the free-running INS solution (dead reckoning).
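The following minimal Python sketch illustrates the kind of EKF measurement update described above: a scalar forward-speed measurement from the "flying odometer" correcting an INS-predicted state. The state layout, noise variance, and measurement model are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the paper's implementation) of an EKF update in which
# a forward-speed pseudo-measurement from the Hall-magnetic / air-mass flow
# sensors corrects an INS-predicted state. State layout and noise are assumed.
import numpy as np

def speed_update(x, P, v_meas, r_speed=0.25):
    """x: state [px, py, vx, vy] (m, m/s), INS-predicted
    P: 4x4 state covariance
    v_meas: forward speed from the odometer sensors (m/s)
    r_speed: measurement noise variance (assumption)"""
    vx, vy = x[2], x[3]
    v_pred = np.hypot(vx, vy)          # h(x): predicted ground speed
    if v_pred < 1e-6:                  # avoid a singular Jacobian at rest
        return x, P
    # Jacobian of h(x) = sqrt(vx^2 + vy^2) with respect to the state
    H = np.array([[0.0, 0.0, vx / v_pred, vy / v_pred]])
    S = H @ P @ H.T + r_speed          # innovation covariance (1x1)
    K = P @ H.T / S                    # Kalman gain (4x1)
    x = x + (K * (v_meas - v_pred)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Running such an update at each odometer sample bounds the velocity drift that a free-running INS would otherwise accumulate during a GNSS outage.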

    Autonomous navigation for guide following in crowded indoor environments

    No full text
    The requirements for assisted living are rapidly changing as the number of elderly patients over the age of 60 continues to increase. This rise places a high level of stress on nurse practitioners, who must care for more patients than they can manage. As this trend is expected to continue, new technology will be required to help care for patients. Mobile robots present an opportunity to help alleviate the stress on nurse practitioners by monitoring and performing remedial tasks for elderly patients. In order to produce mobile robots with the ability to perform these tasks, however, many challenges must be overcome. The hospital environment requires a high level of safety to prevent patient injury. Any facility that uses mobile robots, therefore, must be able to ensure that no harm will come to patients whilst in a care environment. This requires the robot to build a high level of understanding about the environment and the people in close proximity to the robot. Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders. 3D time-of-flight sensors have recently been introduced and provide dense 3D point clouds of the environment at real-time frame rates, giving mobile robots previously unavailable dense information in real time. In this thesis, I investigate the use of time-of-flight cameras for mobile robot navigation in crowded environments. A unified framework to allow the robot to follow a guide through an indoor environment safely and efficiently is presented. Each component of the framework is analyzed in detail, with real-world scenarios illustrating its practical use. Time-of-flight cameras are relatively new sensors and therefore have inherent problems that must be overcome to receive consistent and accurate data. I propose a novel and practical probabilistic framework to overcome many of these inherent problems. The framework fuses multiple depth maps with color information, forming a reliable and consistent view of the world. In order for the robot to interact with the environment, contextual information is required. To this end, I propose a region-growing segmentation algorithm to group points based on surface characteristics, namely surface normal and surface curvature (a sketch follows this abstract). The segmentation process creates a distinct set of surfaces; however, only a limited amount of contextual information is available to allow for interaction. Therefore, a novel classifier is proposed using spherical harmonics to differentiate people from all other objects. The added ability to identify people allows the robot to find potential candidates to follow. However, for safe navigation, the robot must continuously track all visible objects to obtain positional and velocity information. A multi-object tracking system is investigated to track visible objects reliably using multiple cues, namely shape and color. The tracking system allows the robot to react to the dynamic nature of people by building an estimate of the motion flow. This flow provides the robot with the necessary information to determine where, and at what speeds, it is safe to drive. In addition, a novel search strategy is proposed to allow the robot to recover a guide who has left the field of view. To achieve this, a search map is constructed with areas of the environment ranked according to how likely they are to reveal the guide's true location. The robot can then approach the most likely search area to recover the guide. Finally, all the components presented are combined to follow a guide through an indoor environment. The results achieved demonstrate the efficacy of the proposed components.
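As a concrete illustration of the region-growing step described in the abstract, here is a short Python sketch that grows regions from low-curvature seed points and joins neighbours whose surface normals are nearly parallel. Per-point normals and curvatures are assumed to be precomputed (e.g., by PCA over local neighbourhoods), and the thresholds are illustrative, not the thesis's tuned values.

```python
# A minimal sketch of region-growing segmentation over a 3D point cloud.
# Assumptions: normals and curvature are precomputed; thresholds are invented.
import numpy as np
from collections import deque

def region_grow(points, normals, curvature, k=10,
                angle_thresh=np.deg2rad(10.0), curv_thresh=0.05):
    n = len(points)
    # Brute-force k-nearest neighbours; a k-d tree would replace this at scale.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]

    labels = np.full(n, -1)
    order = np.argsort(curvature)        # smoothest points seed regions first
    region = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in knn[i]:
                if labels[j] != -1:
                    continue
                # Join a neighbour if its normal is nearly parallel ...
                if abs(float(normals[i] @ normals[j])) >= np.cos(angle_thresh):
                    labels[j] = region
                    # ... and keep growing only through smooth points.
                    if curvature[j] < curv_thresh:
                        queue.append(j)
        region += 1
    return labels
```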

    Human robot interaction in a crowded environment

    No full text
    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, the robot can analyse further to understand if any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions (a simplified sketch follows this abstract). To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
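The thesis's actual Bayesian network is not reproduced in the abstract; the simplified Python sketch below only illustrates the general idea of fusing several binary visual cues into a posterior probability that a person intends to interact. All cue names, likelihoods, and the prior are invented for illustration.

```python
# A toy naive-Bayes fusion of visual cues into an "intends to interact"
# posterior. Every number here is an assumption, not a value from the thesis.
def engagement_posterior(cues, prior=0.2):
    """cues: dict mapping cue name -> observed bool."""
    like = {  # (P(cue=True | engaged), P(cue=True | not engaged)), assumed
        "facing_robot":   (0.90, 0.30),
        "waving_gesture": (0.60, 0.05),
        "approaching":    (0.70, 0.20),
    }
    p_e, p_ne = prior, 1.0 - prior
    for name, observed in cues.items():
        pe_t, pne_t = like[name]
        p_e  *= pe_t  if observed else (1.0 - pe_t)
        p_ne *= pne_t if observed else (1.0 - pne_t)
    return p_e / (p_e + p_ne)

# Example: a person facing the robot and waving, but not approaching,
# yields a high posterior that they are addressing the robot.
print(engagement_posterior({"facing_robot": True,
                            "waving_gesture": True,
                            "approaching": False}))
```

Contextual feedback, as the abstract describes, would amount to adjusting the prior and likelihood tables online rather than keeping them fixed as in this sketch.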

    Pose estimation and data fusion algorithms for an autonomous mobile robot based on vision and IMU in an indoor environment

    Get PDF
    Thesis (PhD(Computer Engineering))--University of Pretoria, 2021. Autonomous mobile robots have become an active research direction during the past few years, and they are emerging in different sectors such as companies, industries, hospitals, institutions, agriculture and homes to improve services and daily activities. Owing to technological advancement, the demand for mobile robots has increased because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, delivering goods, search and rescue missions, and performing dangerous tasks in places like underground mines. Instead of workers being exposed to hazardous chemicals or environments that could affect health and put lives at risk, humans are being replaced with mobile robot services. It is with these concerns that the enhancement of mobile robot operation is necessary, and the process is assisted through sensors. Sensors are used as instruments to collect data that aid the robot to navigate and localise in its environment. Each sensor type has inherent strengths and weaknesses, so an inappropriate combination of sensors could result in a high cost of sensor deployment with low performance. Despite the potential and prospects of autonomous mobile robots, they have yet to attain optimal performance because of the integral challenges they face, most especially localisation. Localisation is one of the fundamental issues encountered in mobile robots that demands attention, and the challenging part is estimating the robot's position and orientation, information that can be acquired from sensors and other relevant systems. To tackle the issue of localisation, a good technique should be proposed to deal with errors, degrading factors, and improper measurements and estimations. Different approaches have been recommended for estimating the position of a mobile robot. Some studies estimated the trajectory of the mobile robot and reconstructed the indoor scene using monocular visual odometry; this approach is not feasible for large zones and complex environments. Radio frequency identification (RFID) technology, on the other hand, provides accuracy and robustness, but the method depends on the distance between the tags, and on the distance between the tags and the reader. To increase the localisation accuracy, the number of RFID tags per unit area has to be increased; this technique may therefore not yield an economical and easily scalable solution, because of the increasing number of required tags and the associated cost of their deployment. The Global Positioning System (GPS) is another approach that offers proven results in most scenarios; however, indoor localisation is one of the settings in which GPS cannot be used, because the signal strength is not reliable inside a building. Most approaches are unable to precisely localise an autonomous mobile robot even with high equipment cost and complex implementation, and most of the devices and sensors either require additional infrastructure or are not suitable for an indoor environment. Therefore, this study proposes using data from vision and inertial sensors, the latter comprising a 3-axis accelerometer and a 3-axis gyroscope, together offering 6 degrees of freedom (6-DOF), to determine the pose estimate of a mobile robot. Inertial measurement unit (IMU) based tracking provides a fast response, so it can be considered to assist vision whenever vision fails due to loss of visual features.
The use of a vision sensor helps to overcome the characteristic limitation of the acoustic sensor for simultaneous multiple-object tracking; with this merit, vision is capable of estimating pose with respect to the object of interest. A single sensor or system is not reliable enough to estimate the pose of a mobile robot due to its limitations; therefore, data acquired from several sensors and sources are combined using a data fusion algorithm to estimate position and orientation within a specific environment, as sketched below. The resulting model is more accurate because it balances the strengths of the different sensors, and the information provided through sensor (data) fusion can be used to support more intelligent actions. The proposed algorithms combine data from each of the sensor types to provide the most comprehensive and accurate environmental model possible, using a set of mathematical equations that provides an efficient computational means to estimate the state of a process. This study investigates state estimation methods to determine the state of a desired system that is continuously changing, given some observations or measurements. From the performance and evaluation of the system, it can be observed that the integration of sources of information and sensors is necessary. This thesis has provided viable solutions to the challenging problem of localisation in autonomous mobile robots through its adaptability, accuracy, robustness and effectiveness. NRF. University of Pretoria. Electrical, Electronic and Computer Engineering. PhD(Computer Engineering). Unrestricted.
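As a minimal illustration of vision/IMU data fusion, the sketch below uses a complementary filter, one common loosely coupled scheme, to blend fast but drifting gyro integration with slower, drift-free absolute headings from vision. The gain and the scalar-heading simplification are assumptions; the thesis's fusion algorithms operate on the full pose.

```python
# A toy complementary filter fusing gyro yaw-rate integration with occasional
# absolute headings from a vision system. Gain alpha is an assumed value.
import math

def fuse_heading(yaw, gyro_z, dt, yaw_vision=None, alpha=0.98):
    """yaw: current estimate (rad); gyro_z: measured yaw rate (rad/s);
    yaw_vision: absolute heading from vision, when a frame is available."""
    yaw = yaw + gyro_z * dt                       # fast IMU prediction
    if yaw_vision is not None:                    # vision corrects the drift
        err = math.atan2(math.sin(yaw_vision - yaw),
                         math.cos(yaw_vision - yaw))  # wrap to [-pi, pi]
        yaw = yaw + (1.0 - alpha) * err
    return math.atan2(math.sin(yaw), math.cos(yaw))  # keep yaw wrapped
```

The same prediction-then-correction structure generalises to a Kalman filter over the full position and orientation state, which is the kind of state estimation method the study investigates.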

    AWARE Project. Integration of Unmanned Aerial Vehicles with Wireless Sensor/Actuator Networks

    Get PDF
    [EN] This paper summarizes the results of the AWARE project, coordinated by the Robotics, Vision and Control Group (GRVC) of the University of Seville and funded by the European Commission under the VI Framework Programme (IST-2006-33579). The paper briefly describes the architecture developed for the autonomous decentralized cooperation between unmanned aerial vehicles, wireless sensor/actuator networks and ground camera networks. The full approach was validated in field experiments with different autonomous helicopters equipped with heterogeneous on-board devices, such as visual/infrared cameras and instruments to transport loads and to deploy sensors. The work presented was developed within the framework of the AWARE project (Platform for Autonomous self-deploying and operation of Wireless sensor-actuator networks cooperating with AeRial objEcts), funded by the European Commission under the IST programme of the VI Framework Programme (IST-2006-33579). Ollero, A.; Maza, I.; Rodríguez Castaño, A.; Martínez De Dios, J.; Caballero, F.; Capitán, J. (2012). Proyecto AWARE. Integración de Vehículos Aéreos no Tripulados con Redes Inalámbricas de Sensores y Actuadores. Revista Iberoamericana de Automática e Informática Industrial, 9(1), 46-56. https://doi.org/10.1016/j.riai.2011.11.007