
    A Method for Calculating a Single-Axis Micro-Opto-Electro-Mechanical Angular Velocity Transducer

    A method is proposed for calculating a single-axis micro-opto-electro-mechanical (MOEM) angular velocity transducer based on the optical tunneling effect (OTE). An algorithm and software are developed that allow the transducer's basic parameters to be calculated while satisfying specified technical requirements.

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way. Comment: Published in MDPI Sensors, 30 October 201
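
The reprojection-likelihood step described above can be sketched as follows; the simplified pinhole model, Gaussian noise assumption, and all names are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Build a rotation matrix from Euler angles (ZYX convention, an
    assumed parameterisation for this sketch)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def log_likelihood(pose, world_pts, observed_px, focal, sigma=1.0):
    """Gaussian reprojection log-likelihood of a candidate 6D pose.

    pose = (x, y, z, roll, pitch, yaw); world_pts: Nx3 triangulated
    calibration points; observed_px: Nx2 labelled image coordinates.
    """
    t = np.asarray(pose[:3], dtype=float)
    R = rotation_matrix(*pose[3:])
    cam = (world_pts - t) @ R                    # world -> camera frame
    proj = focal * cam[:, :2] / cam[:, 2:3]      # pinhole projection
    resid = proj - observed_px                   # reprojection residuals
    return -0.5 * np.sum(resid ** 2) / sigma ** 2
```

Maximising this likelihood over the six pose parameters (e.g. with a generic optimiser) yields the extrinsic estimate; the MCMC stage would then sample the same likelihood to quantify the uncertainty of the offset.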

    Performance Evaluation of Two Indoor Mapping Systems: Low-Cost UWB-Aided Photogrammetry and Backpack Laser Scanning

    During the past dozen years, several mobile mapping systems based on imaging and positioning sensors mounted on terrestrial (and aerial) vehicles have been developed. Recently, systems with increased portability have been proposed to enable mobile mapping in environments that are difficult for vehicles to access, in particular indoor environments. In this work the performance of a low-cost mobile mapping system is compared with that of: (i) a state-of-the-art terrestrial laser scanning (TLS) survey, considered as the control; (ii) a mobile mapping backpack system (Leica Pegasus), which can be considered the state of the art among commercial mobile mapping backpack systems. The aim of this paper is two-fold: first, assessing the reconstruction accuracy of the proposed low-cost mobile mapping system, based on photogrammetry and ultra-wide band (UWB) for relative positioning (and a GNSS receiver if georeferencing is needed), with respect to a TLS survey in an indoor environment, where the global navigation satellite system (GNSS) signal is not available; second, comparing such performance with that obtained with the Leica backpack. Both mobile mapping systems are designed to work without any control points, to enable an easy and quick survey (e.g., a few minutes) and to be easily portable (relatively low weight and small size). The case study deals with the 3D reconstruction of a medieval bastion in Padua, Italy. Reconstruction using the Leica Pegasus backpack yielded a smaller absolute error than the UWB-based photogrammetric system: in georeferenced coordinates, the root mean square (RMS) errors were 16.1 cm and 50.3 cm, respectively. Relative errors in local coordinates were closer, at 8.2 cm and 6.1 cm, respectively.
    Given its much lower cost (approximately $6k), the proposed photogrammetry-based system can be an interesting alternative when decimetric reconstruction accuracy in georeferenced coordinates is sufficient. © 2018 by the authors
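
The RMS figures above summarise how far each reconstruction deviates from the TLS control. A minimal cloud-to-cloud RMS metric might look like the following brute-force sketch; the nearest-neighbour formulation is an assumption for illustration, not necessarily the metric used in the paper:

```python
import numpy as np

def rms_cloud_error(reconstruction, reference):
    """RMS of nearest-neighbour distances from a reconstructed point
    cloud (Nx3) to a reference cloud (Mx3), e.g. a TLS survey.
    Brute force O(N*M) for clarity; a KD-tree would be used in practice."""
    diffs = reconstruction[:, None, :] - reference[None, :, :]
    nn_dist = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
    return np.sqrt((nn_dist ** 2).mean())
```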

    Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture

    This thesis focuses on the use of line scanning hyperspectral sensors on mobile ground-based platforms, applying them to agricultural applications. First, the work deals with the geometric and radiometric calibration and correction of acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and inevitable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. This thesis therefore contributes an evaluation of an approach that compensates for changes in illumination and obtains reflectance with less labour than traditional empirical methods. Convenient field protocols are produced that require only a representative set of illumination and reflectance spectral samples. In addition, a method is developed for determining a line scanning camera's rigid 6 degree of freedom (DOF) offset, and its uncertainty, with respect to a navigation system, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two agricultural applications. The first is a self-supervised weed detection framework that allows training of a per-pixel classifier on hyperspectral data without manual labelling. Experiments support the effectiveness of the framework, which rivals classifiers trained on hand-labelled data. The thesis then demonstrates orchard-wide mapping of mango maturity from hyperspectral data using efficient image scanning techniques, a world-first result. A novel classification, regression and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite complex micro-illumination climates under the canopy.
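
The illumination-compensation work described above builds on the standard white-reference (flat-field) reflectance conversion as a baseline. The function below is a generic sketch of that baseline only, with array shapes and names chosen for illustration; the thesis's own less labour-intensive method extends beyond this:

```python
import numpy as np

def to_reflectance(raw, white_ref, dark_ref):
    """Convert raw hyperspectral counts to reflectance using a white
    reference panel and a dark-current measurement.

    raw: (pixels, bands) digital numbers from the line scanner.
    white_ref, dark_ref: (bands,) spectra captured under the same
    illumination as `raw` (the key assumption this correction makes).
    """
    denom = np.clip(white_ref - dark_ref, 1e-9, None)  # avoid divide-by-zero
    return (raw - dark_ref) / denom
```

Under changing outdoor illumination this assumption breaks down, which is precisely why the thesis evaluates a compensation approach that needs only representative illumination and reflectance samples.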

    Pose estimation and data fusion algorithms for an autonomous mobile robot based on vision and IMU in an indoor environment

    Thesis (PhD (Computer Engineering))--University of Pretoria, 2021. Autonomous mobile robots have become an active research direction in recent years, and they are emerging in sectors such as industry, hospitals, institutions, agriculture and homes to improve services and daily activities. Owing to technological advancement, demand for mobile robots has grown because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, delivering goods, search and rescue missions, and performing dangerous tasks in places like underground mines. Instead of workers being exposed to hazardous chemicals or environments that could affect their health and put lives at risk, humans are being replaced by mobile robot services. These concerns make enhancing mobile robot operation necessary, a process assisted by sensors. Sensors are the instruments that collect the data that help the robot navigate and localise in its environment. Each sensor type has inherent strengths and weaknesses, so an inappropriate combination of sensors can result in a high cost of sensor deployment with low performance. Despite their potential and prospects, autonomous mobile robots have yet to attain optimal performance because of the integral challenges they face, most notably localisation. Localisation is one of the fundamental issues in mobile robotics that demands attention; the challenging part is estimating the robot's position and orientation, information which can be acquired from sensors and other relevant systems. To tackle the issue of localisation, a good technique should deal with errors, degrading factors, and improper measurements and estimations. Different approaches have been recommended for estimating the position of a mobile robot.
    Some studies estimated the trajectory of the mobile robot and reconstructed indoor scenes using monocular visual odometry; this approach is not feasible for large zones and complex environments. Radio frequency identification (RFID) technology, on the other hand, provides accuracy and robustness, but the method depends on the distance between the tags, and between the tags and the reader. To increase localisation accuracy, the number of RFID tags per unit area has to be increased, so this technique may not yield an economical and easily scalable solution because of the growing number of required tags and the associated deployment cost. The Global Positioning System (GPS) is another approach that offers proven results in most scenarios; however, indoor localisation is one of the settings in which GPS cannot be used, because the signal strength is not reliable inside a building. Most approaches are unable to precisely localise an autonomous mobile robot even with costly equipment and complex implementation, and most of the devices and sensors either require additional infrastructure or are unsuitable for indoor environments. This study therefore proposes using data from vision and inertial sensors, comprising a 3-axis accelerometer and a 3-axis gyroscope, also known as 6 degrees of freedom (6-DOF), to determine the pose estimate of a mobile robot. Inertial measurement unit (IMU) based tracking provides fast response, so it can assist vision whenever vision fails due to loss of visual features. The vision sensor helps overcome the characteristic limitation of acoustic sensors for simultaneous multiple-object tracking; with this merit, vision is capable of estimating pose with respect to the object of interest.
    A single sensor or system is not reliable enough to estimate the pose of a mobile robot because of these limitations; therefore, data acquired from the sensors and other sources are combined using a data fusion algorithm to estimate position and orientation within a specific environment. The resulting model is more accurate because it balances the strengths of the different sensors, and the information provided through sensor or data fusion can support more intelligent actions. The proposed algorithms combine data from each of the sensor types to provide the most comprehensive and accurate environmental model possible; they use a set of mathematical equations that provides an efficient computational means to estimate the state of a process. This study investigates state estimation methods to determine the state of a continuously changing system given some observations or measurements. The performance evaluation of the system shows that integrating these sources of information and sensors is necessary. This thesis has provided viable solutions to the challenging problem of localisation in autonomous mobile robots through its adaptability, accuracy, robustness and effectiveness.
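
The "set of mathematical equations" for recursive state estimation described above is characteristic of a Kalman filter. As an illustrative sketch only (a 1-D constant-velocity model, not the thesis's exact formulation), one predict/update cycle fusing an IMU acceleration with a vision position fix might look like:

```python
import numpy as np

def kalman_step(x, P, accel, z_vision, dt, q=0.05, r=0.1):
    """One predict/update cycle fusing an IMU acceleration (control
    input) with a vision position measurement, for state x = [pos, vel].
    q, r are assumed process/measurement noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    B = np.array([0.5 * dt ** 2, dt])       # acceleration input model
    H = np.array([[1.0, 0.0]])              # vision observes position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance

    # Predict: propagate the state with the IMU acceleration.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # Update: correct the prediction with the vision position fix.
    y = z_vision - H @ x                    # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Run in a loop, the filter balances the fast but drifting IMU prediction against the slower, drift-free vision fix, which is the essence of the fusion strategy the thesis develops.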