92 research outputs found

    AN EFFICIENT WEED DETECTION PROCEDURE USING LOW-COST UAV IMAGERY SYSTEM FOR PRECISION AGRICULTURE APPLICATIONS

    The use of Unmanned Aerial Vehicle (UAV) imagery systems for Precision Agriculture (PA) applications has drawn considerable attention over the last decade. As a platform for an imaging sensor, a UAV offers a major advantage: it provides higher spatial resolution images than satellite platforms. Moreover, it allows the user to collect the needed images at any time and to cover agricultural fields faster than terrestrial platforms. UAV imagery systems are therefore capable of bridging the gap between aerial and terrestrial remote sensing. One important PA application for which UAV imagery systems have shown great potential is weed management, and more specifically the weed detection step. The current weed management procedure depends on spraying the whole agricultural field with chemical herbicides to eradicate any weed plants in the field. Although this procedure seems effective, the excessive use of chemicals has a severe effect on the surrounding environment, especially since weed plants do not cover the whole field; they usually spread through only a few spots of it. Therefore, different efforts have been introduced to develop weed detection techniques using UAV imagery systems. Despite their advantages, UAV imagery systems have not drawn wide user interest due to several limitations, including system cost. This paper therefore introduces a new weed detection methodology based on RGB images acquired by a low-cost UAV imagery system. The methodology detects high-density vegetation spots as an indication of weed patches. The achieved results show the potential of the proposed methodology, using a low-cost UAV equipped with a low-cost RGB sensor, to detect weed patches in different cropped agricultural fields, even at different flight heights of 20, 40, 80, and 120 metres.
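
    The abstract does not spell out the detection algorithm, but the core idea of flagging high-density vegetation spots in an RGB image can be sketched as below. The Excess-Green (ExG) index, the vegetation threshold, and the block-density threshold are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch: flag image blocks whose vegetation density is high,
# as candidate weed patches. Thresholds here are assumed values.
import numpy as np

def weed_patch_mask(rgb, block=32, density_thresh=0.6):
    """Return a boolean grid, True where a block looks like a weed patch."""
    img = rgb.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2.0 * g - r - b                      # Excess-Green vegetation index
    veg = exg > 0.1                            # crude per-pixel vegetation mask
    h, w = veg.shape
    h, w = h - h % block, w - w % block        # crop to whole blocks
    blocks = veg[:h, :w].reshape(h // block, block, w // block, block)
    density = blocks.mean(axis=(1, 3))         # vegetation fraction per block
    return density > density_thresh            # dense vegetation = candidate weed
```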

    EVALUATION OF DYNAMIC AD-HOC UWB INDOOR POSITIONING SYSTEM

    Ultra-wideband (UWB) technology has witnessed tremendous development and advancement in the past few years. Currently available UWB transceivers can provide high-precision time-of-flight measurements, which correspond to range measurements with a theoretical accuracy of a few centimetres. Position estimation using range measurements is performed by measuring the ranges from a rover, or dynamic node, to a set of anchor points with known positions. However, building a flexible and accurate indoor positioning system requires more than just accurate range measurements. The performance of an indoor positioning system is affected by the number and configuration of the anchor points used, along with the accuracy of the anchor positions. This paper introduces LocSpeck, a dynamic ad-hoc positioning system based on the DW1000 UWB transceiver from Decawave. LocSpeck is composed of a set of identical nodes communicating on a common RF channel, forming a fully or partially connected network in which the positioning algorithm runs on each node. Each LocSpeck node can act as an anchor or a rover, and the role can change dynamically during the same session. The number of nodes in the network can also change dynamically, since the LocSpeck firmware supports adding and removing nodes on-the-fly. The paper compares the performance of the LocSpeck system with a commercially available off-the-shelf UWB positioning system. Different operating scenarios are considered when evaluating the performance of the system, including cases where collaboration between the two systems is considered.
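
    The range-based positioning principle described above can be illustrated with a small multilateration solver: given ranges to anchors with known coordinates, estimate the rover position by Gauss-Newton least squares. This is a generic sketch of the principle, not the LocSpeck implementation.

```python
# Estimate a rover position from UWB ranges to known anchors.
import numpy as np

def multilaterate(anchors, ranges, iters=10):
    x = anchors.mean(axis=0)                   # start from the anchor centroid
    for _ in range(iters):
        diff = x - anchors                     # (n, 3) anchor-to-rover vectors
        pred = np.linalg.norm(diff, axis=1)    # predicted ranges
        J = diff / pred[:, None]               # Jacobian d(range)/d(position)
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x = x + dx
    return x

anchors = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [5, 5, 2]], float)
truth = np.array([2.0, 3.0, 1.0])
meas = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.02, 4)
print(multilaterate(anchors, meas))            # close to [2, 3, 1]
```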

    VANISHING POINT AIDED LANE DETECTION USING A MULTI-SENSOR SYSTEM

    Lane detection is a critical component of an autonomous driving system and can be integrated alongside a high-definition (HD) map to improve the accuracy and reliability of the system. Typically, lane detection is achieved using computer vision algorithms such as edge detection and the Hough transform, deep learning-based algorithms, or motion-based algorithms to detect and track the lanes on the road. However, these approaches can produce incorrectly detected line segments containing outliers. To address these issues, we propose a vanishing point aided lane detection method that utilizes both camera and LiDAR sensors and then employs a RANSAC-based post-processing step to remove potential outliers and improve the accuracy of the detected lanes. We evaluated this method on four datasets provided by the KITTI Benchmark Suite and achieved a total precision of 87%.
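
    A minimal sketch of the RANSAC-style outlier removal step mentioned above: fit a dominant line to candidate lane points and keep only the inliers. The point representation, iteration count, and distance tolerance are assumptions for illustration.

```python
# RANSAC line fit over 2D candidate lane points; returns the inlier mask.
import numpy as np

def ransac_line(points, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    best_mask, best_count = None, 0
    for _ in range(iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        n = np.array([-d[1], d[0]])            # normal to the candidate line
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                           # degenerate sample, skip
        n /= norm
        dist = np.abs((points - p1) @ n)       # point-to-line distances
        mask = dist < tol
        if mask.sum() > best_count:
            best_mask, best_count = mask, mask.sum()
    return best_mask                           # True = inlier lane point
```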

    KINEMATIC CALIBRATION USING LOW-COST LiDAR SYSTEM FOR MAPPING AND AUTONOMOUS DRIVING APPLICATIONS

    Recently, mapping sensors for land-based Mobile Mapping Systems (MMSs) have combined cameras with laser scanning measurements, known as Light Detection and Ranging (LiDAR). These mobile laser scanning (MLS) systems can be used in dynamic environments and adopted in traffic-related applications, such as the collection of road network databases and the inventory of traffic signs and surface conditions. However, most LiDAR systems are expensive and not easy to access. Moreover, driven by the increasing demand for autonomous driving, low-cost LiDAR systems, such as those from Velodyne or SICK, have become more and more popular. These kinds of systems do not provide a complete solution: users need to integrate them with an Inertial Navigation System / Global Navigation Satellite System (INS/GNSS) or a camera themselves to meet their requirements. The transformation between the LiDAR and INS frames must be carefully computed before conducting direct geo-referencing. To solve these issues, this research proposes a kinematic calibration model for a land-based INS/GNSS/LiDAR system. The calibration model is derived from the direct geo-referencing model and is based on the condition that target points lie on planar surfaces. The calibration parameters include the boresight and lever arm as well as the plane coefficients; the model takes into account the plane coefficients, the laser and INS/GNSS observations, and the boresight and lever arm. The fundamental idea is the constraint that geo-referenced point clouds observed from different directions should lie on the same plane during the calibration. After the calibration process, two evaluations use the calibration parameters to assess the performance of the proposed applications. The first evaluation focuses on direct geo-referencing: we compared the target planes composed of geo-referenced points before and after the calibration. The second evaluation concentrates on the positioning improvement obtained after feeding aiding measurements from LiDAR Simultaneous Localization and Mapping (SLAM) into the INS/GNSS. It is worth mentioning that only one or two planes need to be adopted during the calibration process, and no extra arrangement is required to set up the calibration field; the only requirement is an open-sky area with clear planar structures, such as walls or buildings. Beyond its contribution to MMSs and mapping, this research also addresses self-driving applications by improving positioning ability and stability.
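
    The planar constraint at the heart of the calibration can be sketched as a residual function: a LiDAR point, geo-referenced with the current boresight and lever-arm estimates, should lie on its target plane, so the signed point-to-plane distance is driven to zero. Frame names and the plane parameterization below are assumptions based on the abstract, not the paper's exact notation.

```python
# Residual of the plane condition used in the kinematic calibration sketch.
import numpy as np

def plane_residual(p_lidar, R_m_b, t_m_b, R_b_s, lever_arm, plane):
    """Signed distance of a geo-referenced LiDAR point to its plane.

    p_lidar   : 3-vector, point in the LiDAR (sensor) frame
    R_m_b     : 3x3 INS/GNSS attitude, body -> mapping frame
    t_m_b     : 3-vector, INS/GNSS position in the mapping frame
    R_b_s     : 3x3 boresight rotation, sensor -> body frame
    lever_arm : 3-vector, sensor origin expressed in the body frame
    plane     : (n, d) with unit normal n and offset d, so n.p + d = 0
    """
    # Direct geo-referencing: sensor point -> body frame -> mapping frame.
    p_map = t_m_b + R_m_b @ (R_b_s @ p_lidar + lever_arm)
    n, d = plane
    return n @ p_map + d   # zero when calibration and plane are consistent
```

    Stacking this residual over points observed from different directions, with the boresight, lever arm, and plane coefficients as unknowns, yields the least-squares adjustment the abstract describes.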

    ULTRASONIC BASED HEADING ESTIMATION FOR AIDING LAND VEHICLE NAVIGATION IN GNSS DENIED ENVIRONMENT

    This paper introduces a novel approach for land vehicle navigation in GNSS-denied environments by aiding the Inertial Navigation System (INS) with a very low-cost ultrasonic sensor through an Extended Kalman Filter (EKF), bounding the INS drift during GNSS blockage with a heading-change update that enhances the navigation estimation. The ultrasonic sensor is mounted on the body of the car facing the direction of motion, behind the front right wheel; a wooden surface is mounted on the car body on the other side of this wheel at a constant distance from the sensor. The ultrasonic sensor measures this range as long as the car is moving straight. When the orientation changes, the ultrasonic sensor instead senses the range to the front right wheel. The relation between the range and the GNSS/INS-estimated change of heading is estimated during GNSS availability through a linear regression model. During GNSS signal outages, the ultrasonic sensor then provides a heading-change update to the INS standalone navigation solution. Experimental road tests were performed, and the results show that the navigation state estimation using the proposed aiding is improved compared with the INS standalone navigation solution during GNSS signal outages. For multiple GNSS outages of 60 seconds, the inclusion of the proposed update reduced the position RMSE to around 80% of its value when using only the non-holonomic constraints and velocity update.
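
    The range-to-heading-change mapping can be sketched in two phases, following the abstract: fit a linear model while GNSS is available, then use it to predict heading-change updates during outages. The sample values and the update interface below are illustrative assumptions.

```python
# Train a linear range -> heading-change model during GNSS availability,
# then query it during outages as the EKF heading-change measurement.
import numpy as np

# Training pairs collected over the same epochs (values are illustrative).
ranges = np.array([0.52, 0.48, 0.55, 0.40, 0.60])     # ultrasonic range, metres
dheading = np.array([0.1, -0.4, 0.5, -1.2, 1.1])      # GNSS/INS heading change, deg

slope, intercept = np.polyfit(ranges, dheading, 1)    # linear regression fit

def heading_change_update(range_m):
    """Predicted heading change to feed the EKF during a GNSS outage."""
    return slope * range_m + intercept
```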

    PROGRESS ON ISPRS BENCHMARK ON MULTISENSORY INDOOR MAPPING AND POSITIONING

    This paper presents the design of the benchmark dataset on multisensory indoor mapping and positioning (MIMAP), which is sponsored by ISPRS scientific initiatives. The benchmark dataset includes point clouds captured by an indoor mobile laser scanning (IMLS) system in indoor environments of various complexity. The benchmark aims to stimulate and promote research in the following three fields: (1) SLAM-based indoor point cloud generation; (2) automated BIM feature extraction from point clouds, with an emphasis on the elements, such as floors, walls, ceilings, doors, windows, stairs, lamps, switches, and air outlets, that are involved in building management and navigation tasks; and (3) low-cost multisensory indoor positioning, focusing on smartphone platform solutions. MIMAP provides a common framework for the evaluation and comparison of LiDAR-based SLAM, BIM feature extraction, and smartphone indoor positioning methods.

    ENHANCEMENT OF REAL-TIME SCAN MATCHING FOR UAV INDOOR NAVIGATION USING VEHICLE MODEL

    Autonomous Unmanned Aerial Vehicles (UAVs) have drawn great attention from different organizations because of their various applications that save time, cost, effort, and human lives. The navigation of an autonomous UAV mainly depends on the fusion of a Global Navigation Satellite System (GNSS) and an Inertial Measurement Unit (IMU). Navigation in indoor environments is a challenging task because of GNSS signal unavailability, especially when the utilized IMU is low-cost. Light Detection and Ranging (LiDAR) is one of the main sensors utilized in indoor environments for localization through scan matching of successive scans. Calculating the rotation and translation between successive scans can employ different approaches, such as Iterative Closest Point (ICP) with its variants, or Hector SLAM. The iterative nature of ICP and Hector SLAM can greatly increase the matching time, and convergence is not guaranteed in the case of harsh maneuvers, moving objects, or short-range LiDAR, as the algorithms may get stuck in local minima. This paper proposes enhanced real-time ICP and Hector SLAM algorithms based on a vehicle model (VM) during sharp maneuvers. The vehicle model serves as an initialization step (coarse alignment), and ICP/Hector SLAM then serves as the fine alignment step. Test cases of quadcopter flights with harsh maneuvers were carried out with LiDAR to evaluate the proposed approach in terms of enhancing the ICP/Hector convergence time and accuracy. The proposed algorithm is convenient for UAVs with size, weight, and power limitations, as it is a stand-alone algorithm that does not require any additional sensors.
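
    The coarse-then-fine idea can be sketched as a 2D ICP that accepts an initial transform, with the vehicle-model prediction supplying that initial guess. The ICP below is a generic point-to-point variant with an SVD (Kabsch) solve; the vehicle-model prediction itself is assumed to come from elsewhere.

```python
# 2D ICP refinement of a vehicle-model pose prediction (R0, t0).
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, R0, t0, iters=20):
    """Refine (R0, t0) so that scan `src` aligns with scan `dst`."""
    R, t = R0, t0                              # coarse alignment from vehicle model
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)             # nearest-neighbour correspondences
        mu_s, mu_d = moved.mean(0), dst[idx].mean(0)
        H = (moved - mu_s).T @ (dst[idx] - mu_d)
        U, _, Vt = np.linalg.svd(H)            # Kabsch best-fit rotation
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:              # guard against a reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_d - dR @ mu_s
        R, t = dR @ R, dR @ t + dt             # compose incremental update
    return R, t
```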

    HYBRID DEEP LEARNING APPROACH FOR VEHICLE’S RELATIVE ATTITUDE ESTIMATION USING MONOCULAR CAMERA

    Relative pose estimation using a monocular camera is one of the most common approaches for aiding a vehicle's navigation. It involves determining the position and orientation of a vehicle relative to its surroundings using only a single camera. This can be achieved through four main steps: feature detection and matching, motion estimation, filtering and optimization, and scale estimation. Feature tracking involves detecting and tracking distinctive visual features in the environment, such as corners or edges, and using their relative motion to estimate the camera's movement. This approach is prone to errors due to feature detection and tracking difficulties, as well as issues with moving objects, occlusions, and changes in lighting conditions. These typical computer vision approaches are also computationally intensive and may require significant processing power, which limits their real-time application. This paper proposes a hybrid deep neural network approach for estimating the relative attitude of a vehicle using a monocular camera to aid vehicle navigation. The proposed neural network adopts a relatively shallow architecture to minimize the computational cost and to meet the real-time requirements of low-cost processing systems. The network is trained on the KITTI dataset and estimates the relative attitude of the vehicle with an RMSE of 0.017 degrees per frame in relative orientation. The processing time of the proposed approach is around 28 ms per frame, including both the tracking and network prediction steps, which is significantly faster than typical estimation pipelines. The results show that the proposed approach is a viable alternative to conventional computer vision methods: it can significantly reduce computational costs and deal with confusing scenarios involving moving objects while maintaining good accuracy in estimating ego-motion.
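
    The abstract does not specify the architecture, but a shallow frame-pair regressor in its spirit might look like the following: two stacked RGB frames in, a per-frame relative attitude (roll, pitch, yaw) out. All layer sizes are assumptions for illustration.

```python
# Illustrative shallow CNN regressing relative attitude from a frame pair.
import torch
import torch.nn as nn

class RelativeAttitudeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # kept shallow for real-time use
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)            # roll, pitch, yaw increments

    def forward(self, frame_pair):              # (B, 6, H, W): two stacked frames
        return self.head(self.features(frame_pair).flatten(1))

net = RelativeAttitudeNet()
out = net(torch.randn(1, 6, 128, 416))          # -> attitude deltas, shape (1, 3)
```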

    DEEP LEARNING FOR OBJECT DETECTION USING RADAR DATA

    Deep learning algorithms are becoming increasingly instrumental in autonomous driving by identifying and recognizing road entities to ensure secure navigation and decision-making. Autonomous car datasets play a vital role in developing and evaluating perception systems. Nevertheless, the majority of current datasets are acquired using Light Detection and Ranging (LiDAR) and camera sensors. Deep neural networks yield remarkable object recognition results when applied to camera and LiDAR data, but these sensors perform poorly under adverse weather conditions such as rain, fog, and snow because of their operating wavelengths. This paper aims to evaluate the use of a RADAR dataset for detecting objects in adverse weather conditions, where LiDAR and cameras may fail to be effective. The paper presents two object detection experiments using the Faster R-CNN architecture with a ResNet-50 backbone and COCO evaluation metrics: Experiment 1 detects a single class, while Experiment 2 detects eight classes. As expected, the average precision (AP) for detecting one class (47.2) is better than that for detecting eight classes (27.4). Compared with results reported in the literature, which achieved an overall AP of 45.77, the result of Experiment 1 is slightly better, mainly due to hyper-parameter optimization. The outcomes of RADAR-based object detection and recognition indicate the potential effectiveness of RADAR data in automotive applications, particularly in adverse weather conditions where vision and LiDAR may encounter limitations.
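
    A minimal sketch of the detector setup named above: torchvision's Faster R-CNN with a ResNet-50 FPN backbone, with the box head resized for the experiment's class count (one class in Experiment 1, eight in Experiment 2, plus background). Feeding radar maps as 3-channel image tensors is an assumption about the data pipeline, not a detail from the paper.

```python
# Faster R-CNN (ResNet-50 FPN) with a head resized for N object classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_object_classes):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    # +1 because torchvision detection heads reserve class 0 for background.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_object_classes + 1)
    return model

detector = build_detector(num_object_classes=8)   # Experiment 2 setup
```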