
    Multiple Target Tracking with Recursive-RANSAC and Machine Learning

    The Recursive-Random Sample Consensus (R-RANSAC) algorithm is a novel multiple target tracker designed to excel in tracking scenarios with high amounts of clutter measurements. R-RANSAC classifies each incoming measurement as an inlier or an outlier; inliers are used to update existing tracks, whereas outliers are used to generate new hypothesis tracks using the standard RANSAC algorithm. R-RANSAC is entirely autonomous in that it initiates, updates, and deletes tracks without user input. The tracking capabilities of R-RANSAC are extended by merging the algorithm with the Sequence Model (SM). The SM is a machine learner that learns sequences of identifiers. In the tracking context, the SM is used to learn sequences of target locations; in essence, it learns target trajectories and creates a probability distribution of future target locations. Simulation results demonstrate significant performance improvement when R-RANSAC is augmented with the SM, most noticeably in situations with high signal-to-noise ratio (SNR) and infrequent measurement updates.
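The inlier/outlier split and the RANSAC-based track hypothesis step described above can be sketched as follows. This is an illustrative 2-D toy, not the authors' implementation: the gate size, the tolerance, and the use of a straight line as the hypothesis track model are assumptions made here.

```python
import numpy as np

def classify(z, tracks, gate=3.0):
    """Gate an incoming measurement against existing tracks: return the
    index of the first track whose state is within `gate` of z (an inlier),
    or None (an outlier, to be stored for hypothesis generation)."""
    for i, x in enumerate(tracks):
        if np.linalg.norm(np.asarray(z) - np.asarray(x)) <= gate:
            return i
    return None

def ransac_line(points, iters=100, tol=1.0, seed=0):
    """Fit a straight-line hypothesis track to stored outlier measurements
    with vanilla RANSAC: sample 2 points, count supporting points, repeat."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best, best_count = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n == 0:
            continue                     # degenerate sample
        d /= n
        r = pts - pts[i]
        # perpendicular distance of every point to the candidate line
        dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])
        count = int((dist <= tol).sum())
        if count > best_count:
            best, best_count = (pts[i], d), count
    return best, best_count
```

A new track would be spawned only when `best_count` clears a minimum-support threshold, mirroring the autonomous track-initiation behavior described in the abstract.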

    Doubly Robust Smoothing of Dynamical Processes via Outlier Sparsity Constraints

    Coping with outliers contaminating dynamical processes is of major importance in various applications because mismatches from nominal models are not uncommon in practice. In this context, the present paper develops novel fixed-lag and fixed-interval smoothing algorithms that are robust to outliers simultaneously present in the measurements and in the state dynamics. Outliers are handled through auxiliary unknown variables that are jointly estimated along with the state based on a least-squares criterion regularized with the ℓ1-norm of the outliers in order to effect sparsity control. The resultant iterative estimators rely on coordinate descent and the alternating direction method of multipliers, are expressed in closed form per iteration, and are provably convergent. Additional attractive features of the novel doubly robust smoother include: i) the ability to handle both types of outliers; ii) universality to unknown nominal noise and outlier distributions; iii) flexibility to encompass maximum a posteriori optimal estimators with reliable performance under nominal conditions; and iv) improved performance relative to competing alternatives at comparable complexity, as corroborated via simulated tests. (Comment: Submitted to IEEE Trans. on Signal Processing.)
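The ℓ1-regularized least-squares idea can be illustrated on a scalar toy problem. The objective below (quadratic data fit, a smoothness penalty standing in for the state dynamics, and an ℓ1 penalty on outlier variables) is a simplified stand-in for the paper's criterion, with invented weights `mu` and `lam`; the alternating closed-form x-update and soft-threshold o-update mirror the coordinate-descent structure described above.

```python
import numpy as np

def soft(v, thr):
    """Soft-thresholding operator: the closed-form minimizer for an
    ell-1-penalized scalar least-squares term."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def robust_smooth(y, mu=10.0, lam=2.0, iters=50):
    """Alternating minimization of the toy criterion
        ||y - x - o||^2 + mu * ||D x||^2 + lam * ||o||_1
    where D is the first-difference operator (a random-walk smoothness
    prior standing in for the state dynamics)."""
    T = len(y)
    D = np.diff(np.eye(T), axis=0)       # first-difference operator
    A = np.eye(T) + mu * D.T @ D         # normal equations for the x-step
    o = np.zeros(T)
    for _ in range(iters):
        x = np.linalg.solve(A, y - o)    # closed-form x-update
        o = soft(y - x, lam / 2.0)       # soft-threshold o-update
    return x, o
```

On a flat signal with a single large measurement spike, the spike is absorbed almost entirely by the outlier variable `o`, leaving the state estimate `x` nearly unperturbed, which is the "doubly robust" behavior in miniature.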

    Safe2Ditch Steer-To-Clear Development and Flight Testing

    This paper describes a series of small unmanned aerial system (sUAS) flights performed at NASA Langley Research Center in April and May of 2019 to test a newly added Steer-to-Clear feature for the Safe2Ditch (S2D) prototype system. S2D is an autonomous crash management system for sUAS. Its function is to detect the onset of an emergency for an autonomous vehicle and to enable that vehicle in distress to execute a safe landing that avoids injuring people on the ground or damaging property. Flight tests were conducted at the City Environment Range for Testing Autonomous Integrated Navigation (CERTAIN) range at NASA Langley. Prior testing of S2D focused on rerouting to an alternate ditch site when an occupant was detected in the primary ditch site. For Steer-to-Clear testing, S2D was limited to a single ditch site option to force engagement of the Steer-to-Clear mode. The implementation of Steer-to-Clear for the flight prototype used a simple method that divides the target ditch site into four quadrants. An RC car was driven in circles in one quadrant to simulate an occupant in that ditch site. A simple implementation of Steer-to-Clear was programmed to land in the opposite quadrant to maximize distance to the occupant's quadrant. A successful mission was tallied when this occurred. Out of nineteen flights, thirteen resulted in successful missions. Data logs from the flight vehicle and the RC car indicated that unsuccessful missions were due to geolocation error between the actual location of the RC car and the location derived by the Vision Assisted Landing component of S2D on the flight vehicle. Video data indicated that while the Vision Assisted Landing component reliably identified the location of the ditch site occupant in the image frame, the conversion of the occupant's location to earth coordinates was sometimes adversely impacted by errors in the sensor data needed to perform the transformation.
Logged sensor data was analyzed in an attempt to identify the primary error sources and their impact on geolocation accuracy. Three trends were observed in the data evaluation phase. In the first trend, errors in geolocation were relatively large at the flight vehicle's cruise altitude but decreased as the vehicle descended. This was the expected behavior and was attributed to sensor errors of the inertial measurement unit (IMU). The second trend showed a distinct sinusoidal error throughout the descent that did not always diminish with altitude. The third trend showed high scatter in the data, which did not correlate well with altitude. Possible sources of the observed errors and compensation techniques are discussed.
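The four-quadrant Steer-to-Clear rule described above, land in the quadrant diagonally opposite the occupant, can be sketched in a few lines; the coordinates and site geometry here are hypothetical, not the flight prototype's code.

```python
def opposite_quadrant(occupant_xy, site_center, site_half_width):
    """Return a landing point at the center of the quadrant diagonally
    opposite the occupant (toy version of the Steer-to-Clear rule).

    The ditch site is a square of side 2*site_half_width centered on
    site_center; each quadrant center sits half a half-width from the
    site center along each axis."""
    dx = occupant_xy[0] - site_center[0]
    dy = occupant_xy[1] - site_center[1]
    off = site_half_width / 2.0
    # step away from the occupant along both axes
    sx = -off if dx >= 0 else off
    sy = -off if dy >= 0 else off
    return (site_center[0] + sx, site_center[1] + sy)
```

Choosing the diagonally opposite quadrant maximizes the distance between the touchdown point and the occupant's quadrant, which is the success criterion the flight tests tallied.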

    Robust automatic target tracking based on a Bayesian ego-motion compensation framework for airborne FLIR imagery

    Automatic target tracking in airborne FLIR imagery is currently a challenge due to camera ego-motion. This phenomenon distorts the spatio-temporal correlation of the video sequence, which dramatically reduces tracking performance. Several works address this problem using ego-motion compensation strategies. They use a deterministic approach to compensate for the camera motion, assuming a specific model of geometric transformation. However, in real sequences a single geometric transformation cannot accurately describe the camera ego-motion for the whole sequence, and as a consequence the performance of the tracking stage can decrease significantly, or even fail completely. The optimal transformation for each pair of consecutive frames depends on the relative depth of the elements that compose the scene and their degree of texturization. In this work, a novel Particle Filter framework is proposed to efficiently manage several hypotheses of geometric transformation: Euclidean, affine, and projective. Each type of transformation is used to compute candidate locations of the object in the current frame. Then, each candidate is evaluated by the measurement model of the Particle Filter using appearance information. This approach is able to adapt to different camera ego-motion conditions and thus perform the tracking satisfactorily. The proposed strategy has been tested on the AMCOM FLIR dataset, showing high efficiency in the tracking of different types of targets under real working conditions.
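A minimal sketch of the multiple-hypothesis particle set: each particle carries a transformation-type label alongside its candidate object location, is propagated under its own hypothesis, and the set is resampled by appearance weight. Reducing each transformation type to a different diffusion scale is an illustration-only simplification, not the paper's actual motion model.

```python
import numpy as np

rng = np.random.default_rng(1)

TRANSFORMS = ("euclidean", "affine", "projective")  # hypothesis set

def propagate(particle, motion_noise=1.0):
    """Move a particle's candidate location under its transform hypothesis.
    Each hypothesis is reduced here to a different noise scale, purely to
    illustrate that particles of different types evolve differently."""
    scale = {"euclidean": 1.0, "affine": 1.5, "projective": 2.0}[particle["type"]]
    particle["pos"] = particle["pos"] + rng.normal(0.0, motion_noise * scale, 2)
    return particle

def resample(particles, weights):
    """Multinomial resampling by (appearance-model) weight: hypotheses that
    explain the observed appearance survive, the rest die out."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [dict(particles[i]) for i in idx]
```

In the full framework, the weight of each particle would come from the appearance-based measurement model evaluated at its candidate location, so the filter automatically shifts mass toward whichever transformation type currently explains the camera ego-motion best.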

    RANSAC for Robotic Applications: A Survey

    Random Sample Consensus, most commonly abbreviated as RANSAC, is a robust estimation method for the parameters of a model contaminated by a sizable percentage of outliers. In its simplest form, the process starts with a sampling of the minimum data needed to perform an estimation, followed by an evaluation of its adequacy, and further repetitions of this process until some stopping criterion is met. Multiple variants have been proposed in which this workflow is modified, typically tweaking one or several of these steps for improvements in computing time or in the quality of the parameter estimates. RANSAC is widely applied in the field of robotics, for example, for finding geometric shapes (planes, cylinders, spheres, etc.) in point clouds or for estimating the best transformation between different camera views. In this paper, we present a review of the current state of the art of RANSAC family methods with a special interest in applications in robotics. This work has been partially funded by the Basque Government, Spain, under Research Teams Grant number IT1427-22 and under ELKARTEK LANVERSO Grant number KK-2022/00065; the Spanish Ministry of Science (MCIU), the State Research Agency (AEI), and the European Regional Development Fund (FEDER), under Grant number PID2021-122402OB-C21 (MCIU/AEI/FEDER, UE); and the Spanish Ministry of Science, Innovation and Universities, under Grant FPU18/04737.
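The simplest-form loop described above (sample the minimal data, evaluate support, repeat until a stopping criterion) can be sketched for the plane-fitting use case the survey mentions; the tolerance and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def ransac_plane(pts, iters=200, tol=0.05, seed=0):
    """Vanilla RANSAC plane fit on a point cloud: sample 3 points (the
    minimal set for a plane), build the plane n.x + d = 0, count points
    within `tol` of it, and keep the model with the largest support."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, float)
    best_n, best_d, best_count = None, 0.0, 0
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)
        nn = np.linalg.norm(n)
        if nn < 1e-12:
            continue                         # collinear (degenerate) sample
        n /= nn
        d = -n @ a
        count = int((np.abs(pts @ n + d) <= tol).sum())
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count
```

The variants the survey reviews typically modify one of these steps, e.g. guided rather than uniform sampling, cheaper support evaluation, or an adaptive stopping criterion in place of the fixed iteration count used here.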

    TRADE: Object Tracking with 3D Trajectory and Ground Depth Estimates for UAVs

    We propose TRADE for robust tracking and 3D localization of a moving target in cluttered environments from UAVs equipped with a single camera. Ultimately, TRADE enables 3D-aware target following. Tracking-by-detection approaches are vulnerable to target switching, especially between similar objects. Thus, TRADE predicts and incorporates the target's 3D trajectory to select the right target from the tracker's response map. Unlike in static environments, depth estimation of a moving target from a single camera is an ill-posed problem. Therefore, we propose a novel 3D localization method for ground targets on complex terrain. It reasons about scene geometry by combining ground-plane segmentation, depth-from-motion, and single-image depth estimation. The benefits of using TRADE are demonstrated as tracking robustness and depth accuracy on several dynamic scenes simulated in this work. Additionally, we demonstrate autonomous target following using a thermal camera by running TRADE on a quadcopter's onboard computer.
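One ingredient of this kind of scene-geometry reasoning, recovering a ground target's 3D position by intersecting the camera ray with a known ground plane, can be sketched as follows. This is a generic construction under the assumption of a locally planar ground, not TRADE's actual code.

```python
import numpy as np

def ground_intersection(cam_pos, ray_dir,
                        plane_n=np.array([0.0, 0.0, 1.0]), plane_d=0.0):
    """Intersect the camera ray cam_pos + t * ray_dir (t > 0) with the
    ground plane n . x + d = 0, returning the 3D point or None if the
    ray is parallel to the ground or points away from it."""
    denom = plane_n @ ray_dir
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the ground plane
    t = -(plane_n @ cam_pos + plane_d) / denom
    return None if t <= 0 else cam_pos + t * ray_dir
```

On complex terrain a single global plane is exactly what breaks down, which is why the abstract combines this cue with depth-from-motion and single-image depth estimation instead of relying on it alone.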

    Multiple-target tracking using spectropolarimetric imagery

    Detection and tracking methods are two active research topics in the field of multiple target tracking. Change detection and motion tracking are often used to detect and track moving vehicles; in this thesis, new approaches are provided to improve both aspects. On the detection side, a combined detection method is presented: combining RX (Reed-Xiaoli) detection with change detection has demonstrated good performance in highly cluttered, dynamic ground-based scenes. On the tracking side, a Kalman filter and Global Nearest Neighbor (GNN) association are applied in motion tracking to predict target locations and perform data association, respectively. Spectral features are extracted for each vehicle to overcome the limitations of motion-only tracking through feature matching, with the Bhattacharyya distance used as the matching criterion. Our algorithm has been tested on three data sets. One is a set of multispectral polarimetric imagery acquired by the Multispectral Aerial Passive Polarimeter System (MAPPS); the other two are spectropolarimetric image sets generated by the Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool. Tracking performance is analyzed with two metrics: track purity and Multiple Object Tracking Accuracy (MOTA). For the MAPPS data, the average MOTA and track purity of feature-aided tracking increase by 1 percent and 9 percent, respectively, over those of motion-only tracking. For the DIRSIG data with trees, the average track purity of feature-aided tracking in the noise-free case increases by 2 percent over that of motion-only tracking. In this work, we have demonstrated the capability of these detection and tracking methods in a complex environment.
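The Bhattacharyya-distance matching criterion mentioned above has a compact closed form for normalized feature distributions; this sketch assumes the spectral features are represented as histograms, which is one common choice, not necessarily the thesis's exact feature encoding.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two feature histograms, as might be
    used to match a track's stored spectral signature against a candidate
    detection: 0 for identical distributions, large for disjoint ones."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()                       # normalize to probability mass
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-300))       # clamp to avoid log(0)
```

In feature-aided association, a candidate measurement would be matched to the track minimizing this distance, breaking ties that motion-only gating (Kalman prediction plus GNN) cannot resolve.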