
    Data Augmentation and Classification of Sea-Land Clutter for Over-the-Horizon Radar Using AC-VAEGAN

    In the sea-land clutter classification of sky-wave over-the-horizon radar (OTHR), imbalanced and scarce data lead to poor performance of deep learning-based classification models. To solve this problem, this paper proposes an improved auxiliary classifier generative adversarial network (AC-GAN) architecture, the auxiliary classifier variational autoencoder generative adversarial network (AC-VAEGAN). AC-VAEGAN synthesizes higher-quality sea-land clutter samples than AC-GAN and serves as an effective tool for data augmentation. Specifically, a 1-dimensional convolutional AC-VAEGAN architecture is designed to synthesize sea-land clutter samples. Additionally, an evaluation method combining traditional GAN-domain evaluation with statistical signal-domain evaluation is proposed to assess the quality of the synthetic samples. Using a dataset of OTHR sea-land clutter, both the quality of the synthetic samples and the data-augmentation performance of AC-VAEGAN are verified. Further, the effect of AC-VAEGAN as a data augmentation method on the classification performance for imbalanced and scarce sea-land clutter samples is validated. The experimental results show that samples synthesized by AC-VAEGAN are of higher quality than those of AC-GAN, and that data augmentation with AC-VAEGAN improves classification performance in the case of imbalanced and scarce sea-land clutter samples. Comment: 13 pages, 16 figures
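    The abstract does not spell out the statistical signal-domain evaluation; as a rough, purely illustrative sketch (all data and function names below are hypothetical, not the paper's method), one way to compare batches of real and synthetic 1-D signals is through their summary moments and average power spectra:

```python
import numpy as np

def moment_gap(real, synth):
    """Absolute difference in mean and standard deviation between batches."""
    return abs(real.mean() - synth.mean()), abs(real.std() - synth.std())

def spectral_distance(real, synth):
    """Mean absolute difference between the average power spectra of two
    batches of 1-D signals (rows are individual samples)."""
    ps_real = np.abs(np.fft.rfft(real, axis=1)) ** 2
    ps_synth = np.abs(np.fft.rfft(synth, axis=1)) ** 2
    return float(np.mean(np.abs(ps_real.mean(axis=0) - ps_synth.mean(axis=0))))

# Stand-in data: "good" synthetic samples match the real distribution,
# "bad" ones do not (real OTHR clutter would replace these random draws).
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(64, 128))
good = rng.normal(0.0, 1.0, size=(64, 128))
bad = rng.normal(2.0, 3.0, size=(64, 128))
```

A lower spectral distance and smaller moment gap would then indicate higher-fidelity synthetic samples.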

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost in the future. The results of this study strongly support this statement and should provide further incentive for immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits

    Comparison of forest attributes derived from two terrestrial lidar systems.

    Terrestrial lidar (TLS) is an emerging technology for deriving forest attributes, including conventional inventory and canopy characterizations. However, little is known about the influence of scanner specifications on derived forest parameters. We compared two TLS systems at two sites in British Columbia. Common scanning benchmarks and identical algorithms were used to obtain estimates of tree diameter, position, and canopy characteristics. Visualization of range images and point clouds showed clear differences, even though both scanners were relatively high-resolution instruments. These differences translated into quantifiable differences in impulse penetration, characterization of stems and crowns far from the scan location, and gap fraction. Differences between scanners in estimates of effective plant area index were greater than differences between sites. Both scanners provided a detailed digital model of forest structure, and gross structural characterizations (including crown dimensions and position) were relatively robust; however, comparison of canopy density metrics may require consideration of scanner attributes
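    The gap-fraction comparison mentioned above can be illustrated with a toy computation (the data below are invented): gap fraction is commonly estimated as the proportion of emitted pulses that produce no canopy return, so a scanner that registers fewer hits reports a higher gap fraction:

```python
import numpy as np

def gap_fraction(returns_per_pulse):
    """Fraction of emitted pulses with zero canopy returns."""
    pulses = np.asarray(returns_per_pulse)
    return float(np.mean(pulses == 0))

# Hypothetical return counts, one entry per pulse, for two scanners
# sampling the same canopy from the same benchmark position.
scanner_a = np.array([0, 1, 2, 0, 1, 0, 3, 1, 0, 2])
scanner_b = np.array([0, 0, 1, 0, 0, 0, 2, 0, 0, 1])
print(gap_fraction(scanner_a))  # 0.4
print(gap_fraction(scanner_b))  # 0.7
```

With identical algorithms applied to both point clouds, as in the study, any remaining difference in such metrics is attributable to the instruments themselves.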

    Satellite remote sensing facility for oceanographic applications

    The project organization, design process, and construction of a Remote Sensing Facility at the Scripps Institution of Oceanography in La Jolla, California, are described. The facility is capable of receiving, processing, and displaying oceanographic data received from satellites. The data are primarily imaging data representing multispectral ocean emissions and reflectances, accumulated during 8-to-10-minute satellite passes over the California coast. The most important feature of the facility is the reception and processing of satellite data in real time, allowing investigators to direct ships to areas of interest for on-site verification and experiments

    Obstacle and Change Detection Using Monocular Vision

    We explore change detection using videos of change-free paths, detecting any changes that occur while travelling the same paths in the future. This approach benefits from learning the background model of the given path as preprocessing, detecting changes starting from the first frame, and determining the current location in the path. Two approaches are explored: a geometry-based approach and a deep learning approach. In our geometry-based approach, we use feature points to match testing frames to training frames. Matched frames are used to determine the current location within the training video. The frames are then processed by first registering the test frame onto the training frame through a homography computed from the previously matched feature points. Finally, changes are determined by comparing a region of interest (ROI) covering the robot's direct path in both frames. This approach performs well in many tests with various floor patterns, textures and complexities in the background of the path. In our deep learning approach, we use an ensemble of unsupervised dimensionality reduction models. We first extract feature points within a ROI and extract small frame samples around the feature points. The frame samples are used as training inputs and labels for our unsupervised models. The approach aims to learn a compressed feature representation of the frame samples in order to have a compact representation of the background. We use the distribution of the training samples to compare the learned background directly to test samples, classifying each as background or change by majority vote. This approach performs well using just two models in the ensemble and achieves an overall accuracy of 98.0%, a 4.1% improvement over the geometry-based approach
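    The thesis's exact ensemble models are not given in the abstract; the idea of a compressed background representation with a majority vote can be sketched under the assumption of PCA-style linear dimensionality reduction (all names, data and thresholds here are hypothetical):

```python
import numpy as np

def fit_pca(samples, k):
    """Fit a k-component PCA background model; returns (mean, components)."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]

def recon_error(model, x):
    """Reconstruction error of sample x under the compressed model."""
    mean, comps = model
    recon = (x - mean) @ comps.T @ comps + mean
    return float(np.linalg.norm(x - recon))

def classify(models, thresholds, x):
    """Majority vote across models: 'change' when most reconstruction
    errors exceed their model's threshold, otherwise 'background'."""
    votes = [recon_error(m, x) > t for m, t in zip(models, thresholds)]
    return "change" if sum(votes) > len(votes) / 2 else "background"

# Stand-in background patches lying near a low-dimensional subspace.
rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 16))
train = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 16))
models = [fit_pca(train[:100], 2), fit_pca(train[100:], 2)]
thresholds = [0.5, 0.5]  # in practice, set from the training-error spread
print(classify(models, thresholds, train[0]))  # background
```

Samples the compressed models reconstruct well are deemed background; anything they reconstruct poorly is flagged as change.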

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It supports the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account
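    The abstract does not detail the reachable-set computation; a deliberately simplified sketch (constant-velocity ego prediction, pedestrian reachable set modelled as a disc growing at maximum walking speed, all parameters made up) illustrates the idea:

```python
import math

def reachable_radius(v_max, t):
    """Worst-case radius of the pedestrian's reachable set after t seconds."""
    return v_max * t

def collision_risk_horizon(ego_pos, ego_vel, ped_pos, v_max, horizon, dt=0.1):
    """Earliest time (s) at which a constant-velocity ego prediction enters
    the pedestrian's reachable disc, or None if no entry within the horizon."""
    t = 0.0
    while t <= horizon:
        ex = ego_pos[0] + ego_vel[0] * t
        ey = ego_pos[1] + ego_vel[1] * t
        if math.hypot(ex - ped_pos[0], ey - ped_pos[1]) <= reachable_radius(v_max, t):
            return t
        t += dt
    return None

# Ego vehicle at 10 m/s heading toward a pedestrian 30 m ahead and 2 m to
# the side, who can walk at up to 2 m/s (numbers invented for illustration).
t_hit = collision_risk_horizon((0.0, 0.0), (10.0, 0.0), (30.0, 2.0), 2.0, 5.0)
print(t_hit)  # ~2.6 s
```

A probabilistic version, as in the paper, would replace the worst-case disc with distributions propagated from the tracker's state uncertainty.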

    Feature-based object tracking in maritime scenes.

    Monitoring the presence, location and activity of various objects on the sea is essential for maritime navigation and collision avoidance. Mariners normally rely on two complementary monitoring methods: radar and satellite-based aids, and human observation. Though radar aids are relatively accurate at long distances, their capability of detecting small, unmanned or non-metallic craft, which generally do not reflect radar waves sufficiently, is limited. Mariners therefore rely in such cases on visual observation. The visual observation is often facilitated by cameras overlooking the sea that can also provide intensified infra-red images. These systems nevertheless merely enhance the image, and the burden of the tedious and error-prone monitoring task still rests with the operator. This thesis addresses the drawbacks of both methods by presenting a framework consisting of a set of machine vision algorithms that facilitate monitoring tasks in the maritime environment. The framework detects and tracks objects in a sequence of images captured by a camera mounted either on board a vessel or on a static platform overlooking the sea. The detection of objects is independent of their appearance and of conditions such as weather and time of day. The output of the framework consists of the locations and motions of all detected objects with respect to a fixed point in the scene. All values are estimated in real-world units, i.e. location is expressed in metres and velocity in knots. The consistency of the estimates is maintained by compensating for spurious effects such as vibration of the camera. In addition, the framework continuously checks for predefined events such as collision threats or area intrusions, raising an alarm when any such event occurs. The development and evaluation of the framework are based on sequences captured under conditions corresponding to a designated application. The independence of the detection and tracking from the appearance of the scene and objects is confirmed by a final cross-validation of the framework on previously unused sequences. Potential applications of the framework in various areas of the maritime environment, including navigation, security and surveillance, are outlined. Limitations of the presented framework are identified and possible solutions suggested. The thesis concludes with suggestions for further directions of the research presented
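    The collision-threat checks are not specified in the abstract; a standard maritime formulation, which the framework may or may not use, flags threats via the time and distance of the closest point of approach (CPA) of two constant-velocity tracks:

```python
import math

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Time to closest point of approach (TCPA, s, clamped to the future)
    and the separation at that time, for two constant-velocity tracks."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:  # identical velocities: the range never changes
        return 0.0, math.hypot(rx, ry)
    tcpa = max(0.0, -(rx * vx + ry * vy) / v2)
    return tcpa, math.hypot(rx + vx * tcpa, ry + vy * tcpa)

# Head-on example: target 1000 m ahead with a 50 m lateral offset,
# tracks closing at a relative 10 m/s (values invented for illustration).
tcpa, dist = cpa((0.0, 0.0), (5.0, 0.0), (1000.0, 50.0), (-5.0, 0.0))
print(tcpa, dist)  # 100.0 50.0
```

Because the framework reports positions in metres and velocities in knots, an alarm can be raised whenever the CPA distance falls below a safety threshold within a short TCPA.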

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human-level performance. Furthermore, detections must run in real time to allow vehicles to actuate and avoid collision. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera and one for the multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly detects humans better and at longer ranges (45-90 m) for an agricultural use case, using a smaller memory footprint and 7.3-times faster processing. The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis has presented many scientific contributions and state-of-the-art results within perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality
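    The fusion of detections via Inverse Sensor Models and occupancy grids can be sketched for a single grid cell using the standard log-odds update (the per-sensor probabilities below are made up for illustration and are not from the thesis):

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1.0 - p))

def probability(l):
    """Inverse of logit: recover a probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def fuse(cell_logodds, detection_probs):
    """Standard log-odds occupancy update for one grid cell:
    l <- l + logit(p) for each sensor's occupancy probability p
    (assuming a prior of 0.5, i.e. zero prior log-odds)."""
    for p in detection_probs:
        cell_logodds += logit(p)
    return cell_logodds

# Three hypothetical sensor readings for one cell (values made up):
l = fuse(0.0, [0.8, 0.7, 0.55])  # e.g. camera, thermal, lidar
print(round(probability(l), 3))  # 0.919
```

Working in log-odds makes each sensor's contribution additive, which is what lets heterogeneous detectors with different Inverse Sensor Models share one occupancy grid map.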