
    Feature Learning for Multispectral Satellite Imagery Classification Using Neural Architecture Search

    Automated classification of remote sensing data is an integral tool for earth scientists, and deep learning has proven very successful at solving such problems. However, building deep learning models to process the data requires expert knowledge of machine learning. We introduce DELTA, a software toolkit that bridges this technical gap and makes deep learning easily accessible to earth scientists. Visual feature engineering is a critical part of the machine learning lifecycle, and hence is a key area that will be automated by DELTA. Hand-engineered features can perform well, but they require a cross-functional team with expertise in both machine learning and the specific problem domain, which is costly in researcher time and labor. The problem is more acute with multispectral satellite imagery, which requires considerable computational resources to process. To automate the feature learning process, a neural architecture search samples the space of symmetric and asymmetric autoencoders using evolutionary algorithms. Since denoising autoencoders have been shown to perform well for feature learning, the autoencoders are trained on various levels of noise, and the features generated by the best-performing autoencoders are evaluated by their performance on image classification tasks. The resulting features are demonstrated to be effective for Landsat-8 flood mapping, as well as on the benchmark datasets CIFAR-10 and SVHN.
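
    As an illustration of the feature-learning setup this abstract describes, the following is a minimal sketch of a denoising autoencoder whose encoder is reused as a feature extractor. It is not DELTA's implementation; the layer widths, noise level, input shape, and placeholder data are all illustrative assumptions.

    import numpy as np
    from tensorflow.keras import layers, models

    NOISE_STDDEV = 0.1  # assumed corruption level; the search trains at several levels

    def build_autoencoder(input_dim=64, code_dim=16):
        """Symmetric dense autoencoder; the search also samples asymmetric ones."""
        inputs = layers.Input(shape=(input_dim,))
        # Corrupt the input during training only; the clean input is the target.
        noisy = layers.GaussianNoise(NOISE_STDDEV)(inputs)
        encoded = layers.Dense(32, activation="relu")(noisy)
        code = layers.Dense(code_dim, activation="relu", name="code")(encoded)
        decoded = layers.Dense(32, activation="relu")(code)
        outputs = layers.Dense(input_dim, activation="sigmoid")(decoded)
        autoencoder = models.Model(inputs, outputs)
        encoder = models.Model(inputs, code)  # feature extractor for downstream classifiers
        autoencoder.compile(optimizer="adam", loss="mse")
        return autoencoder, encoder

    # Placeholder data standing in for flattened multispectral pixel patches.
    x = np.random.rand(1024, 64).astype("float32")
    autoencoder, encoder = build_autoencoder()
    autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)
    features = encoder.predict(x)  # features then scored on classification tasks

    Because Keras's GaussianNoise layer is active only during training, the same model reconstructs clean inputs from corrupted ones during fit() and produces uncorrupted features at inference time.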

    Planetary Rover Simulation for Lunar Exploration Missions

    When planning planetary rover missions, it is useful to develop intuition and driving skills in, quite literally, alien environments before incurring the cost of reaching those locales. Simulators make it possible to operate in environments that have the physical characteristics of target locations without the expense and overhead of extensive physical tests. To that end, NASA Ames and Open Robotics collaborated on a Lunar rover driving simulator based on the open source Gazebo simulation platform and leveraging ROS (Robot Operating System) components. The simulator was integrated with research and mission software for rover driving, system monitoring, and science instrument simulation to constitute an end-to-end Lunar mission simulation capability. Although we expect our simulator to be applicable to arbitrary Lunar regions, we designed it around a reference mission of prospecting in the polar regions. The harsh lighting and low illumination angles at the Lunar poles combine with the unique reflectance properties of Lunar regolith to present a challenging visual environment for both human and computer perception. Our simulator placed an emphasis on high-fidelity visual simulation in order to produce synthetic imagery suitable for evaluating human rover drivers on navigation tasks, as well as providing test data for computer vision software development.

    In this paper, we describe the software used to construct the simulated Lunar environment and the components of the driving simulation. Our synthetic terrain generation software artificially increases the resolution of Lunar digital elevation maps by fractal synthesis and inserts craters and rocks based on Lunar size-frequency distribution models. We describe the enhancements necessary to import large-scale, high-resolution terrains into Gazebo, as well as our approach to modeling the visual environment of the Lunar surface. An overview of the mission software system is provided, along with how ROS was used to emulate flight software components that had not yet been developed. Finally, we discuss the effect of using the high-fidelity synthetic Lunar images for visual odometry. We also characterize the wheel slip model and find some inconsistencies in the produced wheel slip behavior.
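
    As a toy sketch of the crater-insertion step this abstract mentions, the code below draws crater diameters from an assumed truncated power-law size-frequency distribution and stamps them into a height map as parabolic bowls. The exponent, diameter bounds, and depth-to-diameter ratio are illustrative assumptions, not values from the paper.

    import numpy as np

    def sample_crater_diameters(n, d_min=2.0, d_max=200.0, exponent=2.0, rng=None):
        """Inverse-transform sample from a truncated power law N(>D) ~ D**-exponent."""
        if rng is None:
            rng = np.random.default_rng()
        a, b = d_min ** -exponent, d_max ** -exponent
        u = rng.random(n)
        return (a - u * (a - b)) ** (-1.0 / exponent)

    def place_craters(heightmap, diameters_m, meters_per_px, rng=None):
        """Stamp each crater into the height map as a simple parabolic bowl."""
        if rng is None:
            rng = np.random.default_rng()
        h, w = heightmap.shape
        for d in diameters_m:
            r = max(1, int(d / (2 * meters_per_px)))  # crater radius in pixels
            cy, cx = rng.integers(r, h - r), rng.integers(r, w - r)
            yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
            dist2 = (yy ** 2 + xx ** 2) / r ** 2  # normalized squared distance
            # A depth-to-diameter ratio of 0.2 is an illustrative assumption.
            bowl = np.where(dist2 <= 1.0, -0.2 * d * (1.0 - dist2), 0.0)
            heightmap[cy - r:cy + r + 1, cx - r:cx + r + 1] += bowl
        return heightmap

    terrain = np.zeros((512, 512))  # placeholder tile of an upsampled DEM
    terrain = place_craters(terrain, sample_crater_diameters(50), meters_per_px=1.0)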