2 research outputs found

    Semantic terrain segmentation in the navigation vision of planetary rovers – a systematic literature review

    Background: The planetary rover is an essential platform for planetary exploration. Visual semantic segmentation is significant for the localization, perception, and path planning of rover autonomy. Recent advances in computer vision and artificial intelligence have brought about new opportunities. A systematic literature review (SLR) can help analyze existing solutions, discover available data, and identify potential gaps. Methods: A rigorous SLR was conducted, with papers selected from three databases (IEEE Xplore, Web of Science, and Scopus) covering records from the start of each database to May 2022. A total of 320 candidate studies addressing semantic terrain segmentation in the navigation vision of planetary rovers were found by searching with keywords and Boolean operators. After four rounds of screening with robust inclusion and exclusion criteria and a quality assessment, 30 papers were included. Results: The 30 included studies cover the sub-research areas of navigation (16 studies), geological analysis (7 studies), exploration efficiency (10 studies), and others (3 studies), with overlaps between areas. Five distributions are examined in detail (time, study type, geographical location, publisher, and experimental setting), analyzing the included studies from the perspectives of community interest, development status, and reimplementation ability. One key research question and six sub-research questions are discussed to evaluate current achievements and future gaps. Conclusions: Computer vision and artificial intelligence have enabled many promising advances in accuracy, available data, and real-time performance. However, no existing solution simultaneously satisfies pixel-level segmentation, real-time inference, and onboard hardware constraints, and no open, pixel-level annotated dataset based on real-world data was found. As planetary exploration projects progress worldwide, more promising studies are expected, and deep learning will bring further opportunities and contributions to future work. Contributions: This SLR identifies future gaps and challenges through a methodical, replicable, and transparent survey, and it is the first review (and the first SLR) of semantic terrain segmentation in the navigation vision of planetary rovers.
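    For illustration only, the Python sketch below shows how a keyword query with Boolean operators (OR within a concept group, AND across groups) could be applied to exported database records; the keyword groups and record fields are assumptions, not the review's actual search string.

        # Illustrative sketch only: the keyword groups below are assumptions,
        # not the review's actual search string.
        KEYWORD_GROUPS = [                        # groups are ANDed together
            ["terrain", "semantic segmentation"], # terms within a group are ORed
            ["planetary rover", "mars rover", "lunar rover"],
            ["navigation", "vision"],
        ]

        def matches_query(text: str) -> bool:
            """Return True if the text satisfies every keyword group."""
            text = text.lower()
            return all(any(term in text for term in group) for group in KEYWORD_GROUPS)

        # Hypothetical records exported from IEEE Xplore, Web of Science, or Scopus.
        records = [
            {"title": "Semantic segmentation of terrain in planetary rover navigation vision"},
            {"title": "Crater counting from orbital imagery"},
        ]
        candidates = [r for r in records if matches_query(r["title"])]
        print(len(candidates), "candidate studies")  # screening rounds would follow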

    Sky and ground segmentation in the navigation visions of the planetary rovers

    Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation vision that adopts weak supervision and transfer learning. A new sky and ground segmentation neural network, the network-in-U-shaped-network (NI-U-Net), and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared with the state of the art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% Dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net runs at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
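    The seven reported metrics are standard pixel-wise measures for binary (sky/ground) masks. The sketch below is illustrative, not the authors' implementation: it computes the metrics with NumPy, and the 0/1 label convention and the use of hard binary predictions for RMSE (the paper may compute RMSE on soft probability maps) are assumptions.

        # Minimal sketch of the seven pixel-wise metrics for binary sky/ground masks.
        # Assumes `pred` and `truth` are arrays of 0/1 values (1 = sky, 0 = ground)
        # and that both classes are present, so no denominator is zero.
        import numpy as np

        def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
            pred = pred.astype(bool)
            truth = truth.astype(bool)

            tp = np.sum(pred & truth)       # sky predicted as sky
            tn = np.sum(~pred & ~truth)     # ground predicted as ground
            fp = np.sum(pred & ~truth)      # ground predicted as sky
            fn = np.sum(~pred & truth)      # sky predicted as ground
            total = pred.size

            return {
                "accuracy":  (tp + tn) / total,
                "precision": tp / (tp + fp),
                "recall":    tp / (tp + fn),
                "dice":      2 * tp / (2 * tp + fp + fn),   # F1 score
                "mcr":       (fp + fn) / total,             # misclassification rate
                "rmse":      np.sqrt(np.mean((pred.astype(float) - truth.astype(float)) ** 2)),
                "iou":       tp / (tp + fp + fn),           # intersection over union
            }

        # Example usage on a toy 2x3 mask pair:
        pred  = np.array([[1, 1, 0], [0, 0, 1]])
        truth = np.array([[1, 1, 0], [0, 0, 0]])
        print(segmentation_metrics(pred, truth))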