537 research outputs found

    Progress and prospects for research on Martian topographic features and typical landform identification

    The study of Martian surface topography is important for understanding the geological evolution of Mars and revealing the spatial differentiation of the Martian landscape. Identifying typical landform units is a fundamental task in studying the origin and evolution of Mars, and it provides important information for landing on and exploring Mars, estimating the age of the Martian surface, and inferring the evolution of the Earth's environment. In this paper, we first review Mars exploration, data acquisition and mapping, and classification methods for Martian landforms. Then, the identification of several typical Martian landform types, such as aeolian, fluvial, and impact landforms, is discussed in detail. Finally, prospects for Mars data acquisition, landform mapping, and the construction and identification of a Martian landform classification system are presented. The construction of a Martian landform classification system and the identification of typical Martian landforms using deep learning are important development directions in planetary science.

    Improving Complex Knowledge Base Question Answering via Question-to-Action and Question-to-Question Alignment

    Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions. However, there is a significant semantic and structural gap between natural language and action sequences, which makes this conversion difficult. In this paper, we introduce an alignment-enhanced complex question answering framework, called ALCQA, which mitigates this gap through question-to-action alignment and question-to-question alignment. We train a question-rewriting model to align the question and each action, and utilize a pretrained language model to implicitly align the question and KG artifacts. Moreover, considering that similar questions correspond to similar action sequences, we retrieve the top-k similar question-answer pairs at the inference stage through question-to-question alignment and propose a novel reward-guided action sequence selection strategy to select from candidate action sequences. We conduct experiments on the CQA and WQSP datasets, and the results show that our approach outperforms state-of-the-art methods, obtaining a 9.88% improvement in the F1 metric on the CQA dataset. Our source code is available at https://github.com/TTTTTTTTy/ALCQA
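    The question-to-question alignment step above retrieves the top-k stored questions most similar to a new query. A minimal sketch of that retrieval idea (not ALCQA's actual implementation, which relies on learned representations; here similarity is plain bag-of-words cosine, and the example questions and action strings are hypothetical):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_similar(query: str, memory: list, k: int = 2):
    """Return the k stored (question, action_sequence) pairs most similar to query."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(q.lower().split())), q, a) for q, a in memory]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(q, a) for _, q, a in scored[:k]]

# toy question-answer memory with illustrative action sequences
memory = [
    ("who directed the movie Titanic", "SelectEntity(director, Titanic)"),
    ("who wrote the book Dune", "SelectEntity(author, Dune)"),
    ("what is the capital of France", "SelectEntity(capital, France)"),
]
print(top_k_similar("who directed the film Avatar", memory, k=1))
```

    The retrieved pairs would then serve as candidates for the reward-guided action sequence selection described in the abstract.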

    2nd Place Winning Solution for the CVPR2023 Visual Anomaly and Novelty Detection Challenge: Multimodal Prompting for Data-centric Anomaly Detection

    This technical report introduces the winning solution of team Segment Any Anomaly for the CVPR2023 Visual Anomaly and Novelty Detection (VAND) challenge. Going beyond uni-modal prompts, e.g., language prompts, we present a novel framework, Segment Any Anomaly + (SAA++), for zero-shot anomaly segmentation with multi-modal prompts for the regularization of cascaded modern foundation models. Inspired by the strong zero-shot generalization ability of foundation models such as Segment Anything, we first explore their assembly (SAA) to leverage diverse multi-modal prior knowledge for anomaly localization. Subsequently, we introduce multimodal prompts (SAA++) derived from domain expert knowledge and target image context to enable the non-parameter adaptation of foundation models to anomaly segmentation. The proposed SAA++ model achieves state-of-the-art performance on several anomaly segmentation benchmarks, including VisA and MVTec-AD, in the zero-shot setting. We will release the code of our winning solution for the CVPR2023 VAND challenge.
    Comment: The first two authors contributed equally. CVPR workshop challenge report. arXiv admin note: substantial text overlap with arXiv:2305.1072
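    The "prompts derived from domain expert knowledge" can be thought of as constraints that filter candidate regions produced by the cascaded foundation models. A highly simplified sketch of that filtering idea (not the SAA++ pipeline itself, which cascades models like Segment Anything; the field names and thresholds below are hypothetical):

```python
# Illustrative only: filter candidate anomaly regions with simple
# expert-knowledge "prompts" (an area prior and a confidence threshold).
def filter_candidates(candidates, max_area_frac=0.2, min_conf=0.5):
    """Keep regions consistent with the prior that anomalies are small,
    high-confidence regions rather than large background segments."""
    return [c for c in candidates
            if c["area_frac"] <= max_area_frac and c["conf"] >= min_conf]

candidates = [
    {"id": 0, "area_frac": 0.05, "conf": 0.9},  # small and confident: kept
    {"id": 1, "area_frac": 0.60, "conf": 0.8},  # too large, likely background
    {"id": 2, "area_frac": 0.03, "conf": 0.2},  # confidence too low
]
print(filter_candidates(candidates))
```

    In the actual framework such constraints regularize the outputs of the cascaded models rather than acting as a standalone post-filter.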

    RT-MonoDepth: Real-time Monocular Depth Estimation on Embedded Systems

    Depth sensing is a crucial function of unmanned aerial vehicles and autonomous vehicles. Due to the small size and simple structure of monocular cameras, there has been growing interest in depth estimation from a single RGB image. However, state-of-the-art monocular CNN-based depth estimation methods use fairly complex deep neural networks that are too slow for real-time inference on embedded platforms. This paper addresses the problem of real-time depth estimation on embedded systems. We propose two efficient and lightweight encoder-decoder network architectures, RT-MonoDepth and RT-MonoDepth-S, to reduce computational complexity and latency. Our methodologies demonstrate that it is possible to achieve accuracy similar to prior state-of-the-art work on depth estimation at a faster inference speed. Our proposed networks, RT-MonoDepth and RT-MonoDepth-S, run at 18.4 and 30.5 FPS on the NVIDIA Jetson Nano and at 253.0 and 364.1 FPS on the NVIDIA Jetson AGX Orin for a single RGB image of resolution 640×192, and achieve relative state-of-the-art accuracy on the KITTI dataset. To the best of the authors' knowledge, this paper achieves the best accuracy and fastest inference speed compared with existing fast monocular depth estimation methods.
    Comment: 8 pages, 5 figures
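    FPS figures like those above are typically obtained by timing repeated forward passes and discarding warm-up iterations. A minimal timing-harness sketch (not the authors' benchmark code; `dummy_infer` stands in for a real network's forward pass):

```python
import time

def measure_fps(infer, frames, warmup=2):
    """Average frames-per-second of `infer` over a list of inputs,
    discarding `warmup` initial calls (cache/initialization effects)."""
    for f in frames[:warmup]:
        infer(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        infer(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# hypothetical stand-in for a depth network's forward pass (~1 ms per frame)
def dummy_infer(frame):
    time.sleep(0.001)
    return frame

fps = measure_fps(dummy_infer, [None] * 52)
print(f"{fps:.1f} FPS")
```

    On real hardware one would also synchronize the GPU before reading the clock, since accelerator calls are asynchronous.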

    In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems

    Recommender systems are typically biased toward a small group of users, leading to severe unfairness in recommendation performance, i.e., the User-Oriented Fairness (UOF) issue. Existing research on UOF is limited and fails to deal with the root cause of the issue: the learning process between advantaged and disadvantaged users is unfair. To tackle this, we propose an In-processing User Constrained Dominant Sets (In-UCDS) framework, a general framework that can be applied to any backbone recommendation model to achieve user-oriented fairness. We split In-UCDS into two stages, i.e., the UCDS modeling stage and the in-processing training stage. In the UCDS modeling stage, for each disadvantaged user, we extract a constrained dominant set (a user cluster) containing some advantaged users that are similar to it. In the in-processing training stage, we move the representations of disadvantaged users closer to their corresponding clusters by calculating a fairness loss. By combining the fairness loss with the original backbone model loss, we address the UOF issue while simultaneously maintaining overall recommendation performance. Comprehensive experiments on three real-world datasets demonstrate that In-UCDS outperforms state-of-the-art methods, leading to a fairer model with better overall recommendation performance.
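    The combined objective described above, backbone loss plus a fairness term pulling a disadvantaged user toward its cluster, can be sketched as follows. This is an illustrative squared-distance-to-centroid form with a hypothetical weight `lam`, not the paper's exact loss:

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fairness_loss(user_emb, cluster_embs):
    """Squared Euclidean distance from a disadvantaged user's embedding
    to the centroid of its constrained dominant set."""
    c = centroid(cluster_embs)
    return sum((u - ci) ** 2 for u, ci in zip(user_emb, c))

def total_loss(backbone_loss, user_emb, cluster_embs, lam=0.1):
    """Combined objective: backbone loss plus weighted fairness term."""
    return backbone_loss + lam * fairness_loss(user_emb, cluster_embs)

cluster = [[1.0, 0.0], [0.0, 1.0]]  # advantaged users similar to this user
user = [0.0, 0.0]                   # disadvantaged user's embedding
print(total_loss(0.8, user, cluster))  # ≈ 0.8 + 0.1 * 0.5 = 0.85
```

    Minimizing the fairness term during training moves the disadvantaged representation toward its cluster, which is the in-processing mechanism the framework relies on.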

    Accuracy-Complexity Tradeoff Analysis and Complexity Reduction Methods for Non-Stationary IMT-A MIMO Channel Models

    High-mobility wireless communication systems have attracted growing interest in recent years. For the deployment of these systems, one fundamental task is to build accurate and efficient channel models. In high-mobility scenarios, it has been shown that standardized channel models, e.g., the IMT-Advanced (IMT-A) multiple-input multiple-output (MIMO) channel model, provide noticeably longer stationary intervals than measured results, and the wide-sense stationary (WSS) assumption may be violated. Thus, non-stationarity should be introduced to the IMT-A MIMO channel model to mimic the channel characteristics more accurately without losing too much efficiency. In this paper, we analyze and compare the computational complexity of the original WSS and non-stationary IMT-A MIMO channel models. Both the number of real operations and the simulation time are used as complexity metrics. Since introducing the non-stationarity to the IMT-A MIMO channel model causes extra computational complexity, some complexity reduction methods are proposed to simplify the non-stationary IMT-A MIMO channel model while retaining acceptable accuracy. Statistical properties including the temporal autocorrelation function, spatial cross-correlation function, and stationary interval are chosen as the accuracy metrics for verification. It is shown that a tradeoff between computational complexity and modeling accuracy can be achieved by using the proposed complexity reduction methods.
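    One of the accuracy metrics above, the temporal autocorrelation function (ACF), can be estimated empirically from a sampled channel coefficient sequence h[n] as R(l) = E[h[n] h*(n+l)], normalized by R(0). A pure-Python sketch with a toy single-tone "channel" (real coefficients would come from the IMT-A model simulator; the Doppler frequency here is arbitrary):

```python
import cmath

def temporal_acf(h, max_lag):
    """Normalized empirical temporal ACF of a complex sequence h."""
    n = len(h)
    r0 = sum(abs(x) ** 2 for x in h) / n  # power, R(0)
    acf = []
    for lag in range(max_lag + 1):
        r = sum(h[i] * h[i + lag].conjugate() for i in range(n - lag)) / (n - lag)
        acf.append(r / r0)
    return acf

# toy channel: a single complex exponential (one Doppler component),
# which stays perfectly correlated at every lag (|ACF| = 1)
h = [cmath.exp(1j * 0.3 * n) for n in range(200)]
print([abs(a) for a in temporal_acf(h, 3)])
```

    For a non-stationary model, this estimate would be computed over sliding windows, and the lag at which |ACF| drops below a threshold gives the stationary interval used as a metric in the paper.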