
    Ship Recognition for SAR Scene Images under Imbalance Data

    No full text
    Synthetic aperture radar (SAR) ship recognition obtains location and class information from SAR scene images, which is important in both military and civilian fields and has recently become a major research focus. Limited by data conditions, current research mainly covers two aspects: ship detection in SAR scene images and ship classification in SAR slice images. These two parts have not yet been integrated, but integrating detection and classification is necessary in practical applications, even though it causes an imbalance of training samples across classes. To solve these problems, this paper proposes a deep-network-based ship recognition method that detects and classifies ship targets in SAR scene images under imbalanced data. First, RetinaNet is used as the backbone network to integrate ship detection and classification in SAR scene images. Then, considering the high similarity among different SAR ship classes, the squeeze-and-excitation (SE) module is introduced to amplify discriminative features and suppress similar features. Finally, to address class imbalance in ship target recognition in SAR scene images, a loss function, the central focal loss (CEFL), based on deep feature aggregation is constructed to reduce intra-class differences. Based on a dataset built from OpenSARShip and Sentinel-1, the experimental results show that the proposed method is feasible and improves accuracy by 3.9 percentage points over the traditional RetinaNet.
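    As a rough illustration of the SE recalibration step described above, the PyTorch sketch below shows a standard squeeze-and-excitation block; the channel count and reduction ratio are illustrative assumptions, not values reported in the paper.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation channel recalibration, of the kind used to
        amplify discriminative features between visually similar ship classes."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.squeeze = nn.AdaptiveAvgPool2d(1)          # global spatial average
            self.excite = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                               # per-channel weights in (0, 1)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.squeeze(x).view(b, c)                  # squeeze: B x C
            w = self.excite(w).view(b, c, 1, 1)             # excitation: B x C x 1 x 1
            return x * w                                    # reweight the feature maps

    # Example: recalibrate a RetinaNet-style feature map with 256 channels (assumed size).
    feat = torch.randn(2, 256, 32, 32)
    print(SEBlock(256)(feat).shape)                         # torch.Size([2, 256, 32, 32])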

    Iterated Unscented Kalman Filter for Passive Target Tracking

    No full text

    R2FA-Det: Delving into High-Quality Rotatable Boxes for Ship Detection in SAR Images

    No full text
    Recently, convolutional neural network (CNN)-based methods have been extensively explored for ship detection in synthetic aperture radar (SAR) images due to their powerful feature representation abilities. However, several obstacles still hinder progress. First, ships appear in various scenarios, which makes it difficult to exclude the disruption of cluttered backgrounds. Second, it is complicated to precisely locate targets with large aspect ratios, arbitrary orientations and dense distributions. Third, the trade-off between accurate localization and detection efficiency needs to be considered. To address these issues, this paper presents a rotated refined feature alignment detector (R²FA-Det), which balances the quality of bounding box prediction with the high speed of a single-stage framework. Specifically, we first devise a lightweight non-local attention module and embed it into the stem network. This feature recalibration not only strengthens object-related features but also adequately suppresses background interference. In addition, both forms of anchors are integrated into our modified anchor mechanism, which enables better representation of densely arranged targets with less computational burden. Furthermore, considering the feature misalignment inherent in the cascaded refinement scheme, a feature-guided alignment module that encodes both the position and shape information of the current refined anchors into the feature points is adopted. Extensive experiments on two SAR ship datasets demonstrate that our algorithm achieves higher accuracy at faster speed than several state-of-the-art methods.
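    The lightweight non-local attention module is not specified in detail in the abstract; the sketch below shows a generic non-local block of the kind referred to, in which each spatial position is reweighted by its similarity to all other positions before a residual connection. The channel sizes and reduction factor are assumptions.

    import torch
    import torch.nn as nn

    class LightNonLocal(nn.Module):
        """Simplified non-local attention: every position attends to all other
        positions, which helps suppress cluttered background around ship targets."""
        def __init__(self, channels: int, reduction: int = 2):
            super().__init__()
            inner = channels // reduction
            self.theta = nn.Conv2d(channels, inner, kernel_size=1)
            self.phi = nn.Conv2d(channels, inner, kernel_size=1)
            self.g = nn.Conv2d(channels, inner, kernel_size=1)
            self.out = nn.Conv2d(inner, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            q = self.theta(x).flatten(2).transpose(1, 2)     # B x HW x C'
            k = self.phi(x).flatten(2)                       # B x C' x HW
            v = self.g(x).flatten(2).transpose(1, 2)         # B x HW x C'
            attn = torch.softmax(q @ k, dim=-1)              # pairwise position similarities
            y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
            return x + self.out(y)                           # residual refinement

    feat = torch.randn(1, 64, 16, 16)                        # assumed stem feature map
    print(LightNonLocal(64)(feat).shape)                     # torch.Size([1, 64, 16, 16])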

    Geospatial Object Detection in Remote Sensing Imagery Based on Multiscale Single-Shot Detector with Activated Semantics

    No full text
    Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is an active and challenging problem in automatic image interpretation. Although convolutional neural networks (CNNs) have facilitated development in this domain, computational efficiency in real-time applications and accurate localization of relatively small objects in HSR images are two notable obstacles that have largely restricted the performance of detection methods. To tackle these issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest-level layer. In conjunction with this segmentation branch, another module consisting of several global activation blocks is proposed to enrich the semantic information of feature maps from higher-level layers. These two parts are then integrated and deployed into the original single-shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and a multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset demonstrate the superiority of the presented method.
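    To make the semantic activation idea concrete, the sketch below gates a low-level detection feature map with an attention map derived from a small segmentation head; the layer shapes and the max-over-classes pooling are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class SemanticActivation(nn.Module):
        """Activate a detection feature map with a segmentation-aware attention map:
        the segmentation response is collapsed to one channel and used as a gate."""
        def __init__(self, channels: int, num_classes: int = 10):
            super().__init__()
            self.seg_head = nn.Conv2d(channels, num_classes, kernel_size=1)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            seg_logits = self.seg_head(feat)                              # B x K x H x W
            attention = torch.sigmoid(seg_logits.max(dim=1, keepdim=True).values)
            return feat * attention                                       # semantically activated features

    feat = torch.randn(2, 512, 38, 38)    # SSD conv4_3-sized low-level map (assumed)
    print(SemanticActivation(512)(feat).shape)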

    Processing Technology Based on Radar Signal Design and Classification

    No full text
    It is well known that radar applications are becoming more widespread as signal technology advances. This paper surveys current radar signal research, the technical progress achieved, and the existing limitations. According to the respective characteristics of radar signals, their design and classification are introduced to reflect the differences and advantages of each signal type. The multidisciplinary processing technologies for radar signals are classified and compared in detail, covering adaptive radar signal processing, pulse signal management, digital filtering, and Doppler methods. The transmission process of radar signals is summarized, including the transmission steps, the factors affecting transmission, and radar information screening. Radar signal design methods and the corresponding signal characteristics are compared in terms of performance improvement. Radar signal classification methods and the related influencing factors are also contrasted and described. Radar signal processing technology is described in detail, including the synthesis of multidisciplinary techniques. Adaptive radar signal processing, pulse compression, and digital-filtering Doppler methods are very effective technical means, each with its own unique advantages. Finally, future research trends and challenges for radar signal technologies are proposed. The conclusions are beneficial for promoting further applications in both theory and practice, and this work should help in choosing more suitable radar signal processing methods.
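    As a small worked example of the pulse compression technique mentioned above, the NumPy sketch below matched-filters a simulated linear FM echo and recovers its delay; all waveform parameters are illustrative, not taken from the paper.

    import numpy as np

    fs = 10e6                                    # sample rate: 10 MHz (assumed)
    T = 20e-6                                    # pulse duration: 20 us (assumed)
    B = 5e6                                      # swept bandwidth: 5 MHz (assumed)
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t**2)  # transmitted linear FM pulse

    # Simulate a noisy echo delayed by 40 samples.
    echo = np.zeros(400, dtype=complex)
    echo[40:40 + len(chirp)] += chirp
    echo += 0.5 * (np.random.randn(400) + 1j * np.random.randn(400))

    # Pulse compression = matched filtering (correlation with the known pulse).
    compressed = np.correlate(echo, chirp, mode="valid")
    print("peak at lag", np.argmax(np.abs(compressed)))   # close to the true delay of 40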

    SAR ATR Based on Convolutional Neural Network

    No full text
    This study presents a new method of synthetic aperture radar (SAR) image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve the network's ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) SAR dataset demonstrate the validity of this method.
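    A minimal sketch of the two-stage pipeline described above (CNN feature extraction followed by SVM classification) is given below; the network layers, feature dimension, and mock data are assumptions, and the class separability term in the cost function is omitted.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.svm import SVC

    class FeatureCNN(nn.Module):
        """Small CNN used only as a feature extractor for the SVM stage."""
        def __init__(self, feat_dim: int = 128):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.fc(self.body(x).flatten(1))

    cnn = FeatureCNN().eval()
    chips = torch.randn(20, 1, 64, 64)          # stand-in for SAR target chips
    labels = np.random.randint(0, 3, 20)        # three mock target classes

    with torch.no_grad():
        feats = cnn(chips).numpy()              # deep features for the SVM stage

    svm = SVC(kernel="rbf").fit(feats, labels)  # classify the extracted features
    print(svm.predict(feats[:5]))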

    LMSD-YOLO: A Lightweight YOLO Algorithm for Multi-Scale SAR Ship Detection

    No full text
    At present, deep learning is widely used in SAR ship target detection, but accurate, real-time detection of multi-scale targets still faces tough challenges, and CNN-based SAR ship detectors struggle to meet real-time requirements because of their large number of parameters. In this paper, we propose a lightweight, single-stage SAR ship target detection model called the YOLO-based lightweight multi-scale ship detector (LMSD-YOLO), with better multi-scale adaptation capability. The proposed LMSD-YOLO consists of a depthwise separable convolution, batch normalization and activate-or-not (ACON) activation (DBA) module, a MobileNet-with-stem-block (S-Mobilenet) backbone module, a depthwise adaptively spatial feature fusion (DSASFF) neck module and the SCYLLA-IoU (SIoU) loss function. First, the DBA module is proposed as a general lightweight convolution unit from which the whole lightweight model is constructed. Second, the improved S-Mobilenet module is designed as the backbone feature extraction network to enhance feature extraction without adding extra computation. Then, the DSASFF module is proposed to achieve adaptive fusion of multi-scale features with fewer parameters. Finally, SIoU is used as the loss function to accelerate model convergence and improve detection accuracy. The effectiveness of LMSD-YOLO is validated on the SSDD, HRSID and GFSDD datasets, and the experimental results show that the proposed model has a smaller model volume and higher detection accuracy, and can accurately detect multi-scale targets in more complex scenes. The model volume of LMSD-YOLO is only 7.6 MB (52.77% of the size of YOLOv5s), and the detection speed on an NVIDIA AGX Xavier development board reaches 68.3 FPS (32.7 FPS higher than the YOLOv5s detector), indicating that LMSD-YOLO can easily be deployed on mobile platforms for real-time applications.
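    The sketch below assembles a DBA-style unit (depthwise separable convolution, batch normalization, ACON activation) of the kind described above; the ACON-C formulation follows the published activate-or-not activation, and all sizes and hyperparameters are illustrative assumptions rather than this paper's exact implementation.

    import torch
    import torch.nn as nn

    class AconC(nn.Module):
        """ACON-C: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x,
        with learnable per-channel p1, p2 and beta (simplified form)."""
        def __init__(self, channels: int):
            super().__init__()
            self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
            self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
            self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            dp = (self.p1 - self.p2) * x
            return dp * torch.sigmoid(self.beta * dp) + self.p2 * x

    class DBA(nn.Module):
        """Depthwise separable convolution + Batch normalization + Acon (DBA) unit."""
        def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = AconC(out_ch)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    print(DBA(32, 64, stride=2)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 64, 32, 32])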

    Hierarchical Superpixel Segmentation for PolSAR Images Based on the Boruvka Algorithm

    No full text
    Superpixel segmentation for polarimetric synthetic aperture radar (PolSAR) images plays a key role in remote-sensing tasks such as ship detection and land-cover classification. However, existing methods cannot directly generate multi-scale superpixels in a hierarchical manner, and they take a long time when multi-scale segmentation is executed separately for each scale. In this article, we propose an effective and accurate hierarchical superpixel segmentation method by introducing a minimum spanning tree (MST) algorithm, the Boruvka algorithm. To accurately measure the difference between neighboring pixels, we derive scattering mechanism information from the model-based refined 5-component decomposition (RFCD) and construct a comprehensive dissimilarity measure. In addition, an edge strength map and a homogeneity measurement are considered to exploit the structural and spatial distribution information in the PolSAR image. On this basis, superpixels are generated using this distance metric within the MST framework. The proposed method maintains good segmentation accuracy at multiple scales and generates superpixels in real time. Experimental results on the ESAR and AIRSAR datasets show that our method is faster than current state-of-the-art algorithms and preserves more image detail across different segmentation scales.
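    A minimal sketch of the Boruvka MST step that drives the hierarchical merging is shown below: in each round, every component selects its cheapest outgoing edge and merges across it. The toy graph and weights stand in for the PolSAR dissimilarity graph described above.

    def boruvka_mst(num_nodes, edges):
        """Boruvka's algorithm. edges: list of (weight, u, v). Returns MST edges."""
        parent = list(range(num_nodes))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        mst, components = [], num_nodes
        while components > 1:
            cheapest = {}                        # component root -> its cheapest outgoing edge
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
            if not cheapest:                     # remaining components are disconnected
                break
            for w, u, v in cheapest.values():
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    mst.append((u, v, w))
                    components -= 1
        return mst

    # Toy 4-pixel graph with dissimilarity weights on its 4-connected edges.
    edges = [(0.2, 0, 1), (0.9, 0, 2), (0.4, 1, 3), (0.1, 2, 3)]
    print(boruvka_mst(4, edges))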