10 research outputs found

    Joint optimization of depth and ego-motion for intelligent autonomous vehicles

    The three-dimensional (3D) perception of autonomous vehicles is crucial for localization and analysis of the driving environment, but it requires massive computing resources for deep learning that vehicle-mounted devices cannot provide. This calls for the seamless, reliable, and efficient massive connectivity of the 6G network to offload computation to the cloud. In this paper, we propose a novel deep learning framework with a 6G-enabled transport system for the joint optimization of depth and ego-motion estimation, an important task in 3D perception for autonomous driving. A novel loss based on feature maps and quadtrees is proposed, which replaces photometric loss with a feature-value loss under quadtree coding to merge feature information in texture-less regions. In addition, we propose a novel multi-level V-shaped residual network to estimate image depth, which combines the advantages of V-shaped and residual networks and solves the problem of poor feature extraction that can result from the simple fusion of low-level and high-level features. Lastly, to alleviate the influence of image noise on pose estimation, we propose a number of parallel sub-networks that take an RGB image and its feature map as network input. Experimental results show that our method significantly improves depth-map quality and localization accuracy, achieving state-of-the-art performance.
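    The abstract does not detail its quadtree coding, but the core idea of adaptively merging texture-less regions can be illustrated with a minimal variance-threshold quadtree decomposition; the recursion scheme and threshold below are illustrative assumptions, not the paper's method.

```python
def variance(block):
    """Population variance of a flat list of pixel values."""
    n = len(block)
    mean = sum(block) / n
    return sum((v - mean) ** 2 for v in block) / n

def quadtree(img, x, y, size, threshold):
    """Recursively split a size x size square of `img` (a 2D list of
    pixel values) into quadrants until the variance falls at or below
    `threshold`; return the leaf squares as (x, y, size) tuples.
    Texture-less (low-variance) regions collapse into large leaves."""
    block = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or variance(block) <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree(img, x + dx, y + dy, half, threshold)
    return leaves
```

    A uniform patch yields a single leaf, while a patch with one outlier pixel keeps splitting around it, which is how quadtree coding concentrates loss terms away from texture-less areas.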

    Robust Ship Detection in Infrared Images through Multiscale Feature Extraction and Lightweight CNN

    Ship detection technology for remote sensing images remains insufficiently mature, and detection results fall short of practical requirements, mainly in the inadequate support for differentiated applications across multiple scenes, resolutions, and target ship types. To overcome these challenges, a ship detection method based on multiscale feature extraction and a lightweight CNN is proposed. Firstly, a candidate-region extraction method based on a multiscale model accurately covers potential targets against different backgrounds. Secondly, a multiple-feature fusion method is employed for ship classification, in which Fourier global spectrum features discriminate between targets and simple interference, and targets in complex interference scenarios are further distinguished by the lightweight CNN. Thirdly, a cascade classifier training algorithm and an improved non-maximum suppression method are used to minimise the classification error rate and maximise generalisation, achieving final-target confirmation. Experimental results validate our method, showing that it significantly outperforms the available alternatives, reducing model size by up to 2.17 times while improving detection performance by up to 5.5% in multi-interference scenarios. Furthermore, robustness was verified by three indicators: the F-measure score and true–false-positive rate increase by up to 5.8% and 4.7% respectively, while the mean error rate decreases by up to 38.2%.
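    The abstract's "improved" non-maximum suppression is not specified; a minimal sketch of the standard greedy NMS it builds on, assuming an IoU-threshold criterion over scored boxes:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """Greedy NMS: visit boxes by descending score, keeping a box only
    if it overlaps no already-kept box above the IoU threshold.
    detections: list of (score, (x1, y1, x2, y2)) tuples."""
    keep = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kept) < iou_threshold for _, kept in keep):
            keep.append((score, box))
    return keep
```

    With two heavily overlapping candidates, only the higher-scoring one survives, while a distant candidate is kept regardless of score.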

    Research and Application of Software Defect Prediction based on BP-Migration Learning

    Software defect prediction has been an important part of software engineering research since the 1970s. The technique analyzes measurement and defect information from historical software modules to predict defects in new software modules. Currently, most software defect prediction models are built on data from a single software project: the training sets used to construct the model and the test sets used to validate it come from the same project. In practice, however, for projects with little historical data or for new projects, traditional prediction methods show lower forecast performance. When historical data are insufficient, a traditional software defect prediction model cannot be fully trained, making high prediction accuracy difficult to achieve. In cross-project prediction, the problem we face is differences in data distribution. To address these problems, this paper presents a software defect prediction model that combines migration (transfer) learning with a traditional software defect prediction model, using existing project data sets to predict software defects across projects. The main work of this article includes: 1) Data preprocessing, including data-feature correlation analysis and noise reduction, which effectively avoids interference from over-fitting and noisy data in the prediction results. 2) Migration learning, which analyzes two different but related project data sets and reduces the impact of data distribution differences. 3) Artificial neural networks: to handle class imbalance in the data set, an artificial neural network with dynamic selection of training samples reduces the influence of imbalanced positive and negative samples on the prediction results.
The Relink and AEEEM project data sets are used to evaluate performance via the F-measure, ROC curves, and AUC calculation. Experiments show that the model has high predictive performance.
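    The F-measure used for evaluation above is the harmonic combination of precision and recall; a minimal sketch over binary defect labels (1 = defective), with the label convention an assumption:

```python
def f_measure(actual, predicted, beta=1.0):
    """F-score from parallel binary label lists; beta=1 gives F1."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

    Under class imbalance this is more informative than accuracy, which is why defect prediction studies report it alongside ROC/AUC.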

    Study of Time-Frequency Domain Characteristics of the Total Column Ozone in China Based on Wavelet Analysis

    Ozone is a very important trace gas in the atmosphere and acts like a "double-edged sword": ozone in the stratosphere shields the earth's organisms from damaging solar ultraviolet radiation, while ozone near the ground causes pollution. It is therefore essential to explore the time-frequency characteristics of the total column ozone (TCO) and better understand its cyclic variation. In this paper, based on the monthly TCO dataset (September 2002 to February 2023) from the Atmospheric Infrared Sounder (AIRS) aboard NASA's Aqua satellite, linear regression, the coefficient of variation, the Mann-Kendall (M-K) mutation test, wavelet analysis, and empirical orthogonal function (EOF) decomposition were used to analyze the variation characteristics of TCO over China in the time domain, the frequency domain, and space. Finally, this study forecast future TCO using the seasonal autoregressive integrated moving average (SARIMA) time series model. The results showed the following: (1) From 2003 to 2022, TCO over China showed a slight downward trend, with an average annual change rate of −0.29 DU/a; the coefficient-of-variation analysis found that intra-year fluctuations of TCO were smallest in 2008 and largest in 2005. (2) The M-K mutation test identified a mutation point in TCO in 2016. (3) Wavelet analysis of the frequency-domain characteristics showed that TCO variation over China combines 14-year, 6-year, and 4-year main cycles, where the 14-year first main cycle contains a 10-year sub-cycle and the 6-year second main cycle contains a 4-year sub-cycle.
(4) The spatial distribution of TCO differed significantly between regions of China, being high in the northeast and low in the southwest. (5) EOF analysis of TCO over China found that the variance contribution of the first mode was as high as 52.85%, with a uniformly signed ("-") spatial eigenvector distribution; combined with the trend of the time coefficient, this indicates that TCO over China has declined over the past 20 years. (6) A SARIMA model with best parameters (1, 1, 2) × (0, 1, 2, 12) trained on the TCO data was used for prediction; the mean absolute percentage error (MAPE) of the final model was 1.34%, indicating a good fit.
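    The MAPE used to score the SARIMA forecast in result (6) is a simple relative-error average; a minimal sketch with illustrative values, not the study's data:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Assumes no zero values in `actual` (division by the true value)."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)
```

    For example, forecasts of 99 and 202 against true values of 100 and 200 each miss by 1%, giving a MAPE of 1.0% — the same scale as the 1.34% reported for the fitted (1, 1, 2) × (0, 1, 2, 12) model.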

    Time–Frequency Characteristics of Global SST Anomalies in the Past 100 Years: A Metrological Approach

    To comprehensively explore the characteristics of global SST anomalies, a novel time–frequency combination method based on COBE data and NCEP/NCAR reanalysis products over the past 100 years was developed. In the time domain, global SST generally rose from 1920 to 2019, with the upward trend significant after 1988 and a growth mutation in 1930 according to the Mann–Kendall (MK) mutation test. Moreover, we extracted spatiotemporal modes of SST anomaly variability by empirical orthogonal function (EOF) analysis and obtained global spatial EOFs that closely correspond to regionally defined climate modes. Our results demonstrated that the El Niño–Southern Oscillation (ENSO) characterizes the first EOF mode of the SST anomaly, and the Atlantic Multidecadal Oscillation (AMO) the second. In the frequency domain, wavelet analysis suggested a multi-period nesting phenomenon in global SST variations: the first main cycle, with the most obvious oscillation, is a 30-year cycle varying on 20-year scales, and the second is a 15-year cycle varying on 10-year scales. From the time–frequency perspective, the dominant period of ENSO in the first EOF mode is 4 years, obtained through filtering and the cross wavelet transform. In addition, SST anomalies will maintain an upward trend for the next 60 months according to the seasonal autoregressive integrated moving average (SARIMA) model, which has potential value for predicting ENSO.
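    Both this study and the ozone study rely on the Mann–Kendall test for trend and mutation detection; a minimal sketch of the basic MK trend statistic, assuming a series without ties (tie corrections to the variance are omitted):

```python
import math

def mann_kendall_z(x):
    """Mann-Kendall trend statistic Z for a series with no ties.
    S counts concordant minus discordant pairs; Z is the
    continuity-corrected normal approximation. |Z| > 1.96 indicates
    a significant monotonic trend at the 5% level."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0
```

    The mutation-test variant used in these studies (sequential MK) recomputes this statistic forward and backward and looks for crossings, but the core S statistic is the same.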

    A method of constructing a dynamic chart depth model for coastal areas

    Depth is important for vessel navigation at sea. Currently, most vessels navigate using electronic navigational charts. In coastal areas, especially near shallow water, the dynamic change of the water level is critical to safe navigation. Ships calculate the change of water level from up-to-date tide tables to obtain the dynamic water depth in channels. However, the depth change caused by tidal and non-tidal components may reach several meters in some seas, bringing the dynamic depth below the safety depth and easily leading to grounding accidents. Channels are regularly dredged to maintain navigational depth; without regular dredging, offshore non-channel areas become common sites of ship grounding. The dynamic chart depth model studied in this article provides real-time depth to support ship navigation outside channels. The model places the chart depth and the dynamic water levels on the same reference datum. The chart depth comes from the electronic navigational chart; the dynamic water levels are constructed from simulated tidal levels and a continuous series of non-tidal residuals. We then designed a deviation-correction method, comprising datum offset correction and residual water level correction, to reduce the discrepancy between the simulated tidal level and the actual water level. Finally, by merging the revised dynamic water levels with the electronic navigational chart depth, we obtained the dynamic chart depth model of the study region.
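    The merging step above is essentially arithmetic on a shared datum: charted depth plus the corrected instantaneous water level. A minimal sketch, where the sign conventions, the `datum_offset` term, and the clearance value are illustrative assumptions rather than the paper's formulation:

```python
def dynamic_depth(chart_depth, tidal_level, nontidal_residual,
                  datum_offset=0.0):
    """Instantaneous depth = charted depth (below chart datum) plus the
    water level above that datum. The water level is the simulated tide
    corrected by the non-tidal residual and a datum-offset correction,
    mirroring the abstract's deviation-correction step (sign
    conventions assumed)."""
    water_level = tidal_level + nontidal_residual - datum_offset
    return chart_depth + water_level

def is_safe(depth, draft, under_keel_clearance=0.5):
    """Whether a vessel of the given draft can pass, using an assumed
    fixed under-keel clearance in meters."""
    return depth >= draft + under_keel_clearance
```

    With a charted depth of 10.0 m, a 1.5 m simulated tide, a −0.3 m residual, and a 0.2 m datum offset, the instantaneous depth is 11.0 m — enough for a 10.0 m draft but not an 11.0 m one under the assumed clearance.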

    Widespread transcript shortening through alternative polyadenylation in secretory cell differentiation

    Alternative polyadenylation generates multiple mRNA isoforms with different 3′ UTR sizes. Here, the authors report global 3′ UTR shortening coupled to increased expression of secretory pathway genes during secretory cell differentiation.