17 research outputs found

    Information Analysis of Catchment Hydrologic Patterns across Temporal Scales

    The catchment hydrologic cycle takes on different patterns across temporal scales. The transition between event-scale hydrologic processes and the mean annual water-energy correlation pattern requires further examination to establish a self-consistent understanding. In this paper, the temporal scale transition revealed by observation and simulation was evaluated in an information-theoretical framework named Aleatory Epistemic Uncertainty Estimation. Aleatory Uncertainty refers to the posterior uncertainty of runoff given observations of the input variables. Epistemic Uncertainty refers to the increase in posterior uncertainty due to imperfect decoding of the observations by models. Daily hydrometeorological observations in 24 catchments were aggregated over windows from 10 days to 1 year before the information analysis was applied. Estimates of the information contents and flows of hydrologic terms across temporal scales were related to the catchments' seasonality type. The analysis also showed that the information distilled by the monthly and annual water balance models applied here did not correspond to that provided by observations at temporal scales from two months to half a year. This calls for a better understanding of seasonal hydrologic mechanisms.
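    As a rough illustration of this kind of analysis (hypothetical, not the paper's code), the sketch below aggregates a synthetic daily runoff series to coarser temporal scales and estimates the Shannon entropy of each aggregated series from a histogram. The paper's framework additionally estimates conditional (posterior) uncertainties of runoff given the inputs, which this sketch omits:

```python
import numpy as np

def shannon_entropy(x, bins=20):
    """Shannon entropy (bits) of a series, estimated from a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def aggregate(series, window):
    """Average a daily series over non-overlapping windows of `window` days."""
    n = len(series) // window
    return series[:n * window].reshape(n, window).mean(axis=1)

rng = np.random.default_rng(0)
daily_runoff = rng.gamma(shape=2.0, scale=1.5, size=3650)  # 10 synthetic years

for window in (10, 30, 90, 180, 365):
    h = shannon_entropy(aggregate(daily_runoff, window))
    print(f"{window:4d}-day scale: entropy = {h:.2f} bits")
```

    Averaging over longer windows smooths out event-scale variability, so the entropy of the aggregated series typically decreases toward the annual scale.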

    Reconstructing Three-decade Global Fine-Grained Nighttime Light Observations by a New Super-Resolution Framework

    Satellite-collected nighttime light provides a unique perspective on human activities, including urbanization, population growth, and epidemics. Yet long-term and fine-grained nighttime light observations are lacking, leaving decades of light changes in urban facilities unanalyzed. To fill this gap, we developed an innovative framework and used it to design a new super-resolution model that reconstructs low-resolution nighttime light data at high resolution. Validation against one billion data points shows that the correlation coefficient of our model at the global scale reaches 0.873, significantly higher than that of other existing models (maximum = 0.713). Our model also outperforms existing models at the national and urban scales. Furthermore, in an inspection of airports and roads, only our model's image details reveal the historical development of these facilities. We provide these long-term and fine-grained nighttime light observations to promote research on human activities. The dataset is available at \url{https://doi.org/10.5281/zenodo.7859205}.
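    The reported validation metric is a Pearson correlation between reconstructed and reference rasters. A minimal sketch of how such a score could be computed (the function and data here are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def validation_r(reconstructed, reference):
    """Pearson correlation between two rasters, flattened, ignoring NaNs."""
    a, b = reconstructed.ravel(), reference.ravel()
    mask = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[mask], b[mask])[0, 1]

rng = np.random.default_rng(1)
truth = rng.random((512, 512))                      # reference high-res grid
recon = truth + 0.3 * rng.normal(size=truth.shape)  # hypothetical reconstruction
print(f"r = {validation_r(recon, truth):.3f}")
```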

    Fault diagnosis of rolling bearing using CVA based detector

    Two key problems in bearing fault diagnosis need to be addressed: one is feature selection, and the other is the scarcity of faulty data. On the one hand, signal decomposition methods are popular ways to decompose a signal into a number of modes, from which the most informative modes must be selected to represent the original signal; this procedure can easily lead to the loss of important information. On the other hand, most works use faulty data to train the fault diagnosis classifier, yet faulty datasets are difficult to collect in real life, so many existing methods are unsuitable for practical application. Moreover, many researchers introduce hybrid methods to improve the original methods, which increases the complexity of fault diagnosis. To solve these problems, a canonical variate analysis (CVA) detector based on visual inspection is first proposed to classify operating states. A healthy dataset obtained under normal conditions is used to build a reference model and generate a threshold. CVA transforms the monitored variables into a state space and a residual space, and the T2 and Q metrics are then used to capture the variation in the two spaces, respectively. Comparing the metrics of a new sample with the reference model determines the state of the rolling bearing. However, the threshold of the proposed detector may occasionally be exceeded under normal operation, and visual inspection cannot identify bearing faults automatically; the means of the T2 and Q metrics are therefore used to enlarge the distance between normal and abnormal conditions and avoid these drawbacks. Finally, experiments and comparisons are conducted to verify the capability of the proposed approach. The results demonstrate that it is simple and effective in bearing fault diagnosis.
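    A minimal sketch of this style of T2/Q monitoring, using PCA as a simplified stand-in for the full CVA past/future state-space construction (all data and dimensions here are synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Healthy reference data: n samples x m channels (synthetic stand-in)
X = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))
mu, sigma = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sigma

# Reference model: retained subspace plays the role of the CVA state space
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                  # retained components
P = Vt[:k].T                           # loadings
lam = (s[:k] ** 2) / (len(Z) - 1)      # component variances

def t2_q(x):
    """T2 captures variation in the retained space, Q in the residual space."""
    z = (x - mu) / sigma
    t = z @ P
    t2 = np.sum(t ** 2 / lam)
    resid = z - t @ P.T
    return t2, resid @ resid

# Thresholds generated from the healthy data (empirical 99th percentile)
stats = np.array([t2_q(x) for x in X])
t2_lim, q_lim = np.percentile(stats, 99, axis=0)

faulty = X[0] + 5 * sigma              # hypothetical faulty sample
t2, q = t2_q(faulty)
print("fault detected:", t2 > t2_lim or q > q_lim)
```

    A sample exceeding either limit is flagged as abnormal; by construction, roughly 1% of healthy samples fall above each threshold, which is why the abstract notes that occasional exceedances must be handled.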

    Speckle noise removal convex method using higher-order curvature variation


    Estimating Warehouse Rental Price using Machine Learning Techniques

    Boosted by the growing logistics industry and digital transformation, the sharing-warehouse market is undergoing rapid development. Both the supply and demand sides of the warehouse rental business face market perturbations brought by unprecedented peer competition and information transparency. A key question for participants is how to price warehouses in the open market. To understand the pricing mechanism, we built a real-world warehouse dataset from data collected on classified-advertisement websites. Based on this dataset, we applied machine learning techniques to relate warehouse price to relevant features, such as warehouse size, location, and nearby real estate price. Four candidate models are used: Linear Regression, Regression Tree, Random Forest Regression, and Gradient Boosting Regression Trees. A case study in the Beijing area shows that warehouse rent is closely related to location and land price. Models considering multiple factors estimate warehouse rent better than single-factor estimation, and the tree models outperform the linear model, with the best model (Random Forest) achieving a correlation coefficient of 0.57 on the test set. Deeper investigation of feature importance shows that distance from the city center plays the most important role in determining warehouse price in Beijing, followed by nearby real estate price and warehouse size.
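    A minimal sketch of the random-forest variant of this workflow on synthetic data (the feature names, coefficients, and dataset are illustrative assumptions, not the paper's Beijing data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1000

# Hypothetical features: size (m2), distance to city center (km), land price
size = rng.uniform(200, 5000, n)
dist = rng.uniform(1, 60, n)
land = rng.uniform(500, 5000, n)

# Synthetic rent: distance dominates, mimicking the paper's qualitative finding
rent = 80 - 0.9 * dist + 0.002 * size + 0.008 * land + rng.normal(0, 5, n)

X = np.column_stack([size, dist, land])
X_tr, X_te, y_tr, y_te = train_test_split(X, rent, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"test correlation: {r:.2f}")
print("importances (size, distance, land price):",
      model.feature_importances_.round(2))
```

    `feature_importances_` gives the same kind of ranking the abstract reports: with this synthetic generator, the distance feature carries most of the predictive weight.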

    A Novel Calibrator for Electronic Transformers Based on IEC 61850

    An electronic transformer must be calibrated before it is put into service. To solve the problems in the actual calibration process, a novel electronic transformer calibrator is designed. In principle, the system adopts both the direct method and the difference method, the two popular methods for electronic transformer calibration, which extends the system's applicability while improving its reliability. In the system design, based on virtual instrument technology, LabVIEW and the WinPcap toolkit are used to develop the application software, enabling the calibration of electronic transformers that follow the IEC 61850 standard. In the calculation of ratio and phase error based on the fast Fourier transform, a new window function is introduced, improving the accuracy of calibration under frequency fluctuation. This research provides theoretical support and a practical reference for the development of intelligent calibrators for electronic transformers. DOI: http://dx.doi.org/10.11591/telkomnika.v11i3.232
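    A minimal sketch of how ratio and phase error of the fundamental can be estimated with a windowed FFT; a standard Hann window stands in for the paper's unspecified new window function, and the signals, frequencies, and error values are illustrative assumptions:

```python
import numpy as np

fs = 10_000            # sampling rate (Hz)
f0 = 50.3              # fundamental frequency, slightly off-nominal
t = np.arange(4096) / fs

ref = 100.0 * np.sin(2 * np.pi * f0 * t)                     # reference channel
test = 100.5 * np.sin(2 * np.pi * f0 * t + np.deg2rad(0.1))  # device under test

def fundamental(x):
    """Amplitude and phase of the largest spectral line, Hann-windowed."""
    spec = np.fft.rfft(x * np.hanning(len(x)))
    k = np.argmax(np.abs(spec[1:])) + 1   # skip the DC bin
    return np.abs(spec[k]), np.angle(spec[k])

a_ref, p_ref = fundamental(ref)
a_test, p_test = fundamental(test)

ratio_error_pct = (a_test - a_ref) / a_ref * 100
phase_error_deg = np.rad2deg(p_test - p_ref)
print(f"ratio error: {ratio_error_pct:.3f} %, "
      f"phase error: {phase_error_deg:.3f} deg")
```

    Because both channels are sampled synchronously at the same (possibly drifting) frequency, the window's scalloping loss cancels in the amplitude ratio and phase difference, which is why windowing makes the error estimates robust to frequency fluctuation.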

    Motion Sequence Decomposition-Based Hybrid Entropy Feature and Its Application to Fault Diagnosis of a High-Speed Automatic Mechanism

    High-speed automatic weapons play an important role in the field of national defense. However, current research on the reliability analysis of automatons relies principally on simulations, because experimental data are difficult to collect in real life. Unlike rotating machinery, a high-speed automaton must accomplish a complex motion consisting of a series of impacts. In addition to strong noise, the impacts generated by different components of the automaton interfere with each other, and no effective approach exists to cope with this in the fault diagnosis of automatic mechanisms. This paper proposes a motion sequence decomposition approach that combines modern signal processing techniques into an effective fault detection method for high-speed automatons. We first investigate the entire working procedure of the automatic mechanism and calculate the action times of the travel involved. The vibration signal collected from the shooting experiment is then divided into a number of impacts corresponding to the action order, and only the segment generated by the faulty component is isolated from the original impacts according to that component's action time. Wavelet packet decomposition (WPD) is applied to the resulting signals to investigate their energy distribution, and the components with higher energy are selected for feature extraction. Three information entropy features based on empirical mode decomposition (EMD) are then used to distinguish the various states of the automaton. A grey wolf optimization (GWO) algorithm is introduced to improve the performance of the support vector machine (SVM) classifier. We carry out shooting experiments to collect vibration data that demonstrate the proposed approach. Experimental results show that the proposed method is effective for fault diagnosis of a high-speed automaton and can be applied in real applications. Moreover, GWO provides a competitive diagnosis result compared with the genetic algorithm (GA) and particle swarm optimization (PSO).
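    A minimal sketch of the WPD energy-distribution step, using a hand-rolled Haar wavelet packet filter bank and a Shannon entropy of the band energies; the signals and the choice of Haar filters are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def haar_wpd(x, levels):
    """Full Haar wavelet packet decomposition into 2**levels band signals."""
    bands = [np.asarray(x, dtype=float)]
    h = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass (approximation) filter
    g = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass (detail) filter
    for _ in range(levels):
        nxt = []
        for b in bands:
            nxt.append(np.convolve(b, h)[1::2])  # filter, downsample by 2
            nxt.append(np.convolve(b, g)[1::2])
        bands = nxt
    return bands

def energy_entropy(bands):
    """Shannon entropy of the normalized band-energy distribution."""
    e = np.array([np.sum(b ** 2) for b in bands])
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
t = np.arange(2048) / 2048
healthy = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy + np.where(rng.random(t.size) < 0.01, 3.0, 0.0)  # sparse impacts

for name, sig in (("healthy", healthy), ("faulty", faulty)):
    print(f"{name}: energy entropy = {energy_entropy(haar_wpd(sig, 3)):.3f}")
```

    Impacts spread energy across many frequency bands, so the faulty signal's band-energy distribution is flatter and its entropy higher; entropy-style scalars like this are the kind of feature an SVM classifier can then separate.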