23 research outputs found

    Empirical Research on Value-at-Risk Methods of Chinese Stock Indexes

    The Chinese stock market has existed for more than 20 years. Although it is not as mature as the highly developed western securities markets, it has a huge influence on the global economy, so it is important to study its risks, especially the risk of stock indexes. Under today's economic globalization, more and more financial derivatives and instruments are appearing, which may increase the related risks, so the demand for research on financial market risk keeps growing. Risk measurement is central to risk management, and its methods are constantly evolving. The Value at Risk (VaR) method is one of the effective ways to measure financial risk and is widely used in domestic and foreign financial institutions. Compared with traditional models, it is more accurate, more reasonable, and easier to implement. The Shanghai Composite index and the Shenzhen Component index, the two main indexes in the Chinese stock market, are selected as the research objects. The loss series of the two indexes are examined with a normality test, a unit root test, an autocorrelation test and an ARCH effect test. The outcomes indicate that these loss series are skewed and stationary with ARCH effects, so GARCH-type models are suitable for estimating VaR. The TGARCH and EGARCH models, under the Student's t and generalized error distribution (GED) assumptions, are employed for six test periods from 2011 to 2016.
    Backtesting shows that all four models (VaR-TGARCH-t, VaR-TGARCH-GED, VaR-EGARCH-t and VaR-EGARCH-GED) are appropriate for the two indexes, even though several models fail the Kupiec test for the period 2015-2016. For the Shenzhen Component index, the VaR-TGARCH-t model may fit best, because its numbers of violations for all six test periods fall within the confidence intervals.
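The Kupiec backtest mentioned above compares the observed number of VaR violations with the number expected at the chosen confidence level. A minimal sketch of this proportion-of-failures (POF) likelihood-ratio test (illustrative only, not the authors' code; function and variable names are assumptions):

```python
from math import log

def kupiec_pof(num_obs, num_violations, var_level=0.95, alpha=0.05):
    """Kupiec proportion-of-failures (POF) likelihood-ratio backtest.

    num_obs: number of one-day VaR forecasts in the test period
    num_violations: days on which the loss exceeded the VaR estimate
    var_level: VaR confidence level (expected violation rate = 1 - level)
    Returns (LR statistic, accepted): the model is accepted when LR is
    below the chi-square(1) critical value for the given alpha.
    """
    p = 1.0 - var_level                    # expected violation probability
    t, x = num_obs, num_violations
    phat = x / t                           # observed violation rate
    # log-likelihood under H0 (rate p) and under the observed rate;
    # guard the x = 0 and x = t corner cases where log(0) would appear
    ll0 = (t - x) * log(1 - p) + x * log(p)
    ll1 = ((t - x) * log(1 - phat) if x < t else 0.0) + \
          (x * log(phat) if x > 0 else 0.0)
    lr = -2.0 * (ll0 - ll1)                # ~ chi-square with 1 dof under H0
    crit = 3.8415 if alpha == 0.05 else 6.6349   # 5% / 1% critical values
    return lr, lr < crit
```

For example, 9 violations in 250 trading days at the 95% level (about 12.5 expected) passes the test, whereas 30 violations fails it.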

    Element detection and segmentation of mathematical function graphs based on improved Mask R-CNN

    There are approximately 2.2 billion people worldwide with varying degrees of visual impairment. Individuals with severe visual impairments rely predominantly on hearing and touch to gather external information. At present, reading materials for the visually impaired are limited, mostly audio or text, and cannot satisfy their need to comprehend graphical content. Although many scholars have investigated methods for converting visual images into tactile graphics, tactile graphic translation still fails to meet reading needs because of the diversity of image types and the limitations of image recognition technology. The primary goal of this paper is to help the visually impaired gain a greater understanding of the natural sciences by transforming images of mathematical functions into an electronic format for producing tactile graphics. To enhance the accuracy and efficiency of element recognition and segmentation in function graphs, this paper proposes an MA Mask R-CNN model, which uses MA ConvNeXt, a novel feature extraction backbone proposed in this paper, and MA BiFPN, a novel feature fusion network also introduced here. The model combines local relations, global relations and channel information into an attention mechanism that establishes multiple connections, improving the detection capability of the original Mask R-CNN on slender and multi-type targets by combining a variety of multi-scale features. Experimental results show that MA Mask R-CNN attains 89.6% mAP for target detection and 72.3% mAP for target segmentation in the instance segmentation of function graphs, an improvement of 9% mAP for detection and 12.8% mAP for segmentation over the original Mask R-CNN.
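The mAP figures above are averages of per-class average precision (AP). As a point of reference, AP with all-point interpolation is the area under the monotonized precision-recall curve; a minimal sketch (not the paper's evaluation code):

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the precision-recall curve.
    recalls must be sorted ascending; both sequences have equal length."""
    # add sentinel endpoints at recall 0 and 1
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # make precision monotonically non-increasing from right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # accumulate area wherever recall increases
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```

For a detector reaching precision 1.0 at recall 0.5 and precision 0.5 at full recall, this yields an AP of 0.75; mAP is then the mean of such values over all element classes.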

    Coordination method for DC fault current suppression and clearance in DC grids

    The modular multilevel converter (MMC) based DC grid is considered a future solution for bulk renewable energy integration and transmission. However, the high probability of DC faults and their rapid propagation are the main challenges to the development of DC grids. Existing research focuses mainly on DC fault clearance, while fault current suppression remains under-researched; in addition, the coordination of suppression and clearance needs to be optimized. This paper studies the technical characteristics of fault current suppression methods and, on that basis, proposes coordinated methods for fault current suppression and clearance. Finally, a cost comparison of these methods is presented. The results show that the proposed strategies can reduce the cost of the protection equipment.

    Neighborhood Attribute Reduction: A Multicriterion Strategy Based on Sample Selection

    In the rough-set field, the objective of attribute reduction is to regulate the variation of measures by removing redundant attributes. However, most previous notions of attribute reduction were designed around one and only one measure, so the obtained reduct may fail to meet the constraints given by other measures. In addition, the widely used heuristic algorithm for computing a reduct requires scanning all samples in the data, so its time consumption may be unacceptably high for large data. To alleviate these problems, this paper proposes a framework of attribute reduction based on multiple criteria with sample selection. Firstly, cluster centroids are derived from the data, and samples far away from the centroids are selected; this sample-selection step reduces the data size. Secondly, a multiple-criteria attribute reduction is designed, and the heuristic algorithm is run over the selected samples to compute a reduct with respect to multiple criteria. Finally, experimental results over 12 UCI datasets show that the reducts obtained by our framework not only satisfy the constraints given by multiple criteria but also provide better classification performance and lower time consumption.
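The sample-selection step described above (derive centroids, keep samples far from them) can be sketched as follows. This is a toy illustration under assumed details (simple k-means with deterministic initialization, Euclidean distance), not the paper's algorithm:

```python
import numpy as np

def select_far_samples(X, n_clusters=2, keep_ratio=0.5, n_iter=10):
    """Toy sample selection: run a few k-means iterations, then keep the
    fraction of samples farthest from their nearest centroid."""
    X = np.asarray(X, dtype=float)
    centroids = X[:n_clusters].copy()       # simple deterministic init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)           # assign to nearest centroid
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    dist = np.linalg.norm(X - centroids[labels], axis=1)
    n_keep = max(1, int(keep_ratio * len(X)))
    return np.argsort(dist)[-n_keep:]       # indices of the farthest samples
```

The reduct search then runs only on the selected subset, which is where the reduction in time consumption comes from.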

    An Ensemble Framework to Forest Optimization Based Reduct Searching

    Essentially, solving an attribute reduction problem can be viewed as a reduct-searching process. Among the various searching strategies, meta-heuristic searching has received extensive attention. As an emerging meta-heuristic approach, the forest optimization algorithm (FOA) is introduced here for attribute reduction. To further improve the classification performance of the selected attributes, an ensemble framework is also developed: firstly, multiple reducts are obtained by FOA with data perturbation; the structure of these reducts is symmetrical, meaning no order exists among them. Secondly, the multiple reducts are used for voting classification over testing samples. Comprehensive experiments on over 20 UCI datasets validate the effectiveness of the framework: it not only yields reducts with superior classification accuracy and stability but is also suitable for pre-processing noisy data. This improvement allows the FOA to deliver greater benefits in data processing for life science, health, medical and other fields.
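The voting step of the ensemble framework can be sketched as follows: each reduct projects the data onto its attribute subset, a base classifier predicts on that projection, and the per-reduct predictions are combined by majority vote. The 1-NN base classifier here is an assumption for illustration, not necessarily the paper's choice:

```python
import numpy as np
from collections import Counter

def reduct_ensemble_predict(X_train, y_train, X_test, reducts):
    """Majority voting over per-reduct 1-NN classifiers (toy sketch).
    reducts: list of attribute-index lists, one per reduct."""
    votes = []
    for r in reducts:
        Xtr, Xte = X_train[:, r], X_test[:, r]
        # 1-NN prediction on the projected attribute subset
        d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
        votes.append(y_train[d.argmin(axis=1)])
    votes = np.array(votes)                 # shape (n_reducts, n_test)
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

Because no order exists among the reducts, the vote is symmetric in its inputs, which is the property the abstract highlights.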

    Ensemble and Quick Strategy for Searching Reduct: A Hybrid Mechanism

    Attribute reduction is commonly regarded as a key topic in rough set research. Concerning strategies for searching a reduct, although various heuristic forward greedy searches have been developed, most of them pursue one and only one characteristic related to the performance of the reduct. Nevertheless, a justifiable search should explicitly involve three main characteristics: (1) obtaining the reduct with low time consumption; (2) generating a reduct with high stability; (3) acquiring a reduct with competent classification ability. To fill this gap, a hybrid searching mechanism is designed that takes all three characteristics into account. The mechanism not only adopts multiple fitness functions to evaluate candidate attributes, but also queries the distance between attributes to determine whether two or more attributes can be added to the reduct simultaneously. The former helps derive reducts with higher stability and competent classification ability, and the latter contributes to lower time consumption. Comparisons with 5 state-of-the-art reduct-searching algorithms over 20 UCI data sets demonstrate the effectiveness of the new mechanism. This study suggests a new direction for attribute reduction that balances various characteristics.
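The two ingredients above (aggregating multiple fitness functions, and batching distant attributes into one round) can be sketched as a toy search loop. Everything here is an assumption for illustration: the aggregation rule, the distance threshold, and the stopping budget are not specified by the abstract:

```python
import numpy as np

def hybrid_reduct_search(scores, dist, threshold=0.5, budget=3):
    """Toy hybrid search: pick the best attribute by a combined fitness
    score, then also add any remaining attribute whose distance to every
    chosen attribute exceeds a threshold, so several attributes can
    enter the reduct in one round (fewer evaluation rounds overall).

    scores: (n_attrs, n_fitness) matrix, higher is better
    dist:   (n_attrs, n_attrs) symmetric attribute-distance matrix
    """
    combined = scores.sum(axis=1)           # aggregate multiple fitnesses
    remaining = set(range(len(combined)))
    reduct = []
    while remaining and len(reduct) < budget:
        best = max(remaining, key=lambda a: combined[a])
        batch = [best] + [a for a in sorted(remaining - {best})
                          if all(dist[a][b] > threshold
                                 for b in reduct + [best])]
        for a in batch[:budget - len(reduct)]:
            reduct.append(a)
            remaining.discard(a)
    return reduct
```

Adding mutually distant attributes together is what shortens the search, since each round would otherwise require re-evaluating every candidate.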

    Combined Accelerator for Attribute Reduction: A Sample Perspective

    In the field of neighborhood rough sets, attribute reduction is a key topic. The neighborhood relation and rough approximation play crucial roles in obtaining the reduct. Many strategies have been proposed to accelerate this process from the viewpoint of samples, but they speed it up only through the binary relation or the rough approximation alone, so the achievable savings in time consumption are not fully realized. To fill this gap, a combined acceleration strategy is proposed that compresses the scanning space of both the neighborhood and the lower approximation, aiming to further reduce the time needed to obtain the reduct. Experiments on 15 selected UCI data sets show the following: (1) the proposed approach significantly reduces the elapsed time of obtaining the reduct; (2) compared with previous approaches, the combined acceleration strategy does not change the resulting reduct. This research suggests a new trend of accelerating attribute reduction from multiple views.
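For context, the computation being accelerated is the neighborhood lower approximation: a sample belongs to the lower approximation of a decision class if its entire delta-neighborhood carries that class label. A naive, unaccelerated baseline might look like this (illustrative sketch; the paper's contribution is pruning exactly this kind of full scan, which is omitted here):

```python
import numpy as np

def neighborhood_lower_approx(X, y, target_label, delta=0.3):
    """Naive neighborhood lower approximation under Euclidean distance.
    Returns indices of samples whose whole delta-neighborhood has the
    target label. Scans every sample pair: O(n^2) distance evaluations."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    lower = []
    for i in range(len(X)):
        neigh = d[i] <= delta               # delta-neighborhood of sample i
        if np.all(y[neigh] == target_label):
            lower.append(i)
    return lower
```

Compressing the scanning space of both the neighborhood (which pairs are compared) and the lower approximation (which samples are tested) reduces this cost without changing the returned set, which matches finding (2) above.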

    Relationship and Accuracy Analyses of Variable Precision Multi-Granulation Rough Sets based on Tolerance Relation

    This paper thoroughly discusses the properties of the optimistic, pessimistic and basic approximations of rough sets in ordinary and variable precision multi-granulation models, together with the approximations produced by applying union and intersection operations to multi-property relations, and analyzes the relationships among them. It derives approximate accuracy formulas and establishes several inequalities describing the relationships among these accuracies. It proves that the approximation accuracy of incomplete variable precision multi-granulation rough sets based on a tolerance relation is higher than that of the non-variable-precision ones.
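For readers new to the terminology: in the standard multi-granulation definitions, an object enters the optimistic lower approximation if its equivalence class under at least one granulation fits inside the target set, and the pessimistic one if it fits under all granulations (approximation accuracy is then the ratio of lower- to upper-approximation sizes). A toy sketch of those two definitions, not the paper's variable-precision or tolerance-relation variants:

```python
def mg_lower_approx(universe, target, granulations, mode="optimistic"):
    """Multi-granulation lower approximation over equivalence partitions.
    granulations: list of functions mapping an object to its class label,
    each inducing one equivalence partition of the universe."""
    target = set(target)
    result = set()
    for x in universe:
        # does the class of x under granulation g fit inside the target?
        fits = [set(o for o in universe if g(o) == g(x)) <= target
                for g in granulations]
        if (mode == "optimistic" and any(fits)) or \
           (mode == "pessimistic" and all(fits)):
            result.add(x)
    return result
```

Because "any" is weaker than "all", the pessimistic lower approximation is always contained in the optimistic one, which is the kind of inequality the paper's accuracy analysis builds on.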

    Integrated GNSS attitude and position determination based on an affine constrained model

    Global Navigation Satellite System (GNSS) attitude determination and positioning play an important role in many navigation applications. However, the two problems are usually treated separately, which ignores the constraint information of the GNSS antenna array and limits the accuracy. To improve navigation performance, an integrated attitude and position determination method based on an affine constrained model is presented. First, the GNSS array model and the affine constrained attitude determination method are compared with unconstrained methods. Then the integrated attitude and position determination method is presented. Its performance is tested with a series of static and dynamic experimental GNSS data. The results show that the proposed method improves the success rate of ambiguity resolution, and thereby the accuracy of attitude determination and relative positioning, compared with unconstrained methods.
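As background on the affine idea: if the antenna-array baseline coordinates are known in the body frame, the measured baselines relate to them through the attitude matrix, and an affine model estimates that matrix by unconstrained least squares before any orthonormality is enforced. A minimal geometric sketch under those assumptions (noise-free baselines, no carrier-phase ambiguity resolution, which the paper's method handles and this omits):

```python
import numpy as np

def affine_attitude(B_meas, F_body):
    """Affine attitude estimate: solve B ≈ R F by unconstrained least
    squares (the 'affine' relaxation), then project onto the nearest
    rotation via SVD (an orthogonal Procrustes step).

    B_meas: 3 x n measured baselines in the global frame
    F_body: 3 x n known baselines in the body frame (n >= 3, not coplanar)
    Returns (affine estimate, projected rotation matrix)."""
    # least-squares solve of F^T R^T = B^T, i.e. no orthonormality imposed
    Rt, *_ = np.linalg.lstsq(F_body.T, B_meas.T, rcond=None)
    R_affine = Rt.T
    # project onto SO(3): nearest rotation in the Frobenius norm
    U, _, Vt = np.linalg.svd(R_affine)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return R_affine, U @ D @ Vt
```

With noisy measurements the affine estimate and the projected rotation differ, and exploiting the array constraint is what tightens the ambiguity search in the integrated method.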