6 research outputs found

    The Evolution of Internal Representation

    To develop an appropriate internal representation, a deterministic learning algorithm is proposed that adjusts not only the weights but also the number of hidden nodes. Its key mechanisms are (1) a recruiting mechanism that adds suitable extra hidden nodes, and (2) a reasoning mechanism that prunes potentially irrelevant hidden nodes. The algorithm can exploit external environmental cues to develop an internal representation appropriate for the required mapping. The encoding problem and the parity problem are used to demonstrate the performance of the proposed algorithm, and the experimental results are clearly positive.
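    The recruit-and-prune loop described in the abstract can be sketched as follows. Everything concrete here is an illustrative assumption, not the paper's actual design: the plateau threshold, learning rate, pruning tolerance, and the 2-bit parity (XOR) task stand in for the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-bit parity (XOR), one of the benchmark tasks named in the abstract
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GrowPruneNet:
    """One-hidden-layer net that can recruit and prune hidden nodes."""

    def __init__(self, n_in, n_hidden=1):
        self.W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def step(self, X, y, lr=1.0):
        # One full-batch gradient step on mean squared error
        out = self.forward(X)
        d2 = (out - y) * out * (1 - out)
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * self.h.T @ d2
        self.b2 -= lr * d2.sum(axis=0)
        self.W1 -= lr * X.T @ d1
        self.b1 -= lr * d1.sum(axis=0)
        return float(((out - y) ** 2).mean())

    def recruit(self):
        # Recruiting mechanism: append one hidden node with small weights
        self.W1 = np.hstack([self.W1, rng.normal(0.0, 0.5, (self.W1.shape[0], 1))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.vstack([self.W2, rng.normal(0.0, 0.5, (1, 1))])

    def prune(self, tol=1e-2):
        # Reasoning mechanism (simplified): drop hidden nodes whose
        # outgoing weight is negligible, i.e. potentially irrelevant nodes
        keep = np.abs(self.W2[:, 0]) > tol
        if keep.sum() >= 1:
            self.W1, self.b1, self.W2 = self.W1[:, keep], self.b1[keep], self.W2[keep]

net = GrowPruneNet(2, n_hidden=1)
prev, stall = np.inf, 0
for epoch in range(20000):
    loss = net.step(X, y)
    stall = stall + 1 if prev - loss < 1e-6 else 0
    prev = loss
    if stall > 200 and loss > 0.05:  # plateaued while still inaccurate
        net.recruit()
        stall = 0
net.prune()
print(net.W1.shape[1], round(prev, 4))
```

    Starting from a single hidden node (which cannot represent XOR), the loop recruits nodes when the loss plateaus above an accuracy target, then prunes any node whose output weight has collapsed toward zero.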

    The Mathematical Programming and the Rule Extraction from Layered Feed-forward Neural Networks

    We propose a mathematical programming methodology for identifying and examining regression rules extracted from layered feed-forward neural networks. The region described by each rule's premise is a convex polyhedron in the input space, and the approximation function adopted for the output value is a multivariate polynomial in x, the external stimulus input. Mathematical programming analysis, rather than data analysis, is proposed for identifying the convex polyhedron associated with each rule, and is further proposed for examining the extracted rules to explore their features. An implementation test on bond-pricing rule extraction lends support to the proposed methodology.
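    As a sketch of how mathematical programming, rather than data analysis, can characterize a rule's convex-polyhedron premise: the polyhedron {x : Ax ≤ b} below and the use of `scipy.optimize.linprog` are our assumptions for illustration, not the paper's formulation. The Chebyshev-centre linear program finds the largest ball inscribed in the premise region; a strictly positive radius certifies the rule covers a non-degenerate area of the input space.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical rule premise: the convex polyhedron {x : A x <= b} in a
# 2-D input space (the unit box cut by x1 + x2 <= 1.5)
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]], dtype=float)
b = np.array([1.0, 0.0, 1.0, 0.0, 1.5])

# Chebyshev-centre LP: maximise r subject to  a_i^T x + ||a_i|| r <= b_i.
# The optimal (x, r) is the centre and radius of the largest inscribed ball.
norms = np.linalg.norm(A, axis=1)
c = np.array([0.0, 0.0, -1.0])            # minimise -r  ==  maximise r
A_ub = np.hstack([A, norms[:, None]])
res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])

x_centre, r = res.x[:2], res.x[2]
print(res.status, x_centre, r)
```

    Here `r > 0`, so the hypothetical premise is non-degenerate and `x_centre` is a representative interior point at which the rule's polynomial consequent could be examined.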

    Information Pricing Factor or Computing Bias

    Prior studies note that floating-point exceptions (FPE) in computing may lead to underestimation of the probability of informed trading (PIN), a measure of interest to market microstructure empiricists. This study further finds that the FPE bias may also produce underestimated market-beta coefficients in asset-pricing regressions. In other words, the FPE bias may contaminate pricing tests and should be eliminated in future studies. After eliminating the FPE bias, we document that the PIN factor appears to be significant only during periods of medium market liquidity.

    The Rule Extraction from Multi-layer Feed-forward Neural Networks

    Neural networks have been successfully applied to a variety of problems, including classification and function approximation. They are especially useful for function approximation because they have been shown to be universal approximators. In the past, function approximation problems were analyzed mainly with linear tools, yet most such problems are inherently nonlinear, so the demand for nonlinear analysis tools is substantial. Since 1986, the neural network has been regarded as a black box: it is hard to judge whether a network's learning result is reasonable, and the network cannot effectively help users develop domain knowledge. A sound and effective analytic method for neural networks is therefore important. Here we propose such a method. It extracts rules from a neural network and analyzes them via linear programming, without depending on any data analysis; domain knowledge is then generalized from these rules via the sign test, a non-parametric statistical method. We take bond pricing as an instance to examine the feasibility of the proposed method. The results show that the rules extracted by our method are reasonable, and that most of the domain knowledge about bond pricing generalized from these rules is also reasonable.
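    The sign-test step used to generalize knowledge from the extracted rules can be sketched as follows; the rule signs and the tested input are hypothetical, standing in for whatever sensitivities the extracted bond-pricing rules actually report.

```python
from math import comb

# Hypothetical rule evidence: each extracted rule reports the sign of the
# bond price's local sensitivity to one input; 10 of 12 rules say "positive"
signs = [+1, +1, +1, +1, +1, -1, +1, +1, -1, +1, +1, +1]

n = len(signs)
k = sum(1 for s in signs if s > 0)

# Two-sided sign test of H0: P(positive) = 0.5, via the exact binomial tail
tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
p_value = min(1.0, 2 * tail)
print(k, n, round(p_value, 4))
```

    With 10 of 12 rules agreeing, the two-sided p-value is about 0.039, so at the 5% level one would generalize "this input raises the bond price" as a piece of domain knowledge; the sign test needs no distributional assumption about the magnitudes.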

    A computing bias in estimating the probability of informed trading

    This study identifies a factor that biases estimates of the probability of informed trading (PIN), a widely used microstructure measure. It is shown that, during the numerical maximization of the likelihood function for PIN, floating-point exceptions (i.e., overflow or underflow) may eliminate feasible solutions corresponding to the actual parameters of the optimization problem. Approximately 44% of PIN estimates for recent stock market data may have been subject to a downward bias that is more pronounced for active stocks than for inactive ones. This study develops a remedy to mitigate the resulting bias.
    Keywords: floating-point exception; informed trading; market microstructure
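    A standard remedy of this kind evaluates the likelihood in log space. The sketch below is our illustration, not necessarily the paper's exact factorization: the parameter values and order counts are hypothetical, and the three-regime mixture is the textbook PIN structure (no-event, bad-news, and good-news days). It shows the naive per-term factor exp(-(μ+εb)) underflowing to exactly zero, which silently zeroes that likelihood term, while the log-space evaluation stays finite.

```python
import math

def log_poisson(n, lam):
    # Poisson log-pmf, evaluated entirely in log space
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def logsumexp(xs):
    # Numerically stable log(sum(exp(x))) via the usual max-shift
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def day_loglik(B, S, alpha, delta, mu, eb, es):
    # Log-likelihood of one day's buy count B and sell count S under the
    # three-regime PIN mixture: no-event, bad-news, and good-news days
    terms = [
        math.log(1 - alpha)           + log_poisson(B, eb)      + log_poisson(S, es),
        math.log(alpha * delta)       + log_poisson(B, eb)      + log_poisson(S, es + mu),
        math.log(alpha * (1 - delta)) + log_poisson(B, eb + mu) + log_poisson(S, es),
    ]
    return logsumexp(terms)

# Illustrative order counts for an active stock (hypothetical numbers)
B, S = 4000, 3500
alpha, delta, mu, eb, es = 0.3, 0.5, 500.0, 3800.0, 3300.0

# Naively, each mixture term carries a factor like exp(-(mu + eb)), which
# underflows to exactly 0.0 and silently zeroes the whole likelihood term
naive_factor = math.exp(-(mu + eb))

stable = day_loglik(B, S, alpha, delta, mu, eb, es)
print(naive_factor, stable)
```

    Because the underflow is more severe the larger the arrival rates, the naive evaluation discards precisely the high-activity parameter region, which is consistent with the abstract's finding that the downward bias is stronger for active stocks.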

    Mechanochemistry: One Bond at a Time
