    Development of symbolic algorithms for certain algebraic processes

    This study investigates the problem of computing the exact greatest common divisor of two polynomials relative to an orthogonal basis, defined over the rational number field. The main objective of the study is to design and implement an effective and efficient symbolic algorithm for the general class of dense polynomials, given the rational number defining terms of their basis. From a general algorithm using the comrade matrix approach, nonmodular and modular techniques are prescribed. If the coefficients of the generalized polynomials are multiprecision integers, multiprecision arithmetic is required in the construction of the comrade matrix and the corresponding system's coefficient matrix. In addition, applying the nonmodular elimination technique to this coefficient matrix makes extensive use of multiprecision rational number operations. The modular technique is employed to minimize the complexity of such computations. A divisor test algorithm that enables the detection of an unlucky reduction is a crucial device for an effective implementation of the modular technique; since the bound of the true solution is not known a priori, the test is devised and carefully incorporated into the modular algorithm. The results show that the modular algorithm performs best for the class of relatively prime polynomials. The empirical computing time results show that the modular algorithm is markedly superior to the nonmodular algorithms for sufficiently dense Legendre basis polynomials with a small GCD solution; for dense Legendre basis polynomials with a large GCD solution, the modular algorithm is significantly superior at higher polynomial degrees. For more definitive conclusions, the computing time functions of the algorithms presented in this report have been worked out. Further investigations have also been suggested.
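
    To make the setting concrete, here is a minimal sketch of the problem statement only, not the comrade-matrix or modular algorithms of the study: it builds Legendre basis polynomials with exact rational coefficients via SymPy (an assumed tool; the study's own implementation is not reproduced) and computes their GCD after conversion to the monomial basis.

        # Minimal sketch: exact GCD over Q of polynomials given by their
        # Legendre-basis coefficients, via conversion to the monomial basis.
        # Illustrative only -- NOT the study's comrade-matrix algorithm.
        from sympy import symbols, legendre, gcd, Rational, expand

        x = symbols('x')

        def from_legendre(coeffs):
            # Build sum_i coeffs[i] * P_i(x), expanded in the monomial basis,
            # using exact rational arithmetic throughout.
            return expand(sum(Rational(c) * legendre(i, x)
                              for i, c in enumerate(coeffs)))

        common = from_legendre([1, 1])                 # P_0 + P_1 = 1 + x
        a = expand(common * from_legendre([0, 0, 1]))  # multiple of 1 + x
        b = expand(common * from_legendre([2, 3]))     # multiple of 1 + x
        print(gcd(a, b, x))                            # -> x + 1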

    An improved functional link neural network for data classification

    The goal of classification is to assign a pre-specified group or class to an instance based on the observed features of that instance. Implementing many classification models is challenging, as some only work well when their underlying assumptions are satisfied. Because they can generate the complex mappings between input and output space needed to build arbitrarily complex non-linear decision boundaries, neural networks have become a prominent tool with a wide range of applications. Recent techniques such as the Multilayer Perceptron (MLP), the standard Functional Link Neural Network (FLNN) and the Chebyshev Functional Link Neural Network (CFLNN) have outperformed existing methods including regression, multiple regression, quadratic regression, stepwise polynomials, K-nearest neighbor (K-NN), the Naïve Bayesian classifier and logistic regression. This research work explores the insufficiencies of the well-known CFLNN model, whose functional expansion uses polynomials of large degree and large coefficient values for input enhancement, which increases the computational complexity of the network. Accordingly, two alternative models are proposed: the Genocchi Functional Link Neural Network (GFLNN) and the Chebyshev Wavelets Functional Link Neural Network (CWFLNN). The novelty of these approaches is that GFLNN uses functional expansions with lower degree and small coefficient values, making the expanded inputs computationally cheaper to train on and overcoming the drawbacks of CFLNN, while CWFLNN can generate a larger number of basis functions with small coefficient values at the same polynomial degree and, owing to its orthonormality condition, has more accurate expansion constants and can approximate functions within the interval; these properties of CWFLNN are used to overcome the deficiencies of GFLNN. The significance of the proposed models is verified with statistical tests, namely the Friedman test based on accuracy ranking and a pairwise comparison test, with MLP, standard FLNN and CFLNN used for comparison. The experiments use benchmark data sets from the UCI repository, the SVMLIB data set and the KEEL data sets. CWFLNN shows significant improvement in classification accuracy (due to its ability to generate more basis functions) and reduces the computational work.
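
    For orientation, the sketch below shows the generic FLNN recipe that all of these variants share: expand each input feature with a fixed polynomial family (here ordinary Chebyshev polynomials; the Genocchi and Chebyshev-wavelet expansions of the thesis are not reproduced) and train a single linear layer on the expanded inputs. The data, degree and learning rate are illustrative assumptions.

        # Minimal FLNN sketch: Chebyshev functional expansion of the inputs
        # followed by one trainable linear layer with a sigmoid output.
        import numpy as np

        def chebyshev_expand(X, degree=3):
            # T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}, applied
            # elementwise to features assumed scaled into [-1, 1].
            feats = [np.ones_like(X), X]
            for _ in range(2, degree + 1):
                feats.append(2 * X * feats[-1] - feats[-2])
            return np.hstack(feats)   # (n_samples, n_features*(degree+1))

        def train_flnn(X, y, degree=3, lr=0.5, epochs=500):
            # The "functional link": no hidden layer, only expanded inputs.
            Z = chebyshev_expand(X, degree)
            w = np.zeros(Z.shape[1])
            for _ in range(epochs):
                p = 1.0 / (1.0 + np.exp(-(Z @ w)))   # sigmoid predictions
                w += lr * Z.T @ (y - p) / len(y)     # log-loss gradient step
            return w

        # Toy usage: two Gaussian blobs labelled 0 and 1.
        rng = np.random.default_rng(0)
        X = np.clip(np.vstack([rng.normal(-0.5, 0.2, (50, 2)),
                               rng.normal(0.5, 0.2, (50, 2))]), -1.0, 1.0)
        y = np.repeat([0.0, 1.0], 50)
        w = train_flnn(X, y)
        print(np.mean((chebyshev_expand(X) @ w > 0) == (y == 1)))  # accuracy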

    Shallow Circuits with High-Powered Inputs

    A polynomial identity testing algorithm must determine whether an input polynomial (given, for instance, by an arithmetic circuit) is identically equal to 0. In this paper, we show that a deterministic black-box identity testing algorithm for (high-degree) univariate polynomials would imply a lower bound on the arithmetic complexity of the permanent. The lower bounds that are known to follow from derandomization of (low-degree) multivariate identity testing are weaker. To obtain our lower bound it would be sufficient to derandomize identity testing for polynomials of a very specific form: sums of products of sparse polynomials with sparse coefficients. This observation leads to new versions of the Shub-Smale tau-conjecture on integer roots of univariate polynomials. In particular, we show that a lower bound for the permanent would follow if one could give a good enough bound on the number of real roots of sums of products of sparse polynomials (Descartes' rule of signs gives such a bound for sparse polynomials and products thereof). In this third version of our paper we show that the same lower bound would follow even if one could only prove a slightly superpolynomial upper bound on the number of real roots. This is a consequence of a new result on reduction to depth 4 for arithmetic circuits which we establish in a companion paper. We also show that an even weaker bound on the number of real roots would suffice to obtain a lower bound on the size of depth 4 circuits computing the permanent. Comment: A few typos corrected.
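
    For context, here is a minimal sketch of the standard randomized black-box identity test that results like these aim to derandomize; it is background material, not a construction from the paper. A nonzero univariate polynomial of degree at most d has at most d roots, so a random evaluation point drawn from a much larger range exposes a nonzero polynomial with high probability.

        # Randomized black-box univariate identity testing (background, not
        # from the paper): evaluate at random points; a nonzero polynomial
        # of degree <= d vanishes on at most d of the 100*d + 1 candidate
        # points, so each trial errs with probability below 1/100.
        import random

        def is_identically_zero(black_box, degree_bound, trials=20):
            for _ in range(trials):
                r = random.randrange(100 * degree_bound + 1)
                if black_box(r) != 0:
                    return False   # witness found: definitely nonzero
            return True            # identically zero with high probability

        # Hypothetical black box: (x+1)^5 - (x^2+2x+1)(x+1)^3, which is
        # identically zero since x^2 + 2x + 1 = (x+1)^2.
        f = lambda x: (x + 1)**5 - (x*x + 2*x + 1) * (x + 1)**3
        print(is_identically_zero(f, degree_bound=5))   # True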

    Polynomials that Sign Represent Parity and Descartes' Rule of Signs

    A real polynomial $P(X_1,\dots,X_n)$ sign represents $f: A^n \to \{0,1\}$ if for every $(a_1,\dots,a_n) \in A^n$, the sign of $P(a_1,\dots,a_n)$ equals $(-1)^{f(a_1,\dots,a_n)}$. Such sign representations are well-studied in computer science and have applications to computational complexity and computational learning theory. In this work, we present a systematic study of tradeoffs between degree and sparsity of sign representations through the lens of the parity function. We attempt to prove bounds that hold for any choice of set $A$. We show that sign representing parity over $\{0,\dots,m-1\}^n$ with degree at most $m-1$ in each variable requires sparsity at least $m^n$. We show that a tradeoff exists between sparsity and degree by exhibiting a sign representation that has higher degree but lower sparsity. We show a lower bound of $n(m-2)+1$ on the sparsity of polynomials of any degree representing parity over $\{0,\dots,m-1\}^n$. We prove exact bounds on the sparsity of such polynomials for any two-element subset $A$. The main tool used is Descartes' Rule of Signs, a classical result in algebra relating the sparsity of a polynomial to its number of real roots. As an application, we use bounds on sparsity to derive circuit lower bounds for depth-two AND-OR-NOT circuits with a threshold gate at the top. We use this to give a simple proof that such circuits need size $1.5^n$ to compute parity, which improves the previous bound of $(4/3)^{n/2}$ due to Goldmann (1997). We show a tight lower bound of $2^n$ for the inner product function over $\{0,1\}^n \times \{0,1\}^n$. Comment: To appear in Computational Complexity.
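
    To make the main tool concrete, the sketch below states Descartes' Rule of Signs as a simple coefficient count (an illustration, not the paper's proofs): the number of positive real roots of a real polynomial is at most the number of sign changes in its sequence of nonzero coefficients, so a polynomial with $t$ nonzero terms has at most $t-1$ positive roots.

        # Descartes' Rule of Signs: count sign changes in the nonzero
        # coefficients (listed from lowest to highest degree); this bounds
        # the number of positive real roots, counted with multiplicity.
        def descartes_bound(coeffs):
            signs = [c > 0 for c in coeffs if c != 0]
            return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

        # x^3 - x = x(x - 1)(x + 1): one sign change (-1 -> +1), so at most
        # one positive real root -- and indeed x = 1 is the only one.
        print(descartes_bound([0, -1, 0, 1]))   # -> 1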