
    Involutive Bases Algorithm Incorporating F5 Criterion

    Faugère's F5 algorithm is the fastest known algorithm for computing Gröbner bases. Its signature-based, incremental structure makes it possible to apply the F5 criterion for eliminating unnecessary reductions. In this paper, we present an involutive completion algorithm that outputs a minimal involutive basis. Our completion algorithm has a non-incremental structure and, in addition to the involutive form of Buchberger's criteria, applies the F5 criterion whenever it is applicable in the course of completion to involution. In doing so, we use the G2V form of the F5 criterion developed by Gao, Guan and Volny IV. To compare the proposed algorithm, via a set of benchmarks, with the Gerdt-Blinkov involutive algorithm (which does not apply the F5 criterion), we use implementations of both algorithms written on the same platform in Maple. Comment: 24 pages, 2 figures
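For context, a Gröbner basis for a small polynomial system can be computed with SymPy's `groebner`. This is only a generic illustration of the objects both algorithms compute; it is not the involutive completion algorithm of the paper, and the example system is chosen arbitrarily:

```python
# Generic illustration: computing a Groebner basis with SymPy, using the
# graded reverse lexicographic order. Not the paper's involutive algorithm.
from sympy import groebner, symbols

x, y, z = symbols('x y z')
# A small symmetric polynomial system (illustrative only).
G = groebner([x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1],
             x, y, z, order='grevlex')
print(list(G.exprs))  # the reduced Groebner basis of the ideal
```

Any reduced Gröbner basis of this zero-dimensional ideal in three variables must contain at least three polynomials, one reason such bases are useful for solving.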

    Solving Polynomial Systems over Finite Fields: Improved Analysis of the Hybrid Approach

    The Polynomial System Solving (PoSSo) problem is a fundamental NP-hard problem in computer algebra. Among others, PoSSo has applications in areas such as coding theory and cryptology. Typically, the security of multivariate public-key schemes (MPKC) such as the UOV cryptosystem of Kipnis, Shamir and Patarin is directly related to the hardness of PoSSo over finite fields. The goal of this paper is to further understand the influence of finite fields on the hardness of PoSSo. To this end, we consider the so-called hybrid approach, a polynomial system solving method dedicated to finite fields proposed by Bettale, Faugère and Perret (Journal of Mathematical Cryptology, 2009). The idea is to combine exhaustive search with Gröbner bases. The efficiency of the hybrid approach depends on the choice of a trade-off between the two methods. We propose here an improved complexity analysis dedicated to quadratic systems. Whilst the principle of the hybrid approach is simple, its careful analysis leads to rather surprising and somewhat unexpected results. We prove that the optimal trade-off (i.e. the number of variables to be fixed) minimizing the complexity is achieved by fixing a number of variables proportional to the number of variables of the system, denoted n. Under a natural algebraic assumption, we show that the asymptotic complexity of the hybrid approach is 2^{n(3.31 − 3.62 log_2(q)^{-1})}, where q is the size of the field (under the condition in particular that log(q) ≥ 2). We have also been able to quantify the gain provided by the hybrid approach compared to a direct Gröbner basis method. For quadratic systems, we show (under a natural algebraic assumption) that this gain is exponential in the number of variables. Asymptotically, the gain is 2^{1.49n} when both n and q grow to infinity and log(q) << n.
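The hybrid principle described above — fix k of the n variables by exhaustive search, then solve each smaller residual system — can be sketched in Python. In this toy sketch the inner Gröbner-basis solver is replaced by brute force, and the function name and example system are illustrative, not from the paper:

```python
# Toy sketch of the hybrid approach's structure: enumerate values for k fixed
# variables over GF(q), then solve the remaining system. Brute force stands in
# for the Groebner-basis inner solver; all names here are illustrative.
from itertools import product

def solve_hybrid(polys, n, q, k):
    """polys: callables taking a length-n point over GF(q); a point is a
    solution when every polynomial vanishes mod q."""
    sols = []
    for prefix in product(range(q), repeat=k):          # exhaustive-search part
        for suffix in product(range(q), repeat=n - k):  # stand-in inner solver
            point = prefix + suffix
            if all(p(point) % q == 0 for p in polys):
                sols.append(point)
    return sols

# Example: a small quadratic system over GF(2).
polys = [lambda v: v[0] * v[1] + v[2],         # x*y + z = 0
         lambda v: v[0] + v[1] + v[2] + 1]     # x + y + z + 1 = 0
print(solve_hybrid(polys, n=3, q=2, k=1))      # → [(0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

The trade-off analyzed in the paper is the choice of k: larger k means more residual systems, but each is easier for the algebraic solver.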

    Development of an Algorithm with Improved Relevance of Vector Coordinate Localization for Intelligent Sensors

    There are sensors of vector quantities whose field characteristics are described by quadric equations. These sensors offer improved sensitivity and smaller dimensions, the "payment" for which is the non-linearity of their field characteristics. To use such sensors, one has to solve a system of three quadric equations. Given the labor-intensity of this process, the sensors are designed to be intelligent: the finished device includes microcontrollers or other units able to process measurement results. Such devices have limited software and hardware capacities, which necessitates developing algorithms and implementations within these constraints. Classic algorithms for solving systems of polynomial equations are not appropriate because they violate the requirement of minimal resource consumption. The search for solutions of systems of quadric equations is carried out in two stages: first, numerical regions that potentially contain intersections are localized, and then accurate solutions are sought in these regions by numerical methods. The success of the numerical methods depends on the quality of the localization of solutions, yet the means of localization are insufficiently developed. Earlier, an algorithm based on interval arithmetic was developed whose implementation on a microcontroller of the ARM Cortex-M4 architecture proved able to find all, without exception, regions containing the sought solutions of a system of equations; however, in addition to the "useful" regions, this algorithm also finds regions that do not actually contain solutions. This leads to time wasted searching for exact solutions in regions where none exist. Thus, there was a need to search for alternative approaches to the localization of solutions of systems of quadric equations.
It is natural to base such methods on the properties of quadrics in particular and of continuously differentiable functions in general. The foundation of the proposed algorithm is the fact that a function on a closed region attains its maximum and minimum values either on the boundary or at critical points. If the closed region is a rectangular parallelepiped, its boundary consists of its six faces, the boundaries of the faces are its edges, and the boundaries of the edges are the vertices of the parallelepiped. On the faces of the parallelepiped, a function of three variables reduces to a function of two variables; on the edges, to functions of one variable. In the case of quadrics, finding the critical points of the function on the edges comes down to solving a linear equation, finding the critical points on the faces to solving a system of two linear equations in two unknowns, and finding the interior critical point to solving a system of three linear equations. Therefore, it is sufficient to check the signs of the function at the vertices of the rectangular parallelepiped and at those critical points of the function that belong to the examined region. If the function has the same sign at all these points, then there is no point inside at which the function takes the value 0. Thus, checking the signs of all the functions that form the left-hand sides of the quadric equations allows us to "reject" the regions that cannot contain any intersection points. Instead of storing the values of the functions (real numbers), it suffices to keep their signs (one bit each), which reduces RAM consumption. Tests proved that the proposed algorithm is suitable for implementation in firmware and provides a higher relevance of the found regions than the algorithm-analogue.
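The sign test described above can be sketched in Python/NumPy on a desktop (this is an illustrative sketch, not the paper's firmware implementation). The quadric is assumed to be given as f(p) = pᵀAp + b·p + c with A symmetric, and the function name and example are hypothetical:

```python
# Illustrative sketch of the sign-based box-rejection test for one quadric.
# Assumptions: quadric f(p) = p@A@p + b@p + c with A symmetric; box [lo, hi].
import numpy as np
from itertools import product

def may_contain_zero(A, b, c, lo, hi):
    """Evaluate f at the box vertices and at every critical point of f
    restricted to the box's edges, faces, and interior that lies in the box.
    If all signs agree, the box cannot contain a zero of f."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    f = lambda p: p @ A @ p + b @ p + c
    n = len(lo)
    pts = [np.array(v) for v in product(*zip(lo, hi))]  # the 2^n vertices
    # Choose which coordinates are fixed at a bound (None = free); the free
    # coordinates of a critical point solve the linear system grad f = 0.
    for choice in product([None, 0, 1], repeat=n):
        free = [i for i in range(n) if choice[i] is None]
        if not free:
            continue  # all coordinates fixed: vertices already collected
        fixed = [i for i in range(n) if choice[i] is not None]
        vals = np.array([lo[i] if choice[i] == 0 else hi[i] for i in fixed])
        M = 2.0 * A[np.ix_(free, free)]
        rhs = -(b[free] + 2.0 * A[np.ix_(free, fixed)] @ vals)
        try:
            x = np.linalg.solve(M, rhs)
        except np.linalg.LinAlgError:
            continue  # degenerate piece: no isolated critical point
        p = np.empty(n)
        p[fixed], p[free] = vals, x
        if np.all(p >= lo) and np.all(p <= hi):
            pts.append(p)
    signs = [np.sign(f(p)) for p in pts]
    return not (all(s > 0 for s in signs) or all(s < 0 for s in signs))

# Example: the unit sphere x^2 + y^2 + z^2 - 1 = 0.
A, b, c = np.eye(3), np.zeros(3), -1.0
print(may_contain_zero(A, b, c, [2, 2, 2], [3, 3, 3]))              # → False
print(may_contain_zero(A, b, c, [0.5, 0.5, 0.5], [1.5, 1.5, 1.5]))  # → True
```

A box is rejected for the whole system only if at least one of the three quadric functions keeps a constant sign over it, which matches the "reject" step described above.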
The increase in relevance is explained by the fact that interval arithmetic always yields overstated estimates, because the minimum admissible values are taken as the lower bounds of intervals and the maximum admissible values as the upper bounds. Checking the signs of the functions at selected points is free from this overestimation. The new algorithm somewhat increases program-memory footprint and execution time; however, these costs are compensated for by the subsequent search for exact solutions over a smaller number of irrelevant regions.
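The overestimation inherent in interval arithmetic can be seen in a minimal sketch (illustrative code, not the analogue algorithm's implementation): evaluating f(x) = x·x − x over [0, 1] treats the two occurrences of x as independent, the classic dependency problem:

```python
# Minimal interval arithmetic, enough to show the dependency problem.
def imul(a, b):
    """Product of two intervals (lo, hi)."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def isub(a, b):
    """Difference of two intervals (lo, hi)."""
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print(isub(imul(x, x), x))  # → (-1.0, 1.0)
# The true range of x*x - x on [0, 1] is [-0.25, 0.0]: the interval bound is
# much wider, so interval tests keep boxes that contain no actual zero.
```

Point evaluations of the sign, by contrast, are exact up to rounding, which is why the sign test rejects more irrelevant regions.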

    Biologically Relevant Classes of Boolean Functions

    Get PDF
    A large influx of experimental data has prompted the development of innovative computational techniques for modeling and reverse engineering biological networks. While finite dynamical systems, in particular Boolean networks, have gained attention as relevant models of network dynamics, not all Boolean functions reflect the behaviors of real biological systems. In this work, we focus on two classes of Boolean functions and study their applicability as biologically relevant network models: the nested and partially nested canalyzing functions. We begin by analyzing the nested canalyzing functions (NCFs), which have been proposed as gene regulatory network models due to their stability properties. We introduce two biologically motivated measures of network stability, the average height and average cycle length in the state space graph, and show that, on average, networks comprised of NCFs are more stable than general Boolean networks. Next, we introduce the partially nested canalyzing functions (PNCFs), a generalization of the NCFs, and the nested canalyzing depth, which measures the extent to which a function retains a nested canalyzing structure. We characterize the structure of functions with a given depth and compute the expected activities and sensitivities of the variables. This analysis quantifies how canalyzation leads to higher stability in Boolean networks. We find that functions become less sensitive to input perturbations as the canalyzing depth increases, but exhibit rapidly diminishing returns in stability. Additionally, we show that as depth increases, the dynamics of networks using these functions quickly approach the critical regime, suggesting that real networks exhibit some degree of canalyzing depth, and that NCFs are not significantly better than PNCFs of sufficient depth for many applications to biological networks. Finally, we propose a method for the reverse engineering of networks of PNCFs using techniques from computational algebra.
Given discretized time series data, this method finds a network model using PNCFs. Our ability to use these functions in reverse engineering applications further establishes their relevance as biological network models
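As a hedged illustration of the sensitivity notion discussed above (not the authors' code), the average sensitivity of a small Boolean function can be computed by exhaustive enumeration; the example functions are illustrative, with a nested canalyzing function compared against parity, the maximally sensitive case:

```python
# Average sensitivity of a Boolean function f on n variables: the mean, over
# all inputs, of the number of single-bit flips that change f's output.
from itertools import product

def avg_sensitivity(f, n):
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip one input bit
            total += f(x) != f(tuple(y))
    return total / 2 ** n

# x0 AND (x1 OR x2) is nested canalyzing: x0=0 forces output 0, then x1=1
# forces output 1, then x2 alone decides. Parity has no canalyzing variable.
ncf = lambda x: x[0] and (x[1] or x[2])
xor = lambda x: x[0] ^ x[1] ^ x[2]
print(avg_sensitivity(ncf, 3), avg_sensitivity(xor, 3))  # → 1.25 3.0
```

The canalyzing function changes output on far fewer bit flips than parity, a small instance of the stability advantage the abstract describes.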