
    Computing the homology of basic semialgebraic sets in weak exponential time

    We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of basic semialgebraic sets which works in weak exponential time. That is, outside a set of exponentially small measure in the space of data, the cost of the algorithm is exponential in the size of the data. All algorithms previously proposed for this problem have doubly exponential complexity (and this is so for almost all data).
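
    The notion of weak complexity invoked here can be made precise along the following lines. This is a hedged formalization consistent with the abstract's description; the exact exponents and the measure used are assumptions, not taken from the paper.

        % Weak exponential time, informally: there is an exceptional set
        % \Sigma of data of exponentially small relative measure, outside
        % of which the cost is singly exponential in the data size N.
        % (The exact exponents below are assumptions, not from the paper.)
        \[
        \frac{\operatorname{vol}(\Sigma)}{\operatorname{vol}(\mathcal{D})} \le 2^{-cN}
        \qquad\text{and}\qquad
        \operatorname{cost}(a) \le 2^{N^{O(1)}}
        \quad\text{for all } a \in \mathcal{D}\setminus\Sigma,
        \]
        % in contrast with the doubly exponential bound 2^{2^{N^{O(1)}}}
        % attained by the previously proposed algorithms on almost all data.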

    Non-acyclicity of coset lattices and generation of finite groups


    Hybrid Symbolic-Numeric Computing in Linear and Polynomial Algebra

    In this thesis, we introduce hybrid symbolic-numeric methods for solving problems in linear and polynomial algebra. We mainly address the approximate GCD problem for polynomials, and problems related to parametric and polynomial matrices. For symbolic methods, our main concern is their complexity, and for the numerical methods we are more concerned about their stability. The thesis consists of five articles, presented in the following order. Chapter 1 deals with the fundamental notions of conditioning and backward error. Although our results are not novel, this chapter is a novel explication of conditioning and backward error that underpins the rest of the thesis. In Chapter 2, we adapt Victor Y. Pan's root-based algorithm for finding approximate GCD to the case where the polynomials are expressed in Bernstein bases. We use the numerically stable companion pencil of G. F. Jónsson to compute the roots, and the Hopcroft-Karp bipartite matching method to find the degree of the approximate GCD. We offer some refinements to improve the process. In Chapter 3, we give an algorithm, based on a similar idea to that of Chapter 2, which finds an approximate GCD for a pair of approximate polynomials given in a Lagrange basis. More precisely, we suppose that these polynomials are given by their approximate values at distinct known points. We first find the roots of each polynomial by using a Lagrange basis companion matrix. We introduce new clustering algorithms and use them to cluster the roots of each polynomial to identify multiple roots, and then marry the two polynomials using a Maximum Weight Matching (MWM) algorithm to find their GCD. In Chapter 4, we define "generalized standard triples" (X, zC1 - C0, Y) of regular matrix polynomials P(z) in order to use the representation X(zC1 - C0)^(-1) Y = P(z)^(-1). This representation can be used in constructing algebraic linearizations; for example, for H(z) = z A(z)B(z) + C from linearizations for A(z) and B(z). This can be done even if A(z) and B(z) are expressed in differing polynomial bases. Our main theorem is that X can be expressed using the coefficients of the expression of 1 in terms of the relevant polynomial basis. For convenience we tabulate generalized standard triples for orthogonal polynomial bases, the monomial basis, and Newton interpolational bases; for the Bernstein basis; for Lagrange interpolational bases; and for Hermite interpolational bases. We account for the possibility of common similarity transformations. We give explicit proofs for the less familiar bases. Chapter 5 is devoted to parametric linear systems (PLS) and related problems, from a symbolic computational point of view. PLS are linear systems of equations in which some symbolic parameters, that is, symbols that are not considered to be candidates for elimination or solution in the course of analyzing the problem, appear in the coefficients of the system. We assume that the symbolic parameters appear polynomially in the coefficients and that the only variables to be solved for are those of the linear system. It is well known that it is possible to specify a covering set of regimes, each of which is a semi-algebraic condition on the parameters together with a solution description valid under that condition. We provide a method of solution that requires time polynomial in the matrix dimension and the degrees of the polynomials when there are up to three parameters. Our approach exploits the Hermite and Smith normal forms that may be computed when the system coefficient domain is mapped to the univariate polynomial domain over suitably constructed fields. Our approach effectively identifies intrinsic singularities and ramification points where the algebraic and geometric structure of the matrix changes. Certain parametric eigenvalue problems can be addressed as well. Although we do not directly address the problem of computing the Jordan form, our approach allows the construction of the algebraic and geometric eigenvalue multiplicities revealed by the Frobenius form, which is a key step in the construction of the Jordan form of a matrix.
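
    The root-matching step of Chapters 2 and 3 can be sketched in a few lines. The sketch below is an illustration under simplifying assumptions, not the thesis's algorithm: roots come from numpy.roots in the monomial basis (rather than a Bernstein or Lagrange companion pencil), scipy's assignment solver stands in for the maximum-weight-matching step, and the closeness threshold tol is an assumed parameter.

        # Sketch: estimate the degree of an approximate GCD of p and q
        # by matching their root sets.  Simplified stand-in for the
        # thesis's method (see caveats in the text above).
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def approx_gcd_degree(p, q, tol=1e-6):
            """p, q: monomial coefficient arrays, highest degree first."""
            rp, rq = np.roots(p), np.roots(q)
            dist = np.abs(rp[:, None] - rq[None, :])  # pairwise root distances
            rows, cols = linear_sum_assignment(dist)  # min-cost = max-weight matching
            # Roots matched within tol are attributed to the common factor.
            return int(np.count_nonzero(dist[rows, cols] < tol))

        # (x - 1)(x - 2) and (x - 1)(x - 3) share exactly one root:
        p, q = np.poly([1.0, 2.0]), np.poly([1.0, 3.0])
        print(approx_gcd_degree(p, q))  # -> 1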
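
    As a point of reference for the representation in Chapter 4, the familiar standard triple for the first companion linearization in the monomial basis is shown below; the block layout is one common convention and is not claimed to match the thesis's tables.

        % Standard triple for the first companion linearization of
        % P(z) = A_k z^k + ... + A_1 z + A_0 in the monomial basis:
        \[
        C_1 = \begin{pmatrix} I & & \\ & \ddots & \\ & & A_k \end{pmatrix},
        \qquad
        C_0 = \begin{pmatrix}
        0 & I & & \\
          & \ddots & \ddots & \\
          & & 0 & I \\
        -A_0 & -A_1 & \cdots & -A_{k-1}
        \end{pmatrix},
        \]
        \[
        X = \begin{pmatrix} I & 0 & \cdots & 0 \end{pmatrix},
        \qquad
        Y = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ I \end{pmatrix},
        \qquad
        P(z)^{-1} = X\,(zC_1 - C_0)^{-1}\,Y .
        \]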

    Affinity-Based Reinforcement Learning: A New Paradigm for Agent Interpretability

    The steady increase in complexity of reinforcement learning (RL) algorithms is accompanied by a corresponding increase in opacity that obfuscates insights into their devised strategies. Methods in explainable artificial intelligence seek to mitigate this opacity by either creating transparent algorithms or extracting explanations post hoc. A third category exists that allows the developer to affect what agents learn: constrained RL has been used in safety-critical applications and prohibits agents from visiting certain states; preference-based RL agents have been used in robotics applications and learn state-action preferences instead of traditional reward functions. We propose a new affinity-based RL paradigm in which agents learn strategies that are partially decoupled from reward functions. Unlike entropy regularisation, we regularise the objective function with a distinct action distribution that represents a desired behaviour; we encourage the agent to act according to a prior while learning to maximise rewards. The result is an inherently interpretable agent that solves problems with an intrinsic affinity for certain actions. We demonstrate the utility of our method in a financial application: we learn continuous time-variant compositions of prototypical policies, each interpretable by its action affinities, that are globally interpretable according to customers’ financial personalities. Our method combines advantages from both constrained RL and preference-based RL: it retains the reward function but generalises the policy to match a defined behaviour, thus avoiding problems such as reward shaping and hacking. Unlike Boolean task composition, our method is a fuzzy superposition of different prototypical strategies to arrive at a more complex, yet interpretable, strategy.
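
    One plausible way to write the affinity-regularised objective described above is a KL penalty that pulls the learned policy toward a fixed action prior; this exact form is an assumption for illustration, not quoted from the paper.

        % Assumed instantiation: pi is the learned policy, rho a fixed
        % action prior encoding the desired behaviour, lambda a weight.
        \[
        J(\pi) \;=\; \mathbb{E}_{\pi}\Big[\textstyle\sum_{t}\gamma^{t}\, r(s_t, a_t)\Big]
        \;-\; \lambda\,\mathbb{E}_{s}\Big[ D_{\mathrm{KL}}\big(\pi(\cdot\mid s)\,\big\|\,\rho(\cdot\mid s)\big)\Big],
        \]
        % which recovers entropy regularisation when rho is uniform, and
        % plain reward maximisation as lambda -> 0.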

    Towards Improved Homomorphic Encryption for Privacy-Preserving Deep Learning

    Deep Learning (DL) has brought about a remarkable transformation for many fields, heralded by some as a new technological revolution. The advent of large-scale models has increased the demands for data and computing platforms, for which cloud computing has become the go-to solution. However, the permeability of DL and cloud computing is reduced in privacy-enforcing areas that deal with sensitive data. These areas imperatively call for privacy-enhancing technologies that enable responsible, ethical, and privacy-compliant use of data in potentially hostile environments. To this end, the cryptography community has addressed these concerns with what are known as Privacy-Preserving Computation Techniques (PPCTs), a set of tools that enable privacy-enhancing protocols where cleartext access to information is no longer tenable. Of these techniques, Homomorphic Encryption (HE) stands out for its ability to perform operations over encrypted data without compromising data confidentiality or privacy. However, despite its promise, HE is still a relatively nascent solution with efficiency and usability limitations. Improving the efficiency of HE has been a longstanding challenge in the field of cryptography, and with improvements the complexity of the techniques has increased, especially for non-experts. In this thesis, we address the problem of the complexity of HE when applied to DL. We begin by systematizing existing knowledge in the field through an in-depth analysis of the state of the art for privacy-preserving deep learning, identifying key trends, research gaps, and issues associated with current approaches. One such identified gap lies in the necessity of using vectorized algorithms with Packed Homomorphic Encryption (PaHE), a state-of-the-art technique to reduce the overhead of HE in complex areas. This thesis comprehensively analyzes existing algorithms and proposes new ones for using DL with PaHE, presenting a formal analysis and usage guidelines for their implementation. Parameter selection for HE schemes is another recurring challenge in the literature, given that it plays a critical role in determining not only the security of the instantiation but also the precision, performance, and degree of security of the scheme. To address this challenge, this thesis proposes a novel system combining fuzzy logic with linear programming tasks to produce secure parametrizations based on high-level user input arguments without requiring low-level knowledge of the underlying primitives. Finally, this thesis describes HEFactory, a symbolic execution compiler designed to streamline the process of producing HE code and integrating it with Python. HEFactory implements the previous proposals presented in this thesis in an easy-to-use tool. It provides a unique architecture that layers the challenges associated with HE and produces simplified operations interpretable by low-level HE libraries. HEFactory significantly reduces the overall complexity of coding DL applications with HE, achieving an 80% length reduction relative to expert-written code while maintaining equivalent accuracy and efficiency.
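
    To make the packed (SIMD) encryption the thesis builds on concrete: the sketch below encrypts a whole vector into one CKKS ciphertext and evaluates a dot product without decrypting. It uses the open-source TenSEAL library rather than HEFactory (whose API is not shown in the abstract), and the parameter values are illustrative assumptions, not recommended choices.

        # Packed homomorphic encryption (PaHE) illustration with TenSEAL.
        import tenseal as ts

        # poly_modulus_degree and the coefficient-modulus chain jointly
        # determine security, precision, and multiplicative depth -- the
        # parameter-selection problem the thesis automates.  Values here
        # are assumptions for illustration only.
        context = ts.context(
            ts.SCHEME_TYPE.CKKS,
            poly_modulus_degree=8192,
            coeff_mod_bit_sizes=[60, 40, 40, 60],
        )
        context.global_scale = 2 ** 40
        context.generate_galois_keys()  # rotations used inside dot()

        weights = [0.25, -0.5, 1.0, 0.125]                      # cleartext weights
        enc_x = ts.ckks_vector(context, [1.0, 2.0, 3.0, 4.0])   # packed input

        enc_y = enc_x.dot(weights)  # encrypted dot product, one ciphertext
        print(enc_y.decrypt())      # approximately [2.75]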

    An Investigation of Students' Use and Understanding of Evaluation Strategies

    One expected outcome of physics instruction is that students develop quantitative reasoning skills, including evaluation of problem solutions. To investigate students’ use of evaluation strategies, we developed and administered tasks prompting students to check the validity of a given expression. We collected written (N > 673) and interview (N = 31) data at the introductory, sophomore, and junior levels. Tasks were administered in three different physics contexts: the velocity of a block at the bottom of an incline with friction, the electric field due to three point charges of equal magnitude, and the final velocities of two masses in an elastic collision. Responses were analyzed using modified grounded theory and phenomenology. In these three contexts, we explored different facets of students’ use and understanding of evaluation strategies. First, we document and analyze the various evaluation strategies students use when prompted, comparing them to canonical strategies. Second, we describe how the identified strategies relate to prior work, with particular emphasis on how a strategy we describe as grouping relates to the phenomenon of chunking as described in cognitive science. Finally, we examine how the prevalence of these strategies varies across different levels of the physics curriculum. From our quantitative data, we found that while all the surveyed student populations drew from the same set of evaluation strategies, the percentage of students who used sophisticated evaluation strategies was higher in the sophomore and junior/senior populations than in the first-year population. From our case studies of two pair interviews (one pair of first-years and one pair of juniors), we found that while evaluating an expression, both juniors and first-years performed similar actions. However, while the first-year students focused on computation and checked for arithmetic consistency with the laws of physics, juniors checked for computational correctness and probed whether the equation accurately described the physical world and obeyed the laws of physics. Our case studies suggest that a key difference between expert and novice evaluation is that experts extract physical meaning from their results and make sense of them by comparing them to other representations of the laws of physics and to real-life experience. We conclude with remarks including implications for classroom instruction as well as suggestions for future work.
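
    As a worked instance of one evaluation strategy studied here, consider the incline context named above. The expression is the standard textbook result for a block sliding a distance d down an incline at angle theta with friction coefficient mu, shown only to exemplify limiting-case analysis; the tasks' exact expressions are not given in the abstract.

        % Standard result for the speed at the bottom of the incline:
        \[
        v = \sqrt{2 g d\,(\sin\theta - \mu\cos\theta)}
        \]
        % Limiting-case checks a student might run:
        \[
        \mu = 0 \;\Rightarrow\; v = \sqrt{2 g d \sin\theta},
        \qquad
        \theta = \tfrac{\pi}{2} \;\Rightarrow\; v = \sqrt{2 g d},
        \qquad
        \mu \to \tan\theta \;\Rightarrow\; v \to 0,
        \]
        % i.e. the frictionless result, free fall through a height d,
        % and a block on the verge of not sliding at all.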