176 research outputs found

    Using Clustering in a Cognitive Tutor to Identify Mathematical Misconceptions

    We have implemented an Intelligent Tutoring System (ITS) prototype for teaching multi-column addition and subtraction to children aged 5-10, using a digitalized version of the Montessori bank game exercises. An Intelligent Tutoring System is a piece of software that teaches a certain subject to its users and typically uses artificial-intelligence-related algorithms to personalize the educational process. Our Intelligent Tutoring System focuses on collecting erroneous input from the user and analyzing it with an experimental clustering algorithm in order to find common misconceptions. The system is based on the assumption that if there are many similar user errors, they might correspond to a misconception. To find which errors are “similar”, we use clustering. An ITS like this could support teaching by making students aware of their misconceptions so that they can overcome them. Normally, ITSs use bug libraries to systematize misconception handling. A bug library is a collection of information about possible errors that can be used to help identify these errors when they are encountered. Creating bug libraries takes a lot of effort, and if they could be avoided, a typical ITS implementation would take considerably less time. While we found that we could identify some misconceptions of a computer player, the clustering approach needs to be generalized further to enable effective application to human learners. We conclude that if this approach were explored in more detail, it could prove to be a viable alternative to the bug library.
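    The clustering step can be pictured with a small, hedged sketch: each erroneous answer is turned into a feature vector (here, a hypothetical per-column deviation encoding) and a density-based clustering pass flags groups of similar errors as candidate misconceptions. The encoding, the use of scikit-learn's DBSCAN, and the toy data are assumptions for illustration, not the prototype's actual pipeline.

```python
# Hedged sketch (not the thesis's actual pipeline): encode each erroneous
# answer as a per-digit-column deviation vector and cluster the errors;
# dense clusters are candidate misconceptions.
import numpy as np
from sklearn.cluster import DBSCAN

def error_features(correct: int, given: int) -> list:
    """Hypothetical per-error features: signed deviation in each digit column."""
    c, g = f"{correct:04d}", f"{given:04d}"
    return [int(gd) - int(cd) for cd, gd in zip(c, g)]

# Toy data: (correct answer, student answer) pairs; repeated similar
# deviations suggest a shared misconception.
observations = [(4123, 4121), (532, 538), (4123, 4121), (871, 879), (532, 538)]
X = np.array([error_features(c, g) for c, g in observations])

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X)
for lbl in set(labels) - {-1}:
    count = int(np.sum(labels == lbl))
    print(f"cluster {lbl}: {count} similar errors -> possible misconception")
```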

    Optimization and inference under fuzzy numerical constraints

    Extensive research has been done in the areas of Constraint Satisfaction with discrete/integer and real domain ranges. Multiple platforms and systems to deal with these kinds of domains have been developed and appropriately optimized. Nevertheless, due to the incomplete and possibly vague nature of real-life problems, modeling a crisp and adequately strict satisfaction problem may not always be easy or even appropriate. The problem of modeling incomplete knowledge, or of solving an incomplete/relaxed representation of a problem, is a much harder issue to tackle. Additionally, practical modeling requirements and search optimizations require specific domain knowledge in order to be implemented, making the creation of a more generic optimization framework an even harder problem. In this thesis, we will study the problem of modeling and utilizing incomplete and fuzzy constraints, as well as possible optimization strategies. As constraint satisfaction problems usually contain hard-coded constraints based on specific problem and domain knowledge, we will investigate whether strategies and generic heuristics exist for inferring new constraint rules. Additional rules could optimize the search process by implementing stricter constraints and thus pruning the search space, or even provide useful insight to the researcher concerning the nature of the investigated problem.
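    As a rough illustration of what a fuzzy numerical constraint looks like in practice, the sketch below models a soft bound "x should be roughly <= b" as a piecewise-linear membership function and combines several such constraints with the min t-norm to score candidate assignments. The formulation, function names, and numbers are illustrative assumptions, not the framework developed in the thesis.

```python
# Minimal sketch (assumed formulation, not the thesis's system): a fuzzy
# constraint "value should be roughly <= bound" as a membership function;
# the overall satisfaction of an assignment is the minimum (t-norm) of the
# individual constraint degrees.
def roughly_leq(x: float, bound: float, tolerance: float) -> float:
    """Degree in [0, 1] to which x satisfies 'x <= bound', degrading
    linearly up to bound + tolerance instead of failing crisply."""
    if x <= bound:
        return 1.0
    if x >= bound + tolerance:
        return 0.0
    return 1.0 - (x - bound) / tolerance

def satisfaction(x: float, y: float) -> float:
    # Two illustrative fuzzy constraints combined with the min t-norm.
    return min(roughly_leq(x + y, 10, 3), roughly_leq(y - x, 2, 1))

# Pick, among a few candidate assignments, the one with the highest degree.
candidates = [(4, 6), (5, 7), (6, 8)]
best = max(candidates, key=lambda xy: satisfaction(*xy))
print(best, satisfaction(*best))
```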

    Fuzzy Linear Programming in DSS for Energy System Planning

    Energy system planning requires the use of planning tools. The mathematical models of real-world energy systems are usually multiperiod linear optimization programs. In these models, the objective function describes the total discounted costs of covering the demand for final energy or energy services. The demand for various forms of energy or energy services is the driving force of the models. By using such linear programming (LP) formulations, decision makers can elaborate suitable strategies for solving their planning problems, such as the development of emission reduction strategies. Uncertainties that affect the process of energy system planning can be divided into parameter and decision uncertainties. Data or parameter uncertainties can be addressed either by stochastic optimization or by the methodology of fuzzy linear programming (FLP). In addition, FLP allows explicit incorporation of decision uncertainties into a mathematical model. This paper therefore aims at evaluating the methodology of FLP with respect to the support it offers to the decision-making process in energy system planning under uncertainty. Drawing on the parallels between multi-objective linear programming (MOLP) and FLP, problems of FLP in decision support system applications are pointed out and solutions are offered. The proposed modifications are based on the methodology of aspiration-reservation based decision support and still enable modeling of uncertainties in a fuzzy sense. A case study is documented to show the application of the modified FLP approach.
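    To make the FLP idea concrete, the following sketch shows the classical Zimmermann-style reformulation, in which soft "approximately less than" constraints receive tolerances and a common satisfaction degree lambda is maximized with an ordinary LP solver. The coefficients, tolerances, and the use of scipy.optimize.linprog are illustrative assumptions; they do not reproduce the paper's modified, aspiration-reservation based approach.

```python
# Illustrative Zimmermann-style fuzzy LP: each soft constraint "a.x <~ b"
# gets a tolerance p, and a common satisfaction degree lambda is maximized
# via an ordinary LP in the variables (x1, x2, lam).
import numpy as np
from scipy.optimize import linprog

# Toy "capacity" constraints with tolerances, plus one crisp demand constraint:
#   2*x1 +   x2 <~ 10   (tolerance 2)
#     x1 + 3*x2 <~ 15   (tolerance 3)
#     x1 +   x2 >= 8    (crisp demand)
A_ub = np.array([
    [2.0, 1.0, 2.0],    # 2*x1 + x2 + 2*lam <= 10 + 2
    [1.0, 3.0, 3.0],    #   x1 + 3*x2 + 3*lam <= 15 + 3
    [-1.0, -1.0, 0.0],  #   x1 + x2 >= 8  ->  -x1 - x2 <= -8
])
b_ub = np.array([12.0, 18.0, -8.0])

c = np.array([0.0, 0.0, -1.0])           # maximize lam == minimize -lam
bounds = [(0, None), (0, None), (0, 1)]  # x1, x2 >= 0 ; lam in [0, 1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, lam = res.x
print(f"plan: x1={x1:.2f}, x2={x2:.2f}, satisfaction degree lambda={lam:.2f}")
```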

    An expert system applied to earthmoving operations and equipment selection

    The thesis represents an effort to assess the current and future development of expert systems relating to civil engineering problems. It describes the development and evaluation of an Expert System (ESEMPS) that is capable of advising on earth allocation and plant selection for road construction in a manner similar to that of a domain expert. [Continues.]
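    A rule-based advisor of this kind can be pictured with a small, hedged sketch: a handful of hypothetical if-then rules map job characteristics (haul distance, soil type, quantity) to candidate plant. The rules, thresholds, and equipment names are invented for illustration and are not taken from ESEMPS.

```python
# Minimal rule-based sketch of equipment selection
# (hypothetical rules, not ESEMPS's actual knowledge base).
from dataclasses import dataclass

@dataclass
class Job:
    haul_distance_m: float  # one-way haul distance
    quantity_m3: float      # volume of earth to move
    soil: str               # e.g. "rock", "clay", "sand"

def advise(job: Job) -> list:
    """Return candidate plant suggested by simple if-then rules."""
    advice = []
    if job.soil == "rock":
        advice.append("ripper-equipped dozer or drill-and-blast before excavation")
    if job.haul_distance_m <= 100:
        advice.append("dozer push")
    elif job.haul_distance_m <= 1500:
        advice.append("motor scrapers")
    else:
        advice.append("excavator with a fleet of dump trucks")
    if job.quantity_m3 > 500_000:
        advice.append("consider a second loading unit to balance the fleet")
    return advice

print(advise(Job(haul_distance_m=2500, quantity_m3=750_000, soil="clay")))
```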

    The Second Hungarian Workshop on Image Analysis: Budapest, June 7-9, 1988.


    Advanced models for simulating dwarf galaxy formation and evolution


    Towards Improved Homomorphic Encryption for Privacy-Preserving Deep Learning

    Deep Learning (DL) has brought about a remarkable transformation for many fields, heralded by some as a new technological revolution. The advent of large-scale models has increased the demands for data and computing platforms, for which cloud computing has become the go-to solution. However, the permeability of DL and cloud computing is reduced in privacy-enforcing areas that deal with sensitive data. These areas imperatively call for privacy-enhancing technologies that enable responsible, ethical, and privacy-compliant use of data in potentially hostile environments. To this end, the cryptography community has addressed these concerns with what is known as Privacy-Preserving Computation Techniques (PPCTs), a set of tools that enable privacy-enhancing protocols where cleartext access to information is no longer tenable. Of these techniques, Homomorphic Encryption (HE) stands out for its ability to perform operations over encrypted data without compromising data confidentiality or privacy. However, despite its promise, HE is still a relatively nascent solution with efficiency and usability limitations. Improving the efficiency of HE has been a longstanding challenge in the field of cryptography, and with improvements, the complexity of the techniques has increased, especially for non-experts. In this thesis, we address the problem of the complexity of HE when applied to DL. We begin by systematizing existing knowledge in the field through an in-depth analysis of the state of the art for privacy-preserving deep learning, identifying key trends, research gaps, and issues associated with current approaches. One such identified gap lies in the necessity of using vectorized algorithms with Packed Homomorphic Encryption (PaHE), a state-of-the-art technique to reduce the overhead of HE in complex areas. This thesis comprehensively analyzes existing algorithms and proposes new ones for using DL with PaHE, presenting a formal analysis and usage guidelines for their implementation. Parameter selection of HE schemes is another recurring challenge in the literature, given that it plays a critical role in determining not only the security of the instantiation but also the precision, performance, and security level of the scheme. To address this challenge, this thesis proposes a novel system combining fuzzy logic with linear programming tasks to produce secure parametrizations based on high-level user input arguments without requiring low-level knowledge of the underlying primitives. Finally, this thesis describes HEFactory, a symbolic execution compiler designed to streamline the process of producing HE code and integrating it with Python. HEFactory implements the previous proposals presented in this thesis in an easy-to-use tool. It provides a unique architecture that layers the challenges associated with HE and produces simplified operations interpretable by low-level HE libraries. HEFactory significantly reduces the overall complexity of coding DL applications with HE, resulting in an 80% length reduction from expert-written code while maintaining equivalent accuracy and efficiency.
    International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee chair: María Isabel González Vasco; Secretary: David Arroyo Guardeño; Member: Antonis Michala
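    The vectorized-algorithm gap mentioned above can be illustrated with the rotate-and-sum pattern that packed HE schemes rely on for inner products. The sketch below simulates it with plain NumPy arrays standing in for ciphertext slots, slot-wise operations, and Galois rotations; it is an assumption-laden illustration of the access pattern, not HEFactory code and not a call into any real HE library.

```python
# Sketch of the "rotate-and-sum" pattern behind vectorized (packed) HE dot
# products, simulated with NumPy instead of a real HE library: ciphertexts
# are stand-in slot vectors, and only slot-wise add/multiply and cyclic
# rotation are used, mirroring what packed schemes such as CKKS expose.
import numpy as np

SLOTS = 8  # power-of-two slot count, as in typical packed HE parameters

def rotate(v: np.ndarray, k: int) -> np.ndarray:
    """Cyclic left rotation of the packed slots (a Galois rotation in real HE)."""
    return np.roll(v, -k)

def packed_dot(x: np.ndarray, w: np.ndarray) -> float:
    """Dot product using one slot-wise multiply and log2(SLOTS) rotations."""
    acc = x * w                    # slot-wise product (one ciphertext multiply)
    k = SLOTS // 2
    while k >= 1:
        acc = acc + rotate(acc, k)  # fold the upper half onto the lower half
        k //= 2
    return float(acc[0])            # slot 0 now holds the full inner product

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])
w = np.array([0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0])
assert abs(packed_dot(x, w) - float(x @ w)) < 1e-9
print(packed_dot(x, w))  # 5.0
```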

    Reliable visual analytics, a prerequisite for outcome assessment of engineering systems

    Various evaluation approaches exist for multi-purpose visual analytics (VA) frameworks. They are based on empirical studies in information visualization or on community activities, for example the VA Science and Technology Challenge (2006-2014), created as a community evaluation resource to “decide upon the right metrics to use, and the appropriate implementation of those metrics including datasets and evaluators”. In this paper, we propose to use evaluated VA environments for computer-based processes or systems, with the main goal of aligning user plans, system models and software results. For this purpose, trust in the VA outcome should be established, which can be done by following the (meta-)design principles of a human-centered verification and validation assessment, and also depending on users’ task models and interaction styles, since the possibility to work with the visualization interactively is an integral part of VA. To define reliable VA, we point out various dimensions of reliability along with their quality criteria, requirements, attributes and metrics. Several software packages are used to illustrate the concepts.

    Advances in Methodology and Applications of Decision Support Systems

    These Proceedings are composed of a selection of papers from the Workshop on Advances in Methodology and Applications of Decision Support Systems, organized by the System and Decision Sciences (SDS) Program of IIASA and the Japan Institute of Systems Research (JISR). The workshop was held at IIASA on August 20-22, 1990. The Methodology of Decision Analysis (MDA) Project of the SDS Program focuses on a system-analytical approach to decision support and is devoted to developing methodology, software and applications of decision support systems, concentrated primarily around interactive systems for data analysis, interpretation and multiobjective decision making, including uncertainty analysis and group decision-making situations in both their cooperative and noncooperative aspects. The objectives of the research on decision support systems (DSS) performed in cooperation with the MDA Project are to: compare various approaches to decision support systems; advance the theory and methodology of decision support; and convert existing theories and methodologies into usable (simple to use, user-friendly and robust) tools that can easily be applied to real-life problems. A principal characteristic of decision support systems is that they must be tuned to specific decision situations, to the complex real-life characteristics of every application. Even if the theory and methodology of decision support is quite advanced, every application might provide impulses for further theoretical and methodological advances. Therefore, the principle underlying this project is that theoretical and methodological research should be strongly connected to the implementation and application of its results to sufficiently complicated, real-life examples. This approach results in genuinely applicable working tools for decision support. The papers in these Proceedings have been selected according to the research framework summarized above. Therefore, the papers deal both with theoretical and methodological problems and with real-life applications.

    The 10th Jubilee Conference of PhD Students in Computer Science
