
    Contribuciones a la Seguridad del Aprendizaje Automático (Contributions to the Security of Machine Learning)

    Unpublished thesis, Universidad Complutense de Madrid, Facultad de Ciencias Matemáticas, defended on 05-11-2020.

    Machine learning (ML) applications have experienced unprecedented growth over the last two decades. However, the ever-increasing adoption of ML methodologies has revealed important security issues. Among these, vulnerabilities to adversarial examples, data instances crafted to fool ML algorithms, are especially important. Examples abound: it is relatively easy to fool a spam detector simply by misspelling spam words; obfuscation of malware code can make it seem legitimate; and simply adding stickers to a stop sign could make an autonomous vehicle classify it as a merge sign. The consequences could be catastrophic. ML is designed to work in stationary and benign environments; however, in certain scenarios, the presence of adversaries that actively manipulate input data to fool ML systems for their own benefit breaks such stationarity requirements. Training and operation conditions are no longer identical, breaking one of the fundamental assumptions of ML. This creates a whole new class of security vulnerabilities that ML systems may face, and a new desirable property: adversarial robustness. If we are to trust operations based on ML outputs, it becomes essential that learning systems are robust to such adversarial manipulations...
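    To make the notion of an adversarial example concrete, the sketch below perturbs the input of a toy linear "spam detector" in the direction of the score's gradient sign (an FGSM-style evasion). The model, weights, features, and perturbation size are hypothetical illustrations, not material from the thesis.

        # Toy FGSM-style evasion of a linear "spam detector".
        # All weights, features and epsilon are hypothetical illustrations.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        w = np.array([2.0, -1.0, 0.5])   # hypothetical trained weights
        b = -0.25                        # hypothetical bias
        x = np.array([0.6, 0.2, 0.3])    # a message correctly scored as spam

        p_clean = sigmoid(w @ x + b)     # P(spam | x), about 0.71

        # The gradient of the spam score w.r.t. x is p*(1-p)*w, so its sign
        # is sign(w); the attacker steps against it to lower the score.
        eps = 0.3
        x_adv = x - eps * np.sign(w)

        p_adv = sigmoid(w @ x_adv + b)   # about 0.46: now classified as legitimate
        print(f"clean: {p_clean:.2f}  adversarial: {p_adv:.2f}")

    A small, bounded change to each feature is enough to move the input across the decision boundary, which is the stationarity-breaking behaviour the abstract describes.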

    Project scheduling under uncertainty – survey and research potentials

    The vast majority of the research efforts in project scheduling assume complete information about the scheduling problem to be solved and a static deterministic environment within which the pre-computed baseline schedule will be executed. However, in the real world, project activities are subject to considerable uncertainty that is gradually resolved during project execution. In this survey we review the fundamental approaches for scheduling under uncertainty: reactive scheduling, stochastic project scheduling, stochastic GERT network scheduling, fuzzy project scheduling, robust (proactive) scheduling, and sensitivity analysis. We discuss the potential of these approaches for scheduling projects under uncertainty.

    Keywords: Management; Project management; Robustness; Scheduling; Stability
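    As a rough, self-contained illustration of one of the approaches the survey lists (stochastic project scheduling), the sketch below estimates a makespan distribution by Monte Carlo simulation: activity durations are sampled from assumed triangular distributions and an early-start forward pass is run over a tiny precedence network. The four-activity project and all numbers are hypothetical, not data from the survey.

        # Monte Carlo estimate of project makespan under uncertain durations.
        # The activity network and triangular distributions are hypothetical.
        import random

        # activity -> (predecessors, (min, mode, max) duration in days)
        project = {
            "A": ([], (2, 3, 6)),
            "B": (["A"], (1, 2, 4)),
            "C": (["A"], (3, 5, 9)),
            "D": (["B", "C"], (1, 1, 3)),
        }

        def sample_makespan(project):
            """One early-start forward pass with sampled durations."""
            finish = {}
            for act, (preds, (lo, mode, hi)) in project.items():  # topological order
                start = max((finish[p] for p in preds), default=0.0)
                finish[act] = start + random.triangular(lo, hi, mode)
            return max(finish.values())

        samples = sorted(sample_makespan(project) for _ in range(10_000))
        print(f"mean makespan:   {sum(samples) / len(samples):.2f} days")
        print(f"90th percentile: {samples[int(0.9 * len(samples))]:.2f} days")

    A deterministic baseline schedule would report a single critical-path length; the sampled distribution instead exposes how much slack a proactive (robust) schedule would need to buffer against duration uncertainty.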

    Learning and Reacting with Inaccurate Prediction: Applications to Autonomous Excavation

    Motivated by autonomous excavation, this work investigates solutions to a class of problems where disturbance prediction is critical to overcoming the poor performance of a feedback controller, but where the disturbance prediction is intrinsically inaccurate. Poor feedback controller performance is related to a fundamental control problem: there is only a limited amount of disturbance rejection that feedback compensation can provide. It is known, however, that predictive action can improve the disturbance rejection of a control system beyond the limitations of feedback. While prediction is desirable, the problem in excavation is that disturbance predictions are prone to error due to the variability and complexity of soil-tool interaction forces. This work proposes the use of iterative learning control to map the repetitive components of excavation forces into feedforward commands. Although feedforward action proves useful for improving excavation performance, the non-repetitive nature of soil-tool interaction forces is a source of inaccurate predictions. To explicitly address the use of imperfect predictive compensation, a disturbance observer is used to estimate the prediction error. To quantify inaccuracy in prediction, a feedforward model of excavation disturbances is interpreted as a communication channel that transmits corrupted disturbance previews, for which metrics based on the sensitivity function exist. During field trials the proposed method demonstrated the ability to iteratively achieve a desired dig geometry, independent of the initial feasibility of the excavation passes in relation to actuator saturation. Predictive commands adapted to different soil conditions, and passes were repeated autonomously until a pre-specified finish quality of the trench was achieved. Evidence of improvement in disturbance rejection is presented as a comparison of the sensitivity functions of systems with and without predictive disturbance compensation.
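    For readers unfamiliar with iterative learning control, the sketch below shows the generic P-type pass-to-pass update u_{k+1} = u_k + gamma * e_k on a toy first-order plant with a repetitive disturbance: the feedforward command is refined from the previous pass's tracking error until the repeatable part of the disturbance is absorbed. The plant, gains, and disturbance are assumptions for illustration only and are not the excavator dynamics or the disturbance-observer scheme used in the paper.

        # P-type iterative learning control sketch: u_ff is refined pass after
        # pass from the previous pass's tracking error. Plant, gains and the
        # "soil" disturbance are hypothetical, not the paper's excavator model.
        import numpy as np

        T, dt = 200, 0.01
        t = np.arange(T) * dt
        ref = np.ones(T)                     # desired profile (e.g., dig depth)
        dist = 0.5 * np.sin(2 * np.pi * t)   # repetitive disturbance (soil forces)

        kp, gamma, a = 2.0, 2.0, 5.0         # feedback gain, learning gain, plant pole
        u_ff = np.zeros(T)                   # feedforward command learned over passes

        for k in range(8):                   # repeated excavation passes
            y = np.zeros(T)
            e = np.zeros(T)
            for i in range(T - 1):
                e[i] = ref[i] - y[i]
                u = kp * e[i] + u_ff[i]                           # feedback + feedforward
                y[i + 1] = y[i] + dt * (-a * y[i] + u - dist[i])  # first-order plant
            e[-1] = ref[-1] - y[-1]
            u_ff += gamma * e                # ILC update: absorb the repetitive error
            print(f"pass {k}: RMS tracking error = {np.sqrt(np.mean(e ** 2)):.4f}")

    The printed RMS error shrinks from pass to pass for the repeatable components; any non-repetitive disturbance would remain, which is the gap the paper's disturbance observer and sensitivity-function analysis are meant to address.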