
    A Sliding Mode Multimodel Control for a Sensorless Photovoltaic System

    In this work, we present a new control approach for a sensorless photovoltaic panel that combines sliding mode multimodel control with a nonlinear sliding mode observer, both widely used in tracking problems. The panel system takes as its set point the sun position at every second of the day over a period of five years; the tracker, using the sliding mode multimodel controller and the sliding mode observer, follows these positions so that the sun rays remain orthogonal to the photovoltaic cell, which then produces more energy. After sunset, the tracker returns to its initial position (that of sunrise). Experimental measurements show that this autonomous dual-axis sun tracker increases power production by over 40%.
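
    To illustrate the kind of tracking law involved, the sketch below implements a basic single-axis sliding mode controller driving a double-integrator model of a tracker axis toward a reference sun angle. The plant model, gains (lam, k), and the function simulate_smc are illustrative assumptions, not the paper's multimodel controller or observer.

```python
import numpy as np

# Minimal single-axis sliding mode tracking sketch (illustrative only):
# the tracker axis is modeled as a double integrator theta_ddot = u and
# driven toward a reference sun angle theta_ref.
def simulate_smc(theta_ref, dt=0.01, steps=2000, lam=2.0, k=5.0):
    theta, omega = 0.0, 0.0               # axis angle and angular velocity
    history = []
    for _ in range(steps):
        e = theta - theta_ref             # tracking error
        e_dot = omega                     # reference assumed slowly varying
        s = e_dot + lam * e               # sliding surface s = e_dot + lam*e
        u = -lam * e_dot - k * np.sign(s) # reaching law; sign term causes chattering
        omega += u * dt                   # Euler step of the double integrator
        theta += omega * dt
        history.append(theta)
    return np.array(history)

if __name__ == "__main__":
    traj = simulate_smc(theta_ref=np.deg2rad(45.0))
    print(f"final angle: {np.rad2deg(traj[-1]):.2f} deg")  # converges near 45 deg
```

    With this choice of control, the surface dynamics satisfy s_dot = -k*sign(s), so s reaches zero in finite time, after which the error decays exponentially with rate lam; a real implementation would typically smooth the sign term to reduce chattering.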

    Testing Feedforward Neural Networks Training Programs

    Nowadays, we are witnessing an increasing effort to improve the performance and trustworthiness of Deep Neural Networks (DNNs), with the aim of enabling their adoption in safety-critical systems such as self-driving cars. Multiple testing techniques have been proposed to generate test cases that can expose inconsistencies in the behavior of DNN models. These techniques implicitly assume that the training program is bug-free and appropriately configured. However, satisfying this assumption for a novel problem requires significant engineering work to prepare the data, design the DNN, implement the training program, and tune the hyperparameters in order to produce the model in which current automated test data generators search for corner-case behaviors. All of these model training steps can be error-prone. Therefore, it is crucial to detect and correct errors throughout all the engineering steps of DNN-based software systems, and not only in the resulting DNN model. In this paper, we gather a catalog of training issues and, based on their symptoms and their effects on the behavior of the training program, we propose practical verification routines that detect these issues automatically by continuously validating that certain important properties of the learning dynamics hold during training. We then design TheDeepChecker, an end-to-end property-based debugging approach for DNN training programs. We assess the effectiveness of TheDeepChecker on synthetic and real-world buggy DL programs and compare it with Amazon SageMaker Debugger (SMD). Results show that TheDeepChecker's on-execution validation of DNN program properties succeeds in revealing several coding bugs and system misconfigurations, early on and at a low cost. Moreover, TheDeepChecker outperforms SMD's offline rule verification on training logs in terms of detection accuracy and DL bug coverage.
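
    The sketch below conveys the flavor of such on-execution property checks: monitoring a training loop for non-finite values, vanishing or exploding gradients, and a loss that stops decreasing. The function name, thresholds, and windowing are assumptions made for illustration, not TheDeepChecker's actual interface.

```python
import numpy as np

# Illustrative property checks run during training (framework-agnostic).
# params and grads are lists of numpy arrays; loss_history is a list of floats.
def check_training_step(step, loss_history, params, grads,
                        window=50, explosion_thresh=1e3):
    issues = []
    # Property 1: loss and parameters must stay finite (no NaN/Inf).
    if not np.isfinite(loss_history[-1]):
        issues.append(f"step {step}: non-finite loss")
    if any(not np.all(np.isfinite(p)) for p in params):
        issues.append(f"step {step}: non-finite parameter values")
    # Property 2: gradients should neither vanish nor explode.
    grad_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if grad_norm < 1e-8:
        issues.append(f"step {step}: vanishing gradients (norm={grad_norm:.2e})")
    elif grad_norm > explosion_thresh:
        issues.append(f"step {step}: exploding gradients (norm={grad_norm:.2e})")
    # Property 3: the loss should trend downward over a sliding window.
    if len(loss_history) >= 2 * window:
        recent = np.mean(loss_history[-window:])
        earlier = np.mean(loss_history[-2 * window:-window])
        if recent >= earlier:
            issues.append(f"step {step}: loss not decreasing over last {window} steps")
    return issues
```

    In use, such a routine would be called once per training step (or every few steps) and its findings logged or raised, so that misconfigurations surface long before the final model is handed to a test data generator.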

    TFCheck : A TensorFlow Library for Detecting Training Issues in Neural Network Programs

    The increasing inclusion of Machine Learning (ML) models in safety-critical systems such as autonomous cars has led to the development of multiple model-based ML testing techniques. One common denominator of these techniques is their assumption that training programs are adequate and bug-free. They focus only on assessing the performance of the constructed model using manually labeled or automatically generated data. However, their assumptions about the training program are not always true, as training programs can contain inconsistencies and bugs. In this paper, we examine training issues in ML programs and propose a catalog of verification routines that can be used to detect the identified issues automatically. We implemented these routines in a TensorFlow-based library named TFCheck, with which practitioners can detect the aforementioned issues automatically. To assess the effectiveness of TFCheck, we conducted a case study with real-world, mutant, and synthetic training programs. Results show that TFCheck can successfully detect training issues in ML code implementations.
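
    A minimal example of a TensorFlow-side check in this spirit is sketched below as a Keras callback that flags non-finite or near-constant layer weights after each epoch; the class name and threshold are assumptions for illustration, not TFCheck's actual API.

```python
import numpy as np
import tensorflow as tf

# Illustrative Keras callback performing simple weight sanity checks.
class WeightSanityCheck(tf.keras.callbacks.Callback):
    def __init__(self, dead_std_thresh=1e-6):
        super().__init__()
        self.dead_std_thresh = dead_std_thresh

    def on_epoch_end(self, epoch, logs=None):
        for layer in self.model.layers:
            for w in layer.get_weights():
                if not np.all(np.isfinite(w)):
                    print(f"[check] epoch {epoch}: non-finite weights in {layer.name}")
                elif w.ndim > 1 and np.std(w) < self.dead_std_thresh:
                    # Near-constant kernels often indicate untrained or dead units.
                    print(f"[check] epoch {epoch}: near-constant weights in {layer.name}")

# Usage sketch: model.fit(x, y, callbacks=[WeightSanityCheck()])
```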

    DeepEvolution: A Search-Based Testing Approach for Deep Neural Networks

    The increasing inclusion of Deep Learning (DL) models in safety-critical systems such as autonomous vehicles has led to the development of multiple model-based DL testing techniques. One common denominator of these techniques is the automated generation of test cases, e.g., new inputs transformed from the original training data with the aim of optimizing some test adequacy criterion. So far, the effectiveness of these approaches has been hindered by their reliance on random fuzzing or transformations that do not always produce test cases with good diversity. To overcome these limitations, we propose DeepEvolution, a novel search-based approach for testing DL models that relies on metaheuristics to ensure maximum diversity in the generated test cases. We assess the effectiveness of DeepEvolution in testing computer-vision DL models and find that it significantly increases the neuronal coverage of the generated test cases. Moreover, using DeepEvolution, we successfully found several corner-case behaviors. Finally, DeepEvolution outperformed TensorFuzz (a coverage-guided fuzzing tool developed at Google Brain) in detecting latent defects introduced during model quantization. These results suggest that search-based approaches can help build effective testing tools for DL systems.
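
    The sketch below shows the general shape of such a search: a simple (1+1)-style random search over basic image transformations that keeps candidates increasing a coverage score. The transformation set, fitness proxy, and function names are illustrative assumptions, not DeepEvolution's metaheuristics.

```python
import numpy as np

# Toy search-based test generation: mutate an input image with simple
# transformations and keep mutations that raise a coverage proxy.
def perturb(image, rng):
    choice = rng.integers(3)
    if choice == 0:                       # brightness shift
        return np.clip(image + rng.uniform(-0.2, 0.2), 0.0, 1.0)
    if choice == 1:                       # additive Gaussian noise
        return np.clip(image + rng.normal(0.0, 0.05, image.shape), 0.0, 1.0)
    return np.roll(image, rng.integers(-3, 4), axis=0)   # small vertical shift

def search_test_case(image, coverage_fn, iterations=100, seed=0):
    """coverage_fn(image) -> float, e.g. fraction of neurons activated."""
    rng = np.random.default_rng(seed)
    best, best_score = image, coverage_fn(image)
    for _ in range(iterations):
        candidate = perturb(best, rng)
        score = coverage_fn(candidate)
        if score > best_score:            # greedy acceptance of better coverage
            best, best_score = candidate, score
    return best, best_score
```

    A population-based metaheuristic (as opposed to this single-candidate loop) additionally maintains many candidates at once, which is what lets the generated test cases stay diverse rather than collapsing onto one high-coverage input.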

    SKCS-A Separable Kernel Family with Compact Support to improve visual segmentation of handwritten data

    Extracting pertinent data from noisy gray-level document images with varied and complex backgrounds, such as mail envelopes, bank checks, and business forms, remains a challenging problem in character recognition applications, as it depends on the quality of the character segmentation process. Over the last few decades, mathematical tools have been developed for this purpose, and several authors have shown that the Gaussian kernel is unique and offers many beneficial properties. In recent work, Remaki and Cheriet proposed a new kernel family with compact support (KCS) in scale space that achieved good performance in extracting data information compared with the Gaussian kernel. In this paper, we focus on further improving the efficiency of the KCS by proposing a new separable version of the kernel family, the SKCS. This new kernel also has compact support and preserves the most important properties of the Gaussian kernel, in order to perform image segmentation efficiently and make the recognizer's task easier. A practical comparison is established between results obtained with the KCS and SKCS operators, based on information loss and the gain in processing time. Experiments on real-life data for extracting handwritten data from noisy gray-level images show promising performance of the SKCS kernel, especially in drastically reducing processing time compared with the KCS.
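
    The computational argument for separability can be sketched as follows: an n-by-n kernel built as the outer product of a 1-D profile can be applied as two 1-D passes, costing on the order of 2n instead of n^2 multiplications per pixel while producing the same output. The smoothing profile below is a generic symmetric one, not the SKCS family itself.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

# Compare a full 2-D convolution with the equivalent separable two-pass
# filtering, using a generic symmetric 1-D smoothing profile k1.
def smooth_2d(image, k1):
    k2 = np.outer(k1, k1)                      # n x n kernel
    return convolve(image, k2, mode="nearest")

def smooth_separable(image, k1):
    tmp = convolve1d(image, k1, axis=0, mode="nearest")   # filter columns
    return convolve1d(tmp, k1, axis=1, mode="nearest")    # then filter rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    k1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k1 /= k1.sum()                             # normalized smoothing profile
    a = smooth_2d(img, k1)
    b = smooth_separable(img, k1)
    print("max difference:", np.max(np.abs(a - b)))   # ~1e-16: same result
```

    The output of the two routines matches to floating-point precision, while the separable version scales linearly rather than quadratically with kernel width, which is the source of the processing-time gain reported for the SKCS over the non-separable KCS.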