12 research outputs found

    Proposition and formal verification of real-time communication protocols for wireless sensor networks

    Wireless Sensor Networks (WSNs) are large-scale wireless ad hoc networks deployed to monitor physical parameters of the environment and report the measurements to one or more nodes of the network (called sinks). The small electronic devices which compose the network have low computing and memory capacities and run on batteries, so research in this field has mostly focused on self-organization and on reducing energy consumption. Nevertheless, critical applications for WSNs are emerging and require more than those aspects: they have real-time and reliability requirements. Critical applications are applications on which human lives or the environment depend; a failure of a critical application can thus have dramatic consequences. We are especially interested in anomaly detection applications (forest fire detection, landslide detection, intrusion detection, etc.), which require bounded end-to-end delays and a high delivery ratio. Few WSN protocols in the literature make it possible to bound end-to-end delays. Among the proposed solutions, some effectively bound the end-to-end delays but do not take into account the characteristics of WSNs (limited energy, large scale, etc.). Others take these aspects into account but do not give strict guarantees on the end-to-end delays. Moreover, critical applications require a very high confidence level; simulations and tests are not sufficient in this context, and formal proofs of compliance with the application's specifications have to be provided. The application of formal methods to WSNs is still an open problem. Our contributions are thus twofold:
    * We propose a real-time cross-layer (MAC/routing) protocol for WSNs, named RTXP, based on a virtual coordinate system which makes it possible to discriminate nodes within a 2-hop neighborhood. Thanks to these coordinates, determinism can be introduced in medium access and the hop count from any source to the sink can be bounded, which in turn bounds the end-to-end delay. Besides, we propose a real-time aggregation scheme to mitigate the alarm-storm problem, which causes collisions and congestion and thus limits the reliability of the system.
    * We propose a formal verification methodology based on Model Checking. This methodology is composed of three elements: (1) an efficient modeling of the broadcast nature of wireless networks, (2) a verification technique which takes into account the unreliability of the wireless link, and (3) a verification technique which mixes Network Calculus and Model Checking in order to be both scalable and exhaustive. We apply this methodology to formally verify our proposition, RTXP.
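
    As an illustration of the medium-access side of this idea, the sketch below (plain Python, not the RTXP algorithm itself) builds the hop-count gradient towards the sink and greedily assigns TDMA slots so that no two nodes within two hops share a slot; the topology, node names and the greedy colouring are assumptions made for this example.

        # Illustrative sketch only: a 2-hop-discriminating slot assignment of the kind
        # that a virtual coordinate system enables, plus the hop-count gradient to the sink.
        from collections import deque

        def hop_counts(graph, sink):
            """Breadth-first hop count from every node to the sink (the 'gradient')."""
            dist = {sink: 0}
            queue = deque([sink])
            while queue:
                u = queue.popleft()
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def two_hop_slots(graph):
            """Greedy slot assignment: a node's slot differs from all its 1- and 2-hop neighbours."""
            slots = {}
            for u in sorted(graph):
                two_hop = (set(graph[u]) | {w for v in graph[u] for w in graph[v]}) - {u}
                used = {slots[v] for v in two_hop if v in slots}
                slots[u] = next(s for s in range(len(graph)) if s not in used)
            return slots

        if __name__ == "__main__":
            # Small assumed topology; "S" is the sink.
            graph = {"S": ["A", "B"], "A": ["S", "B", "C"], "B": ["S", "A", "D"],
                     "C": ["A", "D"], "D": ["B", "C"]}
            print("hop counts:", hop_counts(graph, "S"))   # bounded hop count towards the sink
            print("slots     :", two_hop_slots(graph))     # collision-free within 2 hops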

    Optimization of medical image steganography using n-decomposition genetic algorithm

    Protecting patients' confidential information is a critical concern in medical image steganography. The Least Significant Bit (LSB) technique has been widely used for secure communication. However, it is susceptible to imperceptibility and security risks because pixels are manipulated directly, and ASCII-based patterns present limitations. Consequently, sensitive medical information is subject to loss or alteration. Despite attempts to optimize LSB, these issues persist because (1) the formulation of the optimization suffers from invalid implicit constraints, causing inflexibility in reaching an optimal embedding, (2) the search process lacks convergence, since the message length significantly affects the size of the solution space, and (3) application customizability is limited, as different data require more flexibility in controlling the embedding process. To overcome these limitations, this study proposes a technique known as the n-decomposition genetic algorithm. This algorithm uses a variable-length search to identify the best locations to embed the secret message, incorporating constraints to avoid local-minimum traps. The methodology consists of five main phases: (1) initial investigation, (2) formulating an embedding scheme, (3) constructing a decomposition scheme, (4) integrating the schemes' design into the proposed technique, and (5) evaluating the proposed technique's performance, based on the evaluation parameters, using medical datasets from kaggle.com. The proposed technique showed resistance to statistical analysis, evaluated using Reversible Statistical (RS) analysis and histograms. It also demonstrated superior imperceptibility and security, measured by MSE and PSNR, on the Chest and Retina datasets (0.0557, 0.0550) and (60.6696, 60.7287), respectively. However, the benchmark outperforms the proposed technique on the Brain dataset, owing to the homogeneous nature of the images and the extensive black background. This research has contributed to genetic-based decomposition in medical image steganography and provides a technique that offers improved security without compromising efficiency and convergence. However, further validation is required to determine its effectiveness in real-world applications.
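
    The sketch below is an illustration only (plain LSB embedding plus the MSE and PSNR metrics quoted above, written with numpy); it does not reproduce the n-decomposition genetic algorithm, and the cover image, message and pixel ordering are assumptions made for the example.

        # Plain LSB embedding and the MSE/PSNR imperceptibility metrics; the genetic
        # search for optimal embedding locations is not reproduced here.
        import numpy as np

        def embed_lsb(cover, bits):
            """Write the message bits into the least significant bit of the first len(bits) pixels."""
            stego = cover.copy().ravel()
            stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
            return stego.reshape(cover.shape)

        def mse_psnr(cover, stego, peak=255.0):
            mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
            psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
            return mse, psnr

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a medical image
            bits = np.unpackbits(np.frombuffer(b"patient-id:0042", dtype=np.uint8))
            stego = embed_lsb(cover, bits)
            print("MSE=%.4f  PSNR=%.2f dB" % mse_psnr(cover, stego))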

    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. For example, in automotive, cameras are becoming key sensors for increasing car safety, driving assistance and driving comfort. They have been employed for infotainment (non-critical), as well as for some driver assistance tasks (critical), such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms brings a challenge to real-time image processing systems, requiring a computing capacity usually not available in embedded processors. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications introduce strict requirements not only in terms of real-time constraints, but also in terms of device reliability and algorithm robustness. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events, which can cause transient or even permanent faults. These faults can lead to wrongly processed information or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features. The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications. Three main challenges have been faced and will be discussed, along with the proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing the overall system's dependability. In order to ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The functionalities offered by the target technology and the algorithms' characteristics have been constantly taken into account while designing such accelerators, in order to efficiently tailor the algorithms' operations to the available hardware resources. On the other hand, the key idea for increasing the robustness of image processing algorithms is to introduce self-adaptivity features at the algorithm level, in order to maintain or improve the quality of results for a wide range of input conditions that are not always fully predictable at design time (e.g., noise level variations). This has been accomplished by measuring some characteristics of the input images at run time and then tuning the algorithm parameters based on these estimations. The dynamic reconfiguration features of modern FPGAs have been extensively exploited to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase the overall system dependability during reconfiguration processes, thus providing safe run-time adaptation mechanisms.
    In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability, and characterizing the behavior, of hardware accelerators implemented on reconfigurable FPGAs when they are affected by such faults.
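
    The run-time self-adaptivity idea can be illustrated in software as follows: estimate a characteristic of the incoming frame on the fly and select the filter configuration accordingly (on the FPGA this selection would drive Dynamic Partial Reconfiguration). The noise estimator and thresholds in this sketch are assumptions for illustration, not the thesis' actual design.

        # Software-only sketch: crude run-time noise estimate -> filter configuration choice.
        import numpy as np

        def estimate_noise(frame):
            """Rough noise proxy: median absolute deviation of the horizontal pixel differences."""
            diff = np.diff(frame.astype(np.float64), axis=1)
            return float(np.median(np.abs(diff - np.median(diff))))

        def select_filter_config(noise_level):
            """Map the estimate to a (kernel size, iterations) pair for the smoothing stage."""
            if noise_level < 2.0:
                return {"kernel": 3, "iterations": 1}   # clean input: light smoothing
            if noise_level < 8.0:
                return {"kernel": 5, "iterations": 1}
            return {"kernel": 7, "iterations": 2}       # heavy noise: strongest configuration

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            frame = np.tile(np.arange(160, dtype=np.float64), (120, 1))  # smooth synthetic frame
            noisy = frame + rng.normal(0.0, 15.0, frame.shape)           # same frame with Gaussian noise
            for name, img in (("clean", frame), ("noisy", noisy)):
                level = estimate_noise(img)
                print(name, "noise estimate %.2f ->" % level, select_filter_config(level))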

    Research Evaluation 2000-2010: Department of Mathematical Sciences


    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated on the basis of an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations of them), transform-domain filters, neural-network-based filters, numerical filters, fuzzy-based filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), which combines linear and non-linear filters, is introduced in order to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations. The results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) and total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection is then applied in order to track the true edges. The results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter based on a feed-forward convolutional neural network (CNN) with a deep architecture is introduced, supported by a specific learning algorithm, l2 loss function minimization, a regularization method, and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection is then applied in order to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
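
    A minimal sketch of the "switching" principle behind impulse-noise filters such as SAMFWMF follows: a detector flags likely impulse pixels and only those are replaced by the local median, leaving clean pixels untouched. The detector rule, threshold and test image are assumptions for illustration and do not reproduce the dissertation's filter.

        # Switching median filter: replace a pixel only when it deviates strongly from its local median.
        import numpy as np

        def switching_median(img, window=3, threshold=40):
            pad = window // 2
            padded = np.pad(img.astype(np.float64), pad, mode="edge")
            out = img.astype(np.float64).copy()
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    med = np.median(padded[i:i + window, j:j + window])
                    if abs(out[i, j] - med) > threshold:  # detector: pixel far from the local median
                        out[i, j] = med                   # replace only detected impulses
            return out.astype(img.dtype)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            clean = np.full((64, 64), 128, dtype=np.uint8)
            noisy = clean.copy()
            mask = rng.random(clean.shape) < 0.05                 # 5% salt-and-pepper corruption
            noisy[mask] = rng.choice([0, 255], size=mask.sum())
            restored = switching_median(noisy)
            print("noisy MSE    :", float(np.mean((clean - noisy.astype(float)) ** 2)))
            print("restored MSE :", float(np.mean((clean - restored.astype(float)) ** 2)))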

    Learning Effective Embeddings for Dynamic Graphs and Quantifying Graph Embedding Interpretability

    Graph representation learning has been a very active research area in recent years. The goal of graph representation learning is to generate representation vectors that accurately capture the structure and features of large graphs. This is especially important because the quality of the graph representation vectors affects their performance in downstream tasks such as node classification and link prediction. Many techniques have been proposed for generating effective graph representation vectors. These methods can be applied to both static and dynamic graphs. A static graph is a single fixed graph, while a dynamic graph evolves over time, and its nodes and edges can be added to or deleted from the graph. We surveyed the graph embedding methods for both static and dynamic graphs. The majority of the existing graph embedding methods are developed for static graphs; therefore, since most real-world graphs are dynamic, developing novel graph embedding methods suitable for evolving graphs is essential. This dissertation proposes three dynamic graph embedding models. In previous dynamic methods, the inputs were mainly adjacency matrices of graphs, which are not memory efficient and may not capture the neighbourhood structure in graphs effectively. Therefore, we developed Dynnode2vec, which is based on random walks and the static model Node2vec. Dynnode2vec generates node embeddings in each snapshot by initializing the current model with the previous embedding vectors and training the model on a set of random walks obtained for the nodes in the snapshot. Our second model, LSTM-Node2vec, is also based on random walks. This method leverages an LSTM model, in combination with Node2vec, to capture the long-range dependencies between nodes when generating node embeddings. Finally, inspired by the importance of substructures in graphs, our third model, TGR-Clique, generates node embeddings by considering the effects of a node's neighbours in the maximal cliques containing that node. Experiments on real-world datasets demonstrate the effectiveness of our proposed methods in comparison to state-of-the-art models. In addition, motivated by the lack of proper measures for quantifying and comparing the interpretability of graph embeddings, we propose two interpretability measures for graph embeddings using the centrality properties of graphs.
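
    A simplified sketch of the warm-start idea behind Dynnode2vec follows, using gensim's skip-gram (Word2Vec) on random walks: the model trained on one snapshot initializes the model for the next snapshot, which is then updated with walks from the new snapshot. Uniform walks stand in for Node2vec's biased walks, and the toy snapshots and hyperparameters are assumptions, not the authors' implementation.

        # Warm-started skip-gram over random walks of successive graph snapshots.
        import random
        from gensim.models import Word2Vec

        def random_walks(graph, num_walks=10, walk_length=20, seed=0):
            """Uniform random walks from every node (stand-in for node2vec's biased walks)."""
            rng = random.Random(seed)
            walks = []
            for _ in range(num_walks):
                for node in graph:
                    walk = [node]
                    while len(walk) < walk_length:
                        walk.append(rng.choice(graph[walk[-1]]))
                    walks.append([str(n) for n in walk])
            return walks

        # Two toy snapshots of an evolving graph (adjacency lists); node 4 appears only in snapshot 2.
        snapshot1 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        snapshot2 = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

        model = Word2Vec(vector_size=32, window=5, min_count=0, sg=1, seed=0)
        walks1 = random_walks(snapshot1)
        model.build_vocab(walks1)
        model.train(walks1, total_examples=len(walks1), epochs=5)

        # Warm start for the next snapshot: keep the learned vectors, add new nodes, retrain.
        walks2 = random_walks(snapshot2, seed=1)
        model.build_vocab(walks2, update=True)
        model.train(walks2, total_examples=len(walks2), epochs=5)
        print("embedding of the newly arrived node 4:", model.wv["4"][:5])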

    Survey of Template-Based Code Generation

    A critical step in model-driven engineering (MDE) is the automatic synthesis of textual artifacts from models. This is a very useful model transformation, used to generate application code, serialize models into persistent storage, and generate documentation or reports. Among the various model-to-text transformation paradigms, Template-Based Code Generation (TBCG) is the most popular in MDE. TBCG is a synthesis technique that produces code from high-level specifications, called templates. It is a popular technique in MDE given that both emphasize abstraction and automation. Given the diversity of tools and approaches, it is necessary to classify and compare existing TBCG techniques in order to provide appropriate support to developers. The goal of this thesis is to better understand the characteristics of TBCG techniques, identify research trends, and assess the importance of the role of MDE in this code synthesis approach. We also evaluate the expressiveness, performance and scalability of the associated tools based on a range of models that implement critical patterns. To this end, we conduct a systematic mapping study of the literature that paints an interesting overview of TBCG, and a comparative study of TBCG tools to better guide developers in their choices. This study shows that model-based tools offer more expressiveness whereas code-based tools perform much faster. Xtend2 offers the best compromise between expressiveness and performance.
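
    As a minimal illustration of the TBCG principle (not of Xtend2 or any other surveyed tool), the sketch below expands a toy "model" through templates to produce source code; the template syntax, the generated Java-like output and the input model are assumptions chosen for brevity.

        # Template-based code generation in miniature: placeholders in high-level
        # templates are filled from a simple input model to synthesize code.
        from string import Template

        CLASS_TEMPLATE = Template(
            "public class $name {\n"
            "$fields\n"
            "}\n"
        )
        FIELD_TEMPLATE = Template("    private $ftype $fname;")

        def generate_class(model):
            fields = "\n".join(
                FIELD_TEMPLATE.substitute(ftype=t, fname=f) for f, t in model["attributes"]
            )
            return CLASS_TEMPLATE.substitute(name=model["name"], fields=fields)

        if __name__ == "__main__":
            # Toy input model describing one class to generate.
            model = {"name": "Patient", "attributes": [("id", "long"), ("name", "String")]}
            print(generate_class(model))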

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
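
    As a small worked example of one classical family covered by this session, the sketch below applies n-point Gauss-Legendre quadrature (exact for polynomials of degree up to 2n-1) to a smooth integrand and reports the error against the closed-form value; the integrand and node counts are illustrative choices.

        # Gauss-Legendre quadrature on [-1, 1] with an empirical error check.
        import numpy as np

        def gauss_legendre(f, n):
            nodes, weights = np.polynomial.legendre.leggauss(n)  # n-point rule on [-1, 1]
            return float(np.sum(weights * f(nodes)))

        if __name__ == "__main__":
            exact = np.e - 1.0 / np.e                            # integral of exp(x) over [-1, 1]
            for n in (2, 4, 8):
                approx = gauss_legendre(np.exp, n)
                print("n=%d  approx=%.12f  error=%.2e" % (n, approx, abs(approx - exact)))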