10 research outputs found

    Proceedings of The Rust-Edu Workshop

    Get PDF
    The 2022 Rust-Edu Workshop was an experiment. We wanted to gather together as many thought leaders as we could attract in the area of Rust education, with an emphasis on academic-facing ideas. We hoped that productive discussions and future collaborations would result. Given the quick preparation and the difficulties of an international remote event, I am very happy to report a grand success. We had more than 27 participants from timezones around the globe. We had eight talks, four refereed papers, and statements from 15 participants. Everyone seemed to have a good time, and I can say that I learned a ton. These proceedings are loosely organized: they represent a mere compilation of the excellent submitted work. I hope you’ll find this material as pleasant and useful as I have. Bart Massey, 30 August 2022

    Security-Pattern Recognition and Validation

    Get PDF
    The increasing and diverse number of technologies connected to the Internet, such as distributed enterprise systems or small electronic devices like smartphones, brings the topic of IT security to the foreground. We interact daily with these technologies and place much trust in a well-established software development process. However, security vulnerabilities appear in software on all kinds of PC(-like) platforms, and more and more vulnerabilities are published that compromise systems and their users. Software also has to be modified due to changing requirements, bugs, and security flaws, and software engineers must increasingly face security issues during software design; maintenance programmers in particular must deal with such issues after a software product has been released. In the domain of software development, design patterns have been proposed as the best-known solutions for recurring problems in software design. Analogously, security patterns are best practices aiming at ensuring security. This thesis develops a deeper understanding of the nature of security patterns. It focuses on their validation and detection in support of review and maintenance activities. The landscape of security patterns is diverse. Thus, published security patterns are collected and organized to identify software-related security patterns. The descriptions of the selected software-security patterns are assessed and compared against the common design patterns described by Gamma et al. to identify differences and issues that may influence the detection of security patterns. Based on these insights and a manual detection approach, we illustrate an automatic detection method for security patterns. The approach is implemented in a tool and evaluated in a case study with 25 real-world Android applications from Google Play.
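    The automatic detection idea above can be illustrated with a toy structural matcher. The sketch below is not the thesis tool: it merely scans Python source for a Singleton-like shape (a class-level cached instance plus a classmethod accessor), whereas the actual approach targets security patterns in Android applications; every name here is invented for illustration.

    ```python
    import ast

    def looks_like_singleton(source):
        """Crude structural check: flag any class that has both a
        class-level attribute initialised to None (a cached instance)
        and a classmethod (an accessor). Real pattern detection matches
        far richer structural and behavioural signatures."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if not isinstance(node, ast.ClassDef):
                continue
            has_cache = any(
                isinstance(stmt, ast.Assign)
                and isinstance(stmt.value, ast.Constant)
                and stmt.value.value is None
                for stmt in node.body
            )
            has_classmethod = any(
                isinstance(stmt, ast.FunctionDef)
                and any(isinstance(dec, ast.Name) and dec.id == "classmethod"
                        for dec in stmt.decorator_list)
                for stmt in node.body
            )
            if has_cache and has_classmethod:
                return True
        return False

    # Hypothetical input exhibiting the Singleton-like shape.
    EXAMPLE = '''
    class Config:
        _instance = None

        @classmethod
        def get(cls):
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance
    '''
    ```

    Even this toy version shows why pattern descriptions matter: the looser the structural signature, the more false positives a detector reports.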

    Deep and near space tracking stations in support of lunar and planetary exploration missions

    Get PDF
    The aim of this dissertation is to describe the methodologies required to design, operate, and validate the performance of ground stations dedicated to near and deep space tracking, as well as the models developed to process the signals acquired, from raw data to the output parameters of the orbit determination of spacecraft. This work is framed in the context of lunar and planetary exploration missions by addressing the challenges in receiving and processing radiometric data for radio science investigations and navigation purposes. These challenges include the design of an appropriate back-end to read, convert, and store the antenna voltages; the definition of appropriate methodologies for pre-processing, calibration, and estimation of radiometric data for the extraction of information on the spacecraft state; and the definition and integration of accurate models of the spacecraft dynamics to evaluate the quality of the recorded signals. Additionally, the experimental design of acquisition strategies to perform direct comparison between ground stations is described and discussed. In particular, the evaluation of the differential performance between stations requires the design of a dedicated tracking campaign to maximize the overlap of the recorded datasets at the receivers, making it possible to correlate the received signals and isolate the contribution of the ground segment to the noise in the single link. Finally, in support of the methodologies and models presented, results from the validation and design work performed on the Deep Space Network (DSN) affiliated nodes DSS-69 and DSS-17 will also be reported.

    Prototyping parallel functional intermediate languages

    Get PDF
    Non-strict higher-order functional programming languages are elegant, concise, mathematically sound, and contain few environment-specific features, making them obvious candidates for harnessing high-performance architectures. The validity of this approach has been established by a number of experimental compilers. However, while there have been a number of important theoretical developments in the field of parallel functional programming, implementations have been slow to materialise. The myriad design choices and demands of specific architectures lead to protracted development times. Furthermore, the resulting systems tend to be monolithic entities, and are difficult to extend and test, ultimately discouraging experimentation. The traditional solution to this problem is the use of a rapid prototyping framework. However, as each existing system tends to prefer one specific platform and a particular way of expressing parallelism (including implicit specification), it is difficult to envisage a general-purpose framework. Fortunately, most of these systems have at least one point of commonality: the use of an intermediate form. Typically, these abstract representations explicitly identify all parallel components but without the background noise of syntactic and (potentially arbitrary) implementation details. To this end, this thesis outlines a framework for rapidly prototyping such intermediate languages. Based on the traditional three-phase compiler model, the design process is driven by the development of various semantic descriptions of the language. Executable versions of the specifications help to both debug and informally validate these models. A number of case studies, covering the spectrum of modern implementations, demonstrate the utility of the framework.
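    The idea of an executable semantics for a parallel intermediate form can be sketched in miniature. The expression language and evaluator below are invented for illustration, not taken from the thesis: the point is only that an intermediate form can mark parallel components explicitly, and a small reference evaluator then doubles as a debuggable specification.

    ```python
    # Tiny intermediate form: expressions are literals, additions, or an
    # explicit `par` node marking independent components that a parallel
    # backend may evaluate concurrently.

    def evaluate(expr):
        """Sequential reference semantics: any parallel implementation
        must produce the same results as this evaluator."""
        kind = expr[0]
        if kind == "lit":
            return expr[1]
        if kind == "add":
            return evaluate(expr[1]) + evaluate(expr[2])
        if kind == "par":
            # The arms of `par` share no state, so evaluating them
            # in any order (or concurrently) yields the same tuple.
            return tuple(evaluate(arm) for arm in expr[1:])
        raise ValueError(f"unknown node kind: {kind}")

    program = ("par", ("add", ("lit", 1), ("lit", 2)), ("lit", 40))
    ```

    An executable specification of this kind lets each case study be validated against the reference semantics before any architecture-specific backend is attempted.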

    Time-triggered Runtime Verification of Real-time Embedded Systems

    Get PDF
    In safety-critical real-time embedded systems, correctness is of primary concern, as even small transient errors may lead to catastrophic consequences. Due to the limitations of well-established methods such as verification and testing, runtime verification has recently emerged as a complementary approach, where a monitor inspects the system to evaluate the specifications at run time. The goal of runtime verification is to monitor the behavior of a system to check its conformance to a set of desirable logical properties. The literature on runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked when a significant event occurs (e.g., a change in the value of some variable used by the properties). At invocation, the monitor evaluates the set of properties of the system that are affected by the occurrence of the event. This type of monitor invocation has two main runtime characteristics: (1) jittery runtime overhead, and (2) unpredictable monitor invocations. These characteristics result in transient overload situations and over-provisioning of resources in real-time embedded systems and hence may result in catastrophic outcomes in safety-critical systems. To circumvent the aforementioned defects in runtime verification, this dissertation introduces a novel time-triggered monitoring approach, where the monitor takes samples from the system with a constant frequency in order to analyze the system's health. We describe the formal semantics of time-triggered monitoring and discuss how to optimize the sampling period using minimum auxiliary memory and path prediction techniques. Experiments on real-time embedded systems show that our approach introduces bounded overhead, predictable monitoring, and less over-provisioning, and effectively reduces the involvement of the monitor at run time by using negligible auxiliary memory.
We further advance our time-triggered monitor to component-based multi-core embedded systems by establishing an optimization technique that provides the invocation frequency of the monitors and the mapping of components to cores to minimize monitoring overhead. Lastly, we present RiTHM, a fully automated, open-source tool that provides time-triggered runtime verification specifically for real-time embedded systems developed in C.
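    The contrast between event-triggered and time-triggered monitoring can be sketched in a few lines. The following is a minimal illustration under invented names, not RiTHM itself: the monitor samples the system state at a constant period and checks one property per sample, which bounds the overhead but, as the abstract notes, makes the choice of sampling period critical, since a transient violation between two samples can be missed.

    ```python
    class TimeTriggeredMonitor:
        """Minimal sketch: instead of reacting to every state change,
        the monitor samples the monitored variables at a fixed period
        and checks one property per sample."""

        def __init__(self, sample_state, check_property, period):
            self.sample_state = sample_state      # callable: tick -> state
            self.check_property = check_property  # callable: state -> bool
            self.period = period                  # constant sampling period
            self.violations = []

        def run(self, ticks):
            # Exactly one property check every `period` ticks, regardless
            # of how many events occur in between: the overhead is bounded
            # and the invocation times are fully predictable.
            for t in range(0, ticks, self.period):
                state = self.sample_state(t)
                if not self.check_property(state):
                    self.violations.append((t, state))
            return self.violations

    # Hypothetical system trace: a sensor value that must stay below 40,
    # with a single out-of-range reading at tick 12.
    trace = {t: 10 + (50 if t == 12 else 0) for t in range(20)}
    monitor = TimeTriggeredMonitor(
        sample_state=lambda t: trace[t],
        check_property=lambda v: v < 40,
        period=4,
    )
    ```

    With period 4, the sample at tick 12 catches the violation; had the spike occurred at tick 13, it would fall between samples, which is exactly the trade-off the sampling-period optimization in the dissertation addresses.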

    Energy: A special bibliography with indexes, April 1974

    Get PDF
    This literature survey of special energy and energy-related documents lists 1708 reports, articles, and other documents introduced into the NASA scientific and technical information system between January 1, 1968, and December 31, 1973. Citations from International Aerospace Abstracts (IAA) and Scientific and Technical Aerospace Reports (STAR) are grouped according to the following subject categories: energy systems; solar energy; primary energy sources; secondary energy sources; energy conversion; energy transport, transmission, and distribution; and energy storage. The index section includes the subject, personal author, corporate source, contract, report, and accession indexes.

    Specification and Verification of Shared-Memory Concurrent Programs

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)

    Handbook of Lexical Functional Grammar

    Get PDF
    Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. This volume has seven parts. Part I, Overview and Introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical Phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final part, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.

    Probabilistic graphical modeling for risk management, reliability, and control-law synthesis of complex systems

    Get PDF
    My research is carried out at the Centre de Recherche en Automatique de Nancy (CRAN), in the Ingénierie des Systèmes Eco-Techniques (ISET) department headed by B. Iung and A. Thomas, and in the Contrôle - Identification - Diagnostic (CID) department headed by D. Maquin and G. Millerioux. The main objective of my research is to formalize methods for building probabilistic models representing both the nominal behavior and the malfunctions of an industrial system. These models aim to allow the evaluation of the system's operational objectives (operational requirements, performance) and of the consequences in terms of reliability and risk management (safety requirements). This requires modeling the impact of the environment on the system and its performance, as well as the impact of control strategies and maintenance strategies on the health state of the system. Through various doctoral projects and collaborations, I have exploited several probabilistic modeling formalisms. The major contributions fall into three areas:
    • Modeling the functional consequences of failures, structured from domain knowledge. We developed Bayesian network (BN) modeling principles that relate the reliability and the effects of component degradation states to the functional architecture of the system. Components and failure modes are then naturally described by multi-state variables, which is difficult to model with classical dependability methods. We propose to represent the model at different levels of abstraction in relation to the functional analysis. Modeling with a probabilistic relational model (PRM) makes it possible to capitalize knowledge by creating generic classes instantiated on a system, following the principle of off-the-shelf components.
    • Dynamic modeling of the reliability of systems in their environment. During our collaboration with Bayesia, we contributed to modeling system reliability with dynamic Bayesian networks (DBNs). Thanks to the factorization of the joint distribution, a DBN offers lower complexity than a Markov chain and easier parameterization. The collaboration with Bayesia led to the integration of these extensions into BayesiaLab (a modeling tool), notably the use of time-varying parameters, extending DBN modeling to non-homogeneous Markov processes.
    • Synthesis of control laws for optimizing system reliability. We work on integrating reliability into the control objectives of systems subject to failures or faults. We now pose the problem in a general control context. We propose a structure for the control system that integrates optimization functions and functions evaluating probabilistic quantities related to system reliability. Our recent work focuses on integrating, into the control optimization loop, factors derived from a sensitivity analysis of system reliability with respect to its components.
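    The role of time-varying parameters in extending dynamic Bayesian networks to non-homogeneous Markov processes can be illustrated with a minimal two-state example. The wear-out failure-rate function below is an invented placeholder, not a model from the cited work: it only shows how a time-dependent transition probability changes the reliability curve.

    ```python
    # Two-state component (OK / failed) whose failure probability varies
    # with time: a non-homogeneous Markov process of the kind a DBN with
    # time-varying parameters can encode.

    def reliability(horizon, failure_prob_at):
        """Return P(component still OK) at each step, starting from 1."""
        p_ok = 1.0
        history = [p_ok]
        for t in range(horizon):
            # Transition OK -> failed with a time-dependent probability;
            # the failed state is absorbing (no repair in this sketch).
            p_ok *= 1.0 - failure_prob_at(t)
            history.append(p_ok)
        return history

    # Hypothetical wear-out model: failure probability grows with age.
    curve = reliability(5, lambda t: 0.01 * (t + 1))
    ```

    A homogeneous Markov chain would use a constant `failure_prob_at`; allowing it to depend on `t` is precisely the extension that time-varying parameters bring to the DBN setting.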