
    Detection and Exploitation of Information Flow Leaks

    This thesis contributes to the field of language-based information flow analysis, with a focus on the detection and exploitation of information flow leaks in programs. To achieve this goal, the thesis presents a number of precise, semi-automatic approaches that allow one to detect, exploit, and judge the severity of information flow leaks in programs. The first part of the thesis develops an approach to detect and demonstrate information flow leaks in a program. This approach analyses a given program statically, using symbolic execution and self-composition, with the aim of generating so-called insecurity formulas whose satisfying models (obtained by SMT solvers) give rise to pairs of initial states that demonstrate insecure information flows. Based on these models, small unit test cases, so-called leak demonstrators, are created; these check for the detected information flow leaks and fail if the leaks exist. The approach is able to deal with unbounded loops and recursive method invocations by using program specifications such as loop invariants and method contracts. This allows the approach to be fully precise when needed, but also to abstract, accepting false positives in exchange for a higher degree of automation and simpler specifications. The approach supports several information flow security policies, namely noninterference, delimited information release, and information erasure. The second part of the thesis builds upon this approach and allows the user to judge the severity of an information flow leak by exploiting the detected leaks in order to infer the secret information. This is achieved by a hybrid analysis that conducts an adaptive attack as a series of experiments. An experiment is a concrete program run that serves to accumulate knowledge about the secret. Each experiment is carried out with optimal low inputs, deduced from the prior distribution and the knowledge of the secret gathered so far, so that the potential leakage is maximized. The thesis proposes a novel approach to quantify information leakage as an explicit function of the low inputs, using symbolic execution and parametric model counting. Depending on the chosen security metric, general nonlinear optimization tools or Max-SMT solvers are used to find optimal low inputs, i.e., inputs that cause the program to leak the maximum amount of information. For the purpose of evaluation, both approaches have been fully implemented in the tool KEG, which is based on the state-of-the-art program verification system KeY. KEG supports a rich subset of sequential Java programs and generates executable JUnit tests as leak demonstrators. For secret inference, KEG produces executable Java programs and runs them to perform the adaptive attack. The thesis discusses the planning, execution, and results of the evaluation, which has been performed on a collection of micro-benchmarks as well as two case studies taken from the literature. The evaluation using the micro-benchmarks shows that KEG successfully detects all information flow leaks and is able to generate correct demonstrators, provided the supplied specifications are correct and strong enough. With respect to secret inference, it shows that the approach presented in this thesis (which computes optimal low inputs) helps an attacker to learn the secret much more efficiently than approaches using arbitrary low inputs.
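    To give a concrete flavor of such a leak demonstrator, the following minimal sketch shows what a generated JUnit test might look like (the Account class, the chosen input values, and the test body are illustrative assumptions, not actual KEG output). The two calls play the role of a pair of initial states from a satisfying model of an insecurity formula: they agree on the low input but differ on the high input, and the test fails because the low outputs differ.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Hypothetical program under test: the public result depends
        // insecurely on the high (secret) input.
        class Account {
            static int check(int secretPin, int lowGuess) {
                // Implicit flow: branching on the secret leaks one bit.
                return (lowGuess == secretPin) ? 1 : 0;
            }
        }

        public class LeakDemonstrator {
            @Test
            public void demonstratesLeak() {
                int lowIn = 42;                      // shared low input
                int out1 = Account.check(42, lowIn); // first initial state (high = 42)
                int out2 = Account.check(7, lowIn);  // second initial state (high = 7)
                // Noninterference demands equal low outputs for equal low
                // inputs; this assertion fails, demonstrating the leak.
                assertEquals(out1, out2);
            }
        }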
KEG has also been evaluated in two case studies. The first case study is performed on e-voting software extracted, in a simplified form, from a real-world e-voting system; it focuses on the leak detection and demonstrator generation approach. The e-voting case study shows that KEG is able to deal with relatively complicated programs that include unbounded loops, objects, and arrays. Moreover, it demonstrates that KEG can be integrated with a specification generation tool to obtain both precision and full automation. The second case study is conducted on a PIN integrity checking program, adapted from a real-world ATM PIN verification system, and mainly demonstrates the secret inference feature of KEG. It shows that KEG can help an attacker to learn the secret more efficiently, given a sufficiently good assumption about the prior distribution of the secret.
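    As a rough, hypothetical illustration of the adaptive attack (a sketch of the general idea, not KEG's generated attacker code), the loop below brute-forces an optimal low input over a small domain using a worst-case-partition heuristic, whereas the actual approach derives leakage as a symbolic function of the low inputs via parametric model counting and optimizes it with nonlinear solvers or Max-SMT. The PIN-style program at the bottom is an assumption chosen to echo the second case study.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.BiFunction;

        public class AdaptiveAttack {

            // Runs experiments against program(low, secret) -> low output
            // until the candidate set shrinks to the single true secret.
            static int attack(BiFunction<Integer, Integer, Integer> program,
                              int secret, int domain) {
                List<Integer> candidates = new ArrayList<>();
                for (int s = 0; s < domain; s++) candidates.add(s);

                while (candidates.size() > 1) {
                    int bestLow = 0;
                    int bestWorst = Integer.MAX_VALUE;
                    for (int low = 0; low < domain; low++) {
                        // Partition candidates by the output they would produce;
                        // a smaller worst-case block means more guaranteed leakage.
                        Map<Integer, Integer> blockSizes = new HashMap<>();
                        for (int s : candidates)
                            blockSizes.merge(program.apply(low, s), 1, Integer::sum);
                        int worst = blockSizes.values().stream().max(Integer::compare).get();
                        if (worst < bestWorst) { bestWorst = worst; bestLow = low; }
                    }
                    // One experiment: run the program with the chosen low input
                    // and keep only candidates consistent with the observation.
                    final int chosenLow = bestLow;
                    int observed = program.apply(chosenLow, secret);
                    candidates.removeIf(s -> program.apply(chosenLow, s) != observed);
                }
                return candidates.get(0);
            }

            public static void main(String[] args) {
                // Hypothetical PIN check that leaks whether the guess is
                // below, equal to, or above the secret PIN.
                BiFunction<Integer, Integer, Integer> pinCheck =
                    (low, pin) -> Integer.compare(low, pin);
                System.out.println(attack(pinCheck, 7, 100)); // prints 7
            }
        }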

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently, but rather those of handling the coordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system, and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities whose activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or the movement of the entities from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information rapidly becomes out of date, and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing them. This workshop covers the aspects related to languages and programming environments, as well as the analysis of systems and resources, involving 9 of the 13 projects funded under the initiative (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE). One year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Telecomunicazioni of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Integration of Static and Dynamic Analysis Techniques for Checking Noninterference

    In this article, we present an overview of recent combinations of deductive program verification and automatic test generation on the one hand, and static analysis on the other, with the goal of checking noninterference. Noninterference is the non-functional property that certain confidential information cannot leak to certain public output, i.e., that the confidentiality of that information is always preserved. We define the noninterference properties that are checked, along with the individual approaches that we use in different combinations. In one use case, our framework for checking noninterference employs deductive verification to automatically generate tests for noninterference violations with improved test coverage. In another use case, the framework provides two combinations of deductive verification with static analysis based on system dependence graphs to prove noninterference, thereby reducing the effort required for deductive verification.
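    To make the checked property concrete, the following minimal Java fragment (a hypothetical illustration, not taken from the article) shows the two classic violation patterns that such combined analyses detect or prove absent:

        public class Flows {
            // Explicit flow: part of the secret is copied to public output.
            static int explicitLeak(int high) {
                return high % 10;      // the last digit of the secret leaks
            }

            // Implicit flow: nothing is copied directly, but branching on
            // the secret makes the public result depend on it anyway.
            static int implicitLeak(int high) {
                int low = 0;
                if (high > 0) {
                    low = 1;           // observing low reveals the sign of high
                }
                return low;
            }

            // Secure: the result is independent of the secret, so any two
            // runs agreeing on lowIn produce equal outputs (noninterference).
            static int noLeak(int high, int lowIn) {
                return lowIn * 2;
            }
        }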

    Program test objectives milestone 3

    The following conclusions have been developed concerning the adequacy of propulsion system technology for efficient development and operation of recoverable and expendable launch vehicles (RLV and ELV), and the benefits which the Integrated Propulsion Technology Demonstrator (IPTD) will provide for enhancing that technology:
    (1) Technology improvements in propulsion system design and operation can reduce program cost. Many features or improvement needs that enhance operability, reduce cost, and improve payload are identified.
    (2) The IPTD Program provides a means of resolving the majority of issues associated with these improvement needs.
    (3) The IPTD will evaluate complex integration of vehicle and facility functions in fluid management and propulsion control systems, and provides an environment for validating improved mechanical and electrical components.
    (4) The IPTD provides a mechanism for investigating operational issues, focusing on reducing the manpower and time needed to perform various functions at the launch site. These efforts include model development, collection of data to validate the models, and the ultimate development of complex timeline models.
    (5) The IPTD provides an engine test bed for tri-/bi-propellant engine development firings which is representative of the actual vehicle environment.
    (6) The IPTD provides only a limited multiengine configuration integration environment for RLV. Multiengine effects may be simulated for a number of subsystems, and a number of subsystems are relatively independent of multiengine influences.

    Design and management of image processing pipelines within CPS : Acquired experience towards the end of the FitOptiVis ECSEL Project

    Cyber-Physical Systems (CPSs) are dynamic and reactive systems that interact with processes, the environment and, sometimes, humans. They are often distributed, with sensors and actuators, and are characterized as smart, adaptive and predictive, reacting in real time. Image- and video-processing pipelines are a prime source of environmental information for such systems, allowing them to make better decisions according to what they see. Therefore, in FitOptiVis, we are developing novel methods and tools to integrate complex image- and video-processing pipelines. FitOptiVis aims to deliver a reference architecture for describing and optimizing quality and resource management for imaging and video pipelines in CPSs, both at design time and at run time. The architecture is concretized in low-power, high-performance, smart components, and in methods and tools for combined design-time and run-time multi-objective optimization and adaptation within system and environment constraints.

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number, from 001 to 301, in order of its appearance in the body of the report. Appendixes provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.

    Language Design for Reactive Systems: On Modal Models, Time, and Object Orientation in Lingua Franca and SCCharts

    Reactive systems play a crucial role in the embedded domain. They continuously interact with their environment, handle concurrent operations, and are commonly expected to provide deterministic behavior to enable their application in safety-critical systems. In this context, language design is a key aspect, since carefully tailored language constructs can aid in addressing the challenges faced in this domain, as illustrated by the various concurrency models that prevent the known pitfalls of regular threads. Today, many languages exist in this domain, often with unique characteristics that make them specifically fit for certain use cases. This thesis revolves around two distinctive languages: the actor-oriented polyglot coordination language Lingua Franca and the synchronous statecharts dialect SCCharts. While they take different approaches to providing reactive modeling capabilities, they share clear similarities in their semantics and complement each other in design principles. This thesis analyzes and compares key design aspects in the context of these two languages. For three particularly relevant concepts, it provides and evaluates lean and seamless language extensions that are carefully aligned with the fundamental principles of the underlying language. Specifically, Lingua Franca is extended toward coordinating modal behavior, while SCCharts receives a timed automaton notation with an efficient execution model using dynamic ticks, as well as an extension toward the object-oriented modeling paradigm.
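    As a rough sketch of the dynamic-ticks idea (an illustration of the general execution model, not the actual SCCharts or Lingua Franca runtime API), each tick performs one reaction step and returns the delay until the model next needs to react, so the runtime sleeps exactly that long instead of polling at a fixed rate:

        public class DynamicTicks {

            interface TickFunction {
                // One reaction step at logical time nowMillis; returns the
                // delay in milliseconds until the next required tick.
                long tick(long nowMillis);
            }

            static void run(TickFunction model, long horizonMillis)
                    throws InterruptedException {
                long start = System.currentTimeMillis();
                long now = 0;
                while (now < horizonMillis) {
                    long sleep = model.tick(now);
                    Thread.sleep(Math.max(0, sleep)); // sleep to the next deadline
                    now = System.currentTimeMillis() - start;
                }
            }

            public static void main(String[] args) throws InterruptedException {
                // Two-state timed automaton: GREEN for 3 s, then RED for 1 s.
                run(new TickFunction() {
                    boolean green = true;
                    long nextSwitch = 3000;

                    public long tick(long now) {
                        if (now >= nextSwitch) {
                            green = !green;
                            nextSwitch = now + (green ? 3000 : 1000);
                            System.out.println(green ? "GREEN" : "RED");
                        }
                        return nextSwitch - now; // wake exactly at the switch
                    }
                }, 10000);
            }
        }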

    ECLSS advanced automation preliminary requirements

    A description of the total Environmental Control and Life Support System (ECLSS) is presented. The hardware is described in a top-down format, the lowest level of which is a functional description of each candidate implementation. For each candidate implementation, both its advantages and disadvantages are presented. From this knowledge, suggestions are made as to where expert systems could be used in the diagnosis and control of specific portions of the ECLSS. A process for determining whether expert systems are applicable, and for selecting among them, is also presented. Consideration of possible problems or inconsistencies in the knowledge about, or workings of, the subsystems is also described.

    Center for Space Transportation and Applied Research Fifth Annual Technical Symposium Proceedings

    This Fifth Annual Technical Symposium, sponsored by the UT-Calspan Center for Space Transportation and Applied Research (CSTAR), is organized to provide an overview of the technical accomplishments of the Center's five Research and Technology focus areas during the past year. These areas include chemical propulsion, electric propulsion, commercial space transportation, computational methods, and laser materials processing. Papers in the area of artificial intelligence/expert systems are also presented.