
    The Transitivity of Trust Problem in the Interaction of Android Applications

    Mobile phones have developed into complex platforms with large numbers of installed applications and a wide range of sensitive data. Application security policies limit the permissions of each installed application. As applications may interact, restricting single applications may create a false sense of security for the end users while data may still leave the mobile phone through other applications. Instead, the information flow needs to be policed for the composite system of applications in a transparent and usable manner. In this paper, we propose to employ static analysis based on the software architecture and focused data flow analysis to scalably detect information flows between components. Specifically, we aim to reveal transitivity of trust problems in multi-component mobile platforms. We demonstrate the feasibility of our approach with Android applications, although the generalization of the analysis to similar composition-based architectures, such as Service-oriented Architecture, can also be explored in the future.
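    As a rough illustration of the transitivity problem described above (not the paper's actual analysis), one can model components and their inter-component channels as a directed graph and check reachability from a sensitive source to an exporting sink. All component names below are hypothetical:

```python
# Sketch: detect transitive information flows across app components.
# Component names and edges are hypothetical illustration data.
from collections import deque

# Directed edges: data may flow from one component to another
# (e.g., via Android intents). No single app both reads contacts
# and sends data off the phone, but the composition does.
flows = {
    "ContactsReader": ["NoteTaker"],     # app A shares contacts with app B
    "NoteTaker": ["CloudUploader"],      # app B forwards notes to app C
    "CloudUploader": ["Internet"],       # app C has the INTERNET permission
}

def reachable(src, dst):
    """Breadth-first search over the inter-component flow graph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Each app looks harmless in isolation, yet sensitive data can
# transitively leave the phone through the composite system.
print(reachable("ContactsReader", "Internet"))  # True
```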

    An Introduction to Slice-Based Cohesion and Coupling Metrics

    This report provides an overview of slice-based software metrics. It brings together information about the development of the metrics from Weiser’s original idea that program slices may be used in the measurement of program complexity, with alternative slice-based measures proposed by other researchers. In particular, it details two aspects of slice-based metric calculation not covered elsewhere in the literature: output variables and worked examples of the calculations. First, output variables are explained, their use explored and standard reference terms and usage proposed. Calculating slice-based metrics requires a clear understanding of ‘output variables’ because they form the basis for extracting the program slices on which the calculations depend. This report includes a survey of the variation in the definition of output variables used by different research groups and suggests standard terms of reference for these variables. Our study identifies four elements which are combined in the definition of output variables. These are the function return value, modified global variables, modified reference parameters and variables printed or otherwise output by the module. Second, slice-based metric calculations are explained with the aid of worked examples, to assist newcomers to the field. Step-by-step calculations of slice-based cohesion and coupling metrics based on the vertices output by the static analysis tool CodeSurfer® are presented and compared with line-based calculations.
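    To give a flavour of such worked examples, here is a minimal sketch of slice-based cohesion metrics in the style of Ott and Thuss (tightness, coverage, overlap), computed from hypothetical slices given as sets of statement numbers. It is an illustration only, not the report's CodeSurfer-based procedure:

```python
# Sketch of slice-based cohesion metrics; slices are represented as
# sets of statement (or vertex) numbers, one backward slice per
# output variable. The example slices are made up for illustration.

def cohesion_metrics(slices, module_length):
    """Return (tightness, coverage, overlap) for a module."""
    common = set.intersection(*slices)          # statements in every slice
    tightness = len(common) / module_length
    coverage = sum(len(s) for s in slices) / (len(slices) * module_length)
    overlap = sum(len(common) / len(s) for s in slices) / len(slices)
    return tightness, coverage, overlap

# Hypothetical module of 10 statements with two output variables,
# e.g. a function return value and a modified reference parameter.
slice_ret  = {1, 2, 3, 4, 5, 9, 10}
slice_parm = {1, 2, 3, 6, 7, 8, 10}
print(cohesion_metrics([slice_ret, slice_parm], 10))
# (0.4, 0.7, 0.571...) -- low values suggest weakly related outputs
```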

    Extending C Global Surveyor

    Software failures are notorious for costing large sums of money and sometimes even human lives, in particular in the area of safety-critical mission software. The most well-known disaster happened in 1996, when Ariane 501 exploded shortly after launch; due to an overflow in an arithmetic conversion, it cost the European space program at least half a billion US$. The Automated Software Engineering Group at the NASA Ames Research Center has developed C Global Surveyor (CGS), a static analysis tool based on abstract interpretation. It particularly concentrates on runtime errors that are hard to find during development, such as out-of-bound array accesses, accesses to non-initialised variables, and de-references of null pointers. CGS has proved able to analyse large, pointer-intensive and heavily multithreaded code (up to 280 KLoC) in a couple of hours with a constant precision of 80%. It has been used to successfully analyse mission-critical flight software of NASA's "Mars Path-Finder" (MPF) and Deep Space 1 (DS1) legacy as well as software of the Mars Exploration Rover (MER) mission (650 KLoC) and other JPL-based missions. However, the abstract interpretation techniques on which CGS is based need to be augmented by complementary program analysis techniques in order to enhance CGS and support the developer when analysing very large systems. As a first step, we included the construction of control flow graphs that represent the programs to be analysed, paving the way for more advanced techniques such as program slicing.
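    For intuition, the kind of reasoning an interval-based abstract interpretation performs for out-of-bound accesses can be sketched in a few lines. This illustrates the general technique, not CGS itself:

```python
# Sketch of interval-based abstract interpretation for array-bounds
# checking: an index is over-approximated by an interval (lo, hi)
# covering all its possible run-time values.

def check_index(interval, array_len):
    """Classify an array access given the index's abstract value."""
    lo, hi = interval
    if lo >= 0 and hi < array_len:
        return "safe"
    if hi < 0 or lo >= array_len:
        return "definite error"
    return "possible error"   # imprecision: warn, as a static analyser must

# Index i ranges over [0, 9] after a loop; the array has 10 elements.
print(check_index((0, 9), 10))    # safe
# After i += 1 the abstract value becomes [1, 10]: may overflow.
print(check_index((1, 10), 10))   # possible error
```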

    05451 Abstracts Collection -- Beyond Program Slicing

    From 06.11.05 to 11.11.05, the Dagstuhl Seminar 05451 "Beyond Program Slicing" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    From Formal Semantics to Verified Slicing : A Modular Framework with Applications in Language Based Security

    This book presents a modular framework for slicing in the proof assistant Isabelle/HOL that is based on abstract control flow graphs. Building on such abstract structures renders the correctness results language-independent. To prove that they hold for a specific language, it remains to instantiate the framework with that language, which requires a formal semantics of the language in Isabelle/HOL. We show that formal semantics even for sophisticated high-level languages are realizable.
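    To convey the modularity idea outside Isabelle/HOL, here is a small sketch of a slicer written against an abstract CFG interface that a concrete language would instantiate. It handles data dependences only, and all names are illustrative rather than the book's formalization:

```python
# Sketch: a backward slicer parameterized over an abstract CFG.
# A concrete language instantiates the interface with its own
# predecessor, def and use maps; the algorithm itself stays generic.

class AbstractCFG:
    """Abstract CFG: predecessors plus def/use sets per node."""
    def __init__(self, preds, defs, uses):
        self.preds, self.defs, self.uses = preds, defs, uses

def backward_slice(cfg, criterion_node, criterion_vars):
    """Data-dependence-only backward slice (control deps omitted)."""
    relevant = {criterion_node: set(criterion_vars)}
    in_slice, work = {criterion_node}, [criterion_node]
    while work:
        n = work.pop()
        rel = relevant.get(n, set())
        for p in cfg.preds.get(n, []):
            prop = set(rel)
            if cfg.defs.get(p, set()) & rel:   # p defines a relevant var
                in_slice.add(p)
                prop = (rel - cfg.defs[p]) | cfg.uses.get(p, set())
            if not prop <= relevant.get(p, set()):
                relevant.setdefault(p, set()).update(prop)
                work.append(p)
    return in_slice

# Instantiation for a toy straight-line program:
#   1: x := 1   2: y := 2   3: z := x + y   4: print(z)
cfg = AbstractCFG(
    preds={2: [1], 3: [2], 4: [3]},
    defs={1: {"x"}, 2: {"y"}, 3: {"z"}},
    uses={3: {"x", "y"}, 4: {"z"}},
)
print(sorted(backward_slice(cfg, 4, {"z"})))  # [1, 2, 3, 4]
```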

    Enabling Program Analysis Through Deterministic Replay and Optimistic Hybrid Analysis

    As software continues to evolve, software systems increase in complexity. With software systems composed of many distinct but interacting components, today’s system programmers, users, and administrators find themselves requiring automated ways to find, understand, and handle system misbehavior. Recent information breaches such as the Equifax breach of 2017 and the Heartbleed vulnerability of 2014 show the need to understand and debug prior states of computer systems. In this thesis I focus on enabling practical entire-system retroactive analysis, allowing programmers, users, and system administrators to diagnose and understand the impact of these devastating mishaps. I focus primarily on two techniques. First, I discuss a novel deterministic record and replay system which enables fast, practical recollection of entire systems of computer state. Second, I discuss optimistic hybrid analysis, a novel optimization method capable of dramatically accelerating retroactive program analysis.

    Record and replay systems greatly aid in solving a variety of problems, such as fault tolerance, forensic analysis, and information provenance. These solutions, however, assume ubiquitous recording of any application which may have a problem. Current record and replay systems are forced to trade off between disk space and replay speed. This trade-off has historically made it impractical to both record and replay large histories of system-level computation. I present Arnold, a novel record and replay system which efficiently records years of computation on a commodity hard drive, and can efficiently replay any recorded information. Arnold combines caching with a unique process-group granularity of recording to produce both small and quickly recalled recordings. My experiments show that under a desktop workload, Arnold could store 4 years of computation on a commodity 4TB hard drive.

    Dynamic analysis is used to retroactively identify and address many forms of system misbehavior, including programming errors, data races, private information leakage, and memory errors. Unfortunately, the runtime overhead of dynamic analysis has precluded its adoption in many instances. I present a new dynamic analysis methodology called optimistic hybrid analysis (OHA). OHA uses knowledge of the past to predict program behaviors in the future. These predictions, or likely invariants, are speculatively assumed true by a static analysis. This creates a static analysis which can be far more accurate than its traditional counterpart. Once this predicated static analysis is created, it is speculatively used to optimize a final dynamic analysis, creating a far more efficient dynamic analysis than otherwise possible. I demonstrate the effectiveness of OHA by creating an optimistic hybrid backward slicer, OptSlice, and an optimistic data-race detector, OptFT. OptSlice and OptFT are just as accurate as their traditional hybrid counterparts, but run on average 8.3x and 1.6x faster respectively.

    In this thesis I demonstrate that Arnold’s ability to record and replay entire computer systems, combined with optimistic hybrid analysis’s ability to quickly analyze prior computation, enables a practical and useful entire-system retroactive analysis that has been previously unrealized.

    PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144052/1/ddevec_1.pd
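    A schematic sketch of the OHA recipe described above (profile likely invariants, optimistically elide checks, detect violations, fall back to the conservative analysis) may help; the function names and the toy "sharing" check are invented for illustration and are not the thesis's implementation:

```python
# Sketch of optimistic hybrid analysis: checks are elided under
# profiled likely invariants, with a guard that falls back to the
# full analysis when speculation fails. All details are invented.

def profile_likely_invariants(runs):
    """Keep properties observed in every profiling run, e.g.
    'variable v was never shared between threads'."""
    return {v for v in runs[0] if all(v in r for r in runs)}

def full_check(event):
    return "checked:" + event["var"]  # stand-in for the slow analysis

def optimized_check(event, assumed):
    # The predicated static analysis removed checks for assumed-safe
    # variables; only this cheap membership guard remains.
    if event["var"] in assumed:
        return "skip"                 # check elided by the optimism
    return full_check(event)

def run(events, assumed):
    for e in events:
        if e["var"] in assumed and e.get("shared"):
            # Invariant violated: speculation failed, so redo the
            # conservative analysis (in the thesis, via replay).
            return [full_check(x) for x in events]
    return [optimized_check(e, assumed) for e in events]

assumed = profile_likely_invariants([{"x", "y"}, {"x", "y", "z"}])
print(run([{"var": "x"}, {"var": "w"}], assumed))
# ['skip', 'checked:w'] -- only the unassumed variable pays full cost
```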

    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys. If you are considering citing this survey, we would appreciate if you could use the following BibTeX entry: http://goo.gl/Hf5Fvc
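    A minimal sketch of the core idea, assuming the z3-solver Python package is installed; the toy program and its "backdoor" branch are invented for illustration:

```python
# Sketch of symbolic execution on a toy authentication check.
from z3 import Int, Solver, sat

x = Int("x")                      # symbolic input instead of a concrete value

# Program under test:
#   if x * 3 == 42: backdoor()    # property violation we want to expose
#   else:           ok()
paths = [
    ("backdoor", [x * 3 == 42]),  # path constraint for the true branch
    ("ok",       [x * 3 != 42]),  # path constraint for the false branch
]

# Explore each path and ask the solver for a concrete witness input.
for label, constraints in paths:
    s = Solver()
    s.add(*constraints)
    if s.check() == sat:
        print(label, "reachable with x =", s.model()[x])
# Random testing might never hit x = 14; the solver finds it directly.
```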