A generic framework for context-sensitive analysis of modular programs
Context-sensitive analysis provides information which is potentially more accurate than that provided by context-free analysis. Such information can then be used to validate/debug the program and/or to specialize it, obtaining important improvements. Unfortunately, context-sensitive analysis of modular programs poses important theoretical and practical problems. One solution, used in several proposals, is to resort to context-free analysis. Other proposals do address context-sensitive analysis, but are only applicable when the description domain used satisfies rather restrictive properties. In this paper, we argue that a general framework for context-sensitive analysis of modular programs, i.e., one that allows using all the domains which have proved useful in practice in the non-modular setting, is indeed feasible and very useful. Driven by our experience in the design and implementation of analysis and specialization techniques in the context of CiaoPP, the Ciao system preprocessor, we discuss a number of design goals for context-sensitive analysis of modular programs as well as the problems which arise in trying to meet these goals. We also provide a high-level description of a framework for analysis of modular programs which substantially meets these objectives. This framework is generic in that it can be instantiated in different ways in order to adapt to different contexts. Finally, the behavior of the different instantiations w.r.t. the design goals that motivate our work is also discussed.
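The intermodular fixpoint idea behind such frameworks can be illustrated with a small, hypothetical sketch (the module graph, the trivial abstract domain, and all names below are invented for illustration; a real system such as CiaoPP additionally keeps per-call-pattern answer tables, which this toy omits):

```python
from collections import deque

# Invented module graph: "main" imports "lib".
DEPS = {"main": ["lib"], "lib": []}
CALLERS = {"lib": ["main"], "main": []}

def analyse_module(module, answer_tables):
    # Stand-in transfer function: a module's abstract "answer" is its own
    # name joined with the current answers of the modules it imports.
    answer = {module}
    for dep in DEPS[module]:
        answer |= answer_tables.get(dep, set())
    return answer

def intermodular_fixpoint(modules):
    # Analyse modules one at a time; when a module's answer changes,
    # re-queue its callers, until the global answer tables stabilise.
    answer_tables = {}
    worklist = deque(modules)
    while worklist:
        module = worklist.popleft()
        new_answer = analyse_module(module, answer_tables)
        if answer_tables.get(module) != new_answer:
            answer_tables[module] = new_answer
            worklist.extend(CALLERS[module])
    return answer_tables
```

The essential point the sketch captures is that each module is analysed in isolation against the current answers for its imports, so no global whole-program analysis is ever run.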
A domesticated harbinger transposase forms a complex with HDA6 and promotes histone H3 deacetylation at genes but not TEs in Arabidopsis
Discovering Application-Level Insider Attacks Using Symbolic Execution
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. National Science Foundation / 727 NSF CNS 05-5166
A move in the security measurement stalemate: Elo-style ratings to quantify vulnerability
Model based analysis of insider threats
In order to detect malicious insider attacks it is important to model and analyse the infrastructures and policies of organisations and the insiders acting within them. We extend formal approaches that allow modelling such scenarios with quantitative aspects to enable a precise analysis of security designs. Our framework enables quantitative evaluation of the risk that an insider attack will happen. The framework first identifies an insider's intention to perform an insider attack, using Bayesian networks, and in a second phase computes the probability of success for an insider attack by this actor, using probabilistic model checking. We provide prototype tool support using Matlab for Bayesian networks and PRISM for the analysis of Markov decision processes, and validate the framework with case studies.
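The two-phase risk computation this abstract describes can be sketched in miniature (all probabilities, node names, and functions below are invented for illustration; the paper itself uses Matlab for the Bayesian network and PRISM to analyse a Markov decision process, for which a fixed success probability stands in here):

```python
def intention_probability(disgruntled: bool, has_access: bool) -> float:
    # Phase 1: a toy conditional probability table standing in for a
    # Bayesian network over behavioural indicators (values invented).
    cpt = {
        (True, True): 0.40,
        (True, False): 0.10,
        (False, True): 0.05,
        (False, False): 0.01,
    }
    return cpt[(disgruntled, has_access)]

def attack_risk(disgruntled: bool, has_access: bool, p_success: float) -> float:
    # Phase 2 would come from probabilistic model checking of an MDP;
    # here p_success is simply supplied. The overall risk is the product
    # P(actor intends attack) * P(attack succeeds | attempted).
    return intention_probability(disgruntled, has_access) * p_success
```

With the invented numbers above, a disgruntled actor with access and a 0.25 success probability yields an overall risk of 0.40 * 0.25 = 0.10.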
Privacy Penetration Testing: How to Establish Trust in Your Cloud Provider
© Springer Science+Business Media B.V. 2012. In the age of cloud computing, IT infrastructure becomes virtualised and takes the form of services. This virtualisation results in an increasing de-perimeterisation, where the location of data and computation is irrelevant from a user's point of view. This irrelevance means that private and institutional users no longer have a concept of where their data is stored, or whether they can trust cloud providers to protect their data. In this chapter, we investigate methods for increasing customers' trust in cloud providers, and suggest a public penetration-testing agency as an essential component in a trustworthy cloud infrastructure.
Anomalous flexor digitorum superficialis muscle belly presenting as a mass within the palm
Survey on hearing aid outcome in Switzerland: associations with type of fitting (bilateral/unilateral), level of hearing aid signal processing, and hearing loss.
The present investigation further analysed results of a previously reported survey with a large sample of hearing aid owners (Bertoli et al, 2009) to determine the individual and technological factors related to hearing aid outcome. In particular, the associations of hearing loss, level of signal processing, and fitting type (bilateral versus unilateral fitting) with hearing aid use, satisfaction with, and management of the aid were evaluated. A sub-group with symmetrical hearing loss was analysed (n = 6027). Regular use was more frequent in bilateral users and in owners of devices with more complex signal processing, but the strongest determinant of regular use was severity of hearing loss. Satisfaction was higher in the group wearing simple devices, while fitting type and degree of hearing loss had no influence on satisfaction rates. Moderate and severe hearing loss was associated more frequently with poor management of the aid than mild hearing loss. It was concluded that bilateral amplification and advanced signal processing features may contribute to successful hearing aid fitting, but the resulting differences must be considered relatively small.
Modular Termination Analysis of Java Bytecode and Its Application to phoneME Core Libraries
Abstract. Termination analysis has received considerable attention, traditionally in the context of declarative programming and, recently, also for imperative and Object Oriented (OO) languages. In fact, there exist termination analyzers for OO which are capable of proving termination of medium size applications by means of global analysis, in the sense that all the code used by such applications has to be proved terminating. However, global analysis has important weaknesses, such as its high memory requirements and its lack of efficiency, since often some parts of the code have to be analyzed over and over again, libraries being a paramount example of this. In this work we present how to extend the termination analysis in the COSTA system in order to make it modular by allowing separate analysis of individual methods. The proposed approach has been implemented. We report on its application to the termination analysis of the core libraries of the phoneME project, a well-known open source implementation of Java Micro Edition (JavaME), a realistic but reduced version of Java to be run on mobile phones and PDAs. We argue that such experiments are relevant, since handling libraries is known to be one of the most relevant open problems in analysis and verification of real-life applications. Our experimental results show that our proposal dramatically reduces the amount of code which needs to be handled in each analysis and that this allows proving termination of a good number of methods for which global analysis is unfeasible.
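The separate-analysis idea in this abstract can be caricatured with a tiny sketch (the call graph, the trivial "prover", and the summary format are all invented; COSTA's actual termination analysis is of course far richer):

```python
# Invented call graph: an application method calling a library method.
CALLS = {"app.run": ["lib.sort"], "lib.sort": []}

def prove_terminates(method, summaries):
    # Stand-in for a real termination prover: here a method "terminates"
    # if every callee already has a 'terminates' summary.
    return all(summaries.get(callee) == "terminates" for callee in CALLS[method])

def modular_termination(methods):
    # Analyse each method once against its callees' cached summaries,
    # callees first (a topological order is assumed; sorting by number
    # of callees happens to give one for this toy graph).
    summaries = {}
    for method in sorted(methods, key=lambda m: len(CALLS[m])):
        summaries[method] = (
            "terminates" if prove_terminates(method, summaries) else "unknown"
        )
    return summaries
```

The payoff mirrors the one reported for phoneME: once `lib.sort` has a summary, re-analysing the whole library for every application method is unnecessary.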