Formal Framework for Property-driven Obfuscations
We study the existence and the characterization of function transformers that minimally or maximally modify a function in order to reveal or conceal a certain property. Based on this general formal framework, we develop a strategy for the design of the maximal obfuscating transformation that conceals a given property while revealing the desired observational behaviour.
Towards a Unifying Framework for Tuning Analysis Precision by Program Transformation
Static and dynamic program analyses attempt to extract useful information about a program's behaviour. Static analysis uses an abstract model of programs to reason on their runtime behaviour without actually running them, while dynamic analysis reasons on a test set of real program executions. For this reason, the precision of static analysis is limited by the presence of false positives (executions allowed by the abstract model that cannot happen at runtime), while the precision of dynamic analysis is limited by the presence of false negatives (real executions that are not in the test set). Researchers have developed many analysis techniques and tools in an attempt to increase the precision of program verification. Software protection is an interesting scenario where programs need to be protected from adversaries that use program analysis to understand their inner workings and then exploit this knowledge to perform illicit actions. Program analysis plays a dual role in program verification and software protection: in program verification we want the analysis to be as precise as possible, while in software protection we want to degrade the results of the analysis as much as possible. Indeed, in software protection researchers usually resort to a special class of program transformations, called code obfuscation, to modify a program so as to make it harder to analyse while preserving its intended functionality. In this setting, it is interesting to study how program transformations that preserve the intended behaviour of programs affect the precision of both static and dynamic analysis. While some work has been done to formalise the effectiveness of code obfuscation in degrading static analysis, and the possibility of transforming programs in order to avoid or increase false positives, less attention has been paid to formalising the relation between program transformations and false negatives in dynamic analysis.
In this work we set the scene for a formal investigation of the syntactic and semantic program features that affect the presence of false negatives in dynamic analysis. We believe that this understanding would be useful for improving the precision of existing dynamic analysis tools and for designing program transformations that complicate dynamic analysis.
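The false-negative phenomenon described above can be illustrated with a minimal sketch (not taken from the paper; all names are illustrative): a behaviour guarded by a rarely satisfied condition is invisible to any dynamic analysis whose test set misses the triggering input.

```python
def analysed_program(x: int) -> str:
    # This branch fires only for one specific input; any test set that
    # does not include it yields a false negative for this behaviour.
    if x == 123456789:
        return "rare behaviour"
    return "common behaviour"

def dynamic_analysis(program, test_set):
    """Collect the set of behaviours observed over a finite test set."""
    return {program(x) for x in test_set}

observed = dynamic_analysis(analysed_program, range(1000))
# "rare behaviour" is a real behaviour absent from the observations:
# a false negative of this dynamic analysis.
missed = {"rare behaviour"} - observed
```

An obfuscating transformation that makes the guard depend on harder-to-guess inputs widens exactly this gap between real and observed executions.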
A New Paradigm in Split Manufacturing: Lock the FEOL, Unlock at the BEOL
Split manufacturing was introduced as an effective countermeasure against
hardware-level threats such as IP piracy, overbuilding, and insertion of
hardware Trojans. Nevertheless, the security promise of split manufacturing has
been challenged by various attacks, which exploit the well-known working
principles of physical design tools to infer the missing BEOL interconnects. In
this work, we advocate a new paradigm to enhance the security for split
manufacturing. Based on Kerckhoff's principle, we protect the FEOL layout in a
formal and secure manner, by embedding keys. These keys are purposefully
implemented and routed through the BEOL in such a way that they become
indecipherable to the state-of-the-art FEOL-centric attacks. We provide our
secure physical design flow to the community. We also define the security of
split manufacturing formally and provide the associated proofs. At the same
time, our technique is competitive with current schemes in terms of layout
overhead, especially for practical, large-scale designs (ITC'99 benchmarks).
Comment: DATE 2019 (https://www.date-conference.com/conference/session/4.5)
Gaming security by obscurity
Shannon sought security against the attacker with unlimited computational
powers: *if an information source conveys some information, then Shannon's
attacker will surely extract that information*. Diffie and Hellman refined
Shannon's attacker model by taking into account the fact that the real
attackers are computationally limited. This idea became one of the greatest new
paradigms in computer science, and led to modern cryptography.
Shannon also sought security against the attacker with unlimited logical and
observational powers, expressed through the maxim that "the enemy knows the
system". This view is still endorsed in cryptography. The popular formulation,
going back to Kerckhoffs, is that "there is no security by obscurity", meaning
that the algorithms cannot be kept obscured from the attacker, and that
security should only rely upon the secret keys. In fact, modern cryptography
goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there
is an algorithm that can break the system, then the attacker will surely find
that algorithm*. The attacker is not viewed as an omnipotent computer any more,
but he is still construed as an omnipotent programmer.
So the Diffie-Hellman step from unlimited to limited computational powers has
not been extended into a step from unlimited to limited logical or programming
powers. Is the assumption that all feasible algorithms will eventually be
discovered and implemented really different from the assumption that everything
that is computable will eventually be computed? The present paper explores some
ways to refine the current models of the attacker, and of the defender, by
taking into account their limited logical and programming powers. If the
adaptive attacker actively queries the system to seek out its vulnerabilities,
can the system gain some security by actively learning attacker's methods, and
adapting to them?
Comment: 15 pages, 9 figures, 2 tables; final version appeared in the
Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos
corrected
Analyzing program dependences for malware detection
Metamorphic malware continuously modifies its code, while preserving its functionality, in order to foil misuse detection. The key to defeating metamorphism lies in a semantic characterization of the embedding of the malware into the target program. Indeed, a behavioural model of program infection that does not rely on syntactic program features should be able to defeat metamorphism. Moreover, a general model of infection should be able to express dependences and interactions between the malicious code and the target program. ANI is a general theory for the analysis of dependences of data in a program. We propose a higher-order theory for ANI, called HOANI, that allows the study of program dependences. Our idea is then to formalise and study the malware detection problem in terms of HOANI.
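The dependence question behind (H)ANI-style reasoning can be sketched with a toy interference check (a simplification under assumed names, not the paper's formalism): an observable output depends on an input exactly when varying that input, with everything else fixed, can change the output.

```python
def program(public: int, secret: int) -> int:
    # Toy program whose output leaks one bit of the secret input.
    return public * 2 + (1 if secret > 100 else 0)

def depends_on(prog, public: int, secret_values) -> bool:
    """True iff the output varies as the secret varies (interference)."""
    outputs = {prog(public, s) for s in secret_values}
    return len(outputs) > 1

leaky = depends_on(program, 5, [0, 200])  # True: output reveals the secret
```

A semantic infection model would ask an analogous question about dependences between injected malicious code and the host program, independently of their syntax.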
Privacy-enhancing Aggregation of Internet of Things Data via Sensors Grouping
Big data collection practices using Internet of Things (IoT) pervasive
technologies are often privacy-intrusive and result in surveillance, profiling,
and discriminatory actions over citizens that in turn undermine the
participation of citizens to the development of sustainable smart cities.
Nevertheless, real-time data analytics and aggregate information from IoT
devices open up tremendous opportunities for managing smart city
infrastructures. The privacy-enhancing aggregation of distributed sensor data,
such as residential energy consumption or traffic information, is the research
focus of this paper. Citizens have the option to choose their privacy level by
reducing the quality of the shared data at a cost of a lower accuracy in data
analytics services. A baseline scenario is considered in which IoT sensor data
are shared directly with an untrustworthy central aggregator. A grouping
mechanism is introduced that improves privacy by first aggregating data at the
group level, as opposed to sharing them directly with the central aggregator.
Group-level aggregation obfuscates the sensor data of individuals, in a
similar fashion as differential privacy and homomorphic encryption schemes,
thus inference of privacy-sensitive information from single sensors becomes
computationally harder compared to the baseline scenario. The proposed system
is evaluated using real-world data from two smart city pilot projects. Privacy
under grouping increases, while preserving the accuracy of the baseline
scenario. Intra-group influences of privacy by one group member on the other
ones are measured and fairness on privacy is found to be maximized between
group members with similar privacy choices. Several grouping strategies are
compared. Grouping by proximity of privacy choices provides the highest privacy
gains. The implications of the strategy on the design of incentives mechanisms
are discussed.
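The core of the grouping mechanism can be sketched as follows (a minimal illustration with made-up sensor names, not the pilot implementation): the central aggregator receives only per-group sums, so individual readings are hidden while the city-wide aggregate is preserved.

```python
def group_aggregate(readings: dict, groups: list) -> list:
    """readings: sensor_id -> value; groups: lists of sensor_ids.
    Returns the per-group sums that are shared with the central aggregator;
    individual readings never leave the group."""
    return [sum(readings[s] for s in group) for group in groups]

readings = {"s1": 3.0, "s2": 5.0, "s3": 2.0, "s4": 4.0}
groups = [["s1", "s2"], ["s3", "s4"]]

shared = group_aggregate(readings, groups)  # per-group sums only
total = sum(shared)                         # city-wide aggregate is intact
```

Inferring a single sensor's value from a group sum requires knowledge of all other members' readings, which is what makes the baseline attack computationally harder under grouping.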
Formal framework for reasoning about the precision of dynamic analysis
Dynamic program analysis is extremely successful both in code debugging and in malicious code attacks. Fuzzing, concolic execution, and monkey testing are instances of the more general problem of analysing programs by dynamically executing their code with selected inputs. While static program analysis has a beautiful and well-established theoretical foundation in abstract interpretation, dynamic analysis still lacks such a foundation. In this paper, we introduce a formal model for understanding the notion of precision in dynamic program analysis. It is known that in sound-by-construction static program analysis precision amounts to completeness. In dynamic analysis, which is inherently unsound, precision boils down to a notion of coverage of execution traces with respect to what the observer (attacker or debugger) can effectively observe about the computation. We introduce a topological characterisation of the notion of coverage relative to a given (fixed) observation for dynamic program analysis, and we show how this coverage can be changed by semantics-preserving code transformations. Once again, as in the case of static program analysis and abstract interpretation, for dynamic analysis too we can morph the precision of the analysis by transforming the code. In this context, we validate our model on well-established code obfuscation and watermarking techniques. We confirm the effectiveness of existing methods for preventing control-flow-graph extraction and data exploitation by dynamic analysis, including a validation of the potency of fully homomorphic data encodings in code obfuscation.
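A crude sketch of the coverage idea (assumed names and a drastically simplified observer, not the paper's topological characterisation): fix what the observer can see of each program state, then measure which observable facts the executed traces actually exhibit.

```python
def observer(state: int) -> str:
    # The fixed observation: the observer sees only the sign of a variable.
    return "neg" if state < 0 else "nonneg"

def coverage(traces, abstraction, observable_domain) -> float:
    """Fraction of observable facts exhibited by the executed traces."""
    seen = {abstraction(s) for trace in traces for s in trace}
    return len(seen) / len(observable_domain)

# Two executed traces, both staying non-negative: "neg" is never observed.
traces = [[0, 1, 2], [3, 4]]
cov = coverage(traces, observer, {"neg", "nonneg"})  # 0.5
```

A semantics-preserving transformation that, e.g., re-encodes the variable so its sign no longer correlates with the original one changes what this coverage measures, which is the sense in which code transformations morph dynamic-analysis precision.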