Formation of iddingsite veins in the martian crust by centripetal replacement of olivine: evidence from the nakhlite meteorite Lafayette
The Lafayette meteorite is an olivine clinopyroxenite that crystallized on Mars ∼1300 million years ago within a lava flow or shallow sill. Liquid water entered this igneous rock ∼700 million years later to produce a suite of secondary minerals, collectively called ‘iddingsite’, that occur as veins within grains of augite and olivine. The deuterium/hydrogen ratio of water within these secondary minerals shows that the aqueous solutions were sourced from one or more near-surface reservoirs. Several petrographically distinct types of veins can be recognised by differences in their width, shape, and crystallographic orientation. Augite and olivine both contain veins of a very fine-grained hydrous Fe- and Mg-rich silicate that are ∼1-2 micrometres in width and lack any preferred crystallographic orientation. These narrow veins formed by cementation of pore spaces that had been opened by fracturing, probably in response to shock. The subset of olivine-hosted veins whose axes lie parallel to (001) has serrated walls and formed by widening of the narrow veins through interface-coupled dissolution-precipitation. Widening started by replacement of the walls of the narrow precursor veins by Fe-Mg silicate, and a crystallographic control on the trajectory of the dissolution-precipitation front created micrometre-scale {111} serrations. The walls of many of the finely serrated veins were subsequently replaced by siderite, and the solutions responsible for carbonation of olivine also partially recrystallized the Fe-Mg silicate. Smectite was the last mineral to form and grew by replacement of siderite. This mineralization sequence shows that Lafayette was exposed to two discrete pulses of aqueous solutions, the first of which formed the Fe-Mg silicate, and the second of which mediated replacement of vein walls by siderite and smectite.
The similarity in size, shape and crystallographic orientation of iddingsite veins in the Lafayette meteorite and in terrestrial basalts demonstrates a common microstructural control on water-mineral interaction between Mars and Earth, and indicates that prior shock deformation was not a prerequisite for aqueous alteration of the martian crust.
Improving Strategies via SMT Solving
We consider the problem of computing numerical invariants of programs by
abstract interpretation. Our method eschews two traditional sources of
imprecision: (i) the use of widening operators for enforcing convergence within
a finite number of iterations (ii) the use of merge operations (often, convex
hulls) at the merge points of the control flow graph. It instead computes the
least inductive invariant expressible in the domain at a restricted set of
program points, and analyzes the rest of the code en bloc. We emphasize that we
compute this inductive invariant precisely. For that we extend the strategy
improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method
directly, we would have to solve an exponentially sized system of abstract
semantic equations, resulting in memory exhaustion. Instead, we keep the system
implicit and discover strategy improvements using SAT modulo real linear
arithmetic (SMT). For evaluating strategies we use linear programming. Our
algorithm has low polynomial space complexity and, on contrived worst-case
examples, performs exponentially many strategy improvement steps; this is
unsurprising, since we show that the associated abstract reachability problem
is Π₂ᵖ-complete.
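The max-strategy improvement idea can be illustrated on a one-equation toy. This is a hypothetical sketch, not the paper's algorithm: the program, the equation, and all names below are invented, and where the real method evaluates a strategy with linear programming over an implicit system, the toy `solve` simply iterates.

```python
# One abstract semantic equation over the template bound b (= upper
# bound on x at the loop head of: x = 0; while x < 100: x = x + 1):
#     b = max(0, min(b, 99) + 1)

def rhs_args(b):
    """The arguments of the outer max; a strategy selects one of them."""
    return {'init': 0, 'loop': min(b, 99) + 1}

def solve(strategy, start):
    """Least solution >= start of b = <selected argument>(b).
    (The toy iterates; the real algorithm solves this with an LP.)"""
    b = start
    while True:
        nb = rhs_args(b)[strategy]
        if nb <= b:
            return b
        b = nb

def strategy_iteration():
    strategy = 'init'
    b = solve(strategy, float('-inf'))
    while True:
        args = rhs_args(b)
        better = max(args, key=args.get)
        if args[better] <= b:       # no argument improves: least fixpoint
            return b
        strategy = better
        b = solve(strategy, b)      # values only increase across steps

print(strategy_iteration())  # 100: the least inductive bound on x
```

Two improvement steps suffice here: the initial strategy yields b = 0, switching to the loop argument yields b = 100, and no further argument improves, so 100 is the least solution, i.e. the strongest template invariant.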
Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs
We consider the problem of computing numerical invariants of programs, for
instance bounds on the values of numerical program variables. More
specifically, we study the problem of performing static analysis by abstract
interpretation using template linear constraint domains. Such invariants can be
obtained by Kleene iterations that are, in order to guarantee termination,
accelerated by widening operators. In many cases, however, applying this form
of extrapolation leads to invariants that are weaker than the strongest
inductive invariant that can be expressed within the abstract domain in use.
Another well-known source of imprecision of traditional abstract interpretation
techniques stems from their use of join operators at merge nodes in the control
flow graph. The mentioned weaknesses may prevent these methods from proving
safety properties. The technique we develop in this article addresses both of
these issues: contrary to Kleene iterations accelerated by widening operators,
it is guaranteed to yield the strongest inductive invariant that can be
expressed within the template linear constraint domain in use. It also eschews
join operators by distinguishing all paths of loop-free code segments. Formally
speaking, our technique computes the least fixpoint within a given template
linear constraint domain of a transition relation that is succinctly expressed
as an existentially quantified linear real arithmetic formula. In contrast to
previously published techniques that rely on quantifier elimination, our
algorithm is proved to have optimal complexity: we prove that the decision
problem associated with our fixpoint problem is in the second level of the
polynomial-time hierarchy.
Comment: 35 pages, conference version published at ESOP 2011, this version is a CoRR version of our submission to Logical Methods in Computer Science.
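The widening imprecision described above can be seen in a small interval-domain toy. This is an illustrative sketch under simplified assumptions (a single hard-coded loop, brute-force search in place of the article's fixpoint computation), not the article's technique.

```python
INF = float('inf')

def step(lo, hi):
    """Abstract transformer at the loop head of
    'x = 0; while x < 100: x = x + 1': filter by the guard x < 100,
    add 1, and join with the entry state x = 0."""
    g_lo, g_hi = lo, min(hi, 99)
    if g_lo > g_hi:                       # guard infeasible
        return (0, 0)
    return (min(0, g_lo + 1), max(0, g_hi + 1))

def kleene_with_widening(k=3):
    """Kleene iteration; after k steps, widen unstable bounds to
    infinity and stop at the first post-fixpoint (no narrowing)."""
    lo, hi, i = 0, 0, 0
    while True:
        nlo, nhi = step(lo, hi)
        if nlo >= lo and nhi <= hi:       # post-fixpoint: inductive, stop
            return (lo, hi)
        if i >= k:
            nlo = lo if nlo >= lo else -INF
            nhi = hi if nhi <= hi else INF
        lo, hi, i = nlo, nhi, i + 1

def least_inductive_interval():
    """Brute-force the strongest inductive interval [0, b] in the domain."""
    b = 0
    while True:
        nlo, nhi = step(0, b)
        if nlo >= 0 and nhi <= b:
            return (0, b)
        b += 1

print(kleene_with_widening())      # (0, inf): widening overshoots
print(least_inductive_interval())  # (0, 100): strongest in the domain
```

On this particular loop a subsequent narrowing pass would recover (0, 100); the article's point is that this recovery is not guaranteed in general, whereas computing the least fixpoint directly yields the strongest expressible invariant by construction.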
Succinct Representations for Abstract Interpretation
Abstract interpretation techniques can be made more precise by distinguishing
paths inside loops, at the expense of possibly exponential complexity.
SMT-solving techniques and sparse representations of paths and sets of paths
avoid this pitfall. We improve previously proposed techniques for guided static
analysis and the generation of disjunctive invariants by combining them with
techniques for succinct representations of paths and symbolic representations
for transitions based on static single assignment. Because of the
non-monotonicity of the results of abstract interpretation with widening
operators, it is difficult to conclude that some abstraction is more precise
than another based on theoretical local precision results. We thus conducted
extensive comparisons between our new techniques and previous ones, on a
variety of open-source packages.
Comment: Static Analysis Symposium (SAS), Deauville, France (2012).
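The join imprecision at merge points that path distinguishing avoids can be shown in a few lines. This is a schematic sketch with a made-up two-branch program and hand-written abstract states, not the paper's SSA-based symbolic representation.

```python
def join(a, b):
    """Interval join: the convex hull of two intervals."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# After 'if c: x = 0; y = 0 else: x = 1; y = 1' (hypothetical program):
b1 = {'x': (0, 0), 'y': (0, 0)}   # abstract state on the then-path
b2 = {'x': (1, 1), 'y': (1, 1)}   # abstract state on the else-path

# Classical analysis joins at the merge node; the result admits
# (x, y) = (0, 1), so the assertion 'x == y' cannot be proved.
merged = {v: join(b1[v], b2[v]) for v in b1}

# A path-distinguishing (disjunctive) analysis keeps both states and
# proves the assertion on each path separately.
paths = [b1, b2]
proved = all(s['x'] == s['y'] for s in paths)
```

The cost of distinguishing paths is the possible exponential blow-up in the number of abstract states, which is what the succinct SMT-based representations are meant to control.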
Optimizing Abstract Abstract Machines
The technique of abstracting abstract machines (AAM) provides a systematic
approach for deriving computable approximations of evaluators that are easily
proved sound. This article contributes a complementary step-by-step process for
subsequently going from a naive analyzer derived under the AAM approach, to an
efficient and correct implementation. The end result of the process is a two to
three order-of-magnitude improvement over the systematically derived analyzer,
making it competitive with hand-optimized implementations that compute
fundamentally less precise results.
Comment: Proceedings of the International Conference on Functional Programming 2013 (ICFP 2013), Boston, Massachusetts, September 2013.
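The flavor of such a naive-to-optimized refinement can be sketched generically. This toy is not one of the paper's actual optimization steps: the state space, transition relation, and work metric below are invented, and the point is only that revisiting everything on each pass versus tracking what changed can differ by orders of magnitude.

```python
N = 200  # toy state space: a chain 0 -> 1 -> ... -> N, self-loop at N

def succs(s):
    """Toy abstract transition relation."""
    return [s + 1] if s < N else [N]

def naive_reach():
    """Naive analyzer: re-apply the transfer function to every known
    state until a whole pass changes nothing."""
    seen, work = {0}, 0
    changed = True
    while changed:
        changed = False
        for s in list(seen):
            work += 1                      # one transfer application
            for t in succs(s):
                if t not in seen:
                    seen.add(t)
                    changed = True
    return seen, work

def worklist_reach():
    """Optimized analyzer: only visit states whose inputs changed
    (here: states that were just discovered)."""
    seen, todo, work = {0}, [0], 0
    while todo:
        s = todo.pop()
        work += 1
        for t in succs(s):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen, work
```

Both compute the same reachable set, but on this chain the naive version does quadratically many transfer applications (about two orders of magnitude more than the worklist version), mirroring the kind of gap the paper closes step by step.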
Cellular automata approach to three-phase traffic theory
The cellular automata (CA) approach to traffic modeling is extended to allow
for spatially homogeneous steady-state solutions that cover a two-dimensional
region in the flow-density plane. Hence these models fulfill a basic postulate
of a three-phase traffic theory proposed by Kerner. This is achieved by a
synchronization distance, within which a vehicle always tries to adjust its
speed to the one of the vehicle in front. In the CA models presented, the
modelling of the free and safe speeds, the slow-to-start rules as well as some
contributions to noise are based on the ideas of the Nagel-Schreckenberg type
modelling. It is shown that the proposed CA models can be very transparent and
still reproduce the two main types of congested patterns (the general pattern
and the synchronized flow pattern) as well as their dependence on the flows
near an on-ramp, in qualitative agreement with the recently developed continuum
version of the three-phase traffic theory [B. S. Kerner and S. L. Klenov. 2002.
J. Phys. A: Math. Gen. 35, L31]. These features are qualitatively different
from those of previously considered CA traffic models. The probability of the
breakdown phenomenon (i.e., of the phase transition from free flow to
synchronized flow) as a function of the flow rate to the on-ramp and of the flow
rate on the road upstream of the on-ramp is investigated. The capacity drops at
the on-ramp which occur due to the formation of different congested patterns
are calculated.
Comment: 55 pages, 24 figures.
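For context, the Nagel-Schreckenberg update rules that these models build on can be written compactly. This is a sketch of the standard NaSch model with illustrative parameters; the three-phase models in the paper extend it with, among other things, a synchronization distance that this sketch omits.

```python
import random

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=random):
    """One parallel Nagel-Schreckenberg update on a circular road of L
    cells. pos: car positions in (cyclic) driving order, vel: speeds."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L  # empty cells to leader
        v = min(vel[i] + 1, vmax)                  # 1. acceleration
        v = min(v, gap)                            # 2. safety braking
        if v > 0 and rng.random() < p:             # 3. random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % L for i in range(n)]  # 4. motion
    return new_pos, new_vel
```

Because a car's speed never exceeds its gap, vehicles can neither collide nor overtake, so the cyclic ordering of the lists is preserved across steps; in the classical model, steady states collapse onto a one-dimensional flow-density curve rather than the two-dimensional region required by three-phase theory.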
Pushdown Control-Flow Analysis of Higher-Order Programs
Context-free approaches to static analysis gain precision over classical
approaches by perfectly matching returns to call sites---a property that
eliminates spurious interprocedural paths. Vardoulakis and Shivers's recent
formulation of CFA2 showed that it is possible (if expensive) to apply
context-free methods to higher-order languages and gain the same boost in
precision achieved over first-order programs.
To this young body of work on context-free analysis of higher-order programs,
we contribute a pushdown control-flow analysis framework, which we derive as an
abstract interpretation of a CESK machine with an unbounded stack. One
instantiation of this framework marks the first polyvariant pushdown analysis
of higher-order programs; another marks the first polynomial-time analysis. In
the end, we arrive at a framework for control-flow analysis that can
efficiently compute pushdown generalizations of classical control-flow
analyses.
Comment: The 2010 Workshop on Scheme and Functional Programming.
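The precision gain from matching returns to call sites can be illustrated schematically. The call sites and abstract values below are invented, and this is a cartoon of the effect, not the CESK-derived pushdown analysis itself.

```python
# Abstract values flowing into the parameter of a shared helper from
# two hypothetical call sites (names and values are illustrative):
calls = {'site1': {'a'}, 'site2': {'b'}}

# Finite-state (context-insensitive) analysis: one summary for the
# helper, joined over all callers, returned to every call site.
summary = set().union(*calls.values())
insensitive = {site: summary for site in calls}

# Pushdown-style analysis matches each return to its own call site,
# so only the values that actually entered from that site flow back.
sensitive = {site: set(vals) for site, vals in calls.items()}
```

Under the joined summary, `site1` spuriously receives `'b'` along an interprocedural path that no concrete execution takes; matching calls and returns with a stack discipline eliminates exactly these paths.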
Accelerated Data-Flow Analysis
Acceleration in symbolic verification consists in computing the exact effect
of some control-flow loops in order to speed up the iterative fixpoint
computation of reachable states. Even if no termination guarantee is provided
in theory, successful results were obtained in practice by different tools
implementing this framework. In this paper, the acceleration framework is
extended to data-flow analysis. Compared to a classical
widening/narrowing-based abstract interpretation, the loss of precision is
controlled here by the choice of the abstract domain and does not depend on the
way the abstract value is computed. Our approach is geared towards precision,
but we don't lose efficiency on the way. Indeed, we provide a cubic-time
acceleration-based algorithm for solving interval constraints with full
multiplication.
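The acceleration idea, computing the exact effect of a loop in closed form rather than iterating the abstract transformer, can be sketched for the simplest case: a counter loop with a constant increment. The paper's algorithm handles the much richer class of interval constraints with full multiplication; this toy and its names are ours.

```python
def accelerate(lo, hi, B, d):
    """Interval hull of all states reachable from x in [lo, hi] by
    'while x < B: x = x + d' (d > 0), computed in one closed-form step:
    any start below B eventually lands in [B, B + d - 1]."""
    if lo >= B:
        return (lo, hi)                  # loop never entered
    return (lo, max(hi, B + d - 1))

def kleene(lo, hi, B, d):
    """Plain iteration of the abstract loop transformer, for comparison."""
    steps = 0
    while True:
        steps += 1
        nhi = hi
        if lo < B:                       # states below B advance by d
            nhi = max(hi, min(hi, B - 1) + d)
        if nhi == hi:
            return (lo, hi), steps
        hi = nhi
```

Both agree on the result, but `accelerate` reaches it in one step where `kleene` needs a number of iterations proportional to (B - hi) / d, which is exactly the cost that acceleration removes without resorting to widening.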