First Principles NMR Study of Fluorapatite under Pressure
NMR is the technique of choice to probe the local properties of materials.
Herein we present the results of density functional theory (DFT) ab initio
calculations of the NMR parameters for fluorapatite (FAp), a calcium
orthophosphate mineral belonging to the apatite family, using the GIPAW
method [Pickard and Mauri, 2001]. Understanding the local effects of pressure
on apatites is particularly relevant because of their important role in many
solid state and biomedical applications. Apatites are open structures, which
can undergo complex anisotropic deformations, and the response of NMR can
elucidate the microscopic changes induced by an applied pressure. The computed
NMR parameters proved to be in good agreement with the available experimental
data. The structural analysis of the material's behavior under hydrostatic
pressure (from -5 to +100 kbar) indicated a shrinkage of the diameter of the
apatitic channel, and a strong correlation between NMR shielding and pressure,
proving the sensitivity of this technique to even small changes in the chemical
environment around the nuclei. This theoretical approach allows the exploration
of all the different nuclei composing the material, thus providing very
useful guidance in the interpretation of experimental results, particularly
valuable for the more challenging nuclei such as Ca and O.
Comment: 8 pages, 2 figures, 3 tables
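As a rough illustration of the shielding-pressure correlation described above (a sketch with made-up numbers, not data from the paper), the following Python snippet fits a linear trend of isotropic shielding against applied pressure:

```python
# Minimal sketch: fitting a linear NMR shielding vs. pressure trend.
# The pressures and isotropic shieldings below are hypothetical
# placeholders; GIPAW/DFT calculations would supply the real values.
import numpy as np

pressures = np.array([-5.0, 0.0, 25.0, 50.0, 75.0, 100.0])        # kbar
shielding = np.array([120.4, 120.1, 118.9, 117.6, 116.5, 115.2])  # ppm (assumed)

# Least-squares linear fit: sigma_iso(P) ~ a * P + b
a, b = np.polyfit(pressures, shielding, 1)
residuals = shielding - (a * pressures + b)
print(f"slope: {a:.4f} ppm/kbar, intercept: {b:.2f} ppm")
print(f"max residual: {np.abs(residuals).max():.3f} ppm")
```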
Automatic Verification of Parametric Specifications with Complex Topologies
The focus of this paper is on reducing the complexity in verification by exploiting modularity at various levels: in specification, in verification, and structurally.
- For specifications, we use the modular language CSP-OZ-DC, which allows us to decouple verification tasks concerning data from those concerning durations.
- At the verification level, we exploit modularity in theorem proving for rich data structures and use this for invariant checking.
- At the structural level, we analyze possibilities for modular verification of systems consisting of various components which interact.
We illustrate these ideas by automatically verifying safety properties of a case study from the European Train Control System standard, which extends previous examples by comprising a complex track topology with lists of track segments and trains with different routes.
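To make the invariant-checking idea concrete, here is a minimal sketch of inductive invariant checking with an SMT solver; the two-train transition system, the safety margin, and the use of Z3's Python API are illustrative assumptions, not the paper's actual CSP-OZ-DC toolchain:

```python
# Minimal sketch of inductive invariant checking with an SMT solver.
# The model (two trains on one track, controller-enforced spacing) is
# illustrative only; the paper works with CSP-OZ-DC, not Z3.
from z3 import Reals, Solver, And, Implies, Not, unsat

p1, p2, p1n, p2n = Reals("p1 p2 p1n p2n")
MARGIN = 100.0  # minimum safe distance between trains (assumed)

init = And(p1 == 0, p2 == 500)            # initial positions
inv = p2 - p1 >= MARGIN                   # candidate safety invariant
trans = And(p1n >= p1, p1n <= p1 + 50,    # each train advances <= 50 per step
            p2n >= p2, p2n <= p2 + 50,
            p2n - p1n >= MARGIN)          # controller preserves spacing
inv_next = p2n - p1n >= MARGIN

def valid(formula):
    s = Solver()
    s.add(Not(formula))                   # valid iff negation is unsatisfiable
    return s.check() == unsat

print("init => inv:        ", valid(Implies(init, inv)))
print("inv & trans => inv':", valid(Implies(And(inv, trans), inv_next)))
```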
Consistency of random forests
Random forests are a learning algorithm proposed by Breiman [Mach. Learn. 45
(2001) 5--32] that combines several randomized decision trees and aggregates
their predictions by averaging. Despite its wide usage and outstanding
practical performance, little is known about the mathematical properties of the
procedure. This disparity between theory and practice originates in the
difficulty of simultaneously analyzing both the randomization process and the
highly data-dependent tree structure. In the present paper, we take a step
forward in forest exploration by proving a consistency result for Breiman's
[Mach. Learn. 45 (2001) 5--32] original algorithm in the context of additive
regression models. Our analysis also sheds an interesting light on how random
forests can nicely adapt to sparsity.
1. Introduction. Random forests are an
ensemble learning method for classification and regression that constructs a
number of randomized decision trees during the training phase and predicts by
averaging the results. Since its publication in the seminal paper of Breiman
(2001), the procedure has become a major data analysis tool that performs well
in practice in comparison with many standard methods. What has greatly
contributed to the popularity of forests is the fact that they can be applied
to a wide range of prediction problems and have few parameters to tune. Aside
from being simple to use, the method is generally recognized for its accuracy
and its ability to deal with small sample sizes, high-dimensional feature
spaces, and complex data structures. The random forest methodology has been
successfully applied to many practical problems, including air quality
prediction (winning code of the EMC data science global hackathon in 2012, see
http://www.kaggle.com/c/dsg-hackathon), chemoinformatics [Svetnik et al.
(2003)], and ecology [Prasad, Iverson and Liaw (2006), Cutler et al. (2007)]…
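As a quick, hands-on illustration of the procedure analyzed above (an arbitrary sparse additive model, not an experiment from the paper), the sketch below fits scikit-learn's RandomForestRegressor to data where only two of ten features matter:

```python
# Minimal sketch: random forest regression on a sparse additive model,
# y = f1(x1) + f2(x2) + noise, where only 2 of 10 features are relevant.
# The generating functions and hyperparameters are arbitrary choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.uniform(0, 1, size=(n, d))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X[:1500], y[:1500])

mse = np.mean((forest.predict(X[1500:]) - y[1500:]) ** 2)
print(f"held-out MSE: {mse:.4f}")
print("feature importances:", np.round(forest.feature_importances_, 3))
```

The importances should concentrate on the first two coordinates, mirroring the adaptation to sparsity discussed in the abstract.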
Automatic Amortized Resource Analysis with Regular Recursive Types
The goal of automatic resource bound analysis is to statically infer symbolic
bounds on the resource consumption of the evaluation of a program. A
longstanding challenge for automatic resource analysis is the inference of
bounds that are functions of complex custom data structures. This article
builds on type-based automatic amortized resource analysis (AARA) to address
this challenge. AARA is based on the potential method of amortized analysis and
reduces bound inference to standard type inference with additional linear
constraint solving, even when deriving non-linear bounds. A key component of
AARA is resource functions that generate the space of possible bounds for
values of a given type while enjoying necessary closure properties.
Existing work on AARA defined such functions for many data structures such as
lists of lists but the question of whether such functions exist for arbitrary
data structures remained open. This work answers this question positively by
uniformly constructing resource polynomials for algebraic data structures
defined by regular recursive types. These functions are a generalization of all
previously proposed polynomial resource functions and can be seen as a general
notion of polynomials for values of a given recursive type. A resource type
system for FPC, a core language with recursive types, demonstrates how resource
polynomials can be integrated with AARA while preserving all benefits of past
techniques. The article also proposes new techniques for
stating the rules of this type system and proving it sound. First, multivariate
potential annotations are stated in terms of free semimodules, substantially
abstracting details of the presentation of annotations and the proofs of their
properties. Second, a logical relation giving semantic meaning to resource
types enables a proof of soundness by a single induction on typing derivations.
Comment: 15 pages, 5 figures; to be published in LICS'2
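For a flavor of the potential method underlying AARA (a toy instance for plain lists, not the article's general construction for regular recursive types), the classic resource polynomials for a list are binomial coefficients of its length, phi(l) = sum_i q_i * C(|l|, i); the sketch below checks that the annotation q = (0, 0, 1), i.e. phi = C(n, 2), pays for all comparisons of insertion sort:

```python
# Minimal sketch of the potential method behind AARA for plain lists:
# an annotation q assigns potential phi(l) = sum_i q[i] * C(len(l), i).
# We check that q = (0, 0, 1), i.e. phi = C(n, 2), bounds the number of
# comparisons made by insertion sort. Illustrative only; the article
# constructs such polynomials uniformly for regular recursive types.
from math import comb

def potential(n, q):
    return sum(qi * comb(n, i) for i, qi in enumerate(q))

def insertion_sort_cost(xs):
    out, cost = [], 0
    for x in xs:
        i = 0
        while i < len(out):
            cost += 1                     # one comparison
            if x < out[i]:
                break
            i += 1
        out.insert(i, x)
    return out, cost

q = (0, 0, 1)                             # potential C(n, 2)
for xs in ([3, 1, 2], [5, 4, 3, 2, 1], list(range(10))):
    _, cost = insertion_sort_cost(xs)
    assert cost <= potential(len(xs), q), (xs, cost)
    print(f"n={len(xs)}: cost={cost} <= {potential(len(xs), q)}")
```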
Efficient Data Structures for Automated Theorem Proving in Expressive Higher-Order Logics
Church's Simple Theory of Types (STT), also referred to as classical higher-order logic, is an elegant and expressive formal system built on top of the simply typed λ-calculus. Its mechanisms of explicit binding and quantification over arbitrary sets and functions allow the representation of complex mathematical concepts and formulae in a concise and unambiguous manner. Higher-order automated theorem proving (ATP) has recently made major progress and several sophisticated ATP systems for higher-order logic have been developed, including Satallax, Isabelle/HOL and LEO-II. Still, higher-order theorem proving is not as mature as its first-order counterpart, and robust implementation techniques for efficient data structures are scarce.
In this thesis, a higher-order term representation based upon the polymorphically typed λ-calculus is presented. This term representation employs spine notation, explicit substitutions and perfect term sharing for efficient term traversal, fast β-normalization and reuse of already constructed terms, respectively. An evaluation of the term representation is performed on the basis of a heterogeneous benchmark set. It shows that while the presented term data structure performs quite well in general, the normalization results indicate that a context-dependent choice of reduction strategies is beneficial.
A term indexing data structure for fast term retrieval based on various low-level criteria is presented and discussed. It supports symbol-based term retrieval, indexing of terms via structural properties, and subterm indexing.
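As a toy illustration of perfect term sharing (hash consing) of the kind described above (the interning scheme and de Bruijn encoding are this sketch's assumptions; spine notation and explicit substitutions are not modeled), consider:

```python
# Minimal sketch of perfect term sharing (hash consing) for lambda-terms:
# structurally equal subterms are interned once and reused, so structural
# equality collapses to pointer equality. Spine notation and explicit
# substitutions from the thesis are not modeled here.
_pool = {}

def intern(term):
    return _pool.setdefault(term, term)

def var(i):                 # de Bruijn index
    return intern(("var", i))

def lam(body):
    return intern(("lam", body))

def app(f, a):
    return intern(("app", f, a))

# Building (\x. x x) twice yields the very same object:
t1 = lam(app(var(0), var(0)))
t2 = lam(app(var(0), var(0)))
print(t1 is t2)             # True: the representation is shared
```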
On Automated Lemma Generation for Separation Logic with Inductive Definitions
Separation Logic with inductive definitions is a well-known approach for
deductive verification of programs that manipulate dynamic data structures.
Deciding verification conditions in this context is usually based on
user-provided lemmas relating the inductive definitions. We propose a novel
approach for generating these lemmas automatically which is based on simple
syntactic criteria and deterministic strategies for applying them. Our approach
focuses on iterative programs, although it can be applied to recursive programs
as well, and specifications that describe not only the shape of the data
structures, but also their content or their size. Empirically, we find that our
approach is powerful enough to deal with sophisticated benchmarks, e.g.,
iterative procedures for searching, inserting, or deleting elements in sorted
lists, binary search trees, red-black trees, and AVL trees, in a very efficient
way.
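For intuition about the kind of lemma being generated (a classic list-segment composition lemma, lseg(x, y) * lseg(y, nil) |- list(x); the heap encoding below is a hypothetical illustration, not the paper's syntactic strategy), the sketch checks a concrete instance:

```python
# Minimal sketch checking a concrete instance of the separation-logic
# composition lemma lseg(x, y) * lseg(y, nil) |- list(x). Heaps are
# modeled as next-pointer maps; lseg(h, x, y) holds when h is exactly
# a list segment from x to y. Illustrative only.
NIL = None

def lseg(heap, x, y):
    cells, cur = set(), x
    while cur != y:
        if cur not in heap or cur in cells:   # dangling pointer or cycle
            return False
        cells.add(cur)
        cur = heap[cur]
    return cells == set(heap)                 # heap holds exactly the segment

h = {1: 2, 2: 3, 3: NIL}                      # the list 1 -> 2 -> 3 -> nil
h1 = {1: 2, 2: 3}                             # lseg(1, 3)
h2 = {3: NIL}                                 # lseg(3, nil), i.e. list(3)

assert lseg(h1, 1, 3) and lseg(h2, 3, NIL)    # disjoint premises hold
assert lseg(h, 1, NIL)                        # composed heap is list(1)
print("composition lemma instance holds")
```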