Incremental closure for systems of two variables per inequality
Subclasses of linear inequalities where each inequality has at most two variables are popular in abstract interpretation and model checking, because they strike a balance between what can be described and what can be efficiently computed. This paper focuses on the TVPI class of inequalities, in which each coefficient of each two-variable inequality is unrestricted. An implied TVPI inequality can be generated from a pair of TVPI inequalities by eliminating a given common variable (echoing resolution on clauses). This operation, called result, can be applied to derive TVPI inequalities which are entailed (implied) by a given TVPI system. The key operation on TVPI systems is calculating closure: satisfiability can be observed from a closed system, and a closed system also simplifies the calculation of other operations. A closed system can be derived by repeatedly applying the result operator. The process of adding a single TVPI inequality to an already closed input TVPI system and then finding the closure of this augmented system is called incremental closure. This too can be calculated by the repeated application of the result operator. This paper studies the calculus defined by result, the structure of result derivations, and how derivations can be combined and controlled. A series of lemmata on derivations is presented that, collectively, provides a pathway for synthesising an algorithm for incremental closure. The complexity of the incremental closure algorithm is analysed and found to be O((n^2 + m^2) lg(m)), where n is the number of variables and m the number of inequalities of the input TVPI system.
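The result operator described in the abstract is essentially one step of Fourier-Motzkin elimination restricted to a pair of inequalities. A minimal sketch in Python, assuming a hypothetical representation of a TVPI inequality as a coefficient map plus a constant bound (this is an illustration, not the paper's data structure):

```python
from fractions import Fraction

def result(ineq1, ineq2, var):
    """Eliminate `var` from two inequalities, each represented as
    (coefs, bound) meaning sum(coefs[v] * v) <= bound.
    Returns the implied inequality, or None when `var` cannot be
    eliminated (its coefficients are zero or have the same sign)."""
    (c1, b1), (c2, b2) = ineq1, ineq2
    a1 = c1.get(var, Fraction(0))
    a2 = c2.get(var, Fraction(0))
    if a1 == 0 or a2 == 0 or (a1 > 0) == (a2 > 0):
        return None  # elimination needs opposite signs on `var`
    # Scale both inequalities so the coefficients of `var` cancel.
    s1, s2 = abs(a2), abs(a1)
    coefs = {}
    for v in set(c1) | set(c2):
        if v == var:
            continue
        c = s1 * c1.get(v, Fraction(0)) + s2 * c2.get(v, Fraction(0))
        if c != 0:
            coefs[v] = c
    return coefs, s1 * b1 + s2 * b2

# x - y <= 2 combined with -x + z <= 3 entails -y + z <= 5.
derived = result(({'x': Fraction(1), 'y': Fraction(-1)}, Fraction(2)),
                 ({'x': Fraction(-1), 'z': Fraction(1)}, Fraction(3)), 'x')
```

Note that eliminating the common variable of two two-variable inequalities yields an inequality over at most two variables, which is why the TVPI class is closed under this operator.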
Experimental measurement and analysis of stress under a foundation slab
Understanding how a load is redistributed into the subsoil below a building foundation is important for a reliable and economical design. The article presents the results of a physical model of a foundation slab and its interaction with the subsoil. The interaction was investigated comprehensively by monitoring the development of stress in the subsoil and the settlement of the foundation slab during loading. The load acting on the foundation was applied by strutting a hydraulic press against a heavy steel frame built for this purpose by the Department of Building Structures, Faculty of Civil Engineering of VSB-TU Ostrava. The preparatory phase of the experiment involved homogenization of the soil, during which trios of pressure cells were gradually fitted in three horizons. The quality of the homogenization was checked on an ongoing basis through field tests: a dynamic penetration test, a dynamic plate load test and seismic measurement of the foundation slab response. Finally, the homogenized soil was subjected to mechanical analysis to determine the strength and deformation parameters for the basic Mohr-Coulomb constitutive model.
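As a pointer to how the determined parameters enter the basic Mohr-Coulomb model, the criterion relates shear strength to effective normal stress through cohesion and friction angle. A minimal sketch in Python with illustrative parameter values (these are hypothetical numbers, not measured data from the experiment):

```python
import math

def mohr_coulomb_shear_strength(sigma_n, cohesion, phi_deg):
    """Shear strength tau_f = c + sigma_n * tan(phi) for the basic
    Mohr-Coulomb criterion: sigma_n is the effective normal stress
    on the failure plane (kPa), cohesion c is in kPa, and phi is the
    internal friction angle in degrees."""
    return cohesion + sigma_n * math.tan(math.radians(phi_deg))

# Illustrative values: c = 5 kPa, phi = 30 deg, sigma_n = 100 kPa
tau_f = mohr_coulomb_shear_strength(100.0, 5.0, 30.0)  # ~62.7 kPa
```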
Implicit yield function formulation for granular and rock-like materials
The constitutive modelling of granular, porous and quasi-brittle materials is
based on yield (or damage) functions, which may exhibit features (for instance,
lack of convexity, or branches where the values go to infinity, or false
elastic domains) preventing the use of efficient return-mapping integration
schemes. This problem is solved by proposing a general construction strategy to
define an implicitly defined convex yield function starting from any convex
yield surface. Based on this implicit definition of the yield function, a
return-mapping integration scheme is implemented and tested for elastic-plastic
(or -damaging) rate equations. The scheme is general and, although it
introduces a numerical cost when compared to situations where the scheme is not
needed, is demonstrated to perform correctly and accurately.
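For readers unfamiliar with return-mapping, the elastic-predictor/plastic-corrector idea can be sketched in one dimension. This is a generic illustration under an assumed smooth convex yield function, not the implicit construction proposed in the paper:

```python
import math

def radial_return_1d(sigma_trial, E, yield_f, dyield, tol=1e-10, max_iter=50):
    """One-dimensional elastic-predictor / plastic-corrector sketch.
    yield_f(sigma) <= 0 defines the elastic domain; Newton iteration
    finds the plastic multiplier dgamma returning the trial stress to
    the yield surface. Convexity of yield_f is what makes this
    projection well defined -- the property the paper's construction
    restores when the original yield function lacks it."""
    if yield_f(sigma_trial) <= 0:
        return sigma_trial, 0.0            # elastic step: trial state admissible
    n = math.copysign(1.0, sigma_trial)    # return direction
    dgamma = 0.0
    sigma = sigma_trial
    for _ in range(max_iter):
        sigma = sigma_trial - E * dgamma * n
        residual = yield_f(sigma)
        if abs(residual) < tol:
            break
        # d(residual)/d(dgamma) = -E * n * yield_f'(sigma)
        dgamma -= residual / (-E * n * dyield(sigma))
    return sigma, dgamma

# Perfect plasticity with yield stress 250: f(sigma) = |sigma| - 250.
f = lambda s: abs(s) - 250.0
df = lambda s: math.copysign(1.0, s)
sigma, dgamma = radial_return_1d(400.0, 200e3, f, df)  # returns to sigma = 250
```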
Simple, compact and robust approximate string dictionary
This paper is concerned with practical implementations of approximate string dictionaries that allow edit errors. In this problem, we have as input a dictionary of strings of total length n over an alphabet of size σ. Given an error bound k and a pattern p of length m, a query has to return all the strings of the dictionary which are at edit distance at most k from p, where the edit distance between two strings s and t is defined as the minimum-cost sequence of edit operations that transforms s into t. The cost of a sequence of operations is defined as the sum of the costs of the operations involved in the sequence. In this paper, we assume that each of these operations has unit cost and consider only three operations: deletion of one character, insertion of one character and substitution of a character by another. We present a practical implementation of the data structure we recently proposed, which works only for one error, and we extend the scheme to larger error bounds. Our implementation has many desirable properties: it has a very fast and space-efficient building algorithm, the dictionary data structure is compact, and it has fast and robust query time. Finally, our data structure is simple to implement as it only uses basic techniques from the literature, mainly hashing (linear probing and hash signatures) and succinct data structures (bitvectors supporting rank queries).
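The one-error case can be illustrated with the classic deletion-neighborhood trick (in the spirit of Mor-Fraenkel and SymSpell): index every word under all its single-character deletions, then intersect with the deletions of the pattern. This Python sketch conveys the signature idea only; it is not the authors' compact hash/bitvector structure:

```python
from collections import defaultdict

def deletions(s):
    """The string itself plus every string obtained by deleting one character."""
    return {s} | {s[:i] + s[i + 1:] for i in range(len(s))}

def build(dictionary):
    """Index each word under its deletion neighborhood (one edit error)."""
    index = defaultdict(set)
    for w in dictionary:
        for d in deletions(w):
            index[d].add(w)
    return index

def edit1(a, b):
    """True iff edit distance(a, b) <= 1 under unit-cost ins/del/sub."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) > len(b):
        a, b = b, a
    i = j = diff = 0
    while i < len(a) and j < len(b):
        if a[i] != b[j]:
            diff += 1
            if diff > 1:
                return False
            if len(a) == len(b):
                i += 1          # substitution: advance both strings
            j += 1              # insertion: skip a character of the longer string
        else:
            i += 1
            j += 1
    return diff + (len(b) - j) <= 1

def query(index, pattern):
    """Words at edit distance <= 1 from pattern: neighborhoods of two such
    strings always intersect, so lookups give a candidate superset, and a
    direct check discards the false positives."""
    cands = set()
    for d in deletions(pattern):
        cands |= index.get(d, set())
    return {w for w in cands if edit1(pattern, w)}

index = build(["hello", "help", "world"])
# query(index, "hellp") == {"hello", "help"}
```

The verification pass matters because intersecting deletion neighborhoods is necessary but not sufficient: "ab" and "ba" share the deletions "a" and "b" yet are two edits apart.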