Ultrasonic distance detection for a closed-loop spinal cord stimulation system
When stimulating the spinal cord at a constant strength, the current density in the spinal cord, and thus the effects on chronic, intractable pain and vascular insufficiency, will change with body position, due to the varying separation of the spinal cord and the stimulating electrode. For a good therapeutic effect, i.e. pain relief without discomfort, the current density in the spinal cord has to remain between the perception and discomfort thresholds (the stimulation window). The stimulation window is very small. In current SCS systems the stimulus applied to the electrode is set at a constant value. A major improvement could be achieved if the distance between the stimulation electrode and the spinal cord could be measured and used to control the stimulus amplitude in a closed-loop system. An ultrasonic piezoelectric transducer was chosen to measure the distance between the electrode and the spinal cord.
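The proposed closed loop can be sketched as follows. This is an illustrative Python sketch only: the linear distance-to-amplitude law, the numeric values and the function names are our assumptions, not the paper's design.

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical value for soft tissue

def distance_from_echo(round_trip_s):
    """One-way electrode-to-cord distance (m) from a pulse-echo round-trip time."""
    return SPEED_OF_SOUND_TISSUE * round_trip_s / 2.0

def stimulus_amplitude(distance_m, ref_distance_m, ref_amplitude_ma,
                       perception_ma, discomfort_ma):
    """Scale the stimulus with measured distance, clamped to the stimulation window.

    First-order assumption: the amplitude needed to keep the current density
    in the cord constant grows linearly with the electrode-cord separation.
    """
    amp = ref_amplitude_ma * (distance_m / ref_distance_m)
    return max(perception_ma, min(discomfort_ma, amp))
```

For example, a 10 microsecond round trip corresponds to a 7.7 mm separation, and doubling the separation doubles the requested amplitude until the discomfort threshold caps it.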
Automated Verification of Practical Garbage Collectors
Garbage collectors are notoriously hard to verify, due to their low-level
interaction with the underlying system and the general difficulty in reasoning
about reachability in graphs. Several papers have presented verified
collectors, but either the proofs were hand-written or the collectors were too
simplistic to use on practical applications. In this work, we present two
mechanically verified garbage collectors, both practical enough to use for
real-world C# benchmarks. The collectors and their associated allocators
consist of x86 assembly language instructions and macro instructions, annotated
with preconditions, postconditions, invariants, and assertions. We used the
Boogie verification generator and the Z3 automated theorem prover to verify
this assembly language code mechanically. We provide measurements comparing the
performance of the verified collector with that of the standard Bartok
collectors on off-the-shelf C# benchmarks, demonstrating their competitiveness.
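The reachability property at the heart of such proofs can be illustrated with a small marking routine. This is a hedged Python sketch with an invented heap representation, not the paper's x86/Boogie artifacts; the loop invariant stated in the comment is the kind of annotation the verified collectors carry.

```python
def mark(roots, heap):
    """Mark phase of a tracing collector over an object graph.

    heap: dict mapping object id -> list of child ids.
    Postcondition (the kind of property a verified collector proves):
    the returned set is exactly the set of ids reachable from roots.
    """
    marked = set()
    work = list(roots)
    while work:
        # Invariant: every id in `marked` is reachable from roots, and every
        # reachable id is either in `marked` or reachable from `work`.
        obj = work.pop()
        if obj not in marked:
            marked.add(obj)
            work.extend(heap[obj])
    return marked
```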
Condition numbers in the boundary element method: shape and solvability
The boundary element method (BEM) is an efficient numerical method that approximates solutions of various boundary value problems. Despite its success little research has been performed on the conditioning of the linear systems that appear in the BEM. For a Laplace equation with Dirichlet boundary conditions a remarkable phenomenon is observed: the corresponding boundary integral equation (BIE) is singular for a certain critical size of the 2D domain. As a consequence the discrete counterpart of the BIE, the linear system, is singular too, or at least ill-conditioned. This is reflected by the condition number of the system matrix, which is infinitely large, or at least very large. When the condition number of the BEM-matrix is large, the linear system is difficult to solve and the solution of the system is very sensitive to perturbations in the boundary data. For a Laplace equation with mixed boundary conditions a similar phenomenon is observed. The corresponding BEM-matrix consists of two blocks: one block originates from the BEM-matrix belonging to the Dirichlet problem, the other from the BEM-matrix belonging to the Neumann problem. The composite matrix inherits the solvability problems from the Dirichlet block. In other words, for the Laplace equation with mixed boundary conditions there also exists a critical size of the 2D domain for which the BEM-matrix has an infinitely large condition number. Hence the size and shape of the domain affect the solvability of the BEM problem. The critical size of the domain for which the BIE becomes singular is related to the logarithmic capacity of the domain. The logarithmic capacity is a positive real number that is a function of the size and shape of the domain. If this logarithmic capacity is equal to one, the domain is a critical domain, and for this domain the BIE becomes singular. Thus by computing the logarithmic capacity we can determine a priori whether the BIE will be singular or not.
The logarithmic capacity depends linearly on the scale of the domain, and thus a domain with logarithmic capacity equal to one can always be found by rescaling the domain. Unfortunately the logarithmic capacity can only be computed analytically for a few simple domains; for more involved domains it has to be estimated. There are several ways to avoid large condition numbers, i.e. the singular BIEs that appear at critical domains. The first option is to rescale the domain such that the logarithmic capacity is unequal to one. One can also add a supplementary condition to the BIE and the linear system. A drawback of this option is that the linear system then has more equations than unknowns, and different techniques are required to solve it. A third option is to slightly modify the fundamental solution of the Laplace operator. This fundamental solution appears directly in the BIE, and it can be shown that a suitable modification yields BIEs that do not become singular. Critical domains for which the BIEs become singular are not restricted to the Laplace equation. Such critical domains also exist for BIEs applied to the biharmonic equation, the elastostatic equations and the Stokes equations. As the last two are vectorial equations, the corresponding BIEs also consist of two equations. As a consequence two critical domains can be found for which these BIEs become singular. To obtain nonsingular BIEs, techniques similar to the Laplace case can be used. Unfortunately we cannot determine a priori the sizes for which the BIEs become singular, and thus do not know to what size we should rescale the domain to obtain nonsingular BIEs. The existence of critical domains is in essence caused by the logarithmic term in the fundamental solutions of elliptic boundary value problems in 2D. This logarithmic term does not depend linearly on the size of the domain. When a domain is scaled, i.e.
multiplied by a scale factor, the argument of the logarithm is also multiplied by this scale factor, but the logarithm turns this into an additive term. Thus the logarithm transforms multiplication into addition. This affects the BIEs in such a way that critical domains can appear. The fundamental solutions of boundary value problems in 3D do not contain a logarithmic term. Hence scaling of the domain does not affect the fundamental solution, and consequently the BIE is not affected either. Hence we may safely rescale 3D domains without the risk of encountering a critical domain. An example in which a domain takes many different sizes and shapes is the blowing problem. In this problem a viscous fluid is blown to a desired shape. Typically the time is discretised into a set of discrete time steps, and at each step the shape of the fluid is computed by solving the Stokes equations. When attempting to simulate this problem in 2D, we meet a large number of 2D domains, and we run the risk that one of these domains equals or approaches a critical domain. In such a case the BEM will have difficulty solving the Stokes equations for that particular domain. When simulating the blowing problem in 3D, no critical domains are encountered. It turns out that the BEM is a very efficient numerical method for this particular 3D problem with a moving boundary. As we are merely interested in the shape of the fluid, we only need to know the flow at its boundary. The BEM does exactly that; it does not compute the flow at the interior of the fluid. Furthermore it is rather easy to include other effects from the blowing problem in the model, such as gravity, surface tension and friction from the contact of the fluid with a wall. As only the boundary of the fluid is discretised, the system matrices that appear in the BEM are smaller than those that appear when solving the problem with, for example, a finite element method.
Though the BEM-matrices are dense, whereas the finite element matrices are sparse, the computational effort for the BEM is relatively low. In short, the BEM is a very appropriate numerical method for solving blowing problems.
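As a concrete illustration of the criticality check: the logarithmic capacity is known in closed form for a few shapes (a disk of radius r has capacity r, a line segment of length l has capacity l/4, an ellipse with semi-axes a and b has capacity (a+b)/2). The following Python sketch shows the a priori test and the rescaling remedy; the function names are ours, and the linear-scaling property is the one stated in the abstract.

```python
def disk_capacity(radius):
    """Logarithmic capacity of a disk equals its radius."""
    return radius

def segment_capacity(length):
    """Logarithmic capacity of a line segment is a quarter of its length."""
    return length / 4.0

def ellipse_capacity(a, b):
    """Logarithmic capacity of an ellipse with semi-axes a, b is (a+b)/2."""
    return (a + b) / 2.0

def rescaled_capacity(capacity, scale):
    """Capacity depends linearly on scale: cap(s * Omega) = s * cap(Omega)."""
    return scale * capacity

def is_critical(capacity, tol=1e-9):
    """The BIE is singular when the logarithmic capacity equals one."""
    return abs(capacity - 1.0) < tol
```

A unit disk is critical, but rescaling it by any factor other than one removes the singularity; note that an ellipse with semi-axes 1.5 and 0.5 is also critical, showing that the critical size depends on shape.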
A heuristic explanation of Batcher's Baffler
Batcher's Baffler (so named by David Gries) is a sorting algorithm that is of interest because many of its "comparison swaps" can be executed concurrently. It is also of interest because it used to be hard to explain. This note explains Batcher's Baffler by designing it. Besides including all heuristics, it has two distinguishing features, both contributing to its clarity and brevity: (0) the (little) theory the algorithm relies upon is dealt with in isolation; (1) by suitable abstractions, all case analyses have been removed from the argument.
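For reference, the algorithm (commonly identified with Batcher's odd-even merge sort) can be sketched in Python. This follows the standard recursive formulation and assumes the input length is a power of two; every swap in the inner loop is one of the "comparison swaps", and swaps at the same recursion level are independent and could run concurrently.

```python
def comparison_swap(a, i, j):
    """Order the pair (a[i], a[j]); this is the only primitive the network uses."""
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, hi, r):
    """Merge the two sorted halves of a[lo..hi] using comparators of stride r."""
    step = r * 2
    if step < hi - lo:
        oddeven_merge(a, lo, hi, step)       # merge even-indexed subsequence
        oddeven_merge(a, lo + r, hi, step)   # merge odd-indexed subsequence
        for i in range(lo + r, hi - r, step):
            comparison_swap(a, i, i + r)     # final clean-up comparators
    else:
        comparison_swap(a, lo, lo + r)

def batcher_sort(a, lo=0, hi=None):
    """Sort a in place; len(a) must be a power of two."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo >= 1:
        mid = lo + (hi - lo) // 2
        batcher_sort(a, lo, mid)
        batcher_sort(a, mid + 1, hi)
        oddeven_merge(a, lo, hi, 1)
```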
Measuring the impact of observations on the predictability of the Kuroshio Extension in a shallow-water model
In this paper sequential importance sampling is used to assess the impact of observations on an ensemble prediction for the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power (an entropy-based measure) of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during one year. The method is applied in an identical-twin
experiment for the Kuroshio Extension using a reduced-gravity shallow-water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering state of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state the optimal location is in the first meander of the KE. During the contracted state of the KE it is located south of Japan, where the Kuroshio separates from the coast.
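An entropy-based predictive power can be illustrated on a discretized ensemble. This sketch assumes the common definition "one minus the ratio of forecast entropy to climatological entropy" over binned probabilities, which may differ in detail from the paper's exact formulation; the bin probabilities here are invented for illustration.

```python
import numpy as np

def discrete_entropy(p):
    """Shannon entropy of a discrete probability vector (zero bins ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def predictive_power(p_forecast, p_climatology):
    """1 - S_forecast / S_climatology.

    0 means the forecast is no sharper than climatology; 1 means certainty.
    """
    return 1.0 - discrete_entropy(p_forecast) / discrete_entropy(p_climatology)
```

A forecast ensemble concentrated in a single bin has full predictive power, while one that merely reproduces the climatological distribution has none, which is exactly the baseline against which observation impact is measured above.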
Trapping and Characterization of the Reaction Intermediate in Cyclodextrin Glycosyltransferase by Use of Activated Substrates and a Mutant Enzyme
Cyclodextrin glycosyltransferases (CGTases) catalyze the degradation of starch into linear or cyclic oligosaccharides via a glycosyl transfer reaction occurring with retention of anomeric configuration. They have also been shown to catalyze the coupling of maltooligosaccharyl fluorides. Reaction is thought to proceed via a double-displacement mechanism involving a covalent glycosyl-enzyme intermediate. This intermediate can be trapped by use of 4-deoxymaltotriosyl α-fluoride (4DG3αF). This substrate contains a good leaving group, fluoride, thus facilitating formation of the intermediate, but cannot undergo the transglycosylation step since the nucleophilic hydroxyl group at the 4-position is missing. When 4DG3αF was reacted with wild-type CGTase (Bacillus circulans 251), it was found to be a slow substrate (kcat = 2 s-1) compared with the parent glycosyl fluoride, maltotriosyl α-fluoride (kcat = 275 s-1). Unfortunately, a competing hydrolysis reaction reduces the lifetime of the intermediate, precluding its trapping and identification. However, when 4DG3αF was used in the presence of the presumed acid/base catalyst mutant Glu257Gln, the intermediate could be trapped and analyzed because the first step remained fast while the second step was further slowed (kcat = 0.6 s-1). Two glycosylated peptides were identified in a proteolytic digest of the inhibited enzyme by means of neutral-loss tandem mass spectrometry. Edman sequencing of these labeled peptides allowed identification of Asp229 as the catalytic nucleophile and provided evidence for a covalent intermediate in CGTase. Asp229 is found to be conserved in all members of the family 13 glycosyl transferases.
A bifurcation study of the three-dimensional thermohaline ocean circulation: the double-hemispheric case
Within a low-resolution primitive-equation model of the three-dimensional
ocean circulation, a bifurcation analysis is performed of double-hemispheric basin
flows. Main focus is on the connection between results for steady two-dimensional
flows in a non-rotating basin and those for three-dimensional flows in a rotating
basin. With the use of continuation methods, branches of steady states are followed
in parameter space and their linear stability is monitored. There is a close
qualitative similarity between the bifurcation structure of steady-state solutions
of the two- and three-dimensional flows. In both cases, symmetry-breaking pitchfork
bifurcations are central in generating a multiple equilibria structure. The
locations of these pitchfork bifurcations in parameter space can be characterized
through a zero of the tendency of a particular energy functional. Although balances
controlling the steady-state flows are quantitatively very different, the zonally
averaged patterns of the perturbations associated with symmetry-breaking
are remarkably similar for two-dimensional and three-dimensional flows, and the
energetics of the symmetry-breaking mechanism is in essence the same.
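The role of the pitchfork bifurcation can be illustrated on its normal form dx/dt = mu*x - x**3, which breaks the x -> -x symmetry for mu > 0. The sketch below uses naive natural-parameter continuation with Newton's method and monitors linear stability via the sign of the Jacobian f_x; this is a much-simplified stand-in for the pseudo-arclength continuation such bifurcation studies actually use, and all names are ours.

```python
def f(x, mu):
    """Pitchfork normal form: steady states x = 0 and x = ±sqrt(mu) for mu > 0."""
    return mu * x - x**3

def fx(x, mu):
    """Jacobian; a steady state is linearly stable when fx < 0."""
    return mu - 3.0 * x**2

def newton(x, mu, tol=1e-12, max_iter=50):
    """Converge the steady-state condition f(x, mu) = 0 from a nearby guess."""
    for _ in range(max_iter):
        step = f(x, mu) / fx(x, mu)
        x -= step
        if abs(step) < tol:
            break
    return x

def continue_branch(x0, mus):
    """Follow a branch of steady states in mu, recording (mu, x, stable?)."""
    branch, x = [], x0
    for mu in mus:
        x = newton(x, mu)  # previous solution is the predictor for the next mu
        branch.append((mu, x, fx(x, mu) < 0.0))
    return branch
```

Starting from a slightly asymmetric guess follows one of the two stable symmetry-broken branches x = ±sqrt(mu), while the symmetric branch x = 0 is unstable beyond the pitchfork, mirroring the multiple-equilibria structure described above.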