Saturated-Unsaturated flow in a Compressible Leaky-unconfined Aquifer
An analytical solution is developed for three-dimensional flow towards a
partially penetrating large-diameter well in an unconfined aquifer bounded
below by an aquitard of finite or semi-infinite extent. The analytical solution
is derived using Laplace and Hankel transforms, then inverted numerically.
Existing solutions for flow in leaky unconfined aquifers neglect the
unsaturated zone, following the instantaneous drainage assumption of Neuman
[1972]. We extend the theory of leakage in unconfined aquifers by
(1) including water flow and storage in the unsaturated zone above the water
table, and (2) allowing the finite-diameter pumping well to partially penetrate
the aquifer. The investigation of model-predicted results shows that leakage
from an underlying aquitard leads to significant departure from the unconfined
solution without leakage. The investigation of dimensionless time-drawdown
relationships shows that the aquitard drawdown also depends on unsaturated zone
properties and on pumping-well wellbore storage effects.
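The abstract states that the Laplace- and Hankel-transformed solution is inverted numerically. The abstract does not name the inversion method, so as an illustration only, here is a minimal sketch of one common choice for the Laplace part, the Gaver-Stehfest algorithm:

```python
import math

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) using the
    Gaver-Stehfest algorithm. N must be even; N = 10-14 is typical
    for smooth, non-oscillatory drawdown-type curves."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (k + half)
        # Sample the transform at s = k ln(2) / t
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check: F(s) = 1/(s+1) has the exact inverse f(t) = exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0))
```

In a well-hydraulics setting, `F` would be the transformed drawdown evaluated at a given radial distance after the Hankel inversion; the sketch above only demonstrates the time-domain step.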
A semi-analytical solution for transient streaming potentials associated with confined aquifer pumping tests
We consider the transient streaming potential response due to pumping from a confined aquifer through a fully penetrating line sink. Confined aquifer flow is assumed to occur without fluid leakage from the confining units. However, since confining units are typically clayey, and hence more electrically conductive than the aquifer, they are treated as non-insulating in our three-layer conceptual model. We develop a semi-analytical solution for the transient streaming potential response of the aquifer and the confining units to pumping of the aquifer. The solution is fitted to field measurements of streaming potentials associated with an aquifer test performed at a site located near Montalto Uffugo, in the region of Calabria in Southern Italy. This yields an average hydraulic conductivity that compares well to the estimate obtained using only hydraulic head data. Specific storage is estimated with greater uncertainty than hydraulic conductivity and is significantly smaller than that estimated from hydraulic head data. This indicates that specific storage may be a more difficult parameter to estimate from streaming potential data. The mismatch may also be due to the fact that only recovery streaming potential data were used here, whereas head data for both production and recovery were used. The estimate from head data may also constitute an upper bound, since head data were not corrected for pumping and observation wellbore storage. Estimated values of the electrical conductivities of the confining units compare well to those estimated using electrical resistivity tomography. Our work indicates that, where observation wells are unavailable to provide more direct estimates, streaming potential data collected at land surface may, in principle, be used to provide preliminary estimates of aquifer hydraulic conductivity and specific storage, where the latter is estimated with greater uncertainty than the former.
Effect of Grazing Intensity and Range Condition on Hydrology of Western South Dakota Ranges
Range livestock production is a primary industry in the Northern Great Plains. Efficiency of operation is important in this industry because of current low livestock prices, coupled with the high cost of necessary inputs. Proper stocking rate is the most important single factor affecting sustained net returns from South Dakota rangeland. Stocking rates which are too light result in lowered income. In contrast, heavy grazing results in a damaged resource and poorer range condition. Summarized here are 10 years of a continuing study, initiated in 1963 on experimental pastures of the Range and Livestock Experiment Station, Cottonwood, South Dakota. This study at the South Dakota State University Agricultural Experiment Station facility investigated effects of grazing intensity and range condition on water runoff and water economy of a western South Dakota range.
An Exact Algorithm for Side-Chain Placement in Protein Design
Computational protein design aims at constructing novel or improved functions
on the structure of a given protein backbone and has important applications in
the pharmaceutical and biotechnical industry. The underlying combinatorial
side-chain placement problem consists of choosing a side-chain placement for
each residue position such that the resulting overall energy is minimum. The
choice of the side-chain then also determines the amino acid for this position.
Many algorithms for this NP-hard problem have been proposed in the context of
homology modeling, which, however, reach their limits when faced with large
protein design instances.
In this paper, we propose a new exact method for the side-chain placement
problem that works well even for large instance sizes as they appear in protein
design. Our main contribution is a dedicated branch-and-bound algorithm that
combines tight upper and lower bounds resulting from a novel Lagrangian
relaxation approach for side-chain placement. Our experimental results show
that our method outperforms alternative state-of-the-art exact approaches and
makes it possible to optimally solve large protein design instances routinely.
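The underlying combinatorial problem described above can be stated compactly: choose one rotamer per residue position so that the sum of self energies and pairwise interaction energies is minimized. The toy sketch below uses hypothetical energy tables and plain exhaustive search, which is exponential in the number of positions; that blow-up is exactly why exact methods such as the paper's branch-and-bound with Lagrangian bounds are needed at realistic sizes.

```python
from itertools import product

def brute_force_placement(self_E, pair_E):
    """self_E[i][r]: energy of rotamer r at position i.
    pair_E[(i, j)][(r, s)]: interaction energy between rotamer r at
    position i and rotamer s at position j (i < j).
    Returns (best_energy, best_rotamer_choice)."""
    n = len(self_E)
    best = (float("inf"), None)
    # Enumerate every combination of rotamers (exponential in n)
    for choice in product(*(range(len(e)) for e in self_E)):
        E = sum(self_E[i][choice[i]] for i in range(n))
        E += sum(pair_E[(i, j)][(choice[i], choice[j])]
                 for (i, j) in pair_E)
        if E < best[0]:
            best = (E, choice)
    return best

# Hypothetical 3-position instance with 2 rotamers per position
self_E = [[0.0, 1.0], [0.5, 0.2], [1.0, 0.0]]
pair_E = {(0, 1): {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 0.0},
          (1, 2): {(0, 0): 1.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 3.0}}
print(brute_force_placement(self_E, pair_E))
```

Because the chosen rotamer also fixes the amino acid identity at each position, solving this minimization simultaneously selects the designed sequence.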
Don't Stop Thinking About Leptoquarks: Constructing New Models
We discuss the general framework for the construction of new models
containing a single, fermion number zero scalar leptoquark of mass GeV
which can satisfy both the D0/CDF search constraints and low energy data, and
can lead to both neutral and charged current-like final states at HERA. The
class of models of this kind necessarily contains new
vector-like fermions with masses at the TeV scale which mix with those of the
Standard Model after symmetry breaking. In this paper we classify all models of
this type and examine their phenomenological implications as well as their
potential embedding into SUSY and non-SUSY GUT scenarios. The general coupling
parameter space allowed by low energy as well as collider data for these models
is described and requires no fine-tuning of the parameters.
Mitochondria form cholesterol-rich contact sites with the nucleus during retrograde response
Cholesterol metabolism is pivotal to cellular homeostasis, hormone production, and membrane composition. Its dysregulation is associated with malignant reprogramming and therapy resistance. Cholesterol is trafficked into the mitochondria for steroidogenesis by the transduceome protein complex, which assembles on the outer mitochondrial membrane (OMM). The highly conserved, cholesterol-binding, stress-reactive, 18 kDa translocator protein (TSPO) is a key component of this complex. Here, we modulate TSPO to study the process of mitochondrial retrograde signalling with the nucleus, by dissecting the role played by cholesterol and its oxidized forms. Using confocal and ultrastructural imaging, we describe how TSPO-enriched mitochondria remodel around the nucleus, gathering in cholesterol-enriched domains (or contact sites). This communication is controlled by HMG-CoA reductase inhibitors (statins) and by molecular and pharmacological regulation of TSPO. The described Nucleus-Associated Mitochondria (NAM) appear to implement survival signalling in aggressive forms of breast cancer. This work therefore provides the first evidence for a functional and bio-mechanical tethering between mitochondria and the nucleus as the basis of pro-survival mechanisms, thus establishing a new paradigm in cross-organelle communication via cholesterol redistribution.
Power grip, pinch grip, manual muscle testing or thenar atrophy - which should be assessed as a motor outcome after carpal tunnel decompression? A systematic review
Background: Objective assessment of motor function is frequently used to evaluate outcome after surgical treatment of carpal tunnel syndrome (CTS). However, a range of outcome measures are used and there appears to be no consensus on which measure of motor function effectively captures change. The purpose of this systematic review was to identify the methods used to assess motor function in randomized controlled trials of surgical interventions for CTS. A secondary aim was to evaluate which instruments reflect clinical change and are psychometrically robust.
Methods: The bibliographic databases Medline, AMED and CINAHL were searched for randomized controlled trials of surgical interventions for CTS. Data on instruments used, methods of assessment and results of tests of motor function were extracted by two independent reviewers.
Results: Twenty-two studies were retrieved which included performance-based assessments of motor function. Nineteen studies assessed power grip dynamometry, fourteen used both power and pinch grip dynamometry, eight used manual muscle testing and five assessed the presence or absence of thenar atrophy. Several studies used multiple tests of motor function. Two studies included both power and pinch strength and reported descriptive statistics enabling calculation of effect sizes to compare the relative responsiveness of grip and pinch strength within study samples. The study findings suggest that tip pinch is more responsive than lateral pinch or power grip up to 12 weeks following surgery for CTS.
Conclusion: Although used most frequently and known to be reliable, power and key pinch dynamometry are not the most valid or responsive tools for assessing motor outcome up to 12 weeks following surgery for CTS. Tip pinch dynamometry more specifically targets the thenar musculature and appears to be more responsive. Manual muscle testing, which in theory is most specific to the thenar musculature, may be more sensitive if assessed using a hand-held dynamometer, the Rotterdam Intrinsic Handheld Myometer. However, further research is needed to evaluate its reliability and responsiveness and establish the most efficient and psychometrically robust method of evaluating motor function following surgery for CTS.
Computational Design of a PAK1 Binding Protein
We describe a computational protocol, called DDMI, for redesigning scaffold proteins to bind to a specified region on a target protein. The DDMI protocol is implemented within the Rosetta molecular modeling program and uses rigid-body docking, sequence design, and gradient-based minimization of backbone and side chain torsion angles to design low energy interfaces between the scaffold and target protein. Iterative rounds of sequence design and conformational optimization were needed to produce models that have calculated binding energies similar to binding energies calculated for native complexes. We also show that additional conformational sampling with molecular dynamics can be iterated with sequence design to further lower the computed energy of the designed complexes. To experimentally test the DDMI protocol we redesigned the human hyperplastic discs protein to bind to the kinase domain of p21-activated kinase 1 (PAK1). Six designs were experimentally characterized. Two of the designs aggregated and were not characterized further. Of the remaining four designs, three bound to PAK1 with affinities tighter than 350 μM. The tightest binding design, named Spider Roll, bound with an affinity of 100 μM. NMR-based structure prediction of Spider Roll based on backbone and ¹³Cβ chemical shifts using the program CS-ROSETTA indicated that the architecture of the human hyperplastic discs protein is preserved. Mutagenesis studies confirmed that Spider Roll binds the target patch on PAK1. Additionally, Spider Roll binds to full length PAK1 in its activated state, but does not bind PAK1 when it forms an auto-inhibited conformation that blocks the Spider Roll target site. Subsequent NMR characterization of the binding of Spider Roll to PAK1 revealed a comparably small binding 'on-rate' constant (≪ 10⁵ M⁻¹ s⁻¹).
The ability to rationally design the site of novel protein-protein interactions is an important step towards creating new proteins that are useful as therapeutics or molecular probes.
A Generic Program for Multistate Protein Design
Some protein design tasks cannot be modeled by the traditional single state design strategy of finding a sequence that is optimal for a single fixed backbone. Such cases require multistate design, where a single sequence is threaded onto multiple backbones (states) and evaluated for its strengths and weaknesses on each backbone. For example, to design a protein that can switch between two specific conformations, it is necessary to find a sequence that is compatible with both backbone conformations. We present in this paper a generic implementation of multistate design that is suited for a wide range of protein design tasks and demonstrate in silico its capabilities at two design tasks: one of redesigning an obligate homodimer into an obligate heterodimer such that the new monomers would not homodimerize, and one of redesigning a promiscuous interface to bind to only a single partner and to no longer bind the rest of its partners. Both tasks contained negative design in that multistate design was asked to find sequences that would produce high energies for several of the states being modeled. Success at negative design was assessed by computationally redocking the undesired protein-pair interactions; we found that multistate design's accuracy improved as the diversity of conformations for the undesired protein-pair interactions increased. The paper concludes with a discussion of the pitfalls of negative design, which has proven considerably more challenging than positive design.
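The positive/negative logic of multistate design described above can be sketched with a toy fitness function. The energy functions here are hypothetical stand-ins (a real implementation would score each sequence-on-backbone state with a force field such as Rosetta's); the point is only the sign convention: positive states are rewarded for scoring low, negative states for scoring high.

```python
# Toy multistate scoring: one sequence, evaluated on every state.
def multistate_fitness(sequence, positive_states, negative_states):
    """Lower is better. Energies on positive states are added
    (should be low); energies on negative states are subtracted,
    so a sequence that destabilizes them scores better."""
    pos = sum(E(sequence) for E in positive_states)
    neg = sum(E(sequence) for E in negative_states)
    return pos - neg

# Hypothetical energies: 'A' residues stabilize the target state,
# 'G' residues stabilize the off-target state we want to disfavor.
e_target = lambda seq: -seq.count("A")     # want low: many A's
e_offtarget = lambda seq: -seq.count("G")  # want high: few G's

best = min(["AAAG", "GGGG", "AAAA", "AGAG"],
           key=lambda s: multistate_fitness(s, [e_target], [e_offtarget]))
print(best)  # → "AAAA"
```

In the paper's tasks, the homodimeric and promiscuous complexes would play the role of the negative states, while the desired heterodimer or single-partner complex would be a positive state.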