Increased Metabolic Rate in X-linked Hypophosphatemic Mice
Hyp mice are a model for human X-linked hypophosphatemia, the most common form of vitamin D-resistant rickets. It has previously been observed that Hyp mice have a greater food consumption per gram body weight than do normal mice. This led to the search for some alteration in metabolism in Hyp mice. We found that oxygen consumption was significantly higher in Hyp mice than in normal C57BL/6J mice, and this was accompanied by an increased percentage of cardiac output being delivered to organs of heat production (liver and skeletal muscle), to the skin, and to bone, and a decreased percentage delivered to the gastrointestinal tract of Hyp mice. The increased oxygen consumption in Hyp mice was not associated with increased plasma free T4 levels and was not affected by alterations in plasma phosphate produced by a low-phosphate diet. The cause of the increased oxygen consumption is not known, and the role that this change and the reported changes in distribution of cardiac output may play in the development of X-linked hypophosphatemia is also unknown. Study of the cardiovascular and thermoregulatory systems in Hyp mice should help increase understanding of the underlying mechanisms of this disease.
Aerial capsule emergency separation device (Patent)
Aerial capsule emergency separation device using jettisonable tower
Listeners normalize speech for contextual speech rate even without an explicit recognition task
Speech can be produced at different rates. Listeners take this rate variation into account by normalizing vowel duration for contextual speech rate: a Dutch word whose vowel is ambiguous between short /ɑ/ and long /aː/ is perceived as short /mɑt/ when embedded in a slow context, but as long /maːt/ in a fast context. Whilst some have argued that this rate normalization involves low-level automatic perceptual processing, there is also evidence that it arises at higher-level cognitive processing stages, such as decision making. Prior research on rate-dependent speech perception has only used explicit recognition tasks to investigate the phenomenon, involving both perceptual processing and decision making. This study tested whether speech rate normalization can be observed without explicit decision making, using a cross-modal repetition priming paradigm. Results show that a fast precursor sentence makes an embedded ambiguous prime sound (implicitly) more /aː/-like, facilitating lexical access to the long target word "maat" in an (explicit) lexical decision task. This result suggests that rate normalization is automatic, taking place even in the absence of an explicit recognition task. Thus, rate normalization is placed within the realm of everyday spoken conversation, where explicit categorization of ambiguous sounds is rare.
Modality effects in vocabulary acquisition
It is unknown whether modality affects the efficiency with which humans learn novel word forms and their meanings, with previous studies reporting both written and auditory advantages. The current study implements controls whose absence in previous work likely explains such contradictory findings. In two novel word learning experiments, participants were trained and tested on pseudoword–novel object pairs, with controls on modality of test, modality of meaning, duration of exposure, and transparency of word form. In both experiments word forms were presented in either their written or spoken form, each paired with a pictorial meaning (novel object). Following a 20-minute filler task, participants were tested on their ability to identify the picture–word form pairs on which they were trained. A between-subjects design generated four participant groups per experiment: (1) written training, written test; (2) written training, spoken test; (3) spoken training, written test; (4) spoken training, spoken test. In Experiment 1 the written stimulus was presented for a time period equal to the duration of the spoken form. Results showed that when the duration of exposure was equal, participants displayed a written training benefit. Given that words can be read faster than the time taken for the spoken form to unfold, in Experiment 2 the written form was presented for 300 ms, sufficient time to read the word yet 65% shorter than the duration of the spoken form. No modality effect was observed under these conditions, when exposure to the word form was equivalent. These results demonstrate, at least for proficient readers, that when exposure to the word form is controlled across modalities, the efficiency with which word form–meaning associations are learnt does not differ. Our results therefore suggest that, although we typically begin as aural-only word learners, we ultimately converge on learning mechanisms that learn equally efficiently from both written and spoken materials.
Imperfections in focal conic domains: the role of dislocations
It is usual to think of Focal Conic Domains (FCDs) as perfect geometric constructions in which the layers are folded into Dupin cyclides, about an ellipse and a hyperbola that are conjugate. This ideal picture is often far from reality. We have investigated in detail the FCDs in several materials which have a transition from a smectic A (SmA) to a nematic phase. The ellipse and the hyperbola are seldom perfect, and the FCD textures also suffer large transformations (in shape and/or in nature) when approaching the transition to the nematic phase, or appear imperfect on cooling from the nematic phase. We interpret these imperfections as due to the interaction of FCDs with dislocations. We analyze theoretically the general principles underlying the interaction mechanisms between FCDs and finite-Burgers-vector dislocations, namely the formation of kinks on disclinations to which dislocations are attached, and we present models relating to some experimental results. Whereas the principles of the interactions are very general, their realizations can differ widely as a function of the boundary conditions. Comment: 19 pages, 18 figures.
Space engine safety system
A rocket engine safety system was designed to initiate control procedures to minimize damage to the engine, vehicle, or test stand in the event of an engine failure. The features and implementation issues associated with rocket engine safety systems are discussed, as well as the specific concerns of safety systems applied to a space-based engine and long-duration space missions. Examples of safety system features and architectures are given, based on recent safety monitoring investigations conducted for the Space Shuttle Main Engine and for future liquid rocket engines. Also, the general design and implementation process for rocket engine safety systems is presented.
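A core element of such systems is redline monitoring: triggering a protective action when a monitored parameter persistently exceeds its limit. The following is a minimal sketch of that idea only; the sensor name, limit value, and persistence count are hypothetical and are not taken from the SSME work.

# Minimal sketch of a redline-style engine safety monitor.
# Sensor names, limits, and the resulting action are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Redline:
    name: str
    limit: float        # maximum allowed value
    persistence: int    # consecutive over-limit samples before acting

def monitor(samples, redlines):
    """Scan a stream of sensor readings (one dict per control cycle) and
    return (cycle, parameter) for the first redline violation, else None."""
    over_count = {r.name: 0 for r in redlines}
    for t, reading in enumerate(samples):
        for r in redlines:
            if reading[r.name] > r.limit:
                over_count[r.name] += 1
                if over_count[r.name] >= r.persistence:
                    return t, r.name  # here a real system would start safing
            else:
                over_count[r.name] = 0
    return None

# Example: trip when a (hypothetical) turbine discharge temperature stays high
# for three consecutive cycles.
redlines = [Redline("hpot_discharge_temp", limit=1000.0, persistence=3)]
stream = [{"hpot_discharge_temp": v} for v in (900, 980, 1005, 1010, 1020, 950)]
print(monitor(stream, redlines))  # -> (4, 'hpot_discharge_temp')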
Periodicity and Growth in a Lattice Gas with Dynamical Geometry
We study a one-dimensional lattice gas "dynamical geometry model" in which local reversible interactions of counter-rotating groups of particles on a ring can create or destroy lattice sites. We exhibit many periodic orbits and show that all other solutions have asymptotically growing lattice length in both directions of time. We explain why the length grows as it does in all cases examined. We completely solve the dynamics for small numbers of particles with arbitrary initial conditions. Comment: 18 pages, LaTeX.
The application of neural networks to the SSME startup transient
Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine startup transient: the main combustion chamber pressure (a controlled parameter), the high-pressure oxidizer turbine discharge temperature (a redlined parameter), and the high-pressure fuel pump discharge pressure (a failure-indicating performance parameter). Network inputs consisted of time windows of data from engine measurements that correlated highly with the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
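As a rough illustration of this windowed-regression setup, the sketch below trains a small feedforward network to predict a target parameter from sliding time windows of two synthetic measurement channels. The data, window length, and network size are assumptions for illustration, and scikit-learn's MLPRegressor stands in for the original backpropagation implementation.

# Sketch: feedforward network predicting one parameter from time windows of
# other measurements. Data are synthetic; channels, window length, and network
# size are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_steps, window = 2000, 10

# Two synthetic "correlated measurements" and a target that depends on them.
x1 = np.cumsum(rng.normal(size=n_steps))
x2 = np.cumsum(rng.normal(size=n_steps))
target = 0.6 * x1 + 0.4 * x2 + rng.normal(scale=0.1, size=n_steps)

# Build input vectors from sliding windows of the two measurements.
X = np.array([np.concatenate([x1[i:i + window], x2[i:i + window]])
              for i in range(n_steps - window)])
y = target[window:n_steps]

# Train on the first half (stand-in for the nominal training firings) and
# validate on the rest (stand-in for the held-out firings).
split = len(X) // 2
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("validation R^2:", model.score(X[split:], y[split:]))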
A formal method for identifying distinct states of variability in time-varying sources: SgrA* as an example
Continuously time-variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with SgrA*, and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available VLT data have been merged, we show that SgrA* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: a noise-dominated and a source-dominated one. Our methodology reported here will prove extremely useful for assessing the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability. Comment: Submitted to ApJ; 33 pages, 4 figures; comments welcome.
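The abstract's model conditions the flux-density distribution on the previous flux value and treats the state as a hidden stochastic process whose transition probabilities are inferred from the time series. As a loose, simplified stand-in for that approach, the sketch below fits a two-state Gaussian hidden Markov model to a synthetic light curve using hmmlearn; all numbers are invented and this is not the authors' code.

# Simplified stand-in: fit a two-state Gaussian HMM to a synthetic flux series
# and recover state occupancies and transition probabilities.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

# Synthetic light curve: a faint "noise-dominated" state and a brighter
# "source-dominated" state (invented parameters).
states = (rng.random(3000) < 0.3).astype(int)
flux = np.where(states == 0,
                rng.normal(0.5, 0.2, 3000),   # faint state
                rng.normal(3.0, 1.0, 3000))   # bright state

model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=200, random_state=0)
model.fit(flux.reshape(-1, 1))

decoded = model.predict(flux.reshape(-1, 1))
print("inferred state occupancy:", np.bincount(decoded))
print("inferred state means:", model.means_.ravel())
print("inferred transition matrix:\n", model.transmat_)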
Road blocks on paleogenomes - polymerase extension profiling reveals the frequency of blocking lesions in ancient DNA
Although the last few years have seen great progress in DNA sequence retrieval from fossil specimens, some of the characteristics of ancient DNA remain poorly understood. This is particularly true for blocking lesions, i.e. chemical alterations that cannot be bypassed by DNA polymerases and thus prevent amplification and subsequent sequencing of affected molecules. Some studies have concluded that the vast majority of ancient DNA molecules carry blocking lesions, suggesting that the removal, repair or bypass of blocking lesions might dramatically increase both the time depth and the geographical range of specimens available for ancient DNA analysis. However, previous studies used very indirect detection methods that did not provide conclusive estimates of the frequency of blocking lesions in endogenous ancient DNA. We developed a new method, polymerase extension profiling (PEP), that directly reveals occurrences of polymerase stalling on DNA templates. By sequencing thousands of single primer extension products with the PEP methodology, we have for the first time directly identified blocking lesions in ancient DNA at the single-molecule level. Although we found clear evidence for blocking lesions in three out of four ancient samples, no more than 40% of the molecules were affected in any of the samples, indicating that such modifications are far less frequent in ancient DNA than previously thought.
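As a toy illustration of how single-molecule extension products translate into a blocked-molecule fraction, the snippet below simply counts products that terminate before the end of their template. The numbers are invented, and the real PEP analysis must additionally distinguish lesions from natural template ends and spontaneous polymerase drop-off.

# Toy estimate of the fraction of templates carrying a blocking lesion,
# given where single-molecule primer-extension products terminate.
# All read/template lengths below are hypothetical.
def blocked_fraction(extension_lengths, template_lengths):
    """Count a product as blocked if it is shorter than its template."""
    blocked = sum(1 for ext, tpl in zip(extension_lengths, template_lengths)
                  if ext < tpl)
    return blocked / len(extension_lengths)

# Hypothetical single-molecule data: extension lengths vs. template lengths.
ext = [60, 75, 30, 75, 75, 20, 75, 75]
tpl = [75] * 8
print(f"estimated blocked fraction: {blocked_fraction(ext, tpl):.0%}")  # 38%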