AUDITORIALLY EVOKED RESPONSES: A Tool for Assessing the Patient's Ability to Hear during Various States of Consciousness
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/66273/1/j.1399-6576.1966.tb01117.x.pd
Pharmacologic effects of CI-581, a new dissociative anesthetic, in man
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/116940/1/cpt196563279.pd
Quantum control of atomic systems by time resolved homodyne detection and feedback
We investigate the possibilities of preserving and manipulating the coherence
of atomic two-level systems by ideal projective homodyne detection and
feedback. For this purpose, the photon emission process is described on time
scales much shorter than the lifetime of the excited state using a model based
on Wigner-Weisskopf theory. The backaction of this emission process is
analytically described as a quantum diffusion of the Bloch vector. It is shown
that the evolution of the atomic wavefunction can be controlled completely
using the results of homodyne detection. This allows the stabilization of a
known quantum state or the creation of coherent states by a feedback mechanism.
However, the feedback mechanism can never compensate the dissipative effects of
quantum fluctuations even though the coherent state of the system is known at
all times.
Comment: 12 pages RevTeX and 7 figures, to be published in Phys. Rev. A, final version
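The quantum-diffusion picture above can be made concrete with a rough numerical sketch: one trajectory of the homodyne stochastic Schrödinger equation for a two-level atom, integrated with an Euler-Maruyama step and renormalized each step (which absorbs the norm-only drift term). The emission rate, step size, and initial excited state are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 1.0   # spontaneous emission rate (assumed); lifetime = 1/gamma
dt = 1e-4     # time step much shorter than the lifetime, as in the paper
steps = 20000

# Lowering operator sigma_- = |g><e| in the basis (|g>, |e>)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
sp = sm.conj().T

psi = np.array([0.0, 1.0], dtype=complex)  # start in the excited state |e>

for _ in range(steps):
    # Homodyne signal expectation <x> = <sigma_- + sigma_+>
    x_exp = np.real(psi.conj() @ ((sm + sp) @ psi))
    dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment of the measurement
    # Drift and measurement-backaction (diffusion) terms of the SSE;
    # the norm-restoring part of the drift is handled by renormalizing.
    drift = -0.5 * gamma * ((sp @ sm) @ psi - x_exp * (sm @ psi)) * dt
    noise = np.sqrt(gamma) * (sm @ psi - 0.5 * x_exp * psi) * dW
    psi = psi + drift + noise
    psi /= np.linalg.norm(psi)  # keep the state normalized

p_excited = abs(psi[1]) ** 2  # excited-state population along this trajectory
```

Each trajectory diffuses on the Bloch sphere according to the detected homodyne record, which is the backaction the abstract describes; a feedback law would act on `psi` between steps.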
Memory-related cognitive load effects in an interrupted learning task: A model-based explanation
Background: Cognitive Load Theory provides a well-established framework for investigating aspects of learning situations that demand learners' working memory resources. However, the interplay of these aspects at the cognitive and neural level is still not fully understood. Method: We developed four computational models in the cognitive architecture ACT-R to clarify the underlying memory-related strategies and mechanisms. Our models account for human data from an experiment that required participants to perform a symbol sequence learning task with embedded interruptions. We explored the inclusion of subsymbolic mechanisms to explain these data and used our final model to generate fMRI predictions. Results: The final model shows a reasonable fit to reaction times and accuracy and links the fMRI predictions to Cognitive Load Theory. Conclusions: Our work emphasizes the influence of task characteristics and supports a process-related view of cognitive load in instructional scenarios. It further contributes to the discussion of underlying mechanisms at the neural level.
The Domain Chaos Puzzle and the Calculation of the Structure Factor and Its Half-Width
The disagreement of the scaling of the correlation length xi between
experiment and the Ginzburg-Landau (GL) model for domain chaos was resolved.
The Swift-Hohenberg (SH) domain-chaos model was integrated numerically to
acquire test images to study the effect of a finite image-size on the
extraction of xi from the structure factor (SF). The finite image size had a
significant effect on the SF determined with the Fourier-transform (FT) method.
The maximum entropy method (MEM) was able to overcome this finite image-size
problem and produced fairly accurate SFs for the relatively small image sizes
provided by experiments.
Correlation lengths often have been determined from the second moment of the
SF of chaotic patterns because the functional form of the SF is not known.
Integration of several test functions provided analytic results indicating that
this may not be a reliable method of extracting xi. For both a Gaussian and a
squared SH form, the correlation length xibar=1/sigma, determined from the
variance sigma^2 of the SF, has the same dependence on the control parameter
epsilon as the length xi contained explicitly in the functional forms. However,
for the SH and the Lorentzian forms we find xibar ~ xi^1/2.
Results for xi determined from new experimental data by fitting the
functional forms directly to the experimental SF yielded xi ~ epsilon^-nu with
nu ~= 1/4 for all four functions in the case of the FT method, but nu ~= 1/2,
in agreement with the GL prediction, in the case of the MEM. Over a wide
range of epsilon and wave number k, the experimental SFs collapsed onto a
unique curve when appropriately scaled by xi.
Comment: 15 pages, 26 figures, 1 table
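The claimed xibar ~ xi^(1/2) behavior for the Lorentzian form can be checked numerically. The sketch below computes xibar = 1/sigma from the variance of a Lorentzian structure factor over a fixed wave-number window and fits the log-log slope; the window width and grid are arbitrary choices for illustration, not the experimental ones.

```python
import numpy as np

def xibar_lorentzian(xi, k0=1.0, half_width=0.5, n=200001):
    """1/sigma, where sigma^2 is the variance (second moment about k0)
    of a Lorentzian SF S(k) = 1/((k - k0)^2 + 1/xi^2), integrated over
    a fixed k-window of half-width `half_width` (a finite-range effect)."""
    k = np.linspace(k0 - half_width, k0 + half_width, n)
    dk = k[1] - k[0]
    S = 1.0 / ((k - k0) ** 2 + 1.0 / xi**2)
    var = np.sum((k - k0) ** 2 * S) * dk / (np.sum(S) * dk)
    return 1.0 / np.sqrt(var)

xis = np.array([10.0, 100.0, 1000.0])
xibars = np.array([xibar_lorentzian(x) for x in xis])
# Log-log slope of xibar vs xi; close to the 1/2 scaling quoted above
slope = np.polyfit(np.log(xis), np.log(xibars), 1)[0]
print(f"xibar ~ xi^{slope:.2f}")
```

The slope stays near 1/2 because the Lorentzian's second moment is dominated by its slowly decaying tails over the finite window, so xibar grows much more slowly than the true xi in the functional form.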
The DoF-Box project: An educational kit for configurable robots
China: Shanghai. Airport
CNAC is the China National Aviation Corporation. The Shanghai Longhua Airport. Grayscale. Forman Nitrate Negatives, Box 2
On Learning the Statistical Representation of a Task and Generalizing it to Various Contexts
This paper presents an architecture for generically solving the problem of extracting the relevant features of a given task in a programming-by-demonstration framework and the problem of generalizing the acquired knowledge to various contexts. We validate the architecture in a series of experiments in which a human demonstrator teaches a humanoid robot simple manipulatory tasks. Extracting the relevant features of the task is solved in a two-step process of dimensionality reduction. First, the combined joint angle and hand path motions are projected into a generic latent space, composed of a mixture of Gaussians (GMM) spreading across the spatial dimensions of the motion. Second, the temporal variation of the latent representation of the motion is encoded in a Hidden Markov Model (HMM). This two-step probabilistic encoding provides a measure of the spatio-temporal correlations across the different modalities collected by the robot, which determines a metric of imitation performance. A generalization of the demonstrated trajectories is then performed using Gaussian Mixture Regression (GMR). Finally, to generalize skills across contexts, we formally compute the trajectory that optimizes the metric, given the new context and the robot's specific body constraints.
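The GMR step of such a pipeline can be sketched in isolation: given a Gaussian mixture over joint input-output data, GMR conditions each component on the input and blends the resulting local linear predictions. Here the mixture parameters are hand-placed tangent to a known curve purely for illustration; in the paper's pipeline they would come from learning on demonstrations.

```python
import numpy as np

# Hand-placed GMM over (t, x): each component is a local linear model
# tangent to x = sin(t) (an illustrative assumption, not learned).
centers = np.linspace(0.0, 2 * np.pi, 9)
h2 = 0.4 ** 2                      # Sigma_tt, shared by all components
mu_t, mu_x = centers, np.sin(centers)
sigma_tx = np.cos(centers) * h2    # Sigma_xt = local slope * Sigma_tt

def gmr_predict(t):
    """E[x | t] under the mixture: weight each component by its input
    likelihood, then blend the conditional (local linear) means."""
    w = np.exp(-0.5 * (t - mu_t) ** 2 / h2)   # proportional to N(t; mu_t, Sigma_tt)
    w /= w.sum()
    # Conditional mean per component: mu_x + Sigma_xt Sigma_tt^-1 (t - mu_t)
    cond_means = mu_x + (sigma_tx / h2) * (t - mu_t)
    return np.dot(w, cond_means)

ts = np.linspace(0.5, 5.8, 50)
pred = np.array([gmr_predict(t) for t in ts])
err = np.max(np.abs(pred - np.sin(ts)))  # small: the blend tracks sin(t)
```

The same conditioning formula extends componentwise to vector-valued outputs, which is how a full demonstrated trajectory is regenerated from the encoded model.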
On Learning, Representing and Generalizing a Task in a Humanoid Robot
We present a Programming by Demonstration (PbD) framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments in which a human demonstrator teaches a humanoid robot some simple manipulatory tasks. A probability-based estimation of the relevance is suggested: first, the joint angles, hand paths, and object-hand trajectories are projected onto a generic latent space using Principal Component Analysis (PCA). The resulting signals are then encoded using a mixture of Gaussian/Bernoulli distributions (GMM/BMM). This provides a measure of the spatio-temporal correlations across the different modalities collected from the robot, which can be used to determine a metric of the imitation performance. The trajectories are then generalized using Gaussian Mixture Regression (GMR). Finally, we analytically compute the trajectory which optimizes the imitation metric and use this to generalize the skill to different contexts and to the robot's specific bodily constraints.
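The first stage of this pipeline, projecting high-dimensional motion signals onto a low-dimensional latent space with PCA, can be sketched as follows. The synthetic "motion" data and the dimensions (12 joint angles, 3 latent degrees of freedom) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy motion data: 200 frames of 12 joint angles that actually live on
# 3 latent degrees of freedom, plus a little sensor noise (assumed setup).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 12))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 12))

Xc = X - X.mean(axis=0)               # center the data before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                     # project onto the first 3 principal axes
X_hat = Z @ Vt[:3] + X.mean(axis=0)   # reconstruct from the latent space

# RMS reconstruction error; near the noise floor for rank-3 signals
err = np.sqrt(np.mean((X - X_hat) ** 2))
```

The latent signals `Z` are what a GMM/BMM encoding (and later GMR) would operate on; working in this reduced space is what makes the probabilistic encoding of full-body motion tractable.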