1,285 research outputs found
Anderson's considerations on the flow of superfluid helium: some offshoots
Nearly five decades have elapsed since the seminal 1966 paper of P.W.
Anderson on the flow of superfluid helium, $^4$He at that time. Some of his
"Considerations" -- the role of the quantum phase as a dynamical variable, the
interplay between the motion of quantised vortices and potential superflow, its
bearing on dissipation in the superfluid and on the appearance of critical
velocities, the quest for the hydrodynamic analogues of the Josephson effects
in helium -- and the way they have evolved over the past half-century are
recounted below. It is owing to key advances on the experimental front that
phase slippage could be harnessed in the laboratory, leading to a deeper
understanding of superflow, vortex nucleation, the various intrinsic and
extrinsic dissipation mechanisms in superfluids, macroscopic quantum effects,
and the superfluid analogues of both the {\it ac} and {\it dc} Josephson
effects -- pivotal concepts in superfluid physics. Some of the
experiments that have shed light on the more intimate effect of quantum
mechanics on the hydrodynamics of the dense heliums are surveyed, including the
nucleation of quantised vortices both by Arrhenius processes and by macroscopic
quantum tunnelling, the setting up of vortex mills, and superfluid
interferometry.
Comment: Review article - 59 pages - 34 figures - submitted to Reviews of
Modern Physics
Learning and comparing functional connectomes across subjects
Functional connectomes capture brain interactions via synchronized
fluctuations in the functional magnetic resonance imaging signal. If measured
during rest, they map the intrinsic functional architecture of the brain. With
task-driven experiments they represent integration mechanisms between
specialized brain areas. Analyzing their variability across subjects and
conditions can reveal markers of brain pathologies and mechanisms underlying
cognition. Methods of estimating functional connectomes from the imaging signal
have undergone rapid development, and the literature is full of diverse
strategies for comparing them. This review aims to clarify links across
functional-connectivity methods as well as to expose different steps to perform
a group study of functional connectomes.
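The basic pipeline this review covers (estimate one connectome per subject, then compare them edge-wise across subjects) can be illustrated with a minimal numpy sketch. The correlation-matrix estimator and the Fisher z-transform are standard choices, but the data, region count, and helper names below are illustrative assumptions, not the review's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def connectome(timeseries):
    """Estimate a functional connectome as the Pearson correlation
    matrix of regional time series (time points x regions)."""
    return np.corrcoef(timeseries, rowvar=False)

def fisher_z(r):
    """Fisher z-transform, commonly applied before comparing
    correlation-based connectomes across subjects."""
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

# Two hypothetical subjects: 200 time points, 5 brain regions each.
subjects = [rng.standard_normal((200, 5)) for _ in range(2)]
conns = [connectome(ts) for ts in subjects]

# Compare subjects edge-wise on the z-transformed off-diagonal entries.
iu = np.triu_indices(5, k=1)
diff = fisher_z(conns[0])[iu] - fisher_z(conns[1])[iu]
print(diff.shape)  # one value per edge
```

Group studies then run such edge-wise comparisons across many subjects, with the estimator (correlation, partial correlation, sparse inverse covariance) being one of the main choices the review discusses.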
Sagnac effect in superfluid liquids
The interpretation of the Sagnac effect is re-examined in the context of recent cold atomic beam and superfluid experiments. A widespread misconception concerning the understanding of this effect in a superfluid liquid is discussed.
Compressed Online Dictionary Learning for Fast fMRI Decomposition
We present a method for fast resting-state fMRI spatial decompositions of
very large datasets, based on the reduction of the temporal dimension before
applying dictionary learning on concatenated individual records from groups of
subjects. Introducing a measure of correspondence between spatial
decompositions of rest fMRI, we demonstrate that time-reduced dictionary
learning produces results as reliable as non-reduced decompositions. We also
show that this reduction significantly improves computational scalability.
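The temporal-reduction idea can be sketched as follows, assuming an SVD-based reduction (one common choice; the paper's actual scheme may differ) and made-up data dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical record: 500 time points x 2000 voxels, built from 20
# latent spatial maps plus a little noise (illustrative data only).
time_courses = rng.standard_normal((500, 20))
spatial_maps = rng.standard_normal((20, 2000))
X = time_courses @ spatial_maps + 0.1 * rng.standard_normal((500, 2000))

# Reduce the temporal dimension by keeping the top-k singular
# directions; dictionary learning then runs on the small k x n_voxels
# matrix instead of the full record.
k = 50
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = s[:k, None] * Vt[:k]          # shape (k, n_voxels)

# The voxel-by-voxel Gram matrix -- what spatial decompositions see --
# is almost unchanged by the reduction.
rel_err = (np.linalg.norm(X.T @ X - X_reduced.T @ X_reduced)
           / np.linalg.norm(X.T @ X))
print(X_reduced.shape, rel_err < 0.05)
```

Because the Gram matrix is nearly preserved, spatial decompositions of the reduced data stay close to those of the full data, which is the reliability property the abstract reports.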
Social-sparsity brain decoders: faster spatial sparsity
Spatially-sparse predictors are good models for brain decoding: they give
accurate predictions and their weight maps are interpretable as they focus on a
small number of regions. However, the state of the art, based on total
variation or graph-net, is computationally costly. Here we introduce sparsity
in the local neighborhood of each voxel with social-sparsity, a structured
shrinkage operator. We find that, on brain imaging classification problems,
social-sparsity performs almost as well as total-variation models and better
than graph-net, for a fraction of the computational cost. It also very clearly
outlines predictive regions. We give details of the model and the algorithm.
Comment: in Pattern Recognition in NeuroImaging, Jun 2016, Trento, Italy.
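As a rough illustration of what a "social" shrinkage operator does, here is a minimal numpy sketch on a 1-D weight vector: each coefficient is soft-thresholded against the l2 energy of its local neighborhood, so isolated noisy weights are killed while weights inside a supported region survive together. The actual operator acts on 3-D voxel neighborhoods; the window, penalty value, and data here are made-up assumptions:

```python
import numpy as np

def social_shrinkage(w, lam, radius=1):
    """Sketch of a social-sparsity shrinkage operator: shrink each
    coefficient by a factor driven by the l2 norm of its neighborhood."""
    n = len(w)
    out = np.zeros_like(w)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        norm = np.linalg.norm(w[lo:hi])
        if norm > 0:
            out[i] = w[i] * max(0.0, 1.0 - lam / norm)
    return out

# A weight map with one contiguous active region plus an isolated spike.
w = np.array([0.0, 0.0, 0.9, 1.1, 1.0, 0.0, 0.0, 0.8, 0.0, 0.0])
shrunk = social_shrinkage(w, lam=1.0)
print(shrunk)  # the contiguous region survives; the lone spike is zeroed
```

Unlike total variation or graph-net, this operator has a closed form per voxel, which is where the claimed speed-up comes from.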
On spatial selectivity and prediction across conditions with fMRI
Researchers in functional neuroimaging mostly use activation coordinates to
formulate their hypotheses. Instead, we propose to use the full statistical
images to define regions of interest (ROIs). This paper presents two machine
learning approaches, transfer learning and selection transfer, that are
compared upon their ability to identify the common patterns between brain
activation maps related to two functional tasks. We provide some preliminary
quantification of these similarities, and show that selection transfer makes it
possible to set a spatial scale yielding ROIs that are more specific to the
context of interest than with transfer learning. In particular, selection
transfer outlines well known regions such as the Visual Word Form Area when
discriminating between different visual tasks.
Comment: PRNI 2012: 2nd International Workshop on Pattern Recognition in
NeuroImaging, London, United Kingdom (2012)
Mapping cognitive ontologies to and from the brain
Imaging neuroscience links brain activation maps to behavior and cognition
via correlational studies. Due to the nature of the individual experiments,
based on eliciting neural response from a small number of stimuli, this link is
incomplete, and unidirectional from the causal point of view. To come to
conclusions on the function implied by the activation of brain regions, it is
necessary to combine a wide exploration of the various brain functions and some
inversion of the statistical inference. Here we introduce a methodology for
accumulating knowledge towards a bidirectional link between observed brain
activity and the corresponding function. We rely on a large corpus of imaging
studies and a predictive engine. Technically, the challenges are to find
commonality between the studies without denaturing the richness of the corpus.
The key elements that we contribute are labeling the tasks performed with a
cognitive ontology, and modeling the long tail of rare paradigms in the corpus.
To our knowledge, our approach is the first demonstration of predicting the
cognitive content of completely new brain images. To that end, we propose a
method that predicts the experimental paradigms across different studies.
Comment: NIPS (Neural Information Processing Systems), United States (2013)
Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity
Functional brain networks are well described and estimated from data with
Gaussian Graphical Models (GGMs), e.g. using sparse inverse covariance
estimators. Comparing functional connectivity of subjects in two populations
calls for comparing these estimated GGMs. Our goal is to identify differences
in GGMs known to have similar structure. We characterize the uncertainty of
differences with confidence intervals obtained using a parametric distribution
on parameters of a sparse estimator. Sparse penalties enable statistical
guarantees and interpretable models even in high-dimensional and low-sample
settings. Characterizing the distributions of sparse models is inherently
challenging as the penalties produce a biased estimator. Recent work invokes
the sparsity assumptions to effectively remove the bias from a sparse estimator
such as the lasso. These distributions can be used to give confidence intervals
on edges in GGMs, and by extension their differences. However, in the case of
comparing GGMs, these estimators do not make use of any assumed joint structure
among the GGMs. Inspired by priors from brain functional connectivity we derive
the distribution of parameter differences under a joint penalty when parameters
are known to be sparse in the difference. This leads us to introduce the
debiased multi-task fused lasso, whose distribution can be characterized in an
efficient manner. We then show how the debiased lasso and multi-task fused
lasso can be used to obtain confidence intervals on edge differences in GGMs.
We validate the proposed techniques on a set of synthetic examples as well as
a neuroimaging dataset created for the study of autism.
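The debiasing step at the heart of such confidence intervals can be sketched on a toy regression problem (one row of a GGM estimation problem has this form). The ISTA solver, penalty value, and data below are illustrative assumptions, and the exact inverse covariance stands in for the estimated relaxed inverse used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse regression: n samples, p features, 3 active coefficients.
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.5, -1.0, 0.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=500):
    """Plain ISTA for the lasso; step size from the Lipschitz constant."""
    L = np.linalg.norm(X, 2) ** 2 / len(y)
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / len(y)
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

b_lasso = lasso_ista(X, y, lam=0.1)

# One-step debiasing: add back a Newton-style correction using an
# (estimated) inverse covariance. With the exact inverse, as here, this
# recovers ordinary least squares; in high dimensions the inverse is
# only approximated, and the correction removes most of the lasso bias.
theta = np.linalg.inv(X.T @ X / n)
b_debiased = b_lasso + theta @ X.T @ (y - X @ b_lasso) / n

print(np.sum(b_lasso == 0) > 0, abs(b_debiased[0] - beta[0]) < 0.2)
```

The debiased coordinates have a tractable (approximately Gaussian) distribution, which yields confidence intervals on GGM edges; the paper's contribution is to carry this construction over to a multi-task fused-lasso penalty so that edge *differences* between two populations can be tested directly.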
Trapping electrons in electrostatic traps over the surface of helium
We have observed trapping of electrons in an electrostatic trap formed over
the surface of liquid helium-4. These electrons are detected by a Single
Electron Transistor located at the centre of the trap. We can trap any desired
number of electrons between 1 and . By repeatedly (
times) putting a single electron into the trap and lowering the electrostatic
barrier of the trap, we can measure the effective temperature of the electron
and the time of its thermalisation after heating by incoherent radiation.
Comment: Presented at QFS06 - Kyoto, to be published in J. Low Temp. Phys., 6
pages, 3 figures