Simplicity of Completion Time Distributions for Common Complex Biochemical Processes
Biochemical processes typically involve huge numbers of individual reversible
steps, each with its own dynamical rate constants. For example, kinetic
proofreading processes rely upon numerous sequential reactions in order to
guarantee the precise construction of specific macromolecules. In this work, we
study the transient properties of such systems and fully characterize their
first passage (completion) time distributions. In particular, we provide
explicit expressions for the mean and the variance of the completion time for a
kinetic proofreading process and computational analyses for more complicated
biochemical systems. We find that, for a wide range of parameters, as the
system size grows, the completion time behavior simplifies: it becomes either
deterministic or exponentially distributed, with a very narrow transition
between the two regimes. In both regimes, the dynamical complexity of the full
system is trivial compared to its apparent structural complexity. Similar
simplicity is likely to arise in the dynamics of many complex multi-step
biochemical processes. In particular, these findings suggest not only that one
may not be able to understand individual elementary reactions from macroscopic
observations, but also that such understanding may be unnecessary.
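The regime change described above can be illustrated with a small Monte Carlo sketch (my own toy model, not the authors' analysis): a linear chain of reversible steps, where the coefficient of variation (CV) of the sampled completion times separates the near-deterministic regime (strong forward bias, CV well below 1) from the near-exponential one (nearly reversible steps, CV approaching 1). The chain length and rates below are illustrative assumptions.

```python
import random

def completion_time(n_steps, k_fwd, k_back):
    """Sample one first-passage (completion) time through a linear
    reversible chain 0 -> n_steps with a reflecting boundary at 0."""
    t, state = 0.0, 0
    while state < n_steps:
        total = k_fwd + (k_back if state > 0 else 0.0)
        t += random.expovariate(total)          # waiting time to next event
        if random.random() * total < k_fwd:     # choose forward vs backward
            state += 1
        else:
            state -= 1
    return t

def cv(samples):
    """Coefficient of variation: std / mean."""
    m = sum(samples) / len(samples)
    var = sum((x - m) ** 2 for x in samples) / len(samples)
    return var ** 0.5 / m

random.seed(0)
fast = [completion_time(50, 1.0, 0.1) for _ in range(1000)]   # forward-biased
slow = [completion_time(50, 1.0, 0.95) for _ in range(1000)]  # nearly reversible
# forward-biased chain: narrow, near-deterministic completion times
# nearly reversible chain: broad, near-exponential completion times
```

With a strong forward bias the CV scales roughly as the inverse square root of the chain length, while in the nearly reversible case the distribution broadens toward the exponential limit, matching the narrow transition the abstract describes.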
Guided Proofreading of Automatic Segmentations for Connectomics
Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
Comment: Supplemental material available at http://rhoana.org/guidedproofreading/supplemental.pd
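The fully-automatic mode described above is, at its core, probability thresholding: a candidate merge or split is applied only when the classifier's error probability clears a cutoff. A minimal sketch of that decision rule (the field names and the 0.95 cutoff are illustrative assumptions, not taken from the paper):

```python
def auto_corrections(candidates, threshold=0.95):
    """Keep only the merge/split proposals the classifier is confident about.

    Each candidate is a dict with a hypothetical 'id' and the classifier's
    estimated probability 'p_error' that the region is a real error.
    """
    return [c["id"] for c in candidates if c["p_error"] >= threshold]

proposals = [
    {"id": "merge-17", "p_error": 0.98},
    {"id": "split-42", "p_error": 0.60},   # below threshold: left to the user
    {"id": "merge-23", "p_error": 0.97},
]
print(auto_corrections(proposals))  # ['merge-17', 'merge-23']
```

Lowering the threshold trades precision for recall: more errors get fixed automatically, but more spurious corrections slip through, which is why the interactive yes/no mode remains useful for borderline cases.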
Metacognition for spelling in higher education students with dyslexia: is there evidence for the dual burden hypothesis?
We examined whether academic and professional bachelor students with dyslexia are able to compensate for their spelling deficits with metacognitive experience. Previous research suggested that students with dyslexia may suffer from a dual burden: not only do they perform worse on spelling, but they are also less fully aware of their difficulties than their peers without dyslexia. According to some authors, this is the result of a poorer feeling of confidence, which can be considered a form of metacognition (metacognitive experience). We tried to isolate this metacognitive experience by asking 100 students with dyslexia and 100 matched control students to rate their feeling of confidence in a word spelling task and a proofreading task. We then used signal detection analysis to disentangle the effects of proficiency and criterion setting. We found that students with dyslexia showed lower proficiency but no suboptimal response bias: they were as good at deciding when they could be confident as their peers without dyslexia; they simply had more cases in which their spelling was wrong. We conclude that the feeling of confidence in our students with dyslexia is as good as in their peers without dyslexia. These findings go against the dual burden theory (Kruger & Dunning, 1999), which assumes that people with a skills problem suffer twice as a result of insufficiently developed metacognitive competence. As a result, no gain is to be expected from extra training of this metacognitive experience in higher education students with dyslexia.
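The signal detection analysis mentioned above separates proficiency (sensitivity, d') from criterion setting (response bias, c). A minimal sketch of that standard decomposition (the counts below are invented for illustration and mimic the abstract's pattern; this is not the study's own code or data):

```python
from statistics import NormalDist

def sdt(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from a 2x2 decision table."""
    # log-linear correction keeps the z-scores finite at rates of 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# invented counts: lower sensitivity, comparable near-zero bias in the
# dyslexia group, matching the abstract's qualitative finding
d_ctl, c_ctl = sdt(90, 10, 10, 90)
d_dys, c_dys = sdt(75, 25, 25, 75)
```

On these counts the dyslexia group's d' comes out lower while both criteria sit at zero, i.e., lower proficiency but no suboptimal response bias, which is exactly the dissociation the analysis is designed to expose.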
Causal Order and Kinds of Robustness
This paper derives from a broader project dealing with the notion of causal order. I use this term to signify two kinds of part-whole dependence: orderly systems have rich, decomposable internal structure; specifically, parts play differential roles, and interactions are primarily local. Disorderly systems, in contrast, have a homogeneous internal structure, such that differences among parts and organizational features are less important. Orderliness, I suggest, marks one key difference between individuals and collectives.
My focus here will be the connection between order and robustness, i.e., functional resilience in the face of internal or environmental perturbations. I distinguish three varieties of robustness. Ordered robustness is grounded in the system’s specific organizational pattern. In contrast, disorderly robustness stems from the aggregate outcome of many similar parts. In between, we find semi-ordered robustness, in which a messy ensemble of elements is subjected to a selection or stabilization mechanism. I give brief characterizations of each category, discuss examples, and remark on the connection between the order/disorder axis and the notions of individual versus collective.
Maladaptation and the paradox of robustness in evolution
Background. Organisms use a variety of mechanisms to protect themselves
against perturbations. For example, repair mechanisms fix damage, feedback
loops keep homeostatic systems at their setpoints, and biochemical filters
distinguish signal from noise. Such buffering mechanisms are often discussed in
terms of robustness, which may be measured by reduced sensitivity of
performance to perturbations. Methodology/Principal Findings. I use a
mathematical model to analyze the evolutionary dynamics of robustness in order
to understand aspects of organismal design by natural selection. I focus on two
characters: one character performs an adaptive task; the other character
buffers the performance of the first character against perturbations. Increased
perturbations favor enhanced buffering and robustness, which in turn decreases
sensitivity and reduces the intensity of natural selection on the adaptive
character. Reduced selective pressure on the adaptive character often leads to
a less costly, lower performance trait. Conclusions/Significance. The paradox
of robustness arises from evolutionary dynamics: enhanced robustness causes an
evolutionary reduction in the adaptive performance of the target character,
leading to a degree of maladaptation compared to what could be achieved by
natural selection in the absence of robustness mechanisms. Over evolutionary
time, buffering traits may become layered on top of each other, while the
underlying adaptive traits become replaced by cheaper, lower performance
components. The paradox of robustness has widespread implications for
understanding organismal design.
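The feedback at the heart of the paradox can be caricatured in a toy quadratic-fitness model (my own illustration, not the paper's model): if buffering scales down the marginal fitness benefit of the adaptive trait, the selection gradient on that trait weakens and its equilibrium value falls.

```python
def equilibrium_trait(buffering, s=1.0, c=0.1):
    """Optimum of the toy fitness w(x) = s*(1 - buffering)*x - c*x**2.

    Buffering attenuates the marginal benefit of the adaptive trait x,
    so the selection gradient s*(1 - buffering) - 2*c*x vanishes at a
    lower trait value as buffering increases: x* = s*(1 - buffering)/(2*c).
    All parameter values are illustrative assumptions.
    """
    return s * (1.0 - buffering) / (2.0 * c)

for b in (0.0, 0.5, 0.9):
    print(b, equilibrium_trait(b))  # stronger buffering -> lower-performance trait
```

Even in this caricature, the system with the best buffering ends up maintaining the cheapest, lowest-performance version of the adaptive trait, which is the maladaptation-under-robustness pattern the abstract describes.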