The size of the nucleosome
The structural origin of the size of the 11 nm nucleosomal disc is addressed.
On the nanometer length-scale the organization of DNA as chromatin in the
chromosomes involves a coiling of DNA around the histone core of the
nucleosome. We suggest that the size of the nucleosome core particle is
dictated by the fulfillment of two criteria. One is optimizing the volume
fraction of the DNA double helix; this requirement for close packing has its
root in optimizing atomic and molecular interactions. The other is having
zero strain-twist coupling: a zero-twist structure is a necessity when
allowing for transient tensile stresses during the reorganization of DNA,
e.g., during the repositioning, or sliding, of a nucleosome along the DNA
double helix. The mathematical model we apply is based on a tubular
description of double helices assuming hard walls. When the base pairs of
the linker DNA are included, the estimate of the size of an ideal nucleosome
is in close agreement with the experimental numbers. Interestingly, the size
of the nucleosome is shown to be a consequence of intrinsic properties of
the DNA double helix.
Comment: 11 pages, 5 figures; v2: minor modification
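The tubular, hard-wall picture of the double helix lends itself to a toy volume-fraction calculation. The sketch below is ours, not the paper's model: all dimensions, the one-turn bounding cylinder, and the function name are illustrative assumptions.

```python
import math

def helix_volume_fraction(tube_radius, helix_radius, pitch):
    """Volume fraction of one helical tube inside its bounding cylinder.

    Illustrative geometry only: a tube of radius r winds around a helix of
    radius R with pitch p; the bounding cylinder has radius R + r and height
    p per turn.  Parameter names are ours, not the paper's notation.
    """
    # Centerline length of one helical turn.
    turn_length = math.sqrt((2 * math.pi * helix_radius) ** 2 + pitch ** 2)
    tube_volume = math.pi * tube_radius ** 2 * turn_length
    cylinder_volume = math.pi * (helix_radius + tube_radius) ** 2 * pitch
    return tube_volume / cylinder_volume

# Rough DNA-like numbers in nm (tube radius ~1, superhelix radius and pitch
# chosen only for illustration, not taken from the paper).
print(round(helix_volume_fraction(1.0, 4.2, 2.5), 3))
```

Maximizing this fraction over the helix geometry, subject to additional constraints such as the zero-twist condition, is the kind of optimization the abstract alludes to.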
Sheep-breeding
In the remarks on sheep-breeding which I am about to submit
to you, I must beg you to understand that I do not
profess to be able to offer you the results of any experiments
of my own, nor any theory founded on the experiments of
others.
I cannot find, indeed, that any experiments have ever been
made upon any scientific principle, and upon such a scale as
to arrive at any defined and certain laws, such as must underlie
and govern the science of artificial selection, whilst on reference
to those authorities who have written on the subject, I
find discordancies of opinion, coupled with vagueness of
technical phraseology, that must leave every one in doubt as
to whether indeed we do know scientifically more of breeding
now than we did one hundred years ago.
Opportunities and obstacles for deep learning in biology and medicine
Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems in these fields. We examine applications of deep learning to a variety of biomedical problems (patient classification, fundamental biological processes, and treatment of patients) and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.
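The phrase "combining raw inputs into layers of intermediate features" can be made concrete with a minimal multilayer perceptron. Everything below (layer sizes, random weights, the two-class output) is an arbitrary illustrative choice, not a model from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A two-hidden-layer perceptron: each layer combines the previous layer's
# outputs into a new set of intermediate features.
W1 = rng.normal(size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 4));  b2 = np.zeros(4)
W3 = rng.normal(size=(4, 2));  b3 = np.zeros(2)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first layer of intermediate features
    h2 = relu(h1 @ W2 + b2)  # second, more abstract features
    return h2 @ W3 + b3      # task-specific output (e.g. two classes)

x = rng.normal(size=(5, 16))   # five "raw input" vectors
print(forward(x).shape)        # prints (5, 2)
```

In biomedical applications the raw inputs might be expression profiles or image patches; the interpretability challenge the review raises is precisely that the intermediate features h1 and h2 have no built-in biological meaning.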
Base sequence dependent sliding of proteins on DNA
The possibility that the sliding motion of proteins on DNA is influenced by
the base sequence through a base-pair reading interaction is considered.
Referring to the case of T7 RNA polymerase, we show that the protein should
follow a noise-influenced, sequence-dependent motion that deviates from the
standard random walk usually assumed. The general validity and the
implications of the results are discussed.
Comment: 12 pages, 3 figures
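A sequence-dependent walk of this kind can be sketched as thermally noisy hopping on an energy landscape read from the local base sequence. The per-base energies, footprint width, and Metropolis acceptance rule below are our illustrative assumptions, not the interaction model of the paper.

```python
import math
import random

random.seed(1)

SEQ = "".join(random.choice("ACGT") for _ in range(200))
# Hypothetical per-base reading energies in units of kT (not from the paper).
ENERGY = {"A": 0.0, "C": 0.3, "G": 0.5, "T": 0.1}

def landscape(i, window=5):
    """Protein-DNA 'reading' energy at position i, summed over a footprint."""
    return sum(ENERGY[SEQ[j]] for j in range(i, i + window))

def step(i, beta=1.0):
    """Metropolis-like hop one base left or right on the energy landscape."""
    j = max(0, min(len(SEQ) - 6, i + random.choice((-1, 1))))
    dE = landscape(j) - landscape(i)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return j
    return i

pos = len(SEQ) // 2
traj = [pos]
for _ in range(1000):
    pos = step(pos)
    traj.append(pos)
print(min(traj), max(traj))
```

With beta = 0 the acceptance rule always fires and the walk reduces to the standard unbiased random walk; the deviation the abstract describes comes from the sequence-dependent dE term.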
Memory, learning and language in autism spectrum disorder
Background and aims: The ‘dual-systems’ model of language acquisition has been used by Ullman and colleagues to explain patterns of strength and weakness in the language of higher-functioning people with autism spectrum disorder (ASD). Specifically, intact declarative/explicit learning is argued to compensate for a deficit in non-declarative/implicit procedural learning, constituting an example of the so-called ‘see-saw’ effect. Ullman and Pullman (2015) extended their argument concerning a see-saw effect on language in ASD to cover other perceived anomalies of behaviour, including impaired acquisition of social skills. The aim of this paper is to present a critique of Ullman and colleagues’ claims, and to propose an alternative model of links between memory systems and language in ASD.
Main contribution: We argue that a 4-systems model of learning, in which intact semantic and procedural memory are used to compensate for weaknesses in episodic memory and perceptual learning, can better explain patterns of language ability across the autistic spectrum. We also argue that attempts to generalise the ‘impaired implicit learning/spared declarative learning’ theory to other behaviours in ASD are unsustainable.
Conclusions: Clinically significant language impairments in ASD are under-researched, despite their impact on everyday functioning and quality of life. The relative paucity of research findings in this area lays it open to speculative interpretation which may be misleading.
Implications: More research is needed into links between memory/learning systems and language impairments across the spectrum. Improved understanding should inform therapeutic intervention, and contribute to investigation of the causes of language impairment in ASD, with potential implications for prevention.
Force Distribution in a Granular Medium
We report on systematic measurements of the distribution of normal forces
exerted by granular material under uniaxial compression onto the interior
surfaces of a confining vessel. Our experiments on three-dimensional, random
packings of monodisperse glass beads show that this distribution is nearly
uniform for forces below the mean force and decays exponentially for forces
greater than the mean. The shape of the distribution and the value of the
exponential decay constant are unaffected by changes in the system preparation
history or in the boundary conditions. An empirical functional form for the
distribution is proposed that provides an excellent fit over the whole force
range measured and is also consistent with recent computer simulation data.
Comment: 6 pages. For more information, see http://mrsec.uchicago.edu/granula
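The empirical form commonly quoted for this force distribution is P(f) = a*(1 - b*exp(-f^2))*exp(-beta*f), with f in units of the mean force. The sketch below checks its qualitative behavior; the parameter values are illustrative choices, not the fitted ones from the experiment.

```python
import math

def P(f, a=3.0, b=0.75, beta=1.5):
    """Empirical normal-force distribution, f in units of the mean force.

    Functional form as commonly quoted for this experiment; the parameter
    values here are illustrative, not fitted.
    """
    return a * (1.0 - b * math.exp(-f * f)) * math.exp(-beta * f)

# Qualitative check: nearly flat below the mean force (the Gaussian factor
# suppresses the rise), pure exponential decay well above it.
below = [P(f) for f in (0.2, 0.5, 0.8)]
ratio_tail = P(3.0) / P(4.0)   # ~exp(beta) once the prefactor saturates
print([round(x, 3) for x in below], round(ratio_tail, 3))
```

For f >= 3 the factor (1 - b*exp(-f^2)) is essentially 1, so the tail ratio is governed by exp(beta) alone, matching the exponential decay reported in the abstract.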
Superconducting Films for Absorber-Coupled MKID Detectors for Sub-Millimeter and Far-Infrared Astronomy
We describe measurements of the properties, at dc, gigahertz, and terahertz frequencies, of thin (10 nm) aluminum films with 10 Ω/square normal-state sheet resistance. Such films can be applied to construct microwave kinetic inductance detector arrays for submillimeter and far-infrared astronomical applications, in which incident power excites quasiparticles directly in a superconducting resonator that is configured to present a matched impedance to the high-frequency radiation being detected. For films 10 nm thick, we report the normal-state sheet resistance, resistance-temperature curves for the superconducting transition, quality factor and kinetic inductance fraction for microwave resonators made from patterned films, and terahertz measurements of sheet impedance measured with a Fourier transform spectrometer. We compare these properties with those of similar resonators made from 600 nm thick niobium.
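The link between normal-state sheet resistance and kinetic inductance can be sketched with the standard BCS thin-film estimate L_s = hbar * R_s / (pi * Delta), with Delta ≈ 1.76 k_B T_c. The transition temperature below is a typical thin-film aluminum value we assume for illustration, not one reported in the abstract.

```python
import math

HBAR = 1.054571817e-34  # J*s
KB = 1.380649e-23       # J/K

def sheet_kinetic_inductance(r_sheet, t_c):
    """Thin-film sheet kinetic inductance: L_s = hbar * R_s / (pi * Delta).

    Standard BCS estimate with Delta = 1.76 * k_B * T_c.  The T_c passed in
    is an assumed value, not a measurement from this work.
    """
    delta = 1.76 * KB * t_c
    return HBAR * r_sheet / (math.pi * delta)

# 10 ohm/sq aluminum film, assumed T_c ~ 1.2 K.
L_s = sheet_kinetic_inductance(10.0, 1.2)
print(round(L_s * 1e12, 1), "pH per square")   # ~11.5 pH/sq
```

The large sheet kinetic inductance of such resistive thin films, relative to thick low-resistance niobium, is what makes them attractive both as matched absorbers and as high-responsivity MKID resonators.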
Weighted distances in scale-free preferential attachment models
We study three preferential attachment models where the parameters are such
that the asymptotic degree distribution has infinite variance. Every edge is
equipped with a non-negative i.i.d. weight. We study the weighted distance
between two vertices chosen uniformly at random, the typical weighted distance,
and the number of edges on this path, the typical hopcount. We prove that there
are precisely two universality classes of weight distributions, called the
explosive and conservative class. In the explosive class, we show that the
typical weighted distance converges in distribution to the sum of two i.i.d.
finite random variables. In the conservative class, we prove that the typical
weighted distance tends to infinity, and we give an explicit expression for the
main growth term, as well as for the hopcount. Under a mild assumption on the
weight distribution the fluctuations around the main term are tight.
Comment: Revised version, results are unchanged. 30 pages, 1 figure. To appear
in Random Structures and Algorithms
Scale-free networks with tunable degree distribution exponents
We propose and study a model of scale-free growing networks that gives a
degree distribution dominated by a power-law behavior with a model-dependent,
hence tunable, exponent. The model represents a hybrid of the growing networks
based on popularity-driven and fitness-driven preferential attachments. As the
network grows, a newly added node establishes new links to existing nodes,
with one probability based on the popularity (degree) of the existing nodes
and a complementary probability based on their fitness. An explicit form of
the degree distribution P(k) is derived within a mean-field approach. For
reasonably large degrees k, P(k) follows a power law k^(-gamma) modulated by
a function that governs the behavior at small k and becomes k-independent as
k grows, where gamma is a model-dependent, hence tunable, exponent. The
degree distribution and the exponent gamma are found to be in good agreement
with results obtained by extensive numerical simulations.
Comment: 12 pages, 2 figures, submitted to PR
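The hybrid attachment rule can be sketched in a small simulation. The mixing probability, the uniform fitness law, and the sampling tricks below are our illustrative choices, not the paper's exact model.

```python
import random
from collections import Counter

random.seed(7)

def hybrid_network(n, m=2, p=0.5):
    """Growth mixing popularity- and fitness-driven attachment.

    With probability p a target is chosen proportionally to its degree
    (popularity); otherwise proportionally to a random intrinsic fitness.
    Returns the degree sequence as a Counter.
    """
    fitness = [random.random() for _ in range(n)]   # assumed fitness law
    degree = Counter({0: 1, 1: 1})
    edges = [(0, 1)]
    for v in range(2, n):
        for _ in range(m):
            if random.random() < p:
                # Popularity: an endpoint of a uniform random edge is
                # degree-biased, a standard sampling trick.
                u = random.choice(random.choice(edges))
            else:
                # Fitness: rejection-sample existing nodes by fitness.
                while True:
                    u = random.randrange(v)
                    if random.random() < fitness[u]:
                        break
            edges.append((v, u))
            degree[v] += 1
            degree[u] += 1
    return degree

deg = hybrid_network(2000)
print(sum(deg.values()))   # 2 * number of edges = 2 * (1 + 2*(n-2))
```

Sliding p between 0 and 1 interpolates between the pure popularity-driven and pure fitness-driven limits, which is the mechanism behind the tunable exponent described in the abstract.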