Optimization of electron microscopy for human brains with long-term fixation and fixed-frozen sections.
Background: Abnormal connectivity across brain regions underlies many neurological disorders, including multiple sclerosis, schizophrenia and autism, possibly due to atypical axonal organization within white matter. Attempts at investigating axonal organization in post-mortem human brains have been hindered by the limited availability of high-quality, morphologically preserved tissue, particularly for neurodevelopmental disorders such as autism. Brains are generally stored in a fixative for long periods of time (often greater than 10 years) and, in many cases, have already been frozen and sectioned on a microtome for histology and immunohistochemistry. Here we present a method to assess the quality and quantity of axons from long-term fixed and frozen-sectioned human brain samples to demonstrate their use for electron microscopy (EM) measures of axonal ultrastructure.
Results: Six samples were collected from white matter below the superior temporal cortex of three typically developing human brains and prepared for EM analyses. Five samples were stored in fixative for over 10 years, two of which were also flash frozen and sectioned on a freezing microtome, and one additional case was fixed for 3 years and sectioned on a freezing microtome. In all six samples, ultrastructural qualitative and quantitative analyses demonstrated that myelinated axons could be identified and counted on the EM images. Although axon density differed between brains, axonal ultrastructure and density were well preserved and did not differ within cases between fixed and frozen tissue. There was no significant difference between cases in axon myelin sheath thickness (g-ratio) or axon diameter; approximately 70% of axons were in the small (0.25 μm) to medium (0.75 μm) range. Axon diameter and g-ratio were positively correlated, indicating that larger axons may have thinner myelin sheaths.
Conclusion: The current study demonstrates that long-term formalin-fixed and frozen-sectioned human brain tissue can be used for ultrastructural analyses. Axon integrity is well preserved and can be quantified using the methods presented here. The ability to carry out EM on frozen sections allows for investigation of axonal organization in conjunction with other cellular and histological methods, such as immunohistochemistry and stereology, within the same brain and even within the same frozen-cut section.
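The g-ratio reported above has a standard definition: the ratio of inner axon diameter to outer fiber (axon plus myelin) diameter, so a value closer to 1 indicates relatively thinner myelin, consistent with the positive correlation the abstract reports for larger axons. A minimal sketch of that computation (function names and sample values are illustrative assumptions, not taken from the study):

```python
def g_ratio(axon_diameter_um: float, fiber_diameter_um: float) -> float:
    """g-ratio = inner axon diameter / outer fiber (axon + myelin) diameter.
    Values closer to 1 indicate a thinner myelin sheath relative to axon size."""
    if not 0 < axon_diameter_um <= fiber_diameter_um:
        raise ValueError("axon diameter must be positive and <= fiber diameter")
    return axon_diameter_um / fiber_diameter_um

def myelin_thickness_um(axon_diameter_um: float, fiber_diameter_um: float) -> float:
    """Myelin sheath thickness is half the difference between the two diameters
    (the sheath wraps both sides of the axon)."""
    return (fiber_diameter_um - axon_diameter_um) / 2.0
```

For example, a 0.5 μm axon inside a 0.625 μm fiber has a g-ratio of 0.8.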
Hidden horizons in non-relativistic AdS/CFT
We study boundary Green's functions for spacetimes with non-relativistic
scaling symmetry. For this class of backgrounds, scalar modes with large
transverse momentum, or equivalently low frequency, have an exponentially
suppressed imprint on the boundary. We investigate the effect of these modes on
holographic two-point functions. We find that the boundary Green's function is
generically insensitive to horizon features on small transverse length scales.
We explicitly demonstrate this insensitivity for Lifshitz z=2, and then use the
WKB approximation to generalize our findings to Lifshitz z>1 and RG flows with
a Lifshitz-like region. We also comment on the analogous situation in
Schroedinger spacetimes. Finally, we exhibit the analytic properties of the
Green's function in these spacetimes. Comment: Abstract and Introduction updated, typos corrected.
Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
Deep neural networks are widely used for classification. These deep models
often suffer from a lack of interpretability -- they are particularly difficult
to understand because of their non-linear nature. As a result, neural networks
are often treated as "black box" models, and in the past, have been trained
purely to optimize the accuracy of predictions. In this work, we create a novel
network architecture for deep learning that naturally explains its own
reasoning for each prediction. This architecture contains an autoencoder and a
special prototype layer, where each unit of that layer stores a weight vector
that resembles an encoded training input. The encoder of the autoencoder allows
us to do comparisons within the latent space, while the decoder allows us to
visualize the learned prototypes. The training objective has four terms: an
accuracy term, a term that encourages every prototype to be similar to at least
one encoded input, a term that encourages every encoded input to be close to at
least one prototype, and a term that encourages faithful reconstruction by the
autoencoder. The distances computed in the prototype layer are used as part of
the classification process. Since the prototypes are learned during training,
the learned network naturally comes with explanations for each prediction, and
the explanations are loyal to what the network actually computes. Comment: The first two authors contributed equally, 8 pages, accepted in AAAI 2018.
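The four-term objective described in this abstract can be sketched concretely: a classification accuracy term, two prototype regularizers (every prototype near some encoded input, every encoded input near some prototype), and an autoencoder reconstruction term. The sketch below is an illustration under stated assumptions, not the authors' implementation: the function name, the use of NumPy, squared-Euclidean distance, and the regularization weights are all assumptions.

```python
import numpy as np

def four_term_objective(x, z, x_hat, logits, labels, p,
                        lam_r1=0.05, lam_r2=0.05, lam_rec=0.05):
    """Sketch of the four-term training objective:
    cross-entropy accuracy + R1 (each prototype near some encoded input)
    + R2 (each encoded input near some prototype) + reconstruction error.
    x: (n, d_in) inputs, z: (n, d) encodings, x_hat: (n, d_in) reconstructions,
    logits: (n, c) class scores, labels: (n,) int classes, p: (m, d) prototypes."""
    # accuracy term: cross-entropy on a numerically stable softmax
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    # pairwise squared distances between encodings and prototypes, shape (n, m)
    d2 = ((z[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
    r1 = d2.min(axis=0).mean()   # pull each prototype toward its nearest encoding
    r2 = d2.min(axis=1).mean()   # pull each encoding toward its nearest prototype
    rec = ((x - x_hat) ** 2).mean()  # faithful autoencoder reconstruction
    return ce + lam_r1 * r1 + lam_r2 * r2 + lam_rec * rec
```

When the encodings coincide with the prototypes, the reconstruction is exact, and the logits are confidently correct, the objective is driven toward zero, which is the regime in which each prediction can be explained by its nearest learned prototype.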
Universal features of Lifshitz Green's functions from holography
We examine the behavior of the retarded Green's function in theories with
Lifshitz scaling symmetry, both through dual gravitational models and a direct
field theory approach. In contrast with the case of a relativistic CFT, where
the Green's function is fixed (up to normalization) by symmetry, the generic
Lifshitz Green's function can a priori depend on an arbitrary function F(u),
where u = ω/k^z is the scale-invariant ratio of frequency to wavenumber, with
dynamical exponent z.
Nevertheless, we demonstrate that the imaginary part of the retarded Green's
function (i.e. the spectral function) of scalar operators is exponentially
suppressed in a window of frequencies near zero. This behavior is universal in
all Lifshitz theories without additional constraining symmetries. On the
gravity side, this result is robust against higher derivative corrections,
while on the field theory side we present two examples where the
exponential suppression arises from summing the perturbative expansion to
infinite order. Comment: 32 pages, 4 figures, v2: reference added, v3: fixed bug in bibliography.
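For context, the scale-invariant ratio invoked in both Lifshitz abstracts above follows directly from the anisotropic scaling symmetry; a sketch of the dimensional-analysis argument (the exponent α is schematic, fixed by the operator dimension, and is an assumption of this sketch):

```latex
% Lifshitz scaling acts anisotropically on time and space:
t \to \lambda^{z}\, t, \qquad \vec{x} \to \lambda\, \vec{x},
\quad\text{so}\quad \omega \to \lambda^{-z}\,\omega, \qquad k \to \lambda^{-1}\, k.
% Hence u = \omega / k^{z} is scale invariant, and scaling alone fixes the
% retarded Green's function only up to an arbitrary function of u:
G_R(\omega, k) = k^{\alpha}\, F\!\left(\omega / k^{z}\right),
% whereas for a relativistic CFT (z = 1, with boost symmetry) the Green's
% function is fixed up to overall normalization.
```

This is why, as the abstract notes, universal statements about Lifshitz Green's functions must constrain the function F itself rather than follow from symmetry alone.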
Requiring Individuals to Obtain Health Insurance: A Constitutional Analysis
[Excerpt] This report analyzes certain constitutional issues raised by requiring individuals to purchase health insurance under Congress’s taxing power or its power to regulate interstate commerce. It also addresses whether the exceptions to the minimum coverage provision satisfy First Amendment freedom of religion protections. Finally, this report discusses some of the more publicized legal challenges to ACA, as well as additional issues that are currently before the Court.
Algorithmic Fairness from a Non-ideal Perspective
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to fair machine learning to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles
faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a
reinterpretation of impossibility results, and directions for future research.