Path Similarity Analysis: a Method for Quantifying Macromolecular Pathways
Diverse classes of proteins function through large-scale conformational
changes; sophisticated enhanced sampling methods have been proposed to generate
these macromolecular transition paths. As such paths are curves in a
high-dimensional space, they have been difficult to compare quantitatively, a
prerequisite to, for instance, assess the quality of different sampling
algorithms. The Path Similarity Analysis (PSA) approach alleviates these
difficulties by utilizing the full information in 3N-dimensional trajectories
in configuration space. PSA employs the Hausdorff or Fréchet path
metrics---adopted from computational geometry---enabling us to quantify path
(dis)similarity, while the new concept of a Hausdorff-pair map permits the
extraction of atomic-scale determinants responsible for path differences.
Combined with clustering techniques, PSA facilitates the comparison of many
paths, including collections of transition ensembles. We use the closed-to-open
transition of the enzyme adenylate kinase (AdK)---a commonly used testbed for
the assessment of enhanced sampling algorithms---to examine multiple microsecond
equilibrium molecular dynamics (MD) transitions of AdK in its substrate-free
form alongside transition ensembles from the MD-based dynamic importance
sampling (DIMS-MD) and targeted MD (TMD) methods, and a geometrical targeting
algorithm (FRODA). A Hausdorff pairs analysis of these ensembles revealed, for
instance, that differences in DIMS-MD and FRODA paths were mediated by a set of
conserved salt bridges whose charge-charge interactions are fully modeled in
DIMS-MD but not in FRODA. We also demonstrate how existing trajectory analysis
methods relying on pre-defined collective variables, such as native contacts or
geometric quantities, can be used synergistically with PSA, as well as the
application of PSA to more complex systems such as membrane transporter
proteins.
Comment: 9 figures, 3 tables in the main manuscript; supplementary information
includes 7 texts (S1 Text - S7 Text) and 11 figures (S1 Fig - S11 Fig) (also
available from the journal site)
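The Hausdorff metric at the heart of PSA is simple to state: the distance between two paths is the largest distance from any point on one path to the nearest point on the other. Below is a minimal sketch of the symmetric discrete Hausdorff distance between two trajectories stored as arrays of frames; this is an illustration only, not the authors' implementation, and the array shapes are assumptions.

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric discrete Hausdorff distance between two paths.

    P, Q : arrays of shape (n_frames, D) -- each row is one point
    along a trajectory in a D-dimensional configuration space
    (D = 3N for an N-atom system).
    """
    # Pairwise Euclidean distances between every frame of P and of Q.
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    # Directed distances: farthest point of one path from the other path.
    d_PQ = D.min(axis=1).max()  # sup over P of inf over Q
    d_QP = D.min(axis=0).max()  # sup over Q of inf over P
    return max(d_PQ, d_QP)
```

The frame pair realizing the maximum is what a Hausdorff-pair analysis inspects to locate the atomic-scale determinants of the path difference.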
Quantifying the effect of interannual ocean variability on the attribution of extreme climate events to human influence
In recent years, the climate change research community has become highly
interested in describing the anthropogenic influence on extreme weather events,
commonly termed "event attribution." Limitations in the observational record
and in computational resources motivate the use of uncoupled,
atmosphere/land-only climate models with prescribed ocean conditions run over a
short period, leading up to and including an event of interest. In this
approach, large ensembles of high-resolution simulations can be generated under
factual observed conditions and counterfactual conditions that might have been
observed in the absence of human interference; these can be used to estimate
the change in probability of the given event due to anthropogenic influence.
However, using a prescribed ocean state ignores the possibility that estimates
of attributable risk might be a function of the ocean state. Thus, the
uncertainty in attributable risk is likely underestimated, implying an
over-confidence in anthropogenic influence.
In this work, we estimate the year-to-year variability in calculations of the
anthropogenic contribution to extreme weather based on large ensembles of
atmospheric model simulations. Our results both quantify the magnitude of
year-to-year variability and categorize the degree to which conclusions of
attributable risk are qualitatively affected. The methodology is illustrated by
exploring extreme temperature and precipitation events for the northwest coast
of South America and northern-central Siberia; we also provide results for
regions around the globe. While it remains preferable to perform a full
multi-year analysis, the results presented here can serve as an indication of
where and when attribution researchers should be concerned about the use of
atmosphere-only simulations.
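The probability-change calculation described above reduces to comparing exceedance frequencies between the factual and counterfactual ensembles. A minimal sketch of the risk ratio and the fraction of attributable risk (FAR); the function name, threshold, and ensemble arrays are illustrative assumptions:

```python
import numpy as np

def attribution_stats(factual, counterfactual, threshold):
    """Risk ratio and fraction of attributable risk (FAR) for an
    extreme event defined as the event index exceeding `threshold`.

    factual, counterfactual : 1-D arrays of the event index (e.g. a
    seasonal-mean temperature) from large ensembles run under observed
    conditions and under conditions without human interference.
    """
    p1 = np.mean(np.asarray(factual) > threshold)        # P(event | factual)
    p0 = np.mean(np.asarray(counterfactual) > threshold) # P(event | counterfactual)
    rr = p1 / p0 if p0 > 0 else np.inf        # risk ratio
    far = 1.0 - p0 / p1 if p1 > 0 else np.nan  # fraction of attributable risk
    return rr, far
```

The year-to-year spread in these statistics across different prescribed ocean states is exactly the uncertainty the abstract argues is otherwise underestimated.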
Capturing and viewing gigapixel images
We present a system to capture and view "Gigapixel images": very high resolution, high dynamic range, and wide angle imagery consisting of several billion pixels each. A specialized camera mount, in combination with an automated pipeline for alignment, exposure compensation, and stitching, provides the means to acquire Gigapixel images with a standard camera and lens. More importantly, our novel viewer enables exploration of such images at interactive rates over a network, while dynamically and smoothly interpolating the projection between perspective and curved projections, and simultaneously modifying the tone-mapping to ensure an optimal view of the portion of the scene being viewed.
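The smooth interpolation between perspective and curved projections can be illustrated in one dimension: a ray at horizontal angle theta maps to tan(theta) under a perspective projection and to theta itself under a cylindrical projection, and a viewer can blend between the two. This is a toy sketch under that assumption, not the paper's actual formulation:

```python
import math

def blended_projection(theta, alpha):
    """Horizontal image coordinate of a view ray at angle `theta`
    (radians) under a blend of perspective (tan) and cylindrical
    (linear) mappings. alpha = 0 gives pure perspective; alpha = 1
    gives pure cylindrical."""
    return (1.0 - alpha) * math.tan(theta) + alpha * theta
```

Animating alpha with zoom level lets wide fields of view avoid the extreme stretching of a pure perspective projection.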
Sweating the small stuff: simulating dwarf galaxies, ultra-faint dwarf galaxies, and their own tiny satellites
We present FIRE/Gizmo hydrodynamic zoom-in simulations of isolated dark
matter halos, two each at the mass scale of classical dwarf galaxies and of
ultra-faint galaxies, and with two feedback implementations. The resultant
central galaxies lie on an extrapolated abundance matching relation without a
break. Every host is filled with subhalos, many of which form stars. Our
dwarfs each have 1-2 well-resolved satellites. Even our isolated ultra-faint galaxies have
star-forming subhalos. If this is representative, dwarf galaxies throughout the
universe should commonly host tiny satellite galaxies of their own. We combine
our results with the ELVIS simulations to show that targeting regions around
nearby isolated dwarfs could increase the chances of discovering ultra-faint
galaxies compared to random halo
pointings, and specifically identify the region around the Phoenix dwarf galaxy
as a good potential target.
The well-resolved ultra-faint galaxies in our simulations form within low-mass
halos. Each has a uniformly ancient stellar population owing to
reionization-related quenching. More massive systems, in contrast, all have
late-time star formation. Our results suggest a probable dividing line in halo
mass between halos hosting reionization "fossils" and those hosting dwarfs
that can continue to form stars in isolation after reionization.
Comment: 12 pages, 6 figures, 1 table, submitted to MNRAS
An evaluation of prospective motion correction (PMC) for high resolution quantitative MRI
Quantitative imaging aims to provide in vivo neuroimaging biomarkers with high research and diagnostic value that are sensitive to underlying tissue microstructure. In order to use these data to examine intra-cortical differences or to define boundaries between different myelo-architectural areas, high resolution data are required. The quality of such measurements is degraded in the presence of motion hindering insight into brain microstructure. Correction schemes are therefore vital for high resolution, whole brain coverage approaches that have long acquisition times and greater sensitivity to motion. Here we evaluate the use of prospective motion correction (PMC) via an optical tracking system to counter intra-scan motion in a high resolution (800 μm isotropic) multi-parameter mapping (MPM) protocol. Data were acquired on six volunteers using a 2 × 2 factorial design permuting the following conditions: PMC on/off and motion/no motion. In the presence of head motion, PMC-based motion correction considerably improved the quality of the maps as reflected by fewer visible artifacts and improved consistency. The precision of the maps, parameterized through the coefficient of variation in cortical sub-regions, showed improvements of 11–25% in the presence of deliberate head motion. Importantly, in the absence of motion the PMC system did not introduce extraneous artifacts into the quantitative maps. The PMC system based on optical tracking offers a robust approach to minimizing motion artifacts in quantitative anatomical imaging without extending scan times. Such a robust motion correction scheme is crucial in order to achieve the ultra-high resolution required of quantitative imaging for cutting edge in vivo histology applications
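The precision metric used above, the coefficient of variation within cortical sub-regions, is straightforward to compute given a quantitative parameter map and a region labelling. A minimal sketch; the function name, array layout, and label-0-as-background convention are assumptions:

```python
import numpy as np

def regional_cov(param_map, region_labels):
    """Coefficient of variation (std / mean) of a quantitative map
    within each labelled region.

    param_map, region_labels : arrays of identical shape; voxels with
    label 0 are treated as background and ignored.
    """
    covs = {}
    for label in np.unique(region_labels):
        if label == 0:
            continue
        vals = param_map[region_labels == label]
        covs[int(label)] = float(vals.std() / vals.mean())
    return covs
```

Comparing these per-region values between PMC-on and PMC-off acquisitions is one way to quantify the 11-25% precision improvement reported above.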
Selected committee reports
Committee reports were from the Education Committee, Research Task Force for the 1991 Conference, Accreditation and Class Coverage of Accounting History task force, reports on Garner monograph and December conference, American Research Committee, editor's report for the Accounting Historians Journal, and the International Research Committee.
SIDM on FIRE: Hydrodynamical Self-Interacting Dark Matter simulations of low-mass dwarf galaxies
We compare a suite of four simulated dwarf galaxies formed in low-mass haloes of collisionless Cold Dark Matter (CDM) with galaxies
simulated in the same haloes with an identical galaxy formation model but a
non-zero cross-section for dark matter self-interactions. These cosmological
zoom-in simulations are part of the Feedback In Realistic Environments (FIRE)
project and utilize the FIRE-2 model for hydrodynamics and galaxy formation
physics. We find the stellar masses of the galaxies formed in Self-Interacting
Dark Matter (SIDM) are very similar to those in CDM, and all runs lie on a
similar stellar mass -- size relation. The logarithmic dark matter density
slope in the central few hundred parsecs remains steep (cuspy) for the
lower-stellar-mass CDM-Hydro simulations and core-like in the most massive
galaxy. In contrast, every SIDM hydrodynamic simulation yields a flatter
central profile. Moreover, the central density profiles predicted in SIDM runs
without baryons are similar to the SIDM runs that include FIRE-2 baryonic
physics. Thus, SIDM appears to be much more robust to the inclusion of
(potentially uncertain) baryonic physics than CDM on this mass scale,
suggesting SIDM will be easier to falsify than CDM using low-mass galaxies.
Our FIRE simulations predict that the least massive dwarf galaxies provide
potentially ideal targets for discriminating between the models, with SIDM
producing substantial cores in such tiny galaxies and CDM producing cusps.
Comment: 10 pages, 7 figures, submitted to MNRAS
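The logarithmic density slope discussed above, alpha = d log(rho) / d log(r), can be estimated from a binned density profile by a straight-line fit in log-log space. A minimal sketch, with hypothetical input arrays:

```python
import numpy as np

def log_slope(r, rho):
    """Logarithmic density slope alpha = d log(rho) / d log(r),
    estimated by a least-squares straight-line fit in log-log space.

    r, rho : 1-D arrays of radial bin centres and the density
    measured in each bin (both must be positive).
    """
    slope, _intercept = np.polyfit(np.log(r), np.log(rho), 1)
    return slope
```

A cuspy (NFW-like) centre gives alpha near -1 or steeper, while a core gives alpha near 0, which is the cusp-vs-core distinction the abstract uses to separate SIDM from CDM predictions.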
Intraoperative electrocochleographic characteristics of auditory neuropathy spectrum disorder in cochlear implant subjects
Auditory neuropathy spectrum disorder (ANSD) is characterized by an apparent discrepancy between measures of cochlear and neural function based on auditory brainstem response (ABR) testing. Clinical indicators of ANSD are a present cochlear microphonic (CM) with small or absent wave V. Many identified ANSD patients have speech impairment severe enough that cochlear implantation (CI) is indicated. To better understand the cochleae identified with ANSD that lead to a CI, we performed intraoperative round window electrocochleography (ECochG) to tone bursts in children (n = 167) and adults (n = 163). Magnitudes of the responses to tones of different frequencies were summed to measure the “total response” (ECochG-TR), a metric often dominated by hair cell activity, and auditory nerve activity was estimated visually from the compound action potential (CAP) and auditory nerve neurophonic (ANN) as a ranked “Nerve Score”. Subjects identified as ANSD (45 ears in children, 3 in adults) had higher values of ECochG-TR than adult and pediatric subjects also receiving CIs not identified as ANSD. However, nerve scores of the ANSD group were similar to the other cohorts, although dominated by the ANN to low frequencies more than in the non-ANSD groups. To high frequencies, the common morphology of ANSD cases was a large CM and summating potential, and small or absent CAP. Common morphologies in other groups were either only a CM, or a combination of CM and CAP. These results indicate that responses to high frequencies, derived primarily from hair cells, are the main source of the CM used to evaluate ANSD in the clinical setting. However, the clinical tests do not capture the wide range of neural activity seen to low frequency sounds
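The "total response" metric above sums response magnitudes across the tested stimulus frequencies. A minimal sketch, assuming each averaged waveform's magnitude is read off at its stimulus frequency via an FFT; the function names and data layout are assumptions, and the actual clinical pipeline is more involved:

```python
import numpy as np

def response_magnitude(waveform, fs, stim_freq):
    """Amplitude of the response at the stimulus frequency, taken from
    the FFT of the averaged ECochG waveform sampled at `fs` Hz."""
    spec = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), 1.0 / fs)
    k = np.argmin(np.abs(freqs - stim_freq))      # nearest FFT bin
    return 2.0 * np.abs(spec[k]) / len(waveform)  # one-sided amplitude

def total_response(waveforms_by_freq, fs):
    """ECochG-TR: summed response magnitudes across tone-burst
    frequencies (a metric often dominated by hair cell activity)."""
    return sum(response_magnitude(w, fs, f)
               for f, w in waveforms_by_freq.items())
```

The neural measures (CAP and ANN), by contrast, were ranked visually in this study, so only the hair-cell-dominated summary lends itself to this kind of direct computation.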