Probing the partonic structure of pentaquarks in hard electroproduction
Exclusive electroproduction of a K or K* meson on the nucleon can give a
Theta+ pentaquark in the final state. This reaction offers an opportunity to
investigate the structure of pentaquark baryons at parton level. We discuss the
generalized parton distributions for the N-->Theta+ transition and give the
leading order amplitude for electroproduction in the Bjorken regime. Different
production channels contain complementary information about the distribution of
partons in a pentaquark compared with their distribution in the nucleon.
Measurement of these processes may thus provide deeper insight into the very
nature of pentaquarks. Comment: 17 pages, 8 figures. v2: minor clarifications, references added.
Role of spheroidal particles in closure studies for aerosol microphysical-optical properties
"This is the peer reviewed version of the following article: Sorribas, M.; et al. Role of spheroidal particles in closure studies for aerosol microphysical-optical properties. Quarterly Journal of the Royal Meteorological Society, 141(692): 2700-2707 (2015), which has been published in final form at http://dx.doi.org/10.1002/qj.2557. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving."

A study has been carried out to assess the discrepancies between computed and observed aerosol scattering and backscattering properties in the atmosphere. The goals were: (i) to analyse the uncertainty associated with computed optical properties when spherical and spheroidal approximations are used, and (ii) to estimate nephelometry errors due to angular truncation and non-Lambertian illumination of the light source in terms of size range, particle shape and aerosol chemical compounds. Mie and T-matrix theories were used for computing light optical properties for spherical and spheroidal particles, respectively, from observed particle size distributions. The scattering coefficient of the fine mode was not much influenced by the particle shape. However, computed backscattering values underestimated the observed values by ≈15%. For the coarse mode, the spheroidal approximation yielded better results than the spherical one, especially for backscattering properties. Even after applying the spheroidal approximation, computed scattering and backscattering values within the coarse mode underestimated the observed values by ≈49% and ≈11%, respectively. The angular correction most widely used to correct the nephelometer data was discussed to explore its uncertainty.
In the case of the scattering properties within the coarse mode, the change of the computed optical parameter is ≈+8%, and for the scattering and backscattering values within the fine mode it is lower than ≈±4% for spherical and spheroidal particles. Additionally, if spheroidal particles are used to evaluate the aerosol optical properties, the correction must be reconsidered with the aim of reducing the uncertainty found for scattering within the coarse mode. This is recommended for sites with desert dust influence, where the deviation of the computed scattering can be up to 13%.

M. Sorribas thanks MINECO for the award of a postdoctoral grant (Juan de la Cierva). This work was partially supported by the Andalusian Regional Government through projects P10-RNM-6299 and P12-RNM-2409, by the Spanish Ministry of Science and Technology through projects CGL2010-18782, CGL2011-24891/CLI and CGL2013-45410-R, and by the EU through the ACTRIS project (EU INFRA-2010-1.1.16-262254).
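The closure calculation the abstract describes, integrating single-particle optical efficiencies over a measured size distribution, can be sketched as follows. This is a minimal stand-in: it uses van de Hulst's anomalous-diffraction approximation for the scattering efficiency and an illustrative lognormal distribution, whereas the study itself used full Mie and T-matrix computations; all numerical parameters below are assumptions, not the paper's data.

```python
# Sketch: bulk scattering coefficient b_sca = integral of
# (pi D^2 / 4) * Q_sca(D) * n(D) over particle diameter D.
# Q_sca uses van de Hulst's anomalous-diffraction approximation,
# a stand-in for the Mie / T-matrix codes the study actually used.
import numpy as np

wavelength = 0.55e-6        # m, green light (assumed)
m_real = 1.53               # real refractive index, dust-like (assumed)

def q_sca_ada(D):
    """Anomalous-diffraction scattering efficiency, non-absorbing sphere."""
    rho = 2.0 * (np.pi * D / wavelength) * (m_real - 1.0)
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

# Illustrative lognormal number distribution dN/dlnD
N_tot, D_g, sigma_g = 100e6, 0.3e-6, 1.8      # m^-3, m, geometric std dev
D = np.logspace(-7.5, -5, 500)                # ~0.03 to 10 micron diameters
dNdlnD = (N_tot / (np.sqrt(2 * np.pi) * np.log(sigma_g))
          * np.exp(-0.5 * (np.log(D / D_g) / np.log(sigma_g))**2))

# Integrate over lnD (grid is uniform in log space)
dlnD = np.log(D[1]) - np.log(D[0])
b_sca = np.sum(q_sca_ada(D) * np.pi * D**2 / 4 * dNdlnD) * dlnD
print(f"b_sca = {b_sca * 1e6:.1f} per Mm")    # inverse megametres, a common unit
```

Swapping `q_sca_ada` for a proper Mie (spheres) or T-matrix (spheroids) efficiency is exactly the substitution whose effect the paper quantifies.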
Predicting a small molecule-kinase interaction map: A machine learning approach
Background: We present a machine learning approach to the problem of protein-ligand interaction prediction. We focus on a set of binding data obtained from 113 different protein kinases and 20 inhibitors. It was attained through ATP site-dependent binding competition assays and constitutes the first available dataset of this kind. We extract information about the investigated molecules from various data sources to obtain an informative set of features.

Results: A Support Vector Machine (SVM) as well as a decision tree algorithm (C5/See5) is used to learn models based on the available features, which in turn can be used for the classification of new kinase-inhibitor pair test instances. We evaluate our approach using different feature sets and parameter settings for the employed classifiers. Moreover, the paper introduces a new way of evaluating predictions in such a setting, where different amounts of information about the binding partners can be assumed to be available for training. Results on an external test set are also provided.

Conclusions: In most of the cases, the presented approach clearly outperforms the baseline methods used for comparison. Experimental results indicate that the applied machine learning methods are able to detect a signal in the data and predict binding affinity to some extent. For SVMs, the binding prediction can be improved significantly by using features that describe the active site of a kinase. For C5, besides diversity in the feature set, alignment scores of conserved regions turned out to be very useful.
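The SVM side of this setup can be sketched with scikit-learn: each (kinase, inhibitor) pair becomes one feature vector with a binding / non-binding label. Everything below is synthetic, with random stand-ins for the kinase and inhibitor descriptors rather than the paper's assay data, and the RBF kernel and its settings are assumptions.

```python
# Hypothetical sketch of kinase-inhibitor pair classification with an SVM.
# Feature vectors and labels are randomly generated stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_pairs, n_features = 400, 32    # e.g. active-site + inhibitor descriptors (assumed sizes)
X = rng.normal(size=(n_pairs, n_features))
# Synthetic labels with learnable structure in the first few features
y = (X[:, :4].sum(axis=1) + 0.5 * rng.normal(size=n_pairs) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # kernel choice is an assumption
clf.fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The paper's evaluation scheme additionally varies how much is known about each binding partner at training time (e.g. leaving out whole kinases or whole inhibitors), which amounts to choosing the train/test split along those axes instead of randomly over pairs.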
Deep exclusive electroproduction off the proton at CLAS
The exclusive electroproduction of above the resonance region was
studied using the CEBAF Large Acceptance Spectrometer (CLAS) at
Jefferson Laboratory by scattering a 6 GeV continuous electron beam off a
hydrogen target. The large acceptance and good resolution of CLAS,
together with the high luminosity, allowed us to measure the cross section for
the process in 140 (, , ) bins:
, 1.6 GeV GeV and 0.1 GeV
GeV. For most bins, the statistical accuracy is on the order of a few
percent. Differential cross sections are compared to two theoretical models,
based either on hadronic (Regge phenomenology) or on partonic (handbag diagram)
degrees of freedom. Both can describe the gross features of the data reasonably
well, but differ strongly in their ingredients. If the handbag approach can be
validated in this kinematical region, our data contain the interesting
potential to experimentally access transversity Generalized Parton
Distributions. Comment: 18 pages, 21 figures, 2 tables.
Multiscale modelling methods in biomechanics
More and more frequently, computational biomechanics deals with problems where the portion of
physical reality to be modelled spans over such a large range of spatial and temporal dimensions,
that it is impossible to represent it as a single space-time continuum. We are forced to consider
multiple space-time continua, each representing the phenomenon of interest at a characteristic
space-time scale. Multiscale models describe a complex process across multiple scales, and account
for how quantities transform as we move from one scale to another. This review offers a set of
definitions for this emerging field, and provides a brief summary of the most recent developments
on multiscale modelling in biomechanics. Of all possible perspectives, we chose that of the modelling
intent, which vastly affects the nature and the structure of each research activity. To this
purpose we organised all reviewed papers in three categories: "causal confirmation", where
multiscale models are used as materialisations of causation theories; "predictive accuracy",
where multiscale modelling aims to improve predictive accuracy; and "determination of effect",
where multiscale modelling is used to model how a change at one scale manifests as an effect
at another, radically different space-time scale. Consistent with how the volume of computational
biomechanics research is distributed across application targets, we extensively reviewed papers
targeting the musculoskeletal and the cardiovascular system, and covered only a few exemplary
papers targeting other organ systems. The review shows a research sub-domain still in its infancy,
where causal confirmation papers remain the most common.
Mechanism for modulation of gating of connexin26-containing channels by taurine
The mechanisms of action of endogenous modulatory ligands of connexin channels are largely unknown. Previous work showed that protonated aminosulfonates (AS), notably taurine, directly and reversibly inhibit homomeric and heteromeric channels that contain Cx26, a widely distributed connexin, but not homomeric Cx32 channels. The present study investigated the molecular mechanisms of connexin channel modulation by taurine, using hemichannels and junctional channels composed of Cx26 (homomeric) and Cx26/Cx32 (heteromeric). The addition of a 28-amino acid "tag" to the carboxyl-terminal domain (CT) of Cx26 (Cx26T) eliminated taurine sensitivity of homomeric and heteromeric hemichannels in cells and liposomes. Cleavage of all but four residues of the tag (Cx26Tc) resulted in taurine-induced pore narrowing in homomeric hemichannels, and restored taurine inhibition of heteromeric hemichannels (Cx26Tc/Cx32). Taurine actions on junctional channels were fully consistent with those on hemichannels. Taurine-induced inhibition of Cx26/Cx32T and nontagged Cx26 junctional channels was blocked by extracellular HEPES, a blocker of the taurine transporter, confirming that the taurine-sensitive site of Cx26 is cytoplasmic. Nuclear magnetic resonance of peptides corresponding to Cx26 cytoplasmic domains showed that taurine binds to the cytoplasmic loop (CL) and not the CT, and that the CT and CL directly interact. ELISA showed that taurine disrupts a pH-dependent interaction between the CT and the CT-proximal half of the CL. These studies reveal that AS disrupt a pH-driven cytoplasmic interdomain interaction in Cx26-containing channels, causing closure, and that the Cx26CT has a modulatory role in Cx26 function.
Global solutions to regional problems: Collecting global expertise to address the problem of harmful cyanobacterial blooms. A Lake Erie case study
In early August 2014, the municipality of Toledo, OH (USA) issued a "do not drink" advisory on their water supply, directly affecting over 400,000 residential customers and hundreds of businesses (Wilson, 2014). This order was attributable to levels of microcystin, a potent liver toxin, which rose to 2.5 μg L−1 in finished drinking water. The Toledo crisis afforded an opportunity to bring together scientists from around the world to share ideas regarding factors that contribute to bloom formation and toxigenicity, bloom and toxin detection, as well as prevention and remediation of bloom events. These discussions took place at an NSF- and NOAA-sponsored workshop at Bowling Green State University on April 13 and 14, 2015. In all, more than 100 attendees from six countries and 15 US states gathered together to share their perspectives. The purpose of this review is to present the consensus summary of these issues that emerged from discussions at the Workshop. As additional reports in this special issue provide detailed reviews on many major cyanobacterial harmful algal bloom (CHAB) species, this paper focuses on the general themes common to all blooms, such as bloom detection, modeling, nutrient loading, and strategies to reduce nutrients.
Assessing EEG neuroimaging with machine learning
Neuroimaging techniques can give novel insights into the nature of human cognition.
We do not wish only to label patterns of activity as potentially associated with a
cognitive process, but also to probe this in detail, so as to better examine how it may
inform mechanistic theories of cognition. A possible approach towards this goal is to
extend EEG 'brain-computer interface' (BCI) tools - where motor movement intent is
classified from brain activity - to also investigate visual cognition experiments.
We hypothesised that, building on BCI techniques, information from visual object
tasks could be classified from EEG data. This could allow novel experimental designs
to probe visual information processing in the brain. This can be tested and falsified by
application of machine learning algorithms to EEG data from a visual experiment, and
quantified by scoring the accuracy at which trials can be correctly classified.
Further, we hypothesise that ICA can be used for source-separation of EEG data to
produce putative activity patterns associated with visual process mechanisms. Detailed
profiling of these ICA sources could be informative about the nature of visual cognition in
a way that is not accessible through other means. While ICA has been used previously
in removing 'noise' from EEG data, profiling the relation of common ICA sources to
cognitive processing appears less well explored. This can be tested and falsified by using
ICA sources as training data for the machine learning, and quantified by scoring the
accuracy at which trials can be correctly classified using this data, while also comparing
this with the equivalent EEG data.
We find that machine learning techniques can classify the presence or absence of
visual stimuli at 85% accuracy (0.65 AUC) using a single optimised channel of EEG
data, and this improves to 87% (0.7 AUC) using data from an equivalent single ICA
source. We identify data from this ICA source at time period around 75-125 ms
post-stimuli presentation as greatly more informative in decoding the trial label. The
most informative ICA source is located in the central occipital region and typically has
prominent 10-12 Hz synchrony and a −5 μV ERP dip at around 100 ms. This appears to
be the best predictor of trial identity in our experiment.
With these findings, we then explore further experimental designs to investigate
ongoing visual attention and perception, attempting online classification of vision using
these techniques and IC sources. We discuss how these relate to standard EEG
landmarks such as the N170 and P300, and compare their use. With this thesis, we
explore this methodology of quantifying EEG neuroimaging data with machine learning
separation and classification and discuss how this can be used to investigate visual
cognition. We hope the greater information from EEG analyses, with the predictive power
of each ICA source quantified by machine learning, might give insight and constraints
for macro level models of visual cognition.
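The pipeline described above, ICA source separation followed by trial classification scored by AUC, can be sketched on synthetic data. The mixing model, the source-selection heuristic, and the classifier below are illustrative assumptions, not the thesis's actual analysis or recordings.

```python
# Illustrative sketch: unmix multichannel "EEG" with ICA, then classify
# trials from a single source and score AUC.  Signals are synthetic: a
# stimulus-locked component mixed into noisy channels.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 300, 8, 50

labels = rng.integers(0, 2, n_trials)            # stimulus present / absent
source = rng.normal(size=(n_trials, n_samples))  # one latent component
source[labels == 1] += np.hanning(n_samples)     # evoked "ERP" on present trials

# Mix the latent source into all channels with random weights plus noise
mixing = rng.normal(size=(n_channels, 1))
eeg = mixing * source[:, None, :] + 0.8 * rng.normal(size=(n_trials, n_channels, n_samples))

# Fit ICA across concatenated time points, then reshape back per trial
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.transpose(0, 2, 1).reshape(-1, n_channels))
sources = sources.reshape(n_trials, n_samples, n_channels)

# Crude heuristic: pick the source whose mean trace differs most between classes
diff = np.abs(sources[labels == 1].mean(0) - sources[labels == 0].mean(0)).sum(0)
best = int(np.argmax(diff))

X = sources[:, :, best]                          # single-source trial features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"single-ICA-source AUC: {auc:.2f}")
```

Note the source-selection step here peeks at all trials for simplicity; a faithful analysis would select the source on training data only to avoid leakage.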
Self-oscillation
Physicists are very familiar with forced and parametric resonance, but
usually not with self-oscillation, a property of certain dynamical systems that
gives rise to a great variety of vibrations, both useful and destructive. In a
self-oscillator, the driving force is controlled by the oscillation itself so
that it acts in phase with the velocity, causing a negative damping that feeds
energy into the vibration: no external rate needs to be adjusted to the
resonant frequency. The famous collapse of the Tacoma Narrows bridge in 1940,
often attributed by introductory physics texts to forced resonance, was
actually a self-oscillation, as was the swaying of the London Millennium
Footbridge in 2000. Clocks are self-oscillators, as are bowed and wind musical
instruments. The heart is a "relaxation oscillator," i.e., a non-sinusoidal
self-oscillator whose period is determined by sudden, nonlinear switching at
thresholds. We review the general criterion that determines whether a linear
system can self-oscillate. We then describe the limiting cycles of the simplest
nonlinear self-oscillators, as well as the ability of two or more coupled
self-oscillators to become spontaneously synchronized ("entrained"). We
characterize the operation of motors as self-oscillation and prove a theorem
about their limit efficiency, of which Carnot's theorem for heat engines
appears as a special case. We briefly discuss how self-oscillation applies to
servomechanisms, Cepheid variable stars, lasers, and the macroeconomic business
cycle, among other applications. Our emphasis throughout is on the energetics
of self-oscillation, often neglected by the literature on nonlinear dynamical
systems. Comment: 68 pages, 33 figures. v4: Typos fixed and other minor adjustments. To
appear in Physics Reports.
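The negative-damping mechanism this abstract describes can be illustrated numerically with the van der Pol equation, a textbook instance of the simplest nonlinear self-oscillator: for small |x| the damping term feeds energy in (the force acts in phase with the velocity), and the nonlinearity saturates the growth on a limit cycle. The parameter values and the simple Euler integrator are choices for illustration only.

```python
# Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0.
# A tiny perturbation grows via negative damping, then saturates on
# a limit cycle whose amplitude is close to 2.  Plain Euler stepping.
import numpy as np

mu, dt = 0.5, 1e-3
x, v = 0.01, 0.0                    # tiny initial displacement, no external forcing
amplitudes = []
for _ in range(int(100 / dt)):
    a = mu * (1 - x**2) * v - x     # damping term injects energy while |x| < 1
    x, v = x + v * dt, v + a * dt
    amplitudes.append(abs(x))

peak = max(amplitudes[-20000:])     # peak over the last ~3 oscillation periods
print(f"late-time peak amplitude: {peak:.2f}")
```

No drive frequency was tuned anywhere: the system selects its own period, which is the abstract's defining contrast with forced resonance.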
Unraveling hadron structure with generalized parton distributions
The generalized parton distributions, introduced nearly a decade ago, have
emerged as a universal tool to describe hadrons in terms of quark and gluonic
degrees of freedom. They combine the features of form factors, parton densities
and distribution amplitudes--the functions used for a long time in studies of
hadronic structure. Generalized parton distributions are analogous to the
phase-space Wigner quasi-probability function of non-relativistic quantum
mechanics which encodes full information on a quantum-mechanical system. We
give an extensive review of main achievements in the development of this
formalism. We discuss physical interpretation and basic properties of
generalized parton distributions, their modeling and QCD evolution in the
leading and next-to-leading orders. We describe how these functions enter a
wide class of exclusive reactions, such as electro- and photo-production of
photons, lepton pairs, or mesons. The theory of these processes requires and
implies full control over diverse corrections and thus we outline the progress
in handling higher-order and higher-twist effects. We catalogue corresponding
results and present diverse techniques for their derivations. Subsequently, we
address observables that are sensitive to different characteristics of the
nucleon structure in terms of generalized parton distributions. The ultimate
goal of the GPD approach is to provide a three-dimensional spatial picture of
the nucleon, direct measurement of the quark orbital angular momentum, and
various inter- and multi-parton correlations. Comment: 370 pages, 62 figures; dedicated to
Anatoly V. Efremov on the occasion of his 70th anniversary.
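The statement that GPDs combine the features of form factors and parton densities can be made concrete through the standard reduction relations, quoted here in common notation as a reminder rather than in the review's own conventions:

```latex
% Forward limit: the GPD H^q collapses to the ordinary quark density
H^{q}(x, 0, 0) = q(x),
% First moment in x: the Dirac form factor, independent of the skewness \xi
\int_{-1}^{1} dx \, H^{q}(x, \xi, t) = F_{1}^{q}(t),
% Ji's sum rule: total quark angular momentum from the second moment
J^{q} = \frac{1}{2} \int_{-1}^{1} dx \, x \left[ H^{q}(x, \xi, 0) + E^{q}(x, \xi, 0) \right].
```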