Breeding for quantitative variables. Part 4: Breeding for nutritional quality traits
Yusuf Genc, Julia M. Humphries, Graham H. Lyons and Robin D. Graham
http://www.fao.org/docrep/012/i1070e/i1070e00.ht
MCTP is an ER-resident calcium sensor that stabilizes synaptic transmission and homeostatic plasticity.
Presynaptic homeostatic plasticity (PHP) controls synaptic transmission in organisms from Drosophila to human and is hypothesized to be relevant to the cause of human disease. However, the underlying molecular mechanisms of PHP are just emerging, and direct disease associations remain obscure. In a forward genetic screen for mutations that block PHP, we identified mctp (Multiple C2 Domain Proteins with Two Transmembrane Regions). Here we show that MCTP localizes to the membranes of the endoplasmic reticulum (ER) that elaborate throughout the soma, dendrites, axon and presynaptic terminal. We then demonstrate that MCTP functions downstream of presynaptic calcium influx, with separable activities that stabilize baseline transmission, short-term release dynamics and PHP. Notably, PHP specifically requires the calcium-coordinating residues in each of the three C2 domains of MCTP. Thus, we propose MCTP as a novel, ER-localized calcium sensor and a source of calcium-dependent feedback for the homeostatic stabilization of neurotransmission.
Universal localisations and tilting modules for finite dimensional algebras
We study universal localisations, in the sense of Cohn and Schofield, for
finite dimensional algebras and classify them by certain subcategories of our
initial module category. A complete classification is presented in the
hereditary case as well as for Nakayama algebras and local algebras.
Furthermore, for hereditary algebras, we establish a correspondence between
finite dimensional universal localisations and finitely generated support
tilting modules. In the Nakayama case, we get a similar result using
τ-tilting modules, which were recently introduced by Adachi, Iyama and
Reiten.
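For orientation, the underlying notion can be recalled as follows (this is the standard definition due to Cohn and Schofield, stated here for convenience rather than quoted from the paper). Given a set Σ of morphisms between finitely generated projective A-modules, the universal localisation of A at Σ is a ring homomorphism f : A → A_Σ such that
\[
A_\Sigma \otimes_A \sigma \ \text{is an isomorphism for every } \sigma \in \Sigma,
\]
and every ring homomorphism g : A → B for which B ⊗_A σ is invertible for all σ ∈ Σ factors uniquely through f.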
GAN-based Virtual Re-Staining: A Promising Solution for Whole Slide Image Analysis
Histopathological cancer diagnosis is based on visual examination of stained
tissue slides. Hematoxylin and eosin (H&E) is a standard stain routinely
employed worldwide. It is easy to acquire and cost-effective, but cells and
tissue components show low contrast in varying tones of dark blue and pink,
which makes visual assessment, digital image analysis, and quantification
difficult. These limitations can be overcome by immunohistochemical (IHC)
staining of target proteins on the tissue slide. IHC provides selective,
high-contrast imaging of cells and tissue components, but its use is largely
limited by significantly more complex laboratory processing and high cost. We
propose a conditional CycleGAN (cCGAN) network to transform H&E-stained images
into IHC-stained images, facilitating virtual IHC staining on the same slide.
This data-driven method requires only a limited amount of labelled data but
generates pixel-level segmentation results. The proposed cCGAN model improves
on the original network (Zhu et al., 2017) by adding category conditions and
introducing two structural loss functions, which realize a multi-subdomain
translation and improve translation accuracy. Experiments demonstrate that the
proposed model outperforms the original method in unpaired image translation
with multiple subdomains. We also explore the potential of the unpaired
image-to-image translation method applied to other histology image tasks with
different staining techniques.
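A minimal sketch of the kind of conditioning the abstract describes, assuming a StarGAN-style one-hot category map concatenated to the generator input and a gradient-based structural term; all names, shapes, and the specific loss are illustrative assumptions, not the authors' code (the abstract does not specify the two structural losses):

    # Sketch: category-conditioned CycleGAN generator input plus one
    # plausible structural loss term. Illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionedGenerator(nn.Module):
        """Generator whose input is an image concatenated with a one-hot
        category map (one channel per stain subdomain)."""
        def __init__(self, in_ch=3, n_categories=4, out_ch=3):
            super().__init__()
            self.net = nn.Sequential(          # stand-in for a ResNet backbone
                nn.Conv2d(in_ch + n_categories, 64, 7, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, out_ch, 7, padding=3),
                nn.Tanh(),
            )

        def forward(self, x, category):
            # Broadcast the (B, K) one-hot label to a (B, K, H, W) map.
            b, _, h, w = x.shape
            cond = category.view(b, -1, 1, 1).expand(b, category.size(1), h, w)
            return self.net(torch.cat([x, cond], dim=1))

    def structural_loss(fake, real):
        """Penalize differences in local image gradients so tissue
        morphology is preserved across the stain translation."""
        def grads(img):
            gx = img[..., :, 1:] - img[..., :, :-1]
            gy = img[..., 1:, :] - img[..., :-1, :]
            return gx, gy
        fx, fy = grads(fake)
        rx, ry = grads(real)
        return F.l1_loss(fx, rx) + F.l1_loss(fy, ry)

In a training step this term would simply be added, with a weighting factor, to the standard CycleGAN adversarial and cycle-consistency losses.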
Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis
Synthesizing realistic faces across domains to train deep models has attracted
increasing attention in facial expression analysis, as it helps improve
expression recognition accuracy despite a small number of real training
images. However, learning from synthetic face images can be problematic due to
the distribution discrepancy between low-quality synthetic images and real
face images, and the learned model may not achieve the desired performance
when applied to real-world scenarios. To this end, we propose a new
attribute-guided face image synthesis method that performs translation between
multiple image domains using a single model. In addition, we adopt the
proposed model to learn from synthetic faces by matching the feature
distributions between different domains while preserving each domain's
characteristics. We evaluate the effectiveness of the proposed approach at
generating realistic face images on several face datasets. We demonstrate that
expression recognition performance can be enhanced by our face synthesis
model. Moreover, we also conduct experiments on a near-infrared dataset
containing facial expression videos of drivers to assess performance with
in-the-wild data for driver emotion recognition.
Comment: 8 pages, 8 figures, 5 tables, accepted by FG 2019. arXiv admin note:
substantial text overlap with arXiv:1905.0028
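A minimal sketch of feature-distribution matching between synthetic and real faces, using a maximum mean discrepancy (MMD) penalty; the abstract does not name the matching criterion, so MMD, the encoder, and the weighting are assumptions for illustration:

    # Sketch: align synthetic and real feature distributions with an MMD
    # penalty. Criterion and names are illustrative assumptions.
    import torch

    def rbf_mmd2(x, y, sigma=1.0):
        """Biased squared MMD between two feature batches of shape (B, D),
        using a single RBF kernel with bandwidth `sigma`."""
        def kernel(a, b):
            d2 = torch.cdist(a, b).pow(2)
            return torch.exp(-d2 / (2 * sigma ** 2))
        return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    # Usage inside a training step (encoder and batches are placeholders):
    # feats_syn  = encoder(synthetic_batch)
    # feats_real = encoder(real_batch)
    # loss = task_loss + lambda_mmd * rbf_mmd2(feats_syn, feats_real)

Minimizing such a term pulls the two feature distributions together while the task loss preserves the discriminative structure of each domain.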
Learn to synthesize and synthesize to learn
Attribute-guided face image synthesis aims to manipulate attributes on a face
image. Most existing methods for image-to-image translation can either perform
a fixed translation between any two image domains using a single attribute or
require training data with the attributes of interest for each subject. These
methods can therefore only train one specific model for each pair of image
domains, which limits their ability to deal with more than two domains.
Another disadvantage of these methods is that they often suffer from mode
collapse, which degrades the quality of the generated images. To overcome
these shortcomings, we propose an attribute-guided face image generation
method using a single model, capable of synthesizing multiple photo-realistic
face images conditioned on the attributes of interest. In addition, we adopt
the proposed model to increase the realism of simulated face images while
preserving the face characteristics. Compared to existing models, synthetic
face images generated by our method show good photorealistic quality on
several face datasets. Finally, we demonstrate that the generated facial
images can be used for synthetic data augmentation and improve the performance
of the classifier used for facial expression recognition.
Comment: Accepted to Computer Vision and Image Understanding (CVIU)
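A minimal sketch of the synthetic data augmentation step described above: generated, attribute-conditioned images are appended to the real training set before fitting the expression classifier. The generator API, `latent_dim` attribute, and mixing scheme are hypothetical placeholders, not the paper's implementation:

    # Sketch: augment an expression-recognition training set with generated
    # images. All names and the generator API are illustrative assumptions.
    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    def build_augmented_loader(real_ds, generator, class_attrs,
                               n_synth=1000, batch=64):
        """Append n_synth generated samples, conditioned on per-class
        expression attribute vectors, to the real dataset. `real_ds` is
        assumed to yield (image, label) pairs."""
        with torch.no_grad():
            z = torch.randn(n_synth, generator.latent_dim)   # assumed attribute
            idx = torch.randint(len(class_attrs), (n_synth,))
            images = generator(z, class_attrs[idx])          # assumed API
            labels = idx                                     # class index = label
        synth_ds = TensorDataset(images, labels)
        return DataLoader(ConcatDataset([real_ds, synth_ds]),
                          batch_size=batch, shuffle=True)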