239 research outputs found
Evolution and Classification of Myosins, a Paneukaryotic Whole-Genome Approach
notes: PubMed ID: 24443438. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Myosins are key components of the eukaryotic cytoskeleton, providing motility for a broad diversity of cargoes. Therefore, understanding the origin and evolutionary history of myosin classes is crucial to address the evolution of eukaryote cell biology. Here, we revise the classification of myosins using an updated taxon sampling that includes newly or recently sequenced genomes and transcriptomes from key taxa. We performed a survey of eukaryotic genomes and phylogenetic analyses of the myosin gene family, reconstructing the myosin toolkit at different key nodes in the eukaryotic tree of life. We also identified the phylogenetic distribution of myosin diversity in terms of number of genes, associated protein domains, and number of classes in each taxon. Our analyses show that new classes (i.e., paralogs) and domain architectures were continuously generated throughout eukaryote evolution, with a significant expansion of myosin abundance and domain architectural diversity at the stem of Holozoa, predating the origin of animal multicellularity. Indeed, single-celled holozoans have the most complex myosin complement among eukaryotes, with paralogs of most myosins previously considered animal-specific. We recover a dynamic evolutionary history, with several lineage-specific expansions (e.g., the myosin III-like gene family diversification in choanoflagellates), convergence in protein domain architectures (e.g., fungal and animal chitin synthase myosins), and important secondary losses. Overall, our evolutionary scheme demonstrates that the ancestral eukaryote likely had a complex myosin repertoire that included six genes with different protein domain architectures.
Finally, we provide an integrative and robust classification, useful for future genomic and functional studies on this crucial eukaryotic gene family.
Funding: Leverhulme, BBSRC, ERC, MINECO, NERC, Gordon and Betty Moore Foundation.
Low-Spin Spectroscopy of 50Mn
The data on low-spin states in the odd-odd nucleus 50Mn, investigated with the 50Cr(p,nγ)50Mn fusion-evaporation reaction at the FN-TANDEM accelerator in Cologne, are reported. Shell-model and collective rotational-model interpretations of the data are given.
Comment: 7 pages, 2 figures; to be published in the proceedings of the "Bologna 2000 - Structure of the Nucleus at the Dawn of the Century" Conference (Bologna, Italy, May 29 - June 3, 2000).
Endothelial-Mesenchymal Transition of Brain Endothelial Cells: Possible Role during Metastatic Extravasation
Cancer progression towards metastasis follows a defined sequence of events described as the metastatic cascade. For extravasation and transendothelial migration, metastatic cells interact first with endothelial cells. Yet the role of endothelial cells during the process of metastasis formation and extravasation is still unclear, and the interaction between metastatic and endothelial cells during transendothelial migration is poorly understood. Since tumor cells are well known to express TGF-beta, and the compact endothelial layer undergoes a series of changes during metastatic extravasation (cell contact disruption, cytoskeletal reorganization, enhanced contractility), we hypothesized that an endothelial-mesenchymal transition (EndMT) may be necessary for metastatic extravasation. We demonstrate that primary cultured rat brain endothelial cells (BECs) undergo EndMT upon TGF-beta 1 treatment, characterized by the loss of tight and adherens junction proteins and the expression of fibronectin, beta 1-integrin, calponin and alpha-smooth muscle actin (SMA). Conditioned and activated medium (ACM) from the B16/F10 cell line had similar effects: claudin-5 down-regulation and fibronectin and SMA expression. Inhibition of TGF-beta signaling during B16/F10 ACM stimulation using SB-431542 maintained claudin-5 levels and mitigated fibronectin and SMA expression. B16/F10 ACM stimulation of BECs led to phosphorylation of Smad2 and Smad3. SB-431542 prevented SMA up-regulation upon stimulation of BECs with A2058, MCF-7 and MDA-MB231 ACM as well. Moreover, B16/F10 ACM caused a reduction in trans-endothelial electrical resistance and enhanced the number of melanoma cells adhering to and transmigrating through the endothelial layer, in a TGF-beta-dependent manner. These effects were not confined to BECs: HUVECs showed TGF-beta-dependent SMA expression when stimulated with breast cancer cell line ACM.
Our results indicate that an EndMT may be necessary for metastatic transendothelial migration, and this transition may be one of the potential mechanisms occurring during the complex phenomenon known as metastatic extravasation.
CLIPCleaner: Cleaning Noisy Labels with CLIP
Learning with Noisy Labels (LNL) poses a significant challenge for the machine learning community. Some of the most widely used approaches, which select as clean those samples for which the model itself (the in-training model) has high confidence (e.g., 'small loss'), can suffer from so-called 'self-confirmation' bias. This bias arises because the in-training model is at least partially trained on the noisy labels. Furthermore, in the classification case, an additional challenge arises because some of the label noise occurs between classes that are visually very similar ('hard noise'). This paper addresses these challenges by proposing a method (CLIPCleaner) that leverages CLIP, a powerful Vision-Language (VL) model, to construct a zero-shot classifier for efficient, offline, clean sample selection. This has the advantage that the sample selection is decoupled from the in-training model and is aware of the semantic and visual similarities between the classes, due to the way that CLIP is trained. We provide theoretical justifications and empirical evidence to demonstrate the advantages of CLIP for LNL compared to conventional pre-trained models. Compared to current methods that combine iterative sample selection with various techniques, CLIPCleaner offers a simple, single-step approach that achieves competitive or superior performance on benchmark datasets. To the best of our knowledge, this is the first time a VL model has been used for sample selection to address the problem of LNL, highlighting the potential of such models in the domain.
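The zero-shot selection step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: it assumes precomputed, L2-normalized CLIP image and class-prompt embeddings, and the function name and threshold are hypothetical.

```python
import numpy as np

def select_clean_samples(image_emb, text_emb, noisy_labels, threshold=0.5):
    """Zero-shot clean-sample selection (illustrative sketch).

    image_emb:    (N, D) L2-normalized image embeddings
    text_emb:     (C, D) L2-normalized class-prompt embeddings
                  (e.g., from prompts like "a photo of a <class>")
    noisy_labels: (N,) integer labels, possibly noisy
    Returns a boolean mask marking samples whose zero-shot posterior for
    their given label exceeds `threshold`.
    """
    # CLIP-style scaled cosine similarity, then softmax over classes
    logits = 100.0 * image_emb @ text_emb.T
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Keep samples where the zero-shot classifier agrees with the noisy label
    return probs[np.arange(len(noisy_labels)), noisy_labels] > threshold
```

Because the selection uses only frozen CLIP embeddings, it is independent of the in-training model, which is exactly what breaks the self-confirmation loop described above.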
Intrinsically determined cell death of developing cortical interneurons
Cortical inhibitory circuits are formed by GABAergic interneurons, a cell population that originates far from the cerebral cortex in the embryonic ventral forebrain. Given their distant developmental origins, it is intriguing how the number of cortical interneurons is ultimately determined. One possibility, suggested by the neurotrophic hypothesis [1-5], is that cortical interneurons are overproduced, and then, following their migration into cortex, excess interneurons are eliminated through a competition for extrinsically derived trophic signals. Here we have characterized the developmental cell death of mouse cortical interneurons in vivo, in vitro, and following transplantation. We found that 40% of developing cortical interneurons were eliminated through Bax- (Bcl-2 associated X-) dependent apoptosis during postnatal life. When cultured in vitro or transplanted into the cortex, interneuron precursors died at a cellular age similar to that at which endogenous interneurons died during normal development. Remarkably, over transplant sizes that varied 200-fold, a constant fraction of the transplanted population underwent cell death. The death of transplanted neurons was not affected by the cell-autonomous disruption of TrkB (tropomyosin receptor kinase B), the main neurotrophin receptor expressed by central nervous system (CNS) neurons [6-8]. Transplantation expanded the cortical interneuron population by up to 35%, but the frequency of inhibitory synaptic events did not scale with the number of transplanted interneurons. Together, our findings indicate that interneuron cell death is intrinsically determined, either cell-autonomously or through a population-autonomous competition for survival signals derived from other interneurons.
Attribute-Preserving Face Dataset Anonymization via Latent Code Optimization
This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the privacy of those depicted is not violated, while at the same time the dataset remains useful for downstream tasks such as training machine learning models. To the best of our knowledge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pretrained GAN. By optimizing the latent codes directly, we ensure both that the identity is a desired distance away from the original (with an identity obfuscation loss), whilst preserving the facial attributes (using a novel feature-matching loss in FaRL's [48] deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes. We make the code and the pretrained models publicly available at: https://github.com/chi0tzp/FALCO
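The two loss terms described above (identity obfuscation plus attribute-preserving feature matching) might be combined roughly as in the sketch below. All names, the hinge formulation, and the weights are illustrative assumptions operating on precomputed embeddings, not the paper's actual objective.

```python
import numpy as np

def anonymization_loss(z_feat, orig_feat, z_id, orig_id, margin=0.6, lam=1.0):
    """Illustrative combined loss on precomputed embeddings.

    z_feat, orig_feat: deep attribute features (FaRL-like) of the
                       anonymized and original image
    z_id, orig_id:     face-recognition identity embeddings
    margin:            desired identity distance (hypothetical value)
    lam:               weight of the feature-matching term
    """
    # Identity obfuscation: hinge on cosine similarity, penalizing any
    # remaining similarity above (1 - margin)
    cos_id = z_id @ orig_id / (np.linalg.norm(z_id) * np.linalg.norm(orig_id))
    id_loss = max(0.0, cos_id - (1.0 - margin))
    # Feature matching: keep attribute features close to the original's
    attr_loss = np.mean((z_feat - orig_feat) ** 2)
    return id_loss + lam * attr_loss
```

In the actual method this scalar would be minimized with respect to the GAN latent code by gradient descent; the sketch only shows how the two competing objectives are balanced.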
AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups
We present AMIGOS: a dataset for Multimodal research of affect, personality traits and mood on Individuals and GrOupS. Unlike other databases, we elicited affect using both short and long videos in two social contexts, one with individual viewers and one with groups of viewers. The database allows the multimodal study of people's affective responses, by means of neurophysiological signals, and their relation with personality, mood, social context and stimulus duration. The data were collected in two experimental settings. In the first, 40 participants watched 16 short emotional videos. In the second, they watched 4 long videos, some of them alone and the rest in groups. The participants' signals, namely electroencephalogram (EEG), electrocardiogram (ECG) and galvanic skin response (GSR), were recorded using wearable sensors. Participants' frontal HD video and both RGB and depth full-body videos were also recorded. Participants' emotions have been annotated with both self-assessment of affective levels (valence, arousal, control, familiarity, liking and basic emotions) and external assessment of valence and arousal. We present a detailed correlation analysis of the different dimensions as well as baseline methods and results for single-trial classification of valence, arousal, personality traits, mood and social context. The database is publicly available.
Improving Fairness using Vision-Language Driven Image Augmentation
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain. Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes (downstream tasks), resulting in biases which do not correspond to reality. It is common knowledge that these correlations are present in the data and are then transferred to the models during training (e.g., [35]). This paper proposes a method to mitigate these correlations to improve fairness. To do so, we learn interpretable and meaningful paths lying in the semantic space of a pre-trained diffusion model (DiffAE) [27], with such paths supervised by contrastive text dipoles. That is, we learn to edit protected characteristics (age and skin color). These paths are then applied to augment images to improve the fairness of a given dataset. We test the proposed method on CelebA-HQ and UTKFace on several downstream tasks with age and skin color as protected characteristics. As a proxy for fairness, we compute the difference in accuracy with respect to the protected characteristics. Quantitative results show how the augmented images help the model improve the overall accuracy, the aforementioned metric, and the disparity of equal opportunity. Code is available at: https://github.com/Moreno98/Vision-Language-Bias-Control
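The fairness proxy mentioned in the abstract, the difference in downstream accuracy across a protected characteristic, is simple to compute. The sketch below is an illustrative implementation for a binary protected group; the function name is hypothetical.

```python
import numpy as np

def accuracy_gap(preds, labels, group):
    """Absolute difference in downstream-task accuracy between the two
    groups of a binary protected characteristic (0 or 1 per sample).
    Lower is fairer under this proxy."""
    preds, labels, group = map(np.asarray, (preds, labels, group))
    acc = lambda mask: (preds[mask] == labels[mask]).mean()
    return abs(acc(group == 0) - acc(group == 1))
```

A fair model under this proxy would drive the gap toward zero while keeping overall accuracy high, which is the trade-off the augmentation method targets.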
