49 research outputs found

    Human plasma protein N-glycosylation

    CD80 and PD-L2 define functionally distinct memory B cell subsets that are independent of antibody isotype.

    Memory B cells (MBCs) are long-lived sources of rapid, isotype-switched secondary antibody-forming cell (AFC) responses. Whether MBCs homogeneously retain the ability to self-renew and terminally differentiate, or whether these functions are compartmentalized into MBC subsets, has remained unclear. It has been suggested that antibody isotype controls MBC differentiation upon restimulation. Here we demonstrate that subcategorizing MBCs on the basis of their expression of CD80 and PD-L2, independently of isotype, identified MBC subsets with distinct functions upon rechallenge. CD80(+)PD-L2(+) MBCs differentiated rapidly into AFCs but did not generate germinal centers (GCs); conversely, CD80(-)PD-L2(-) MBCs generated few early AFCs but robustly seeded GCs. The gene-expression patterns of the subsets supported both the identity and function of these distinct MBC types. Hence, the differentiation and regeneration of MBCs are compartmentalized.

    Deep Neural Networks for Structured Data

    Learning machines for pattern recognition, such as neural networks or support vector machines, are usually conceived to process real-valued vectors of predefined dimensionality, even though in many real-world applications the relevant information is inherently organized into entities and relationships between them. Graph Neural Networks (GNNs), by contrast, can directly process structured data and guarantee universal approximation of many practically useful functions on graphs. GNNs, which do not strictly meet the definition of deep architectures, are based on an unfolding mechanism during learning that, in practice, yields networks with the same depth as the data structures they process. However, GNNs may be hindered by the long-term dependency problem, i.e., the difficulty of taking into account information coming from peripheral nodes of a graph, due to the local nature of the procedures that update the states and the weights. To overcome this limitation, GNNs may be cascaded to form layered architectures, called Layered GNNs (LGNNs). Each GNN in the cascade is trained on the original graph "enriched" with the information computed by the previous layer, implementing a form of incremental learning that takes progressively more distant information into account. The applicability of LGNNs is illustrated both on a classical graph-theory problem and on pattern recognition problems in bioinformatics.
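
    The abstract above describes the unfolding mechanism and the LGNN cascade only in prose; the following is a minimal NumPy sketch of the idea, assuming a tanh state update, a fixed number of unfolding iterations, and feature enrichment by concatenation. All function and variable names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def gnn_forward(adj, feats, w_self, w_neigh, iters=10):
            # One GNN: 'unfold' the state update for a fixed number of steps,
            # so the effective network depth follows the iteration count.
            #   adj     : (n, n) adjacency matrix of the input graph
            #   feats   : (n, d) node feature matrix
            #   w_self  : (d, h) weights on each node's own features
            #   w_neigh : (h, h) weights on aggregated neighbour states
            state = np.zeros((feats.shape[0], w_self.shape[1]))
            for _ in range(iters):
                # Each node combines its own features with its neighbours'
                # states; information travels one hop per iteration, which is
                # why peripheral nodes are hard to reach (the long-term
                # dependency problem mentioned in the abstract).
                state = np.tanh(feats @ w_self + adj @ state @ w_neigh)
            return state

        def lgnn_forward(adj, feats, layers):
            # Cascade of GNNs: layer k sees the original features "enriched"
            # (here, concatenated) with the states computed by layer k-1.
            enriched = feats
            state = None
            for w_self, w_neigh in layers:
                state = gnn_forward(adj, enriched, w_self, w_neigh)
                enriched = np.concatenate([feats, state], axis=1)
            return state

        # Toy usage: two cascaded layers; the second layer's w_self must
        # accept the enriched (d + h)-dimensional input.
        rng = np.random.default_rng(0)
        n, d, h = 5, 3, 4
        adj = (rng.random((n, n)) < 0.4).astype(float)
        feats = rng.standard_normal((n, d))
        layers = [
            (0.1 * rng.standard_normal((d, h)), 0.1 * rng.standard_normal((h, h))),
            (0.1 * rng.standard_normal((d + h, h)), 0.1 * rng.standard_normal((h, h))),
        ]
        states = lgnn_forward(adj, feats, layers)  # (n, h) final node states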