Role of Herpes Simplex Virus Type 1 (HSV-1) Glycoprotein K (gK) Pathogenic CD8+ T Cells in Exacerbation of Eye Disease
HSV-1-induced corneal scarring (CS), also broadly referred to as Herpes Stromal Keratitis (HSK), is the leading cause of infectious blindness in developed countries. It is well established that HSK is an immunopathological disease. The contribution of the potentially harmful T cell effectors that lead to CS remains an area of intense study. Although the HSV-1 gene(s) involved in eye disease are not yet known, we have demonstrated that gK, one of the 12 known HSV-1 glycoproteins, plays a crucial role in CS. Immunization of HSV-1-infected mice with gK, but not with any other known HSV-1 glycoprotein, significantly exacerbates CS and dermatitis. The gK-induced eye disease occurs independently of the virus strain or mouse strain. HSV-1 mutants that lack gK are unable to efficiently infect and establish latency in neurons. HSV-1 recombinant viruses expressing two additional copies of gK (three gK genes in total) exacerbated CS compared with the wild-type HSV-1 strain McKrae, which contains one copy of gK. Furthermore, we have shown that an 8mer (ITAYGLVL) within the signal sequence of gK enhanced CS in ocularly infected BALB/c mice, C57BL/6 mice, and NZW rabbits. In HSV-infected "humanized" HLA-A*0201 transgenic mice, this gK 8mer induced strong IFN-γ-producing cytotoxic CD8+ T cell responses. gK-induced CS depends on gK binding to signal peptide peptidase (SPP). gK also binds to HSV-1 UL20, while UL20 binds GODZ (DHHC3), and these interactions among the four proteins are required for gK-induced pathology. Thus, potential therapies might include blocking the gK-SPP, gK-UL20, or UL20-GODZ interactions, or a combination of these strategies.
Combined Scaling for Open-Vocabulary Image Classification
We present a combined scaling method, named BASIC, that achieves 85.7% top-1 accuracy on the ImageNet ILSVRC-2012 validation set without learning from any labeled ImageNet example. This accuracy surpasses the best published similar models, CLIP and ALIGN, by 9.3%. Our BASIC model also shows significant improvements on robustness benchmarks. For instance, on 5 test sets with natural distribution shifts, such as ImageNet-{A,R,V2,Sketch} and ObjectNet, our model achieves 84.3% top-1 average accuracy, only a small drop from its original ImageNet accuracy.
To achieve these results, we scale up the contrastive learning framework of CLIP and ALIGN in three dimensions: data size, model size, and batch size. Our dataset has 6.6B noisy image-text pairs, 4x larger than ALIGN's and 16x larger than CLIP's. Our largest model has 3B weights, 3.75x larger in parameters and 8x larger in FLOPs than ALIGN and CLIP. Finally, our batch size is 65,536, 2x larger than CLIP's and 4x larger than ALIGN's.
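For readers unfamiliar with the CLIP/ALIGN-style objective that BASIC scales up, the following is a minimal sketch of a symmetric image-text contrastive (InfoNCE) loss in JAX. The function and argument names (contrastive_loss, image_feats, text_feats, temperature) are illustrative assumptions, not taken from the paper; it shows why batch size matters here, since every other example in the batch serves as a negative.

    import jax
    import jax.numpy as jnp

    def contrastive_loss(image_feats, text_feats, temperature=0.07):
        """Symmetric InfoNCE loss over a batch of paired embeddings.

        image_feats, text_feats: [batch, dim] arrays, assumed L2-normalized.
        The i-th image and i-th text form the only positive pair; all other
        in-batch pairings act as negatives, so a larger batch supplies more
        negatives per update.
        """
        logits = image_feats @ text_feats.T / temperature  # [batch, batch]
        labels = jnp.arange(logits.shape[0])               # positives on diagonal
        loss_i2t = -jnp.mean(jax.nn.log_softmax(logits, axis=1)[labels, labels])
        loss_t2i = -jnp.mean(jax.nn.log_softmax(logits, axis=0)[labels, labels])
        return 0.5 * (loss_i2t + loss_t2i)

    # Toy usage with random, normalized embeddings:
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    img = jax.random.normal(k1, (8, 64))
    txt = jax.random.normal(k2, (8, 64))
    img = img / jnp.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / jnp.linalg.norm(txt, axis=1, keepdims=True)
    print(contrastive_loss(img, txt))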
We encountered two main challenges in scaling up BASIC. First, the main obstacle to implementing the combined scaling rules is the limited memory of accelerators such as GPUs and TPUs. To overcome the memory limit, we propose two simple methods that make use of gradient checkpointing and model parallelism. Second, while increasing the dataset size and the model size has been the de facto method for improving the performance of deep learning models like BASIC, the effect of a large contrastive batch size on such contrastive-trained image-text models is not well understood. To shed light on the benefits of large contrastive batch sizes, we develop a theoretical framework which shows that larger contrastive batch sizes lead to smaller generalization gaps for image-text models such as BASIC.
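As a rough illustration of the gradient-checkpointing idea mentioned above, here is a minimal JAX sketch using jax.checkpoint (also known as jax.remat): activations inside each wrapped block are recomputed during the backward pass instead of stored, trading compute for memory. The block structure and sizes are illustrative assumptions; the paper's actual implementation also combines this with model parallelism.

    import jax
    import jax.numpy as jnp

    def block(params, x):
        # Stand-in residual block; its intermediate activations are not
        # stored for the backward pass when wrapped in jax.checkpoint.
        w1, w2 = params
        return x + jnp.tanh(x @ w1) @ w2

    def forward(all_params, x):
        for params in all_params:
            # jax.checkpoint recomputes the block during backprop.
            x = jax.checkpoint(block)(params, x)
        return x

    def loss_fn(all_params, x):
        return jnp.mean(forward(all_params, x) ** 2)

    # Toy usage: 8 blocks of 64x64 weights.
    keys = jax.random.split(jax.random.PRNGKey(0), 8)
    all_params = [(0.02 * jax.random.normal(k, (64, 64)),
                   0.02 * jax.random.normal(k, (64, 64))) for k in keys]
    x = jnp.ones((4, 64))
    grads = jax.grad(loss_fn)(all_params, x)  # activations rematerialized here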