Self Supervision Does Not Help Natural Language Supervision at Scale
Self-supervision and natural language supervision have emerged as two exciting ways to train general-purpose image encoders that excel at a variety of downstream tasks. Recent works such as M3AE and SLIP have suggested that these approaches can be effectively combined, but, notably, their results use small pre-training datasets (<50M samples) and do not reflect the large-scale regime (>100M examples) that is commonly used for these approaches. Here we investigate whether a similar approach can be effective when trained with a much larger amount of data. We find that a combination of two state-of-the-art approaches, masked auto-encoders (MAE) and contrastive language-image pre-training (CLIP), provides a benefit over CLIP when trained on a corpus of 11.3M image-text pairs, but little to no benefit (as evaluated on a suite of common vision tasks) over CLIP when trained on a large corpus of 1.4B images. Our work provides some much-needed clarity into the effectiveness (or lack thereof) of self-supervision for large-scale image-text training.
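As a concrete illustration of how such a combination can be set up, here is a minimal sketch of a joint objective that adds an MAE-style reconstruction loss to a CLIP-style contrastive loss. The function names, temperature, and weighting below are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE loss over normalized image/text embeddings, as in CLIP.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def combined_loss(image_emb, text_emb, recon_patches, target_patches,
                  mae_weight=1.0):
    # MAE term: mean-squared error on the reconstructed (masked) patches,
    # added to the contrastive term with an illustrative weight.
    mae_loss = F.mse_loss(recon_patches, target_patches)
    return clip_contrastive_loss(image_emb, text_emb) + mae_weight * mae_loss
```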
Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs
Recent advancements in multimodal large language models (MLLMs) have been
noteworthy, yet these general-domain MLLMs often fall short in their ability
to comprehend and interact effectively with user interface (UI) screens. In
this paper, we present Ferret-UI, a new MLLM tailored for enhanced
understanding of mobile UI screens, equipped with referring, grounding, and
reasoning capabilities. Given that UI screens typically exhibit a more
elongated aspect ratio and contain smaller objects of interest (e.g., icons,
texts) than natural images, we incorporate "any resolution" on top of Ferret to
magnify details and leverage enhanced visual features. Specifically, each
screen is divided into 2 sub-images based on the original aspect ratio (i.e.,
horizontal division for portrait screens and vertical division for landscape
screens). Both sub-images are encoded separately before being sent to LLMs. We
meticulously gather training samples from an extensive range of elementary UI
tasks, such as icon recognition, text finding, and widget listing. These samples
are formatted for instruction-following with region annotations to facilitate
precise referring and grounding. To augment the model's reasoning ability, we
further compile a dataset for advanced tasks, including detailed description,
perception/interaction conversations, and function inference. After training on
the curated datasets, Ferret-UI exhibits outstanding comprehension of UI
screens and the capability to execute open-ended instructions. For model
evaluation, we establish a comprehensive benchmark encompassing all the
aforementioned tasks. Ferret-UI not only excels beyond most open-source UI
MLLMs, but also surpasses GPT-4V on all the elementary UI tasks.
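The aspect-ratio-based split described above is simple to state precisely. Below is a minimal sketch of it; the function name and the use of PIL are illustrative choices, not Ferret-UI's actual code.

```python
from PIL import Image

def split_screen(screen: Image.Image):
    # Portrait screens are divided horizontally (top/bottom); landscape
    # screens vertically (left/right). Each sub-image is then encoded
    # separately, per the abstract.
    w, h = screen.size
    if h >= w:  # portrait: horizontal division
        return [screen.crop((0, 0, w, h // 2)),
                screen.crop((0, h // 2, w, h))]
    else:       # landscape: vertical division
        return [screen.crop((0, 0, w // 2, h)),
                screen.crop((w // 2, 0, w, h))]
```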
Mobile V-MoEs: Scaling Down Vision Transformers via Sparse Mixture-of-Experts
Sparse Mixture-of-Experts models (MoEs) have recently gained popularity due
to their ability to decouple model size from inference efficiency by only
activating a small subset of the model parameters for any given input token. As
such, sparse MoEs have enabled unprecedented scalability, resulting in
tremendous successes across domains such as natural language processing and
computer vision. In this work, we instead explore the use of sparse MoEs to
scale down Vision Transformers (ViTs) to make them more attractive for
resource-constrained vision applications. To this end, we propose a simplified
and mobile-friendly MoE design where entire images rather than individual
patches are routed to the experts. We also propose a stable MoE training
procedure that uses super-class information to guide the router. We empirically
show that our sparse Mobile Vision MoEs (V-MoEs) can achieve a better trade-off
between performance and efficiency than the corresponding dense ViTs. For
example, for the ViT-Tiny model, our Mobile V-MoE outperforms its dense
counterpart by 3.39% on ImageNet-1k. For an even smaller ViT variant with only
54M FLOPs inference cost, our MoE achieves an improvement of 4.66%.
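To make the per-image routing idea concrete, here is a toy PyTorch sketch in which the router scores experts from a pooled image representation and the whole image is dispatched to the top-scoring expert(s). The super-class-guided router training is omitted, and all names and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerImageMoE(nn.Module):
    """Routes each whole image (not each patch) to its top-scoring experts."""

    def __init__(self, dim: int, num_experts: int, top_k: int = 1):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim); one routing decision per image.
        pooled = tokens.mean(dim=1)                    # (batch, dim)
        weights = self.router(pooled).softmax(dim=-1)  # (batch, num_experts)
        top_w, top_i = weights.topk(self.top_k, dim=-1)
        out = torch.zeros_like(tokens)
        for b in range(tokens.size(0)):
            for w, i in zip(top_w[b], top_i[b]):
                out[b] = out[b] + w * self.experts[int(i)](tokens[b])
        return out
```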
MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training
In this work, we discuss building performant Multimodal Large Language Models
(MLLMs). In particular, we study the importance of various architecture
components and data choices. Through careful and comprehensive ablations of the
image encoder, the vision language connector, and various pre-training data
choices, we identify several crucial design lessons. For example, we
demonstrate that, for large-scale multimodal pre-training, using a careful mix
of image-caption, interleaved image-text, and text-only data is crucial for
achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks
compared to other published pre-training results. Further, we show that the
image encoder, together with the image resolution and the image token count,
has a substantial impact, while the vision-language connector design is of
comparatively negligible importance. By scaling up the presented recipe, we
build MM1, a family of multimodal models up to 30B parameters, including both
dense models and mixture-of-experts (MoE) variants, that are SOTA in
pre-training metrics and achieve competitive performance after supervised
fine-tuning on a range of established multimodal benchmarks. Thanks to
large-scale pre-training, MM1 enjoys appealing properties such as enhanced
in-context learning and multi-image reasoning, enabling few-shot
chain-of-thought prompting.
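As a minimal illustration of drawing pre-training examples from such a mixture, the sketch below samples a data source per example. The mixture weights are placeholders, not MM1's published ratios.

```python
import random

# Placeholder mixture weights: the abstract says the mix of these three data
# types matters, but the exact ratios here are illustrative assumptions.
MIXTURE_WEIGHTS = {
    "image_caption": 0.45,
    "interleaved_image_text": 0.45,
    "text_only": 0.10,
}

def sample_data_source() -> str:
    # Draw which data source the next pre-training example is taken from.
    names = list(MIXTURE_WEIGHTS)
    return random.choices(names,
                          weights=list(MIXTURE_WEIGHTS.values()), k=1)[0]
```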
Bias in Automated Image Colorization: Metrics and Error Types
We measure the color shifts present in colorized images from the ADE20K dataset when they are colorized by the automatic GAN-based DeOldify model. We introduce fine-grained local and regional bias measurements between the original and the colorized images, and observe several systematic colorization effects. We confirm a general desaturation effect and also provide novel observations: a shift towards the training average, a pervasive blue shift, different color shifts among image categories, and a manual categorization of colorization errors into three classes.
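As one concrete example of a simple global shift measurement in this spirit, the sketch below computes the mean per-channel difference between an original image and its colorized counterpart in CIELAB space, where a negative shift in the b channel indicates a move toward blue. This is an illustration, not the paper's exact metric, which is finer-grained (local and regional).

```python
import numpy as np
from skimage import color

def mean_lab_shift(original_rgb: np.ndarray,
                   colorized_rgb: np.ndarray) -> np.ndarray:
    # Inputs: HxWx3 RGB arrays. Returns the mean (dL, da, db) shift as one
    # coarse summary statistic of the colorization's global color bias.
    lab_orig = color.rgb2lab(original_rgb)
    lab_col = color.rgb2lab(colorized_rgb)
    return (lab_col - lab_orig).reshape(-1, 3).mean(axis=0)
```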
Apple Intelligence Foundation Language Models
We present foundation language models developed to power Apple Intelligence
features, including a ~3 billion parameter model designed to run efficiently on
devices and a large server-based language model designed for Private Cloud
Compute. These models are designed to perform a wide range of tasks
efficiently, accurately, and responsibly. This report describes the model
architecture, the data used to train the model, the training process, how the
models are optimized for inference, and the evaluation results. We highlight
our focus on Responsible AI and how the principles are applied throughout the
model development.
