Training large vision-language models requires extensive, high-quality
image-text pairs. Existing web-scraped datasets, however, are noisy and lack
detailed image descriptions. To bridge this gap, we introduce PixelProse, a
comprehensive dataset of over 16 million synthetically generated captions,
leveraging cutting-edge vision-language models for detailed and accurate
descriptions. To ensure data integrity, we rigorously analyze our dataset for
problematic content, including child sexual abuse material (CSAM), personally
identifiable information (PII), and toxicity. We also provide valuable metadata
such as watermark presence and aesthetic scores, aiding in further dataset
filtering. We hope PixelProse will be a valuable resource for future
vision-language research. PixelProse is available at
https://huggingface.co/datasets/tomg-group-umd/pixelprose
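As a rough illustration of how the provided metadata might drive further filtering, the sketch below streams the dataset with the Hugging Face `datasets` library and keeps only records passing watermark and aesthetic thresholds. The column names (`watermark_class_id`, `aesthetic_score`, `url`) and the threshold values are assumptions for illustration and should be checked against the dataset card.

```python
# Sketch: filtering PixelProse by metadata with the Hugging Face
# `datasets` library. Column names ("watermark_class_id",
# "aesthetic_score", "url") are assumed for illustration; consult the
# dataset card for the actual schema before relying on them.
from datasets import load_dataset

# Stream to avoid downloading the full 16M-row dataset up front.
ds = load_dataset("tomg-group-umd/pixelprose", split="train", streaming=True)

def keep(example):
    # Keep images judged watermark-free with a reasonably high
    # aesthetic score (thresholds chosen arbitrarily for this sketch).
    return (example["watermark_class_id"] == 0
            and example["aesthetic_score"] >= 5.0)

filtered = ds.filter(keep)
for example in filtered.take(5):
    print(example["url"])
```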