Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models

Amita Kamath, Jack Hessel, and Kai-Wei Chang, in arXiv, 2023.

Download the full text


Abstract

Performant vision-language (VL) models like CLIP represent captions using a single vector. How much information about language is lost in this bottleneck? We first curate CompPrompts, a set of increasingly compositional image captions that VL models should be able to capture (e.g., single object, to object+property, to multiple interacting objects). Then, we train text-only recovery probes that aim to reconstruct captions from single-vector text representations produced by several VL models. This approach doesn’t require images, allowing us to test on a broader range of scenes compared to prior work. We find that: 1) CLIP’s text encoder falls short on object relationships, attribute-object association, counting, and negations; 2) some text encoders work significantly better than others; and 3) text-only recovery performance predicts multi-modal matching performance on ControlledImCaps: a new evaluation benchmark we collect+release consisting of fine-grained compositional images+captions. Specifically, our results suggest text-only recoverability is a necessary (but not sufficient) condition for modeling compositional factors in contrastive vision+language models.
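
The probing setup described in the abstract can be sketched roughly as follows: freeze a VL model's text encoder, take its single-vector caption embedding, and train a small decoder to reconstruct the original caption from that vector alone. The snippet below is a minimal illustration of this idea using Hugging Face's CLIPModel; the GRU probe architecture, hyperparameters, and toy captions (standing in for CompPrompts) are assumptions for illustration, not the paper's actual configuration.

# Minimal sketch of a text-only recovery probe (hypothetical probe design;
# the paper's actual architecture and training setup may differ).
# Requires: torch, transformers.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

class RecoveryProbe(nn.Module):
    """GRU decoder that tries to reconstruct caption tokens from a single
    frozen CLIP text embedding (hypothetical probe design)."""
    def __init__(self, embed_dim, vocab_size, hidden=512):
        super().__init__()
        self.proj = nn.Linear(embed_dim, hidden)       # condition the decoder on the CLIP vector
        self.tok_embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, clip_vec, input_ids):
        h0 = self.proj(clip_vec).unsqueeze(0)          # (1, batch, hidden) initial state
        x = self.tok_embed(input_ids)                  # teacher-forced caption tokens
        y, _ = self.gru(x, h0)
        return self.out(y)                             # per-token vocabulary logits

probe = RecoveryProbe(clip.config.projection_dim, tok.vocab_size).to(device)
opt = torch.optim.Adam(probe.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss(ignore_index=tok.pad_token_id)

# Toy captions standing in for CompPrompts-style compositional prompts.
captions = ["a red cube on a blue sphere", "two dogs chasing one cat"]
batch = tok(captions, padding=True, return_tensors="pt").to(device)

with torch.no_grad():                                  # the VL text encoder stays frozen
    clip_vecs = clip.get_text_features(**batch)        # one vector per caption

opt.zero_grad()
logits = probe(clip_vecs, batch.input_ids[:, :-1])     # predict each next token
loss = loss_fn(logits.reshape(-1, tok.vocab_size),
               batch.input_ids[:, 1:].reshape(-1))
loss.backward()
opt.step()

Under this framing, how well the probe reconstructs a caption measures how much of its compositional content survives the single-vector bottleneck; different VL text encoders can then be compared by swapping in their embeddings.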


Bib Entry

@inproceedings{kamath2023textencoders,
  author = {Kamath, Amita and Hessel, Jack and Chang, Kai-Wei},
  title = {Text Encoders are Performance Bottlenecks in Contrastive Vision-Language Models},
  booktitle = {arXiv},
  year = {2023}
}

Related Publications