Binary Image Selection (BISON):
Interpretable Evaluation of Visual Grounding
Abstract
Providing systems with the ability to relate linguistic and visual content is one of the hallmarks of computer vision. Tasks such as image captioning and retrieval were designed to test this ability, but come with complex evaluation measures that gauge various other abilities and biases simultaneously. This paper presents an alternative evaluation task for visual-grounding systems: given a caption, the system is asked to select the image that best matches the caption from a pair of semantically similar images. The system's accuracy on this Binary Image SelectiON (BISON) task is not only interpretable, but also measures the ability to relate fine-grained text content in the caption to visual content in the images. We gathered a BISON dataset that complements the COCO Captions dataset and used this dataset in auxiliary evaluations of captioning and caption-based retrieval systems. While captioning measures suggest visual-grounding systems outperform humans, BISON shows that these systems are still far away from human performance.
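The official evaluation script is available under Downloads below. Purely for illustration, the following is a minimal sketch of how BISON accuracy can be computed, assuming a user-supplied scoring function score(caption, image_path) and hypothetical annotation fields ("caption", "image_candidates", "true_image_index"); the field names in the released annotations may differ.

# Minimal sketch of BISON-style evaluation (not the official evaluation code).
# Assumes each annotation lists a caption, two candidate images, and the index
# of the image the caption was written for; field names here are hypothetical.

import json
from typing import Callable

def bison_accuracy(
    annotation_file: str,
    score: Callable[[str, str], float],  # score(caption, image_path) -> relevance
) -> float:
    """Fraction of examples where the true image receives the higher score."""
    with open(annotation_file) as f:
        examples = json.load(f)

    correct = 0
    for ex in examples:
        caption = ex["caption"]           # hypothetical field name
        images = ex["image_candidates"]   # hypothetical: list of two image paths
        target = ex["true_image_index"]   # hypothetical: 0 or 1
        scores = [score(caption, img) for img in images]
        # The system selects the image it scores higher against the caption.
        predicted = max(range(len(scores)), key=scores.__getitem__)
        correct += int(predicted == target)

    return correct / len(examples)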
Downloads
[ Validation Data ]
[ Evaluation Code ]
Paper
@article{hexiang2018bison,
     title={{Binary Image Selection (BISON): Interpretable Evaluation of Visual Grounding}},
     author={Hu, Hexiang and Misra, Ishan and van der Maaten, Laurens},
     journal={arXiv preprint arXiv:1901.06595},
     year={2019}
}
Acknowledgement
This work was performed during Hexiang Hu's summer internship at Facebook AI Research.