Rewriting Image Captions for Visual Question Answering Data Creation


Visual Question Answering (VQA) is a useful machine learning (ML) task that requires a model to answer a visual question about an image. What makes it challenging is its multi-task and open-ended nature; it involves solving multiple technical research questions in computer vision and natural language understanding simultaneously. Yet, progress on this task would enable a wide range of applications, from assisting the blind and the visually impaired or communicating with robots to enhancing the user's visual experience with external knowledge.

Effective and robust VQA systems cannot exist without high-quality, semantically and stylistically diverse, large-scale training data of image-question-answer triplets. But creating such data is time consuming and laborious. Perhaps unsurprisingly, the VQA community has focused more on sophisticated model development than on scalable data creation.

In "All You May Need for VQA are Image Captions", published at NAACL 2022, we explore VQA data generation by proposing "Visual Question Generation with Question Answering Validation" (VQ2A), a pipeline that works by rewriting a declarative caption into multiple interrogative question-answer pairs. More specifically, we leverage two existing assets, (i) large-scale image-text data and (ii) large-capacity neural text-to-text models, to achieve automatic VQA data generation. As the field has progressed, the research community has been making these assets larger and stronger in isolation (for general purposes such as learning text-only or image-text representations); together, they can achieve more, and we adapt them for VQA data creation purposes. We find that our approach can generate question-answer pairs with high precision and that this data can successfully be used to train VQA models and improve performance.

The VQ2A technique enables VQA data generation at scale from image captions by rewriting each caption into multiple question-answer pairs.

VQ2A Overview
The first step of the VQ2A approach is to apply heuristics based on named entity recognition, part-of-speech tagging, and manually defined rules to generate answer candidates from the image caption. These generated candidates are small pieces of information that may be relevant subjects about which to ask questions. We also add to this list two default answers, "yes" and "no", which allow us to generate Boolean questions.
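To make the extraction step concrete, below is a minimal Python sketch of candidate answer extraction. The exact heuristics and rule set are not detailed in this post, so this version simply collects spaCy named entities and noun chunks from the caption and appends the two default Boolean answers; treat it as an illustration rather than the actual pipeline.

```python
# Minimal sketch of candidate answer extraction from a caption.
# The real pipeline uses NER, POS tagging, and manually defined rules;
# this approximation collects named entities and noun chunks with spaCy
# and appends the two default Boolean answers.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with NER and a parser

def extract_answer_candidates(caption: str) -> list[str]:
    doc = nlp(caption)
    candidates = set()
    # Named entities (people, places, numbers, ...) make natural answers.
    for ent in doc.ents:
        candidates.add(ent.text)
    # Noun chunks cover objects and attributes mentioned in the caption.
    for chunk in doc.noun_chunks:
        candidates.add(chunk.text)
    # Default answers enable Boolean (yes/no) question generation.
    candidates.update({"yes", "no"})
    return sorted(candidates)

print(extract_answer_candidates("A brown dog catches a frisbee in the park."))
```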

Then, we use a T5 model that was fine-tuned to generate a question for each candidate, resulting in [question, candidate answer] pairs. We then filter for the highest quality pairs using another T5 model (fine-tuned to answer questions) by asking it to answer the question based on the caption. That is, we compare the candidate answer to the output of this model, and if the two answers are similar enough, we deem the question high quality and keep it. Otherwise, we filter it out.
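The following sketch illustrates this generate-then-answer step with the Hugging Face transformers library. The checkpoint paths and prompt formats are placeholders for illustration; the actual pipeline uses T5 models fine-tuned specifically for question generation and question answering.

```python
# Illustrative sketch of the generate-then-answer step.
# Checkpoint names and prompt formats below are placeholders, not released models.
from transformers import T5ForConditionalGeneration, T5Tokenizer

QG_CKPT = "path/to/t5-question-generation"  # hypothetical fine-tuned checkpoint
QA_CKPT = "path/to/t5-question-answering"   # hypothetical fine-tuned checkpoint

qg_tok = T5Tokenizer.from_pretrained(QG_CKPT)
qg_model = T5ForConditionalGeneration.from_pretrained(QG_CKPT)
qa_tok = T5Tokenizer.from_pretrained(QA_CKPT)
qa_model = T5ForConditionalGeneration.from_pretrained(QA_CKPT)

def generate_question(caption: str, candidate_answer: str) -> str:
    """Ask the question-generation model to write a question whose answer is the candidate."""
    prompt = f"answer: {candidate_answer} context: {caption}"
    ids = qg_tok(prompt, return_tensors="pt").input_ids
    out = qg_model.generate(ids, max_new_tokens=32)
    return qg_tok.decode(out[0], skip_special_tokens=True)

def answer_question(caption: str, question: str) -> str:
    """Ask the question-answering model to answer the question from the caption alone."""
    prompt = f"question: {question} context: {caption}"
    ids = qa_tok(prompt, return_tensors="pt").input_ids
    out = qa_model.generate(ids, max_new_tokens=16)
    return qa_tok.decode(out[0], skip_special_tokens=True)
```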

The idea of using question answering and question generation models to check each other for round-trip consistency has been previously explored in other contexts. For instance, Q2 uses this idea to evaluate factual consistency in knowledge-grounded dialogues. In the end, the VQ2A approach, as illustrated below, can generate a large number of [image, question, answer] triplets that are of high enough quality to be used as VQA training data.
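Putting the pieces together, the sketch below shows one way to implement the round-trip consistency filter on top of the helper functions from the sketches above. The token-level F1 similarity and the 0.5 threshold are assumptions chosen for illustration; the paper's exact agreement criterion may differ.

```python
# Round-trip consistency filter: keep a (question, candidate) pair only if the
# QA model's answer, produced from the caption, is similar enough to the candidate.
# Token-level F1 and the 0.5 threshold are illustrative assumptions.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def make_vqa_triplets(image_id: str, caption: str, threshold: float = 0.5):
    """Generate [image, question, answer] triplets from one caption."""
    triplets = []
    for candidate in extract_answer_candidates(caption):
        question = generate_question(caption, candidate)
        round_trip = answer_question(caption, question)
        if token_f1(round_trip, candidate) >= threshold:
            triplets.append((image_id, question, candidate))
    return triplets
```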

VQ2A consists of three main steps: (i) candidate answer extraction, (ii) question generation, (iii) question answering and answer validation.

Results
Two examples of our generated VQA data are shown below, one based on human-written COCO Captions (COCO) and the other on automatically-collected Conceptual Captions (CC3M), which we call VQ2A-COCO and VQ2A-CC3M, respectively. We highlight the variety of question types and styles, which are critical for VQA. Overall, the cleaner the captions (i.e., the more closely related they are to their paired image), the more accurate the generated triplets. Based on 800 samples each, 87.3% of VQ2A-COCO and 66.0% of VQ2A-CC3M are found by human raters to be valid, suggesting that our approach can generate question-answer pairs with high precision.

Generated question-answer pairs based on COCO Captions (top) and Conceptual Captions (bottom). Grey highlighting denotes questions that do not appear in VQAv2, while green highlighting denotes those that do, indicating that our approach is capable of generating novel questions that an existing VQA dataset does not have.

Finally, we evaluate our generated data by using it to train VQA models (highlights shown below). We observe that our automatically-generated VQA data is competitive with manually-annotated target VQA data. First, our VQA models achieve high performance on target benchmarks "out-of-the-box", when trained solely on our generated data (light blue and light red vs. yellow). Once fine-tuned on target data, our VQA models outperform target-only training slightly on large-scale benchmarks like VQAv2 and GQA, but significantly on the small, knowledge-seeking OK-VQA (dark blue/red vs. light blue/red).

VQA accuracy on popular benchmark datasets.

Conclusion
All you may need for VQA are image captions! This work demonstrates that it is possible to automatically generate high-quality VQA data at scale, serving as an essential building block for VQA and vision-and-language models in general (e.g., ALIGN, CoCa). We hope that our work inspires other work on data-centric VQA.

Acknowledgments
We thank Roee Aharoni, Idan Szpektor, and Radu Soricut for their feedback on this blogpost. We also thank our co-authors: Xi Chen, Nan Ding, Idan Szpektor, and Radu Soricut. We acknowledge contributions from Or Honovich, Hagai Taitelbaum, Roee Aharoni, Sebastian Goodman, Piyush Sharma, Nassim Oufattole, Gal Elidan, Sasha Goldshtein, and Avinatan Hassidim. Finally, we thank the authors of Q2, whose pipeline strongly influences this work.
