Multimodal Foundation Models: From Specialists to General-Purpose Assistants (Foundations and Trends® in Computer Graphics and Vision)

by Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao



This monograph presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language capabilities, focusing on the transition from specialist models to general-purpose assistants.

The monograph covers five core topics, categorized into two classes:
  • A survey of well-established research areas: multimodal foundation models pre-trained for specific purposes, covering two topics, namely methods of learning vision backbones for visual understanding, and text-to-image generation.
  • Recent advances in exploratory, open research areas: multimodal foundation models that aim to play the role of general-purpose assistants, covering three topics, namely unified vision models inspired by large language models (LLMs), end-to-end training of multimodal LLMs, and chaining multimodal tools with LLMs.

The monograph is intended for researchers, graduate students, and professionals in the computer vision and vision-language multimodal communities who want to learn the basics of, and recent advances in, multimodal foundation models.
  • ISBN13 9781638283362
  • Publish Date 6 May 2024
  • Publish Status Active
  • Publish Country US
  • Imprint now publishers Inc