LLaVA-PruMerge

Adaptive Token Reduction for Efficient Large
Multimodal Models

arXiv 2024
1. University of Wisconsin-Madison 2. Illinois Institute of Technology

🔥[NEW!] PruMerge+: By supplementing with spatially sampled tokens, we can further enhance the performance by a large margin. A 4× token reduction rate is achieved with essentially lossless performance!

🔥 We find that the visual tokens in current large multimodal models are spatially redundant, as indicated by their sparse attention maps.

🔥 We propose LLaVA-PruMerge, which first prunes and then merges visual tokens, compressing them by 18× on average (14× on MME/TextVQA) while maintaining comparable performance.

Abstract

Large Multimodal Models (LMMs) have shown significant reasoning capabilities by connecting a visual encoder and a large language model. LMMs typically use a fixed number of visual tokens, such as the penultimate layer features in the CLIP visual encoder, as the prefix content. Recent LMMs incorporate more complex visual inputs, such as high-resolution images and videos, which increase the number of visual tokens significantly. However, due to the design of the Transformer architecture, the computational costs associated with these models tend to increase quadratically with the number of input tokens. To tackle this problem, we explore a token reduction mechanism and find, similar to prior work, that many visual tokens are spatially redundant. Based on this, we propose PruMerge, a novel adaptive visual token reduction approach, which largely reduces the number of visual tokens while maintaining comparable model performance. We first select the unpruned visual tokens based on their similarity to class tokens and spatial tokens. We then cluster the pruned tokens based on key similarity and merge the clustered tokens with the unpruned tokens to supplement their information. Empirically, when applied to LLaVA-1.5, our approach can compress the visual tokens by 18 times on average (14 times on MME/TextVQA) and achieve comparable performance across diverse visual question-answering and reasoning tasks. To facilitate future research, we will release our code, dataset, benchmark, and checkpoints at https://github.com/42Shawn/LLaVA-PruMerge.

Motivation: Visual Tokens are spatially redundant

Current large multimodal models utilize all visual tokens to represent an image. In LLaVA-1.5, all 576 (24×24) spatial tokens are fed into the LLM, which leads to redundancy.

We propose a plug-and-play module to reduce the number of visual tokens, which can be applied in either a training-free or a fine-tuning manner.

Interestingly, we observe that the attention activations between the class token and the spatial tokens in CLIP are very sparse, which can be leveraged to prune the visual tokens.
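
As a rough illustration (not part of our pipeline), this class-token attention can be inspected directly from the Hugging Face CLIP vision encoder. The model name below is the ViT-L/14-336px backbone used by LLaVA-1.5; the layer index, image path, and top-k budget are illustrative choices.

```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# CLIP ViT-L/14-336px backbone used by LLaVA-1.5 (24x24 = 576 patches + 1 class token).
name = "openai/clip-vit-large-patch14-336"
model = CLIPVisionModel.from_pretrained(name).eval()
processor = CLIPImageProcessor.from_pretrained(name)

image = Image.open("example.jpg").convert("RGB")  # any RGB image; path is illustrative
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# Attention of the class token (index 0) over the 576 spatial tokens in the
# penultimate layer, averaged over heads; most of the mass falls on a few tokens.
attn = out.attentions[-2]                       # (1, heads, 577, 577)
cls_to_spatial = attn[0, :, 0, 1:].mean(dim=0)  # (576,)
topk_mass = cls_to_spatial.topk(32).values.sum().item()
print(f"top-32 tokens carry {topk_mass:.1%} of the class-token attention")
```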

The conceptual idea of LLaVA-PruMerge

Our approach has three steps (a code sketch follows the list):

  • Sample important tokens according to the similarities between the class tokens and spatial visual tokens;
  • Cluster the visual tokens via k-nearest neighbor;
  • Adjust the sampled visual tokens via weighted averaging for each cluster. Here m denotes the visual token compression ratio.
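
The listing below is a minimal sketch of these three steps, not the released implementation: the adaptive choice of how many tokens to keep is replaced by a fixed `num_keep` argument, and a simple 1-nearest-neighbour assignment in key space stands in for the k-nearest-neighbour clustering.

```python
import torch
import torch.nn.functional as F

def prumerge_sketch(tokens, keys, cls_attn, num_keep):
    """Prune-then-merge sketch.

    tokens:   (N, d)  spatial visual token features (N = 576 for LLaVA-1.5)
    keys:     (N, dk) keys from CLIP's penultimate attention layer
    cls_attn: (N,)    attention of the class token over the spatial tokens
    num_keep: number of tokens to keep (the paper selects this adaptively;
              here it is a fixed argument for simplicity)
    """
    N = tokens.shape[0]

    # Step 1: prune -- keep the tokens the class token attends to most.
    keep_idx = cls_attn.topk(num_keep).indices
    keep_mask = torch.zeros(N, dtype=torch.bool, device=tokens.device)
    keep_mask[keep_idx] = True
    prune_idx = torch.nonzero(~keep_mask, as_tuple=True)[0]

    # Step 2: cluster -- assign each pruned token to its nearest kept token
    # in key space (1-NN on cosine similarity as a stand-in for k-NN clustering).
    kept_keys = F.normalize(keys[keep_idx], dim=-1)
    pruned_keys = F.normalize(keys[prune_idx], dim=-1)
    assign = (pruned_keys @ kept_keys.T).argmax(dim=-1)

    # Step 3: merge -- replace each kept token with a weighted average of its
    # cluster, using the class-token attention as the weights.
    merged = []
    for j, idx in enumerate(keep_idx):
        members = torch.cat([idx[None], prune_idx[assign == j]])
        w = cls_attn[members]
        w = w / w.sum()
        merged.append((w[:, None] * tokens[members]).sum(dim=0))
    return torch.stack(merged)  # (num_keep, d)
```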

Our sampled tokens can better reflect the key information in the image.

We can further enhance the performance by supplementing with spatially sampled tokens.
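
A hedged sketch of this supplement: on top of the attention-selected indices, we also keep a uniform grid of spatial positions so that regions the class token ignores remain represented. The stride below is an illustrative choice, not a value prescribed by the method.

```python
import torch

def spatial_supplement(keep_idx, grid=24, stride=4):
    """Add a uniform grid of spatial token indices (here 6x6 = 36 positions on the
    24x24 LLaVA-1.5 grid) to the attention-selected ones. `stride` is illustrative."""
    rows = torch.arange(0, grid, stride)
    cols = torch.arange(0, grid, stride)
    grid_idx = (rows[:, None] * grid + cols[None, :]).reshape(-1)
    return torch.unique(torch.cat([keep_idx, grid_idx]))
```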

Computation Cost Analysis

Our approach can significantly reduce the computation cost. We evaluate on an NVIDIA Tesla V100 GPU; the time estimated by the roofline model represents the theoretical performance achievable by the hardware.
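
As a rough back-of-the-envelope check (our own estimate using the common per-layer cost approximation, not the measured numbers reported on the V100), the prefill cost attributable to the visual tokens scales as follows; the dimensions match Vicuna-7B.

```python
# Rough estimate of the prefill cost of the visual tokens alone, using the
# common per-layer approximation 4*n*d^2 + 2*n^2*d (attention) + 2*n*d*m (FFN).
# Dimensions match Vicuna-7B (d=4096, FFN m=11008, 32 layers); numbers are
# illustrative, not the paper's measured results.
def layer_flops(n, d=4096, m=11008):
    return 4 * n * d**2 + 2 * n**2 * d + 2 * n * d * m

for n in (576, 32):
    total = 32 * layer_flops(n)  # 32 transformer layers
    print(f"{n:4d} visual tokens -> ~{total / 1e12:.1f} TFLOPs")
```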

Performance

We achieve performance comparable to LLaVA-1.5 across its benchmarks.

  Our sampled tokens are better than naive visual token sampling

We compare against sequential sampling and spatial sampling.
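
For concreteness, the two naive baselines can be sketched as follows (our reading of them: sequential sampling takes the first k tokens in raster order, spatial sampling takes k tokens spread uniformly over the 24×24 grid; the exact baseline implementations may differ).

```python
import torch

def sequential_sample(tokens, k):
    # Take the first k of the 576 visual tokens in raster (row-major) order.
    return tokens[:k]

def spatial_sample(tokens, k, grid=24):
    # Take k tokens spread roughly uniformly over the 24x24 grid.
    stride = max(1, int((grid * grid / k) ** 0.5))
    idx = torch.arange(grid * grid).reshape(grid, grid)[::stride, ::stride]
    return tokens[idx.reshape(-1)[:k]]
```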


We achieve better performance, especially on tasks that require detailed information, such as OCR.

BibTeX


        @article{shang2024LLaVA-PruMerge,
          title={LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models},
          author={Shang, Yuzhang and Cai, Mu and Xu, Bingxin and Lee, Yong Jae and Yan, Yan},
          journal={arXiv preprint arXiv:2403.15388},
          year={2024}
        }
  

Acknowledgement

This website is adapted from Nerfies, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. We thank the LLaMA team for giving us access to their models, as well as the open-source projects Alpaca and Vicuna.

Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of CLIP, LLaMA, Vicuna, and GPT-4. The dataset is CC BY-NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

Related Links: [CLIP] [LLaVA] [Instruction Tuning with GPT-4]