arXiv:2404.10407

Comprehensive Survey of Model Compression and Speed up for Vision Transformers

Published on Apr 16, 2024
Abstract

Four model compression techniques are evaluated for making ViTs viable in resource-constrained environments, balancing accuracy against computational efficiency.

AI-generated summary

Vision Transformers (ViT) have marked a paradigm shift in computer vision, outperforming state-of-the-art models across diverse tasks. However, their practical deployment is hampered by high computational and memory demands. This study addresses the challenge by evaluating four primary model compression techniques: quantization, low-rank approximation, knowledge distillation, and pruning. We methodically analyze and compare the efficacy of these techniques and their combinations in optimizing ViTs for resource-constrained environments. Our comprehensive experimental evaluation demonstrates that these methods facilitate a balanced compromise between model accuracy and computational efficiency, paving the way for wider application in edge computing devices.
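To make two of the four techniques concrete, here is a minimal NumPy sketch of unstructured magnitude pruning followed by symmetric 8-bit post-training quantization applied to a single weight matrix. This is an illustrative toy, not the paper's actual pipeline: the function names, the per-tensor quantization scheme, and the 50% sparsity target are all assumptions chosen for clarity.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    fraction of weights (a common baseline, assumed here for illustration)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map [-max|w|, max|w|]
    onto [-127, 127] with a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Toy weight matrix standing in for one ViT linear layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))

pruned = magnitude_prune(w, sparsity=0.5)      # half the entries become zero
q, scale = quantize_int8(pruned)               # int8 codes + float scale
dequantized = q.astype(np.float32) * scale     # approximate reconstruction
```

Combining the two, as the study does, compounds the savings: the pruned matrix can be stored sparsely, and the surviving weights occupy one byte each instead of four, at the cost of a bounded (at most half a quantization step) reconstruction error per weight.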

