arxiv:2412.05276

Sparse autoencoders reveal selective remapping of visual concepts during adaptation

Published on Dec 6, 2024

Abstract

A Sparse Autoencoder (SAE) named PatchSAE is developed to extract interpretable concepts and their spatial attributions from the CLIP vision transformer, offering insights into adaptation mechanisms in machine learning systems.

AI-generated summary

Adapting foundation models for specific purposes has become a standard approach to building machine learning systems for downstream applications. Yet it remains an open question which mechanisms take place during adaptation. Here we develop a new Sparse Autoencoder (SAE) for the CLIP vision transformer, named PatchSAE, to extract interpretable concepts at granular levels (e.g., shape, color, or semantics of an object) along with their patch-wise spatial attributions. We explore how these concepts influence the model output in downstream image classification tasks and investigate how recent state-of-the-art prompt-based adaptation techniques change the association of model inputs to these concepts. While concept activations change slightly between adapted and non-adapted models, we find that the majority of gains on common adaptation tasks can be explained by concepts already present in the non-adapted foundation model. This work provides a concrete framework to train and use SAEs for Vision Transformers and offers insights into explaining adaptation mechanisms.
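The core mechanism the summary describes, a sparse overcomplete autoencoder trained on ViT patch activations whose ReLU latent units can be read per patch as spatial concept attributions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: all dimensions, names, and hyperparameters (`d_model=768`, `n_concepts=4096`, the L1 coefficient) are assumptions chosen for the example.

```python
import numpy as np

class PatchSAESketch:
    """Minimal sparse autoencoder over ViT patch activations (illustrative only).

    Encodes each patch embedding into a larger, sparse concept space with a
    ReLU, then linearly decodes back. Sparsity is encouraged with an L1
    penalty on the concept activations.
    """

    def __init__(self, d_model=768, n_concepts=4096, seed=0):
        rng = np.random.default_rng(seed)
        # Overcomplete dictionary: many more concepts than embedding dims.
        self.W_enc = rng.normal(0.0, 0.02, size=(d_model, n_concepts))
        self.b_enc = np.zeros(n_concepts)
        self.W_dec = rng.normal(0.0, 0.02, size=(n_concepts, d_model))
        self.b_dec = np.zeros(d_model)

    def encode(self, patches):
        # patches: (n_patches, d_model) -> sparse concept activations
        # (n_patches, n_concepts); ReLU keeps activations non-negative.
        return np.maximum(patches @ self.W_enc + self.b_enc, 0.0)

    def decode(self, z):
        # Linear reconstruction of the original patch embeddings.
        return z @ self.W_dec + self.b_dec

    def loss(self, patches, l1_coeff=1e-3):
        # Reconstruction MSE plus L1 sparsity penalty on activations.
        z = self.encode(patches)
        recon = self.decode(z)
        mse = np.mean((recon - patches) ** 2)
        return mse + l1_coeff * np.abs(z).mean(), z

# Example: 196 patch embeddings from a 14x14 ViT grid.
sae = PatchSAESketch()
patches = np.random.default_rng(1).normal(size=(196, 768))
total_loss, z = sae.loss(patches)

# Patch-wise spatial attribution for one concept: reshaping a concept's
# activation vector onto the patch grid shows *where* it fires in the image.
attribution_map = z[:, 0].reshape(14, 14)
```

Because each concept's activation is computed independently per patch, the attribution map comes for free from the encoder output; this is the sense in which the latent units carry spatial information.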

