Convergent Evolution (Architecture and Optimizer)
A 300M-parameter language model trained from scratch on FineWeb-Edu sample-10BT (~9.4B tokens) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
| Property | Value |
|---|---|
| Architecture | GDN (Gated Diagonal Network) |
| Parameters | ~300M |
| Optimizer | AdamW |
| Data perturbation | standard (unperturbed) text |
| Training data | FineWeb-Edu sample-10BT (~9.4B tokens) |
| Context length | 1024 |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
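From the numbers in the table, each optimizer step processes 512 × 1024 ≈ 0.52M tokens, so one pass over the ~9.4B-token corpus takes roughly 18K steps. A quick back-of-the-envelope check (this assumes full-length 1024-token sequences and no gradient accumulation beyond the stated batch, which the card does not specify):

```python
# Tokens seen per optimizer step, from the table above
# (assumes packed, full-length 1024-token sequences).
tokens_per_step = 512 * 1024   # 524,288 tokens per step
total_tokens = 9.4e9           # ~9.4B tokens in FineWeb-Edu sample-10BT

approx_steps = round(total_tokens / tokens_per_step)
print(approx_steps)  # ~17,929 steps for one pass over the data
```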
```python
from transformers import AutoModelForCausalLM

# Load the final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-gdn-300M-adamw-original")
```
Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ...
```python
from transformers import AutoModelForCausalLM

# Load an intermediate checkpoint (e.g., after 1B training tokens)
model = AutoModelForCausalLM.from_pretrained(
    "deqing/convergent-gdn-300M-adamw-original",
    revision="tokens-1B",
)
```
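For analyses that sweep over many checkpoints (e.g., tracking when Fourier features emerge during training), it can help to generate the revision names programmatically. A minimal sketch, assuming branch names follow the `tokens-<N>M` / `tokens-<N>B` pattern shown above; the helper name and the exact suffix convention are illustrative, not confirmed against the repo's actual branch list:

```python
def checkpoint_revision(n_tokens: int) -> str:
    """Format a token count as a checkpoint branch name.

    Assumes branches use an 'M' suffix for sub-billion counts and switch
    to a 'B' suffix at exact billions (an assumption based on the names
    'tokens-200M' and 'tokens-1B' mentioned in this card).
    """
    if n_tokens % 1_000_000_000 == 0:
        return f"tokens-{n_tokens // 1_000_000_000}B"
    return f"tokens-{n_tokens // 1_000_000}M"

revisions = [checkpoint_revision(n) for n in (200_000_000, 400_000_000, 1_000_000_000)]
print(revisions)  # ['tokens-200M', 'tokens-400M', 'tokens-1B']
```

Each generated string can then be passed as the `revision` argument to `from_pretrained` as shown above.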
Paper forthcoming.