Convergent Evolution (Architecture and Optimizer)
A 300M-parameter LSTM language model trained from scratch on FineWeb-Edu sample-10BT (~9.4B tokens) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
| Detail | Value |
|---|---|
| Architecture | LSTM (12 layers) |
| Parameters | ~300M |
| Optimizer | Muon (for 2D weight matrices) + AdamW (for embeddings/bias/norm); see the sketch below |
| Data perturbation | standard (unperturbed) text |
| Training data | FineWeb-Edu sample-10BT (~9.4B tokens) |
| Context length | 1024 tokens |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
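The training code is not included in this card, but the optimizer split in the table can be illustrated with a short PyTorch sketch. The partitioning heuristic below (2D weight matrices to Muon, everything else to AdamW) and the name-based embedding check are assumptions for illustration, not the project's actual implementation; the Muon optimizer itself is not constructed here.

```python
import torch
from torch import nn

def split_params_for_muon(model: nn.Module):
    """Partition parameters as described above: 2D weight matrices for Muon,
    embeddings/biases/norm parameters for AdamW (heuristic sketch only)."""
    muon_params, adamw_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Embedding weights are 2D but are listed under AdamW in the table,
        # so exclude them by name (assumed naming convention).
        if param.ndim == 2 and "embed" not in name.lower():
            muon_params.append(param)   # recurrent/projection weight matrices
        else:
            adamw_params.append(param)  # embeddings, biases, norm parameters
    return muon_params, adamw_params

# In the real setup, muon_params would be handed to a Muon optimizer and
# adamw_params to torch.optim.AdamW; hyperparameters are not documented here.
```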
```python
from transformers import AutoModelForCausalLM

# Load the final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-lstm-12layer-muon-original")
```
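For a quick qualitative check, the loaded model can be paired with its tokenizer (Llama 3, per the table above). This sketch assumes the repository ships the tokenizer files and that the custom architecture supports `generate`; the prompt and generation settings are only illustrative.

```python
from transformers import AutoTokenizer

# Pair the model loaded above with its tokenizer (Llama 3, per the table).
tokenizer = AutoTokenizer.from_pretrained("deqing/convergent-lstm-12layer-muon-original")

inputs = tokenizer("The sum of 12 and 30 is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```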
Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ...
```python
# Load an intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-lstm-12layer-muon-original", revision="tokens-1B")
```
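To study how the model changes over training (e.g., tracking number-embedding structure across checkpoints), the checkpoint branches can be enumerated programmatically with `huggingface_hub.list_repo_refs` instead of hard-coding revision names. This is a sketch, assuming every intermediate checkpoint follows the `tokens-*` naming shown above.

```python
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

repo_id = "deqing/convergent-lstm-12layer-muon-original"

def token_count(branch: str) -> float:
    # "tokens-200M" -> 2e8, "tokens-1B" -> 1e9 (assumes the naming shown above)
    value = branch.removeprefix("tokens-")
    return float(value[:-1]) * {"M": 1e6, "B": 1e9}[value[-1]]

# List the repo's branches and keep the checkpoint ones, in training order.
branches = [b.name for b in list_repo_refs(repo_id).branches if b.name.startswith("tokens-")]
for branch in sorted(branches, key=token_count):
    model = AutoModelForCausalLM.from_pretrained(repo_id, revision=branch)
    # ... analyze this checkpoint (e.g., inspect the number-token embeddings) ...
```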
Paper forthcoming.