Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
Paper: arXiv:2206.11795
Vision-Language-Action training data for Minecraft, processed from OpenAI's VPT contractor dataset.
This dataset contains frame-action pairs from Minecraft gameplay, designed for training VLA models following the Lumine methodology.
Each sample contains:
| Field | Type | Description |
|---|---|---|
| `image` | bytes | 640×360 JPEG frame |
| `video_id` | string | Source video identifier |
| `frame_idx` | int | Frame number at 5 Hz |
| `action` | string | Lumine-format action string |
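Because frames are sampled at 5 Hz, each `frame_idx` maps directly to a timestamp in the source video. A minimal sketch of that conversion (the helper name is ours, not part of the dataset):

```python
def frame_time_seconds(frame_idx: int, fps: float = 5.0) -> float:
    """Convert a 5 Hz frame index to seconds into the source video."""
    return frame_idx / fps

frame_time_seconds(150)  # 30 seconds into the video
```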
Action strings use the following format:

```
<|action_start|> mouse_x mouse_y scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>
```

- `mouse_x`, `mouse_y`: mouse delta (-1000 to 1000)
- `scroll`: hotbar scroll (always 0; VPT uses number keys instead)
- `K1` to `K4`: key combinations, one per 50 ms chunk

Example:

```
<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>
```
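Based only on the format described above, an action string can be split back into mouse deltas and per-chunk key lists. The `parse_action` helper below is an illustrative sketch, not an official utility:

```python
import re

def parse_action(action: str) -> dict:
    """Parse a Lumine-format action string into mouse deltas and key chunks."""
    # Strip the <|action_start|> / <|action_end|> markers
    m = re.match(r"<\|action_start\|>(.*)<\|action_end\|>", action)
    if m is None:
        raise ValueError(f"Malformed action string: {action!r}")
    # Fields are ';'-separated: the first is "mouse_x mouse_y scroll",
    # the remaining four are key combinations, one per 50 ms chunk
    parts = [p.strip() for p in m.group(1).strip().split(";")]
    mouse_x, mouse_y, scroll = (int(v) for v in parts[0].split())
    key_chunks = [p.split() if p else [] for p in parts[1:]]
    return {"mouse": (mouse_x, mouse_y), "scroll": scroll, "keys": key_chunks}

parse_action("<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>")
# → {'mouse': (45, -12), 'scroll': 0,
#    'keys': [['W'], ['W', 'Space'], ['W', 'LMB'], ['W', 'LMB']]}
```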
```python
from datasets import load_dataset

# Streaming (recommended - no download required)
ds = load_dataset("TESS-Computer/minecraft-vla-stage1", split="train", streaming=True)

for sample in ds:
    image = sample["image"]  # PIL Image or bytes
    action = sample["action"]
    # Process...
```
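Depending on how the `image` feature is decoded, the field may arrive as a PIL Image, raw JPEG bytes, or an undecoded `{"bytes": ...}` dict. A small normalization sketch (the `to_pil` helper is an assumption, not part of the dataset API):

```python
import io
from PIL import Image

def to_pil(image_field):
    """Normalize the 'image' field to a PIL Image, whatever form it arrives in."""
    if isinstance(image_field, bytes):
        # Raw JPEG bytes
        return Image.open(io.BytesIO(image_field))
    if isinstance(image_field, dict) and "bytes" in image_field:
        # Undecoded Hugging Face image-feature dict
        return Image.open(io.BytesIO(image_field["bytes"]))
    # Already a PIL Image
    return image_field
```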
This dataset is Stage 1 of a 3-stage training pipeline.
If you use this dataset, please cite the VPT paper (arXiv:2206.11795).
MIT License. Original VPT data is released under MIT by OpenAI.