- LiquidAI/LFM2.5-VL-450M (Image-Text-to-Text, 0.4B parameters)
  - Demo: LFM2.5-VL-450M WebGPU, live video captioning and object tracking in your browser
- LiquidAI/LFM2.5-VL-1.6B (Image-Text-to-Text, 1.6B parameters)
  - Demo: LFM2.5-VL-1.6B WebGPU, in-browser vision-language inference with LFM2.5-VL-1.6B
AI & ML interests
A new generation of foundation models from first principles.
Library of task-specific models: https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices
End-to-end audio foundation model designed for low-latency, real-time conversations.
Collection of post-trained and base LFM2.5 models.
LFM2 is a new generation of hybrid models, designed for on-device deployment.
LFM2-VL is our first series of vision-language models, designed for on-device deployment.