Liquid: LFM 40B MoE
Category: Other
Input: text
Output: text
Released: Sep 30, 2024 • Updated: Mar 28, 2025
Liquid's 40.3B-parameter Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamical systems.
LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals.
See the launch announcement for benchmarks and more info.
- **32,768 Token Context**: Process and analyze large documents and conversations (see the sizing sketch after this list).
- **Advanced Coding**: Improved capabilities in front-end development and full-stack updates.
- **Agentic Workflows**: Autonomously navigate multi-step processes with improved reliability.
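To gauge whether a document fits the 32,768-token window before sending a request, a rough character-based estimate is often enough. The sketch below is an approximation under a stated assumption: it uses the common ~4-characters-per-token heuristic rather than the model's actual tokenizer, which this page does not document.

```python
# Rough check of whether a document fits the 32,768-token context window.
# CHARS_PER_TOKEN is a heuristic assumption (~4 chars/token for English
# text), not the model's real tokenizer, so treat results as estimates.

CONTEXT_WINDOW = 32_768
CHARS_PER_TOKEN = 4  # assumed heuristic, not the actual tokenizer


def estimate_tokens(text: str) -> int:
    """Return a rough token-count estimate for text."""
    return len(text) // CHARS_PER_TOKEN


def fits_context(text: str, reserved_for_output: int = 1_024) -> bool:
    """True if text likely fits while leaving room for the model's reply."""
    return estimate_tokens(text) + reserved_for_output <= CONTEXT_WINDOW


if __name__ == "__main__":
    doc = "lorem ipsum " * 12_000  # stand-in for a large document
    print(estimate_tokens(doc), fits_context(doc))
```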
Available On
| Provider | Model ID | Context | Max Output | Input Cost | Output Cost | Throughput | Latency |
|----------|----------|---------|------------|------------|-------------|------------|---------|
| Liquid   | liquid   | 33K     | -          | $0.15/M    | $0.15/M     | 176.9 t/s  | 299 ms  |
| Lambda   | lambda   | 66K     | 66K        | $0.15/M    | $0.15/M     | 80.8 t/s   | 185 ms  |
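If your provider exposes an OpenAI-compatible chat-completions endpoint, a call looks roughly like the sketch below. The `base_url`, the `PROVIDER_API_KEY` environment variable, and the `model` value (taken from the Model ID column above) are assumptions; replace them with the values your provider actually documents.

```python
# Minimal sketch of calling LFM 40B MoE through an OpenAI-compatible API.
# The base_url, env-var name, and model ID below are assumptions drawn
# from the table above; substitute your provider's documented values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",   # hypothetical endpoint
    api_key=os.environ["PROVIDER_API_KEY"],  # hypothetical env-var name
)

response = client.chat.completions.create(
    model="liquid",  # Model ID as listed in the table above
    messages=[
        {"role": "user", "content": "Summarize this document in three bullets."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```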
Standard Pricing
- Input Tokens: $0.15 per 1M tokens ($0.00015 per 1K tokens)
- Output Tokens: $0.15 per 1M tokens ($0.00015 per 1K tokens)
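Because input and output tokens are billed at the same flat rate, estimating a request's cost is a single multiplication. A minimal sketch; the token counts in the example are made up:

```python
# Estimate the cost of one request at $0.15 per million tokens
# (the same rate for input and output, per the pricing above).

PRICE_PER_TOKEN = 0.15 / 1_000_000  # $0.15/M tokens


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the flat per-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_TOKEN


# Example: a 30K-token prompt with a 2K-token reply costs $0.0048.
print(f"${request_cost(30_000, 2_000):.4f}")
```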