OpenRouter
© 2026 OpenRouter, Inc


Aion Labs

Browse models from Aion Labs

4 models

Tokens processed on OpenRouter

  • AionLabs: Aion-2.0
    32.5M tokens

    Aion-2.0 is a variant of DeepSeek V3.2 optimized for immersive roleplaying and storytelling. It is particularly strong at introducing tension, crises, and conflict into stories, making narratives feel more engaging, and it handles mature and darker themes with greater nuance and depth.

    by aion-labs · Feb 23, 2026 · 131K context · $0.80/M input tokens · $1.60/M output tokens
  • AionLabs: Aion-1.0
    459M tokens

    Aion-1.0 is a multi-model system designed for high performance across various tasks, including reasoning and coding. It is built on DeepSeek-R1, augmented with additional models and techniques such as Tree of Thoughts (ToT) and Mixture of Experts (MoE). It is Aion Labs' most powerful reasoning model.

    by aion-labs · Feb 4, 2025 · 131K context · $4/M input tokens · $8/M output tokens
  • AionLabs: Aion-1.0-Mini
    7.71M tokens

    Aion-1.0-Mini is a 32B-parameter model distilled from DeepSeek-R1, designed for strong performance in reasoning domains such as mathematics, coding, and logic. It is a modified variant of a FuseAI model that outperforms R1-Distill-Qwen-32B and R1-Distill-Llama-70B; benchmark results are available on its Hugging Face page and have been independently replicated for verification.

    by aion-labs · Feb 4, 2025 · 131K context · $0.70/M input tokens · $1.40/M output tokens
  • AionLabs: Aion-RP 1.0 (8B)
    32.7M tokens

    Aion-RP-Llama-3.1-8B ranks the highest in the character evaluation portion of the RPBench-Auto benchmark, a roleplaying-specific variant of Arena-Hard-Auto, where LLMs evaluate each other’s responses. It is a fine-tuned base model rather than an instruct model, designed to produce more natural and varied writing.

    by aion-labs · Feb 4, 2025 · 33K context · $0.80/M input tokens · $1.60/M output tokens
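All of these models are served through OpenRouter's OpenAI-compatible chat-completions API. The sketch below shows how a request for one of them might be built, and how the per-million-token prices listed above translate into an estimated request cost. The model slug `aion-labs/aion-1.0` and the endpoint URL follow OpenRouter's usual conventions but are assumptions here, not taken from this page; the API key is a placeholder.

```python
# Minimal sketch: building a request for an Aion Labs model on OpenRouter,
# plus a cost estimate from the listed per-million-token prices.
# The slug "aion-labs/aion-1.0" and endpoint URL are assumed conventions.
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for OpenRouter."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate request cost in USD from per-million-token prices."""
    return (input_tokens * price_in_per_m + output_tokens * price_out_per_m) / 1_000_000


# Aion-1.0 is listed at $4/M input and $8/M output tokens, so a request with
# 10,000 input tokens and 2,000 output tokens would cost about $0.056.
req = build_request("sk-or-...", "aion-labs/aion-1.0", "Summarize Tree of Thoughts.")
cost = estimate_cost_usd(10_000, 2_000, 4.0, 8.0)
```

Sending the request is a single `urllib.request.urlopen(req)` call (or the equivalent with `requests`/an OpenAI SDK pointed at the OpenRouter base URL); it is omitted here so the sketch stays side-effect free.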