Best long-context AI models

AI models with the largest context windows, ranked for processing long documents, codebases, and transcripts.

Sorted by context window size, then by intelligence score. Useful for RAG, long-document summarization, and whole-repo analysis.
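The two-key ordering described above can be sketched in a few lines. This is an illustrative snippet, not the site's actual ranking code; the record fields (`name`, `context_tokens`, `intelligence`) are assumed names, and models without a published intelligence score sort after scored models of the same context size.

```python
# Rank model records by context window (descending), then by
# intelligence score (descending). Fields are illustrative.
models = [
    {"name": "Gemini 2 Flash", "context_tokens": 1_000_000, "intelligence": None},
    {"name": "Gemini 2 Pro",   "context_tokens": 2_000_000, "intelligence": 82.8},
    {"name": "GPT-5.4",        "context_tokens": 1_100_000, "intelligence": 59.3},
    {"name": "Grok 4.1 Fast",  "context_tokens": 2_000_000, "intelligence": None},
]

def rank_key(m):
    # Treat a missing score as -inf so unscored models sort last
    # within the same context-window tier.
    score = m["intelligence"] if m["intelligence"] is not None else float("-inf")
    return (-m["context_tokens"], -score)

ranked = sorted(models, key=rank_key)
print([m["name"] for m in ranked])
# ['Gemini 2 Pro', 'Grok 4.1 Fast', 'GPT-5.4', 'Gemini 2 Flash']
```

Negating both keys keeps the sort stable and descending on each criterion without needing `reverse=True` on a mixed-direction key.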

| # | Model | Provider | Context (tokens) | Input (per 1M) | Output (per 1M) | Intelligence | Tags |
|---|-------|----------|------------------|----------------|-----------------|--------------|------|
| 1 | Gemini 2 Pro | Google | 2M | $1.25 | $5.00 | 82.8 (Frontier) | Vision, Math, Agentic, Long context, Frontier, Code |
| 2 | Grok 4.1 Fast | xAI | 2M | $0.20 | $0.50 | | Vision, Long context, Budget |
| 3 | Grok 4.20 | xAI | 2M | $2.00 | $6.00 | | Vision, Long context |
| 4 | Grok 4.20 Multi-Agent | xAI | 2M | $2.00 | $6.00 | | Vision, Long context |
| 5 | GPT-5.4 | OpenAI | 1.1M | | | 59.3 (Competent) | Vision, Agentic, Long context |
| 6 | GPT-5.4 Pro | OpenAI | 1.1M | $30.00 | $180.00 | | Vision, Long context |
| 7 | Gemini 2 Flash | Google | 1M | $0.10 | $0.40 | | Vision, Agentic, Long context, Budget |
| 8 | Gemini 2.0 Flash | Google | 1M | $0.10 | $0.40 | | Vision, Long context, Budget |
| 9 | Gemini 2.0 Flash Lite | Google | 1M | $0.075 | $0.30 | | Vision, Long context, Budget |
| 10 | Gemini 2.5 Flash | Google | 1M | | | | Vision, Agentic, Long context |
| 11 | Gemini 2.5 Flash Lite | Google | 1M | | | | Vision, Agentic, Long context |
| 12 | Gemini 2.5 Pro | Google | 1M | | | | Vision, Agentic, Long context |
| 13 | GPT-4.1 | OpenAI | 1M | | | | Vision, Agentic, Long context |
| 14 | GPT-4.1 mini | OpenAI | 1M | | | | Vision, Agentic, Long context |
| 15 | GPT-4.1 nano | OpenAI | 1M | | | | Vision, Agentic, Long context |
| 16 | Qwen Plus 0728 | Alibaba | 1M | $0.26 | $0.78 | | Long context, Budget |
| 17 | Qwen Plus 0728 (thinking) | Alibaba | 1M | $0.26 | $0.78 | | Long context, Budget |
| 18 | Qwen-Plus | Alibaba | 1M | $0.26 | $0.78 | | Long context, Budget |
| 19 | Qwen3 Coder Flash | Alibaba | 1M | $0.195 | $0.975 | | Long context, Budget |
| 20 | Qwen3 Coder Plus | Alibaba | 1M | $0.65 | $3.25 | | Long context |

Blank cells indicate that pricing or an intelligence score was not listed for that model.