Wan 2.2 I2V A14B Image-to-Video on Modular

Wan 2.2 I2V A14B is an image-to-video diffusion model from Wan AI. It animates a source image with prompt guidance and supports 480P and 720P video generation using the Wan A14B MoE architecture.

Example Usage

Python

  import base64
  from openai import OpenAI
  
  # Point the OpenAI client at Modular's OpenAI-compatible endpoint.
  client = OpenAI(
      base_url="https://model.api.modular.com",
      api_key="<your_api_token>",
  )
  
  # Request a video; generation parameters are passed through provider_options.
  response = client.responses.create(
      model="Wan-AI/Wan2.2-I2V-A14B-Diffusers",
      input="A campfire crackles in a forest clearing at night, sparks spiraling upward into a star-filled sky",
      extra_body={
          "provider_options": {
              "video": {
                  "height": 512,
                  "width": 512,
                  "steps": 28,  # diffusion denoising steps
                  "num_frames": 81,  # 81 frames at 16 fps is roughly 5 seconds
                  "frames_per_second": 16,
                  "response_format": "b64_json",  # return the video as base64
              }
          }
      },
  )
  
  # The video arrives base64-encoded; decode it and write it out as an MP4.
  video_data = response.output[0].content[0].video_data
  
  with open("output.mp4", "wb") as f:
      f.write(base64.b64decode(video_data))
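
The example above sends only a text prompt; for image-to-video generation you will typically also supply the source frame to animate. A hedged sketch of how such a request could be structured, assuming the endpoint accepts the OpenAI Responses API's `input_text`/`input_image` content parts (that part shape comes from the OpenAI API; whether Modular's endpoint accepts it this way is an assumption, not confirmed here):

```python
import base64

# Hypothetical sketch: encode a local source frame and build a Responses-API
# style input list. The "input_image" part type follows the OpenAI Responses
# API shape; Modular's exact field names for image conditioning may differ.
def build_i2v_input(image_bytes: bytes, prompt: str) -> list:
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {
                    "type": "input_image",
                    "image_url": f"data:image/png;base64,{b64}",
                },
            ],
        }
    ]

# Stand-in bytes; in practice read the real PNG from disk.
payload = build_i2v_input(b"\x89PNG...", "Animate the campfire scene")
```

The resulting list would replace the plain prompt string as the `input` argument to `client.responses.create(...)`.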

Model Details
  • Developed by
    Wan AI
  • Model family
    Wan-AI/Wan2.2-I2V-A14B-Diffusers
  • Modality
    Video
  • Total Params
    27B
  • Precision
    BF16
  • Deployment options
    Shared, Dedicated, Self-hosted
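
A quick back-of-the-envelope check on the figures above: 27B total parameters stored in BF16 (2 bytes each) put the raw weight footprint around 54 GB, before activations, latents, or framework overhead. As an MoE, only a subset of experts is active per denoising step, but all weights must still be resident. A minimal sketch of the arithmetic:

```python
# Rough weight-memory estimate for 27B parameters in BF16 (2 bytes/param).
# Weights only; activations, latents, and framework overhead are extra.
params = 27e9
bytes_per_param = 2  # BF16
weight_gb = params * bytes_per_param / 1e9  # decimal gigabytes

print(f"{weight_gb:.0f} GB")  # 54 GB
```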

Why choose Wan 2.2 I2V A14B on Modular?

  • High performance, out of the box

    Run leading open models with strong default performance and the ability to optimize down to the kernel — extracting more from every GPU.

  • Lower Infrastructure Costs

    Deploy efficiently across NVIDIA and AMD hardware to reduce GPU count, increase throughput, and avoid expensive closed-model licensing.

  • Easy Integration

    Integrate through an OpenAI-compatible endpoint, swap models freely, and scale across clouds or hardware without redesigning your application stack.
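
Because the endpoint is OpenAI-compatible, swapping models amounts to changing the `model` string; the client setup and request shape stay the same. A minimal sketch (the second model name is purely illustrative, not a real listing):

```python
# Build request kwargs for an OpenAI-compatible endpoint. Swapping models
# changes only the "model" field; nothing else in the call needs to move.
def make_request(model: str, prompt: str) -> dict:
    return {"model": model, "input": prompt}

a = make_request("Wan-AI/Wan2.2-I2V-A14B-Diffusers", "A campfire at night")
b = make_request("some-org/another-video-model", "A campfire at night")  # hypothetical
```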

Want to self-host this model with our open source infrastructure?
Read How

Get started with Modular

  • Request a demo

    Schedule a demo of Modular and explore a custom end-to-end deployment built around your models, hardware, and performance goals.

    • Distributed, large-scale online inference endpoints

    • Highest performance to maximize ROI and minimize latency

    • Deploy in Modular cloud or your cloud

    • View all features with a custom demo

    Book a demo

    Talk with our sales lead Jay!

    30-minute demo. Evaluate with your workloads. Ask us anything.

  • Talk to us!

    Book a demo for a personalized walkthrough of Modular in your environment. Learn how teams use it to simplify systems and tune performance at scale.

    • Custom 30 min walkthrough of our platform

    • Cover specific model or deployment needs

    • Flexible pricing to fit your specific needs

    Book a demo


  • Start using MAX

    (FREE)

    Run any open source model in 5 minutes, then benchmark it. Scale it to millions yourself (for free!).

  • Start using Mojo

    (FREE)

    Install Mojo and get up and running in minutes. A simple install, familiar tooling, and clear docs make it easy to start writing code immediately.