Video Node
The Video node converts a generated image into a short-form video clip using image-to-video AI models. Perfect for creating ad content, social media reels, and animated product showcases.
How to Use
- Add a Video node from the Nodes panel.
- Connect an Image node's output to the Video node's input.
- Choose a video model from the dropdown.
- Optionally add a motion prompt describing the desired movement.
- Click Generate.
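Conceptually, the steps above boil down to three inputs: a source image from a connected Image node, a model choice, and an optional motion prompt. A minimal sketch in Python of that input contract, using a hypothetical `VideoNode` class (an illustration only, not the product's real API):

```python
from dataclasses import dataclass

# Hypothetical model of a Video node's inputs; in the actual product
# these are UI fields and connections, not code.
@dataclass
class VideoNode:
    source_image: str   # output of a connected Image node
    model: str          # e.g. "Kling 3.0 v3 Pro"
    prompt: str = ""    # optional motion prompt

    def validate(self) -> None:
        # A Video node cannot generate without a connected image.
        if not self.source_image:
            raise ValueError("Connect an Image node's output first")
        if not self.model:
            raise ValueError("Choose a video model from the dropdown")

node = VideoNode(
    source_image="image_node_output.png",
    model="Kling 3.0 v3 Pro",
    prompt="character turns and smiles at the camera",
)
node.validate()  # raises if a required input is missing
```

The prompt defaults to empty because it is optional; the image connection and model choice are the only hard requirements before Generate can run.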
Fields
- Model - Select from the available video models (see Available Models below).
- Prompt - Describe the motion or action for the video (e.g. "character turns and smiles at the camera").
Connections
A Video node accepts input from:
- Image node - the generated image becomes the first frame of the video.
Available Models
- Kling 3.0 v3 Pro - Character-aware video with @Element tags
- Kling O3 - Kuaishou's newest flagship, 3-15s with native audio
- Veo 3 - Google's premium video with native audio
- Sora 2 - OpenAI's exceptional video generation
Motion Transfer Models
Connect a reference video to re-render your AI character with the motion from that video.
- Kling 3.0 v3 Pro Motion - Best quality, up to 30s, supports @Element facial consistency
- Kling 3.0 v3 Std Motion - Fast v3 motion transfer, up to 30s
- DreamActor V2 - Multi-character and non-human motion transfer, up to 30s
Tips
- Generate the best possible image first, then animate it - video quality depends heavily on the source image.
- Keep motion prompts simple: "character smiles", "slow zoom in", "product rotation".
- Video generation takes longer than images (30-120 seconds depending on the model).
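Because generation can take 30-120 seconds, anything that waits on a result should poll with a timeout rather than block indefinitely. A small sketch of that pattern, where `check_status` is a hypothetical stand-in for however you observe the job's state:

```python
import time

# Hypothetical polling loop illustrating the 30-120 second wait;
# check_status is a stand-in callable, not a real API.
def wait_for_video(check_status, timeout_s=120, poll_every_s=5):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check_status()
        if status in ("done", "failed"):
            return status
        time.sleep(poll_every_s)
    return "timed_out"

# Example: a fake job that finishes on the third poll.
polls = iter(["queued", "rendering", "done"])
print(wait_for_video(lambda: next(polls), poll_every_s=0))
```

The 120-second default matches the upper end of the generation times quoted above; slower models may need a larger timeout.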