Getting Started with Happy Horse
From zero to your first AI-generated video.
Note: As of April 8, 2026, the Happy Horse weights and inference code are listed as “Coming Soon.” This guide is based on published documentation and will be updated once the full release is available.
Prerequisites
1. NVIDIA GPU with ≥48GB VRAM
   H100 recommended, A100 supported. Consumer GPUs (e.g. an RTX 4090 with 24GB) are insufficient for the full model.
2. Python 3.10+
   Required for the inference code and dependencies.
3. CUDA 12.0+ and PyTorch 2.x
   For GPU acceleration and model inference.
4. ~30GB of disk space
   For the model weights (base + distilled + super-resolution).
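The requirements above can be sanity-checked before installing anything. A minimal sketch using only the standard library (the VRAM and CUDA checks are left as comments, since they need `torch` or `nvidia-smi`, which may not be installed yet):

```python
import sys
import shutil

MIN_PYTHON = (3, 10)   # Python 3.10+ per the prerequisites
MIN_DISK_GB = 30       # ~30GB for base + distilled + super-res weights

def python_ok(version_info=sys.version_info):
    """Return True if the interpreter meets the minimum Python version."""
    return tuple(version_info[:2]) >= MIN_PYTHON

def disk_ok(path=".", min_gb=MIN_DISK_GB):
    """Return True if the filesystem holding `path` has enough free space."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= min_gb

if __name__ == "__main__":
    print("Python version OK:", python_ok())
    print("Disk space OK:", disk_ok())
    # GPU/CUDA checks (require torch, installed in the next step):
    #   import torch; torch.cuda.is_available()
    #   torch.cuda.get_device_properties(0).total_memory  # VRAM in bytes
```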
Installation
```bash
# Clone the repository
git clone https://github.com/happy-horse/happyhorse-1.git
cd happyhorse-1

# Install dependencies
pip install -r requirements.txt

# Download model weights
bash download_weights.sh
```

Your First Generation
Option 1: Command Line
```bash
python demo_generate.py \
  --prompt "a robot dancing on the moon" \
  --duration 5
```

Option 2: Python API
```python
from happyhorse import HappyHorseModel

model = HappyHorseModel.from_pretrained("happy-horse/happyhorse-1.0")

video, audio = model.generate(
    prompt="an elder on a mountain peak overlooking the valley",
    duration_seconds=5,
    fps=24,
    language="en",
)

video.save("output.mp4")
audio.save("output.wav")
```

What to Expect
| Setting | Generation Time | Output |
|---|---|---|
| Default (256p) | ~2s | Fast preview, lower resolution |
| With super-res (540p) | ~8s | Good balance of speed and quality |
| Full quality (1080p) | ~38s | Cinema-grade output |
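To put those numbers in perspective: with the 5-second, 24 fps defaults from the example above, one clip is 120 frames. A back-of-the-envelope sketch, assuming the table's timings apply to a 5-second clip:

```python
# Per-frame generation cost, assuming the table's timings
# are for a 5-second, 24 fps clip (an assumption, not stated above).
DURATION_S = 5
FPS = 24
FRAMES = DURATION_S * FPS  # 120 frames

timings = {"256p": 2, "540p": 8, "1080p": 38}  # seconds, from the table

for setting, secs in timings.items():
    print(f"{setting}: {secs / FRAMES:.3f}s per frame")
```

Even at full 1080p quality, that works out to roughly a third of a second per output frame.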
Tips for Best Results
- Start with single-character portrait prompts — this is where Happy Horse excels
- Keep prompts descriptive but concise — include subject, action, setting, and style
- Use the 256p mode for rapid iteration, then upscale your best results
- For lip-sync, specify the language parameter matching your prompt language
- Avoid complex multi-character scenes until the community develops optimization techniques
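The subject/action/setting/style recipe above can be wrapped in a tiny helper. This is an illustrative utility only — the `build_prompt` function and its fields are hypothetical, not part of the Happy Horse API:

```python
def build_prompt(subject, action, setting, style=None):
    """Compose a concise prompt from the four parts the tips recommend.

    Hypothetical helper for illustration; not part of the Happy Horse API.
    """
    parts = [subject, action, setting]
    if style:
        parts.append(f"{style} style")
    return ", ".join(p.strip() for p in parts if p and p.strip())

# Example: a single-character portrait prompt, per the first tip.
prompt = build_prompt(
    subject="an elderly fisherman",
    action="mending a net",
    setting="on a misty pier at dawn",
    style="cinematic",
)
print(prompt)
# an elderly fisherman, mending a net, on a misty pier at dawn, cinematic style
```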