HY OmniWeaving
last release: April 11, 2026
powered by: HunyuanVideo-1.5 / OmniWeaving
goblin vibe check:
tencent's research model that's impressively capable but you'll need serious compute and patience to run it locally
tencent's unified omni-level video generation model built on hunyuanvideo-1.5 for multimodal composition, editing, and reasoning-informed video tasks.
key features
Unified multimodal generation across text, images, and video clips
Supports T2V, I2V, V2V, editing, interpolation, and keyframe-conditioned generation
Reasoning-informed prompt understanding through an LLM component
Introduces IntelligentVBench for multimodal composition and abstract reasoning
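Tencent's actual inference API isn't documented here, so the sketch below is purely illustrative: `OmniRequest` and its fields are hypothetical stand-ins showing how one unified request shape could cover the T2V / I2V / V2V / keyframe modes listed above.

```python
# Illustrative sketch only -- OmniRequest and its fields are assumptions,
# not the interface shipped in Tencent's release.
from dataclasses import dataclass, field


@dataclass
class OmniRequest:
    prompt: str
    image: str | None = None            # conditioning frame -> I2V
    video: str | None = None            # source clip -> V2V / editing
    keyframes: dict[int, str] = field(default_factory=dict)  # frame index -> image path

    @property
    def mode(self) -> str:
        # One request type, several generation modes: the conditioning
        # inputs present decide which task the model runs.
        if self.keyframes:
            return "keyframe-conditioned"
        if self.video:
            return "v2v"
        if self.image:
            return "i2v"
        return "t2v"


print(OmniRequest(prompt="a paper crane unfolding").mode)          # t2v
print(OmniRequest(prompt="make it snow", video="clip.mp4").mode)   # v2v
```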
spec & usage
Built on top of HunyuanVideo-1.5 as an omni-level video generation backbone
Uses a heavy stack with transformer, VAE, text encoder, and upsampler components for local deployment (see the rough sizing sketch after this list)
Open-source release with code and weights published by Tencent on April 3, 2026
Community work is already bringing it into ComfyUI-style local workflows
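To make the "heavy stack" claim concrete, here is a back-of-the-envelope VRAM estimate for the four components named above. The component names come from the listing; every parameter count and the bf16/fp16 weight assumption are illustrative guesses, not published figures.

```python
# Rough VRAM sketch for the component stack. Parameter counts below are
# hypothetical placeholders, not Tencent's published sizes.
BYTES_PER_PARAM = 2  # assuming bf16/fp16 weights

components = {
    "transformer":  8e9,    # guessed diffusion-transformer size
    "vae":          0.3e9,
    "text_encoder": 7e9,    # LLM component for prompt understanding
    "upsampler":    1e9,
}

total_gb = sum(components.values()) * BYTES_PER_PARAM / 1e9
print(f"weights alone: ~{total_gb:.0f} GB before activations or KV cache")
```

Even under these placeholder numbers, weights alone land in the tens of gigabytes, which is why the limitations below flag consumer hardware as a squeeze.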
limitations
Large local footprint makes serious inference expensive on consumer hardware
Unified generation flexibility comes with a more complex install path than lightweight video tools
scope:
visual · video · research · local · open-source · free · multimodal