Instructions for using stabilityai/stable-video-diffusion-img2vid with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-video-diffusion-img2vid with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the image-to-video pipeline; switch "cuda" to "mps" for Apple devices.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Stable Video Diffusion is conditioned on an input image, not a text prompt.
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
image = image.resize((1024, 576))  # the model was trained at 1024x576

frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "output.mp4", fps=7)
```

- Notebooks
- Google Colab
- Kaggle
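The snippet above hard-codes `"cuda"`. A minimal sketch of picking the best available backend at runtime (CUDA GPU, Apple Silicon `mps`, or CPU fallback) before calling `pipe.to(device)`:

```python
import torch

# Prefer a CUDA GPU, then Apple Silicon (mps), then fall back to CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

Note that fp16 inference on CPU is slow; when falling back to CPU you would typically load the pipeline in float32 instead.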
Add ' ' tag
#35
by Someshfengde - opened
README.md CHANGED

```diff
@@ -3,6 +3,14 @@ pipeline_tag: image-to-video
 license: other
 license_name: stable-video-diffusion-community
 license_link: LICENSE.md
+tags:
+- diffusers
+- safetensors
+- image-to-video
+- license:other
+- diffusers:StableVideoDiffusionPipeline
+- region:us
+- ' '
 ---
 
 # Stable Video Diffusion Image-to-Video Model Card
```
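The lines this PR adds live in the README's YAML front matter, the metadata block between the first two `---` fences that the Hub reads for tags and licensing. A minimal sketch of how that block is delimited and how the `tags:` list can be pulled out (the Hub itself uses a real YAML parser; the regex here is only for illustration):

```python
import re

# Example README with a front-matter block like the one in this PR.
readme = """---
license: other
license_name: stable-video-diffusion-community
license_link: LICENSE.md
tags:
- diffusers
- safetensors
- image-to-video
---

# Stable Video Diffusion Image-to-Video Model Card
"""

# The metadata sits between the first two "---" lines.
match = re.match(r"^---\n(.*?)\n---\n", readme, re.DOTALL)
front_matter = match.group(1) if match else ""

# Collect the "- item" lines that follow the "tags:" key.
tags = re.findall(r"^- (.+)$", front_matter.split("tags:", 1)[1], re.MULTILINE)
print(tags)  # ['diffusers', 'safetensors', 'image-to-video']
```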