Text to video, image and audio in Blender Video Sequence Editor using Modelscope, Zeroscope, Animov, Potat1, Stable Diffusion, Deep Floyd IF, AudioLDM and Bark.
What is the tin2tin/Generative_AI GitHub project? It is written in Python, with the description given above. Explain what it does, its main use cases, key features, and who would benefit from using it.
Clone via HTTPS
Clone via SSH
Download ZIP
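The three download options above map to standard git/GitHub commands. A minimal sketch, assuming the repository path `tin2tin/Generative_AI` from the question above and a `master` default branch (as the ZIP name below suggests):

```shell
#!/bin/sh
# Repository path taken from the page; branch name "master" is inferred
# from the master.zip link and may differ.
REPO_URL="https://github.com/tin2tin/Generative_AI.git"

# Clone via HTTPS (the SSH form would be git@github.com:tin2tin/Generative_AI.git).
# Cloning needs network access, so treat a failure as non-fatal here.
git clone "$REPO_URL" 2>/dev/null || echo "clone skipped (no network?)"

# GitHub also serves a ZIP snapshot of a branch at a predictable URL:
ZIP_URL="https://github.com/tin2tin/Generative_AI/archive/refs/heads/master.zip"
echo "$ZIP_URL"
```

The HTTPS clone is the simplest route for read-only use; SSH is preferable once you have keys configured and intend to push.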
Download master.zip
Report bugs or request features on the Generative_AI issue tracker:
Open GitHub Issues