Run Qwen3.5-35B MoE model on RTX 5090 with vLLM using NVFP4 quantization for fast, efficient text generation and extended context length support.
What is the miguefuentes1985/vllm-qwen3.5-nvfp4-5090 GitHub project? The repository is written primarily in Jinja. Explain what it does, its main use cases, key features, and who would benefit from using it.
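As a rough sketch of how a setup like this is typically launched (the model checkpoint name and the specific flag values below are assumptions for illustration, not taken from the repository), a vLLM OpenAI-compatible server for an NVFP4-quantized checkpoint can be started with the `vllm serve` CLI; vLLM generally infers the quantization scheme from the checkpoint's own config:

```shell
# Hypothetical launch command -- the model ID and flag values are
# illustrative assumptions, not confirmed by this repository.
vllm serve some-org/Qwen3.5-35B-MoE-NVFP4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.90 \
  --port 8000
```

Once running, the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint on the chosen port, so any OpenAI client library can be pointed at it for text generation.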
Report bugs or request features on the vllm-qwen3.5-nvfp4-5090 GitHub issue tracker.