local-llm-stack

Dev-next-gen

Production-grade local LLM deployment stack — llama.cpp, Ollama, GGUF/GGML, ROCm AMD, 14B to 80B models
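The stack combines llama.cpp and Ollama as runtimes for GGUF-quantized models, with ROCm support for AMD GPUs. As a rough illustration of the kind of workflow such a stack wraps (a generic sketch, not this repo's own scripts; the model tag and file path below are placeholders):

# Pull and run a 14B-class model through Ollama (placeholder model tag)
ollama pull qwen2.5:14b
ollama run qwen2.5:14b

# Or serve a local GGUF file directly with llama.cpp's HTTP server (placeholder path)
llama-server -m ./models/model-14b-q4_k_m.gguf --port 8080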

2 Stars
0 Forks
2 Watchers
MIT License
53.9 SrcLog Score
Cost to Build: $500
Market Value: $500

Growth over time

Chart: 3 data points, 2026-04-10 to 2026-04-25, tracking stars, forks, and watchers.

How to clone local-llm-stack

Clone via HTTPS

git clone https://github.com/Dev-next-gen/local-llm-stack.git

Clone via SSH

git clone git@github.com:Dev-next-gen/local-llm-stack.git

Download ZIP

Download master.zip
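The ZIP link points at the default branch archive. If you prefer the command line, GitHub's standard archive URL pattern can be fetched directly (assuming the default branch is master, as the link above indicates):

curl -L -o local-llm-stack-master.zip https://github.com/Dev-next-gen/local-llm-stack/archive/refs/heads/master.zip
unzip local-llm-stack-master.zip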

Found an issue?

Report bugs or request features on the local-llm-stack issue tracker:

Open GitHub Issues: https://github.com/Dev-next-gen/local-llm-stack/issues