Go with your own intelligence - Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
Report bugs or request features on the yzma GitHub issue tracker.