🦙 llama.cpp

Run large language models (LLMs) locally. llama.cpp performs LLM inference in plain C/C++ with minimal dependencies, on CPU and GPU, using models in the GGUF format.
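
A minimal usage sketch follows, assuming a CMake build and a GGUF model you have already downloaded (the model path below is a placeholder):

```sh
# Build from source with CMake (binaries land under build/bin/).
cmake -B build
cmake --build build --config Release

# Run a prompt against a local GGUF model.
# -m: path to the model file (placeholder; supply any GGUF model)
# -p: prompt text
# -n: maximum number of tokens to generate
./build/bin/llama-cli -m models/your-model.gguf -p "Explain what llama.cpp does." -n 128
```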

Alternatives