jundot/omlx
Summary
The repository README for oMLX presents an LLM inference server optimized for Apple Silicon, featuring continuous batching and a tiered KV cache. It covers installation, a quickstart, features such as multi-model serving and a web admin panel, API compatibility, and the server's architecture.
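The README's "API compatibility" point can be illustrated with a client-side sketch. For MLX-based inference servers this typically means an OpenAI-style HTTP API, but that is an assumption here, as are the host, port, endpoint path, and model id below; the summary does not document oMLX's actual interface. The snippet only constructs the request so it runs without a live server:

```python
import json
import urllib.request

# Assumed OpenAI-style endpoint; oMLX's real URL and port may differ.
BASE_URL = "http://localhost:8080/v1/chat/completions"

# Placeholder payload following the common OpenAI chat-completions schema.
payload = {
    "model": "mlx-community/Llama-3.2-3B-Instruct-4bit",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
    "stream": False,
}

def build_request(url: str, body: dict) -> urllib.request.Request:
    """Build the POST request without sending it (no running server assumed)."""
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(BASE_URL, payload)
print(req.get_method(), req.full_url)  # POST http://localhost:8080/v1/chat/completions
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would only work against a server that actually exposes this endpoint.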