Timber – Ollama for classical ML models, 336x faster than Python
Summary
Timber is an Ollama-style AOT compiler that turns classical ML models (XGBoost, LightGBM, scikit-learn, CatBoost, ONNX) into native C99 inference code with no Python runtime in the hot path, delivering microsecond-scale latency. It offers a simple load/serve workflow, and its benchmarks claim up to 336x faster inference than Python, targeting edge, embedded, and regulated use cases.