MLX on Mac
Twoody Mac installs the MLX stack, downloads weights and exposes an OpenAI-compatible local server.
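The local server piece is concrete: mlx-lm ships an OpenAI-compatible HTTP server. A minimal sketch of launching it and waiting for the port, assuming mlx-lm's default port and an illustrative mlx-community model (Twoody Mac's internals may differ):

```python
import socket
import subprocess
import time

# Launch mlx-lm's bundled server (installed via `pip install mlx-lm`).
proc = subprocess.Popen(
    ["mlx_lm.server",
     "--model", "mlx-community/Llama-3.2-3B-Instruct-4bit",  # illustrative model
     "--port", "8080"]
)

# Poll until the port accepts connections; the first run also downloads weights.
for _ in range(600):
    try:
        socket.create_connection(("127.0.0.1", 8080), timeout=1).close()
        break
    except OSError:
        time.sleep(1)

print("OpenAI-compatible endpoint at http://127.0.0.1:8080/v1")
```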
Private LLM lets Twoody Server route requests to a model you control: MLX on Mac, Ollama, llama.cpp, vLLM, TGI or an explicitly configured cloud provider.
OpenAI-compatible providers let you switch runtimes without rewriting the product experience.
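What the swap looks like with the official openai Python client: only the base URL changes, never the request code. The base URLs below are the runtimes' usual defaults and the model name is illustrative; adjust both to your deployment.

```python
from openai import OpenAI

PROVIDERS = {
    "mlx": "http://127.0.0.1:8080/v1",        # mlx_lm.server
    "ollama": "http://127.0.0.1:11434/v1",    # Ollama's OpenAI-compatible layer
    "llama.cpp": "http://127.0.0.1:8080/v1",  # llama-server
    "vllm": "http://127.0.0.1:8000/v1",       # vLLM OpenAI server
    "tgi": "http://127.0.0.1:8080/v1",        # TGI Messages API; port depends on your mapping
}

def client_for(provider: str) -> OpenAI:
    # Local runtimes ignore the API key, but the client requires a value.
    return OpenAI(base_url=PROVIDERS[provider], api_key="not-needed")

# Identical request code regardless of runtime.
reply = client_for("ollama").chat.completions.create(
    model="llama3.2",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```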
Fast, coding, reasoning, long documents: the right model depends on the task.
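A sketch of what task-aware defaults could look like; the mode names and model IDs are assumptions for illustration, not Twoody's actual catalogue.

```python
# Illustrative mode-to-model defaults.
MODE_DEFAULTS = {
    "fast": "mlx-community/Llama-3.2-3B-Instruct-4bit",
    "coding": "mlx-community/Qwen2.5-Coder-7B-Instruct-4bit",
    "reasoning": "mlx-community/DeepSeek-R1-Distill-Qwen-14B-4bit",
    "long-documents": "mlx-community/Llama-3.1-8B-Instruct-4bit",
}

def pick_model(mode: str) -> str:
    # Fall back to the fast default for unknown modes.
    return MODE_DEFAULTS.get(mode, MODE_DEFAULTS["fast"])

print(pick_model("coding"))
```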
Twoody tracks the connected machines and their capabilities.
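A hypothetical shape for that registry; the field names and the headroom heuristic are assumptions, not product code.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    ram_gb: float
    runtime: str  # "mlx", "ollama", ...

def fits(machine: Machine, model_ram_gb: float) -> bool:
    # Rough heuristic: leave headroom for the OS and the KV cache.
    return machine.ram_gb - 8 >= model_ram_gb

print(fits(Machine("studio", ram_gb=64.0, runtime="mlx"), model_ram_gb=18.0))
```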
The user starts model download from the app.
The model becomes the active provider for the chosen mode.
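A sketch of the download-then-activate flow. snapshot_download is the real huggingface_hub API; the active_provider registry is an assumption about Twoody's shape, and the repo ID is illustrative.

```python
from huggingface_hub import snapshot_download

active_provider: dict[str, str] = {}  # mode -> local weights path (hypothetical registry)

def install_and_activate(mode: str, repo_id: str) -> str:
    path = snapshot_download(repo_id=repo_id)  # pulls weights into the local HF cache
    active_provider[mode] = path               # the model now serves this mode
    return path

install_and_activate("coding", "mlx-community/Qwen2.5-Coder-7B-Instruct-4bit")
```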
RAM, latency and tok/s show whether the machine keeps up.
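One way to get those numbers from any OpenAI-compatible endpoint: time the first streamed delta for latency and count deltas for throughput. Counting deltas only approximates tokens (most servers emit roughly one token per delta); exact counts need the server's usage field. Base URL and model name are illustrative.

```python
import time
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="not-needed")

start = time.monotonic()
first_token = None
chunks = 0
for chunk in client.chat.completions.create(
    model="local-model",  # illustrative; local servers often accept any name
    messages=[{"role": "user", "content": "Summarise MLX in one sentence."}],
    stream=True,
):
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token is None:
            first_token = time.monotonic() - start  # latency to first token
        chunks += 1
total = time.monotonic() - start

print(f"first token: {first_token:.2f}s, ~{chunks / total:.1f} tok/s")
```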
Private LLM is not local-only: it foregrounds local mode, but Twoody Server can also route to an explicitly configured cloud provider.
Who manages it depends on context: the user or an admin. The website should surface remote install and model selection.