2 Comments

Popular AI:

The Mac mini keeps getting more interesting for local LLMs because its quiet hardware, unified memory, and support for Ollama and MLX make it a very clean private AI box. For people actually using one: is the regular M4 enough for real daily work, or does the M4 Pro with 64GB end up being the version that makes sense if local AI is the main reason to buy?

Pawel Jozefiak:

"Memory is permanent" is the line that matters most, and it gets buried in most Mac mini guides. I ran a base M4 before moving to a Pro, and the ceiling shows up fast once you try running a 32B model alongside anything else.
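That memory ceiling is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming 4-bit quantized weights and a rough 20% overhead factor for KV cache and runtime buffers (both assumptions are mine, not numbers from this thread):

```python
# Rough memory estimate for running a quantized LLM locally.
# Assumptions: 4-bit weights, ~20% overhead for KV cache and buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float = 4.0,
                    overhead: float = 1.2) -> float:
    """Approximate resident memory (GB) for a quantized model."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 32B model at 4-bit quantization:
print(round(model_memory_gb(32), 1))  # -> 19.2 (GB, before the OS and other apps)
```

Under those assumptions, a 32B model wants roughly 19GB of unified memory for itself, which is why it coexists poorly with anything else on a base-memory M4 but fits comfortably on a 64GB Pro.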

The Ollama vs MLX choice isn't permanent, but it changes your workflow significantly: MLX is faster on Apple silicon for inference, but its ecosystem tooling is smaller. An external SSD is underrated for storage; you'll fill the internal NVMe faster than expected. I've been running mine as a dedicated agent machine 24/7 for two months, and the stability on the Pro is noticeably better.
