Running Ollama on Constrained Hardware
Context
I’ve been curious about running local LLMs for development tasks — code review, summarisation, drafting — without relying on cloud APIs. Ollama makes this straightforward to set up, but most benchmarks assume beefy hardware. I wanted to know: what’s the experience like on a standard development machine?
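For reference, the basic setup really is only a few commands. The model name below is just an example; on a RAM-constrained machine you'd pick a small model that fits.

```shell
# Install Ollama on Linux (macOS and Windows installers are on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download a small model; with 16GB RAM, models in the 3B-8B range are realistic
ollama pull llama3.2

# Start an interactive session, or pass a one-shot prompt directly
ollama run llama3.2 "Summarise the trade-offs of local LLM inference."
```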
My setup: a laptop with a 12th-gen Intel Core i7, 16GB of RAM, and integrated graphics. No discrete GPU. That's much closer to the hardware many developers actually use than the workstations in most benchmarks.